Data Provider Frameworks

Current Frameworks

The following Data Provider frameworks support importing all kinds of data into Squore. Whether you choose one or the other depends on the ability of your script or executable to produce CSV or XML data. Note that these frameworks are recommended over the legacy frameworks described in Legacy Frameworks, which are deprecated as of Squore 18.0.

csv_import Reference

==============
= csv_import =
==============

The csv_import framework allows you to create Data Providers that produce CSV files, which the framework translates into XML files that can be imported into your analysis results. This framework is useful if writing XML files directly from your script is not practical.

Using csv_import, you can import metrics, findings (including relaxed findings), textual information, and links between artefacts (including to and from source code artefacts).
This framework replaces all the legacy frameworks that wrote CSV files in previous versions.

Note that this framework can be called by your Data Provider simply by creating an exec-tool phase that calls the part of the framework located in the configuration folder:
<exec-tool name="csv_import">
	<param key="csv" value="${getOutputFile(output.csv)}" />
	<param key="separator" value=";" />
	<param key="delimiter" value="&quot;" />
</exec-tool>

For a full description of all the parameters that can be used, consult the section called "CSV Import" in the "Data Providers" chapter of this manual.


============================================
= CSV format expected by the data provider =
============================================

- Line to define an artefact (like a parent artefact for instance):
Artefact

- Line to add n metrics to an artefact:
Artefact;(MetricId;Value)*

- Line to add n infos to an artefact:
Artefact;(InfoId;Value)*

- Line to add a key to an artefact:
Artefact;Value

- Line to add a finding to an artefact:
Artefact;RuleId;Message;Location

- Line to add a relaxed finding to an artefact:
Artefact;RuleId;Message;Location;RelaxStatus;RelaxMessage

- Line to add a link between artefacts:
Artefact;LinkId;Artefact

where:
- MetricId is the id of the metric as declared in the Analysis Model
- InfoId is the id of the information to import
- Value is the value of the metric or the information or the key to import (a key is a UUID used to reference an artefact)
- RuleId is the id of the rule violated as declared in the Analysis Model
- Message is the message of the finding, which is displayed after the rule description
- Location is the location of the finding (a line number for findings attached to source code artefacts, a url for findings attached to any other kind of artefact)
- RelaxStatus is one of DEROGATION, FALSE_POSITIVE or LEGACY and defines the relaxation state of the imported finding
- RelaxMessage is the justification message for the relaxation state of the finding
- LinkId is the id of the link to create between artefacts, as declared in the Analysis Model

==========================
= Manipulating Artefacts =
==========================

The following functions are available to locate and manipulate source code artefacts in the project:
- ${artefact(type,path)} ==> Identify an artefact by its type and full path
- ${artefact(type,path,uid)} ==> Identify an artefact by its type and full path and assign it the unique identifier uid
- ${uid(value)} ==> Identify an artefact by its unique identifier (value)
- ${file(path)} ==> Tries to find a source code file matching the "path" in the project
- ${function(fpath,line)} ==> Tries to find a source code function at line "line" in file matching the "fpath" in the project
- ${function(fpath,name)} ==> Tries to find a source code function whose name matches "name" in the file matching the "fpath" in the project
- ${class(fpath,line)} ==> Tries to find a source code class at line "line" in the file matching the "fpath" in the project
- ${class(fpath,name)} ==> Tries to find a source code class whose name matches "name" in the file matching the "fpath" in the project

Note: In the above definitions, if "name" contains either '(' or ',' characters, the full name has to be enclosed in single quotes,
e.g. ${function(main.c,'main(int argc, char* argv)')}
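
These locator functions can be used anywhere an Artefact is expected. For illustration, a findings line and a metrics line using them (R_HEADER and NB_METHODS are hypothetical ids) could look like:
${file(src/machine.c)};R_HEADER;Missing file header;1
${class(src/machine.c,Engine)};NB_METHODS;12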

===============
= Input Files =
===============

The data provider accepts the following files:

Metrics file accepts:
	Artefact definition line
	Metrics line

Findings file accepts:
	Artefact definition line
	Findings line

Keys file accepts:
	Artefact definition line
	Keys line

Information file accepts:
	Artefact definition line
	Information line

Links file accepts:
	Artefact definition line
	Links line
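
For example, a standalone metrics file (no prefix column) that declares a parent artefact and attaches the NB metric to two artefacts could look like this (NB is assumed to be declared in your Analysis Model):
${artefact(CR_FOLDER,/CRsCl)}
${artefact(CR,/CRsCl/cr2727,2727)};NB;2
${artefact(CR,/CRsCl/cr1010,1010)};NB;4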

It is also possible to mix every kind of line in a single csv file, as long as each line is prefixed with the kind of data it contains.
In this case, the first column must contain one of:
DEFINE (or D): when the line is used to define an artefact
METRIC (or M): to add a metric
INFO (or I): to add an information
KEY (or K): to add a key
FINDING (or F): to add a finding, relaxed or not
LINK (or L): to add link between artefacts

The following is an example of a csv file containing mixed lines:
D;${artefact(CR_FOLDER,/CRsCl)}
M;${artefact(CR,/CRsCl/cr2727,2727)};NB;2
M;${artefact(CR,/CRsCl/cr1010,1010)};NB;4
I;${uid(1010)};NBI;Bad weather
K;${artefact(CR,/CRsCl/cr2727,2727)};#CR2727
I;${artefact(CR,/CRsCl/cr2727,2727)};NBI;Nice Weather
F;${artefact(CR,/CRsCl/cr2727,2727)};BAD;Malformed
M;${uid(2727)};NB_EXT;3
I;${uid(2727)};NBI_EXT;Another Info
F;${uid(2727)};BAD_EXT;Badlyformed
F;${uid(2727)};BAD_EXT1;Badlyformed1;;FALSE_POSITIVE;Everything is in the title
F;${function(machine.c,41)};R_GOTO;"No goto; neither togo;";41
F;${function(machine.c,42)};R_GOTO;No Goto;42;LEGACY;Was done a long time ago
F;${function(main.c,'main(int argc, char* argv)')};R_GOTO;No Goto;42
L;${uid(1010)};CR2CR;${uid(2727)}
L;${uid(2727)};CR2CR;${uid(1010)}

xml Reference

=======
= xml =
=======

The xml framework is an implementation of a data provider that allows you to import an XML file, potentially after an XSL transformation. The transformed XML file must follow the syntax expected by other data providers (see the input-data.xml specification).

This framework can be extended like the other frameworks, by creating a folder for your data provider in your configuration/tools folder and creating a form.xml. Below are three examples of possible uses of this framework.

Example 1 - The user enters an xml path and an xsl path; the xml is transformed using the xsl and then imported
=========
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="xml">
  <tag type="text" key="xml" />
    <tag type="text" key="xslt" />

    <exec-phase id="add-data">
      <exec name="java" failOnError="true" failOnStdErr="true">
        <arg value="${javaClasspath(groovy,xml-resolver-1.2.jar)}"/>
        <arg value="groovy.lang.GroovyShell" />
        <arg value="xml.groovy" />
        <arg value="${outputDirectory}" />
        <arg tag="xml"/>
        <arg tag="xsl" />
      </exec>
	</exec-phase>
</tags>

Example 2 - The user enters an xml path; the xsl file is predefined (input-data.xsl) and present in the same directory as form.xml
=========
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="xml">
  <tag type="text" key="xml" />

  <exec-phase id="add-data">
    <exec name="java" failOnError="true" failOnStdErr="true">
      <arg value="${javaClasspath(groovy,xml-resolver-1.2.jar)}"/>
      <arg value="groovy.lang.GroovyShell" />
      <arg value="xml.groovy" />
      <arg value="${outputDirectory}" />
      <arg tag="xml" />
      <arg value="${getToolConfigDir(input-data.xsl)}" />
    </exec>
  </exec-phase>
</tags>
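
If you need a starting point for a predefined input-data.xsl (as in Example 2), a minimal sketch is the standard XSLT identity transform, to which you then add templates mapping your source elements to the input-data.xml syntax:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Copy everything as-is; override with your own templates to
       produce the elements expected by the input-data.xml syntax -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>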

Example 3 - The user enters the path of an xml file already in the expected format
=========
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="xml">
  <tag type="text" key="xml" />

  <exec-phase id="add-data">
    <exec name="java" failOnError="true" failOnStdErr="true">
      <arg value="${javaClasspath(groovy,xml-resolver-1.2.jar)}"/>
      <arg value="groovy.lang.GroovyShell" />
      <arg value="xml.groovy" />
      <arg value="${outputDirectory}" />
      <arg tag="xml" />
    </exec>
  </exec-phase>
</tags>

Legacy Frameworks

Figure 1. Legacy Data Provider frameworks and their capabilities
  1. Csv

    The Csv framework is used to import metrics or textual information and attach them to artefacts of type Application, File or Function. While parsing one or more input CSV files, if it finds the same metric for the same artefact several times, it only uses the last occurrence of the metric and ignores the previous ones. If you are working with File artefacts, you can let the Data Provider create the artefacts by itself if they do not exist already. Refer to the full Csv Reference section for more information.

  2. csv_findings

    The csv_findings framework is used to import findings in a project and attach them to artefacts of type Application, File or Function. It takes a single CSV file as input and is the only framework that allows you to import relaxed findings directly. Refer to the full csv_findings Reference section for more information.

  3. CsvPerl

    The CsvPerl framework offers the same functionality as Csv, but instead of dealing with the raw input files directly, it allows you to run a perl script to modify them and produce a CSV file with the expected input format for the Csv framework. Refer to the full CsvPerl Reference section for more information.

  4. FindingsPerl

    The FindingsPerl framework is used to import findings and attach them to existing artefacts. Optionally, if an artefact cannot be found in your project, the finding can be attached to the root node of the project instead. When launching a Data Provider based on the FindingsPerl framework, a perl script is run first. This perl script is used to generate a CSV file with the expected format which will then be parsed by the framework. Refer to the full FindingsPerl Reference section for more information.

  5. Generic

    The Generic framework is the most flexible Data Provider framework, since it allows attaching metrics, findings, textual information and links to artefacts. If the artefacts do not exist in your project, they will be created automatically. It takes one or more CSV files as input (one per type of information you want to import) and works with any type of artefact. Refer to the full Generic Reference section for more information.

  6. GenericPerl

    The GenericPerl framework is an extension of the Generic framework that starts by running a perl script in order to generate the metrics, findings, information and links files. It is useful if you have an input file whose format needs to be converted to match the one expected by the Generic framework, or if you need to retrieve and modify information exported from a web service on your network. Refer to the full GenericPerl Reference section for more information.

  7. ExcelMetrics

    The ExcelMetrics framework is used to extract information from one or more Microsoft Excel files (.xls or .xlsx). A detailed configuration file allows defining how the Excel document should be read and what information should be extracted. This framework allows importing metrics, findings and textual information to existing artefacts or artefacts that will be created by the Data Provider. Refer to the full ExcelMetrics Reference section for more information.

After you choose the framework to extend, you should follow these steps to make your custom Data Provider known to Squore:

  1. Create a new configuration tools folder to save your work in your custom configuration folder: MyConfiguration/configuration/tools.

  2. Create a new folder for your Data Provider inside the new tools folder: CustomDP. This folder needs to contain the following files:

    • form.xml defines the input parameters for the Data Provider, and the base framework to use

    • form_en.properties contains the strings displayed in the web interface for this Data Provider

    • config.tcl contains the parameters for your custom Data Provider that are specific to the selected framework

    • CustomDP.pl is the perl script that is executed automatically if your custom Data Provider uses one of the *Perl frameworks.

  3. Edit Squore Server’s configuration file to register your new configuration path, as described in the Installation and Administration Guide.

  4. Log into the web interface as a Squore administrator and reload the configuration.

Your new Data Provider is now known to Squore and can be triggered in analyses. Note that you may have to modify your Squore configuration to make your wizard aware of the new Data Provider and your model aware of the new metrics it provides. Refer to the relevant sections of the Configuration Guide for more information.
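
Putting the steps together, the layout for the example CustomDP Data Provider is:
MyConfiguration/
	configuration/
		tools/
			CustomDP/
				form.xml
				form_en.properties
				config.tcl
				CustomDP.pl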

Csv Reference

=======
= Csv =
=======

The Csv framework is used to import metrics or textual information and attach them to artefacts of type Application, File or Function. While parsing one or more input CSV files, if it finds the same metric for the same artefact several times, it only uses the last occurrence of the metric and ignores the previous ones. If you are working with File artefacts, you can let the Data Provider create the artefacts by itself if they do not exist already.

============
= form.xml =
============

You can customise form.xml to either:
- specify the path to a single CSV file to import
- specify a pattern to import all csv files matching this pattern in a directory

In order to import a single CSV file:
=====================================
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="Csv" needSources="true">
	<tag type="text" key="csv" defaultValue="/path/to/mydata.csv" />
</tags>

Notes:
- The csv key is mandatory.
- Since Csv-based data providers commonly rely on artefacts created by Squan Sources, you can set the needSources attribute to force users to specify at least one repository connector when creating a project.

In order to import all files matching a pattern in a folder:
===========================================================
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="Csv" needSources="true">
	<!-- Root directory containing Csv files to import-->
	<tag type="text" key="dir" defaultValue="/path/to/mydata" />
	<!-- Pattern that needs to be matched by a file name in order to import it-->
	<tag type="text" key="ext" defaultValue="*.csv" />
	<!-- search for files in sub-folders -->
	<tag type="booleanChoice" defaultValue="true" key="sub" />
</tags>

Notes:
- The dir and ext keys are mandatory
- The sub key is optional (its value is set to false if not specified)


==============
= config.tcl =
==============

Sample config.tcl file:
=======================
# The separator used in the input CSV file
# Usually \t or ;
set Separator "\t"

# The delimiter used in the input CSV file
# This is normally left empty, except when you know that some of the values in the CSV file
# contain the separator itself, for example:
# "A text containing ; the separator";no problem;end
# In this case, you need to set the delimiter to \" in order for the data provider to find 3 values instead of 4.
# To include the delimiter itself in a value, you need to escape it by duplicating it, for example:
# "A text containing "" the delimiter";no problemo;end
# Default: none
set Delimiter \"

# ArtefactLevel is one of:
#      Application: to import data at application level
#      File: to import data at file level. In this case ArtefactKey has to be set
#             to the value of the header (key) of the column containing the file path
#			  in the input CSV file.
#      Function : to import data at function level, in this case:
#                 ArtefactKey has to be set to the value of the header (key) of the column containing the path of the file
#                 FunctionKey has to be set to the value of the header (key) of the column containing the name and signature of the function
# Note that the values are case-sensitive.
set ArtefactLevel File
set ArtefactKey File

# Should the File paths be case-insensitive?
# true or false (default)
# This is used when searching for a matching artefact in already-existing artefacts.
set PathsAreCaseInsensitive "false"

# Should file artefacts declared in the input CSV file be created automatically?
# true (default) or false
set CreateMissingFile "true"

# FileOrganisation defines the layout of the input CSV file and is one of:
#     header::column: values are referenced from the column header
#     header::line: NOT AVAILABLE
#     alternate::line: lines are a sequence of {Key Value}
#     alternate::column: columns are a sequence of {Key Value}
# There are more examples of possible CSV layouts later in this document
set FileOrganisation header::column

# Metric2Key contains a case-sensitive list of paired metric IDs:
#     {MeasureID KeyName [Format]}
# where:
#   - MeasureID is the id of the measure as defined in your analysis model
#   - KeyName, depending on the FileOrganisation, is either the name of the column or the name
#      in the cell preceding the value to import as found in the input CSV file
#   - Format is the optional format of the data; the only accepted format
#      is "text", to attach textual information to an artefact. Omit this field for normal metrics.
set Metric2Key {
	{BRANCHES Branchs}
	{VERSIONS Versions}
	{CREATED Created}
	{IDENTICAL Identical}
	{ADDED Added}
	{REMOV Removed}
	{MODIF Modified}
	{COMMENT Comment text}
}


==========================
= Sample CSV Input Files =
==========================

Example 1:
==========
 FileOrganisation : header::column
 ArtefactLevel : File
 ArtefactKey   : Path

Path	Branchs	Versions
./foo.c	15		105
./bar.c	12		58

Example 2:
==========
 FileOrganisation : alternate::line
 ArtefactLevel : File
 ArtefactKey   : Path

Path	./foo.c	Branchs	15	Versions	105
Path	./bar.c	Branchs	12	Versions	58

Example 3:
==========
 FileOrganisation : header::column
 ArtefactLevel : Application

ChangeRequest	Corrected	Open
27				15			11

Example 4:
==========
 FileOrganisation : alternate::column
 ArtefactLevel : Application

ChangeRequest	15
Corrected		11

Example 5:
==========
 FileOrganisation : alternate::column
 ArtefactLevel : File
 ArtefactKey   : Path

Path	./foo.c
Branchs	15
Versions	105
Path	./bar.c
Branchs	12
Versions	58

Example 6:
==========
 FileOrganisation : header::column
 ArtefactLevel : Function
 ArtefactKey   : Path
 FunctionKey   : Name

Path	Name	Decisions	Tested
./foo.c	end_game(int*,int*)	15		3
./bar.c	bar(char)	12		6

Working With Paths:
===================

- Path separators are unified: you do not need to worry about handling differences between Windows and Linux
- With the option PathsAreCaseInsensitive, case is ignored when searching for files in the Squore internal data
- Paths known by Squore are relative paths starting at the root of what was specified in the repository connector during the analysis. This relative path is the one used to match a path in a csv file.

Here is a valid example of file matching:
  1. You provide C:\A\B\C\D as the root folder in a repository connector
  2. If C:\A\B\C\D contains E\e.c, then Squore will know the file as E/e.c

  3. You provide a csv file produced on Linux and containing
    /tmp/X/Y/E/e.c as its path; Squore will be able to match it with the known file.

Squore uses the longest possible match.
In case of conflict, no file is found and a message is sent to the log.

csv_findings Reference

================
= csv_findings =
================

The csv_findings data provider is used to import findings (rule violations) and attach them to artefacts of type Application, File or Function.
The format of the csv file given as a parameter has to be:

FILE;FUNCTION;RULE_ID;MESSAGE;LINE;COL;STATUS;STATUS_MESSAGE;TOOL

where:
=====
FILE : is the full path of the file where the finding is located
FUNCTION : is the name of the function where the finding is located
RULE_ID : is the Squore ID of the rule which is violated
MESSAGE : is the specific message of the violation
LINE: is the line number where the violation occurs
COL: (optional, leave empty if not provided) is the column number where the violation occurs
STATUS: (optional, leave empty if not provided) is the status of the relaxation if the violation has to be relaxed (DEROGATION, FALSE_POSITIVE, LEGACY)
STATUS_MESSAGE: (optional, leave empty if not provided) is the message for the relaxation when relaxed
TOOL: is the tool providing the violation

The header line is required, but its content is ignored.
The separator (semicolon by default) can be changed in the config.tcl file (see below).
The delimiter (no delimiter by default) can be changed in the config.tcl file (see below).
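
As an illustration, a minimal input file for this data provider (paths, rule ids and tool name are hypothetical) could look like:

FILE;FUNCTION;RULE_ID;MESSAGE;LINE;COL;STATUS;STATUS_MESSAGE;TOOL
/src/project/f1.c;;R1;Variable v1 is never initialised;12;;;;MyChecker
/src/project/f1.c;compute(int);R2;Missing default in switch;42;8;FALSE_POSITIVE;Reviewed and accepted;MyChecker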

==============
= config.tcl =
==============

Sample config.tcl file:
=======================
# The separator used in the input CSV file
# Usually ; or \t
set Separator \;

# The delimiter used in the CSV input file
# This is normally left empty, except when you know that some of the values in the CSV file
# contain the separator itself, for example:
# "A text containing ; the separator";no problem;end
# In this case, you need to set the delimiter to \" in order for the data provider to find 3 values instead of 4.
# To include the delimiter itself in a value, you need to escape it by duplicating it, for example:
# "A text containing "" the delimiter";no problemo;end
# Default: none
set Delimiter \"

# You can add patterns to avoid creating new findings when some strings in the finding message change,
# e.g. Unreachable code Default switch clause is unreachable. switch-expression at line 608 (column 12).
# In this case we do not want the line number to be part of the signature of the finding.
# To achieve this, add a pattern as shown below (patterns are TCL regex patterns):
lappend InconstantFindingsPatterns {at line [0-9]+}
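
Similarly, if the column number also varies between analyses (as in the message above), a second pattern can be appended in the same way, for example:
lappend InconstantFindingsPatterns {\(column [0-9]+\)}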

CsvPerl Reference

===========
= CsvPerl =
===========

The CsvPerl framework offers the same functionality as Csv, but instead of dealing with the raw input files directly, it allows you to run a perl script to modify them and produce a CSV file with the expected input format for the Csv framework.


============
= form.xml =
============

In your form.xml, specify the input parameters you need for your Data Provider.
Our example will use two parameters: a path to a CSV file and another text parameter:

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="CsvPerl" needSources="true">
	<tag type="text" key="csv" defaultValue="/path/to/csv" />
	<tag type="text" key="param" defaultValue="MyValue" />
</tags>

- Since Csv-based data providers commonly rely on artefacts created by Squan Sources, you can set the needSources attribute to force users to specify at least one repository connector when creating a project.


==============
= config.tcl =
==============

Refer to the description of config.tcl for the Csv framework.

For CsvPerl one more option is possible:

# The variable NeedSources is used to request the perl script to be executed once for each
# repository node of the project. In that case an additional parameter is sent to the
# perl script (see below for its position)
#set ::NeedSources 1


==========================
= Sample CSV Input Files =
==========================

Refer to the examples for the Csv framework.


===============
= Perl Script =
===============

The perl script will receive as arguments:
 - all parameters defined in form.xml (as -${key} $value)
 - the input directory to process (only if ::NeedSources is set to 1 in the config.tcl file)
 - the location of the output directory where temporary files can be generated
 - the full path of the csv file to be generated

For the form.xml we created earlier in this document, the command line will be:
 perl <configuration_folder>/tools/CustomDP/CustomDP.pl -csv /path/to/csv -param MyValue <output_folder> <output_folder>/CustomDP.csv
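
If ::NeedSources is set to 1 in config.tcl, the input directory to process is inserted before the output parameters, so the script is called once per repository node as:
 perl <configuration_folder>/tools/CustomDP/CustomDP.pl -csv /path/to/csv -param MyValue <input_folder> <output_folder> <output_folder>/CustomDP.csv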

Example of perl script:
======================
#!/usr/bin/perl
use strict;
use warnings;
$|=1 ;

my ($csvKey, $csvValue, $paramKey, $paramValue, $output_folder, $output_csv) = @ARGV;

	# Parse input CSV file
	# ...

	# Write results to CSV
	open(CSVFILE, ">" . ${output_csv}) || die "perl: can not write: $!\n";
	binmode(CSVFILE, ":utf8");
	print CSVFILE  "ChangeRequest;15";
	close CSVFILE;

exit 0;

Generic Reference

===========
= Generic =
===========

The Generic framework is the most flexible Data Provider framework, since it allows attaching metrics, findings, textual information and links to artefacts. If the artefacts do not exist in your project, they will be created automatically. It takes one or more CSV files as input (one per type of information you want to import) and works with any type of artefact.


============
= form.xml =
============

In form.xml, allow users to specify the path to a CSV file for each type of data you want to import.
You can set needSources to true or false, depending on whether or not you want to require the use of a repository connector when your custom Data Provider is used.

Example of form.xml file:
=========================
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="Generic" needSources="false">
	<!-- Path to CSV file containing Metrics data -->
	<tag type="text" key="csv" defaultValue="mydata.csv" />
	<!-- Path to CSV file containing Findings data: -->
	<tag type="text" key="fdg" defaultValue="mydata_fdg.csv" />
	<!-- Path to CSV file containing Information data: -->
	<tag type="text" key="inf" defaultValue="mydata_inf.csv" />
	<!-- Path to CSV file containing Links data: -->
	<tag type="text" key="lnk" defaultValue="mydata_lnk.csv" />
</tags>

Note: All tags are optional. You only need to specify the tag element for the type of data you want to import with your custom Data Provider.
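
For instance, a custom Data Provider that only imports findings would declare just the fdg tag:
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="Generic" needSources="false">
	<tag type="text" key="fdg" defaultValue="mydata_fdg.csv" />
</tags>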


==============
= config.tcl =
==============

Sample config.tcl file:
=======================
# The separator used in the input csv files
# Usually \t or ; or ,
# In our example below, a space is used.
set Separator " "

# The delimiter used in the input CSV file
# This is normally left empty, except when you know that some of the values in the CSV file
# contain the separator itself, for example:
# "A text containing ; the separator";no problem;end
# In this case, you need to set the delimiter to \" in order for the data provider to find 3 values instead of 4.
# To include the delimiter itself in a value, you need to escape it by duplicating it, for example:
# "A text containing "" the delimiter";no problemo;end
# Default: none
set Delimiter \"

# The path separator in an artefact's path
# in the input CSV file.
# Note that artifact is spelt with an "i"
# and not an "e" in this option.
set ArtifactPathSeparator "/"

# If the data provider needs to specify a different toolName (optional)
set SpecifyToolName 1

# Metric2Key contains a case-sensitive list of paired metric IDs:
#     {MeasureID KeyName [Format]}
# where:
#   - MeasureID is the id of the measure as defined in your analysis model
#   - KeyName is the name in the cell preceding the value to import as found in the input CSV file
#   - Format is the optional format of the data, the only accepted format
#      is "text" to attach textual information to an artefact. Note that the same result can also
#       be achieved with Info2Key (see below). For normal metrics omit this field.
set Metric2Key {
	{CHANGES Changed}
}

# Finding2Key contains a case-sensitive list of paired rule IDs:
#     {FindingID KeyName}
# where:
#   - FindingID is the id of the rule as defined in your analysis model
#   - KeyName is the finding name as found in the input CSV file
set Finding2Key {
	{R_NOTLINKED NotLinked}
}

# Info2Key contains a case-sensitive list of paired info IDs:
#     {InfoID KeyName}
# where:
#   - InfoID is the id of the textual information as defined in your analysis model
#   - KeyName is the information name as found in the input CSV file
set Info2Key {
	{SPECIAL_LABEL Label}
}

# Ignore findings for artefacts that are not part of the project (orphan findings)
# When set to 1, the findings are ignored
# When set to 0, the findings are imported and attached to the APPLICATION node
# (default: 1)
set IgnoreIfArtefactNotFound 1

# If data in csv concerns source code artefacts (File, Class or Function), the way to
# match file paths can be case-insensitive
# true or false (default)
# This is used when searching for a matching artefact in already-existing artefacts.
set PathsAreCaseInsensitive "false"

# Save the total count of orphan findings as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanArteCountId NB_ORPHANS

====================
=  CSV File Format =
====================

All the examples listed below assume the use of the following config.tcl:

set Separator ","
set ArtifactPathSeparator "/"
set Metric2Key {
	{CHANGES Changed}
}
set Finding2Key {
	{R_NOTLINKED NotLinked}
}
set Info2Key {
	{SPECIAL_LABEL Label}
}

How to reference an artefact:
============================
 ==> artefact_type artefact_path
 Example:
 REQ_MODULES,Requirements
 REQ_MODULE,Requirements/Module
 REQUIREMENT,Requirements/Module/My_Req

 References the following artefact
 Application
      Requirements (type: REQ_MODULES)
			Module (type: REQ_MODULE)
				My_Req	(type: REQUIREMENT)

Note: For source code artefacts there are 3 special artefact kinds:
 ==> FILE file_path
 ==> CLASS file_path (Name|Line)
 ==> FUNCTION file_path (Name|Line)

 Examples:
 FUNCTION src/file.c 23
 references the function which contains line 23 in the source file src/file.c; if no
 function is found, the whole line of the csv file is ignored.

 FUNCTION src/file.c foo()
 references a function named foo in source file src/file.c. If more than one function foo
 is defined in this file, then the signature of the function (which is optional) is used
 to find the best match.

Layout for Metrics File:
========================
 ==> artefact_type artefact_path (Key Value)*

 When the parent artefact type is not given it defaults to <artefact_type>_FOLDER.
 Example:
 REQ_MODULE,Requirements/Module
 REQUIREMENT,Requirements/Module/My_Req,Changed,1

 will produce the following artefact tree:
 Application
      Requirements (type: REQ_MODULE_FOLDER)
          Module (type: REQ_MODULE)
              My_Req (type: REQUIREMENT) with 1 metric CHANGES = 1

Note: the key "Changed" is mapped to the metric "CHANGES", as specified by the Metric2Key parameter, so that it matches what is expected by the model.


Layout for Findings File:
=========================
 ==> artefact_type artefact_path key message

 When the parent artefact type is not given it defaults to <artefact_type>_FOLDER.
 Example:
 REQ_MODULE,Requirements/Module
 REQUIREMENT,Requirements/Module/My_Req,NotLinked,A Requirement should always be linked

 will produce the following artefact tree:
 Application
      Requirements (type: REQ_MODULE_FOLDER)
          Module (type: REQ_MODULE)
              My_Req (type: REQUIREMENT) with 1 finding R_NOTLINKED whose description is "A Requirement should always be linked"

Note: the key "NotLinked" is mapped to the finding "R_NOTLINKED", as specified by the Finding2Key parameter, so that it matches what is expected by the model.

Layout for Textual Information File:
====================================
 ==> artefact_type artefact_path label value

 When the parent artefact type is not given it defaults to <artefact_type>_FOLDER.
 Example:
 REQ_MODULE,Requirements/Module
 REQUIREMENT,Requirements/Module/My_Req,Label,This is the label of the req

 will produce the following artefact tree:
 Application
      Requirements (type: REQ_MODULE_FOLDER)
          Module (type: REQ_MODULE)
              My_Req (type: REQUIREMENT) with 1 information of type SPECIAL_LABEL whose content is "This is the label of the req"

Note: the label "Label" is mapped to the finding "SPECIAL_LABEL", as specified by the Info2Key parameter, so that it matches what is expected by the model.

Layout for Links File:
======================
 ==> artefact_type artefact_path dest_artefact_type dest_artefact_path link_type

 When the parent artefact type is not given it defaults to <artefact_type>_FOLDER
 Example:
 REQ_MODULE,Requirements/Module
 TEST_MODULE,Tests/Module
 REQUIREMENT,Requirements/Module/My_Req,TEST,Tests/Module/My_Test,TESTED_BY

 will produce the following artefact tree:
 Application
	Requirements (type: REQ_MODULE_FOLDER)
		Module (type: REQ_MODULE)
			My_Req (type: REQUIREMENT) ------>
	Tests (type: TEST_MODULE_FOLDER)           |
		Module (type: TEST_MODULE)             |
			My_Test (type: TEST) <------------+ link (type: TESTED_BY)

The TESTED_BY relationship is created with My_Req as the source of the link and My_Test as the destination.

CSV file organisation when SpecifyToolName is set to 1
======================================================
When the variable SpecifyToolName is set to 1 (or true) a column has to be added
at the beginning of each line in each csv file. This column can be empty or filled with a different toolName.

 Example:
 ,REQ_MODULE,Requirements/Module
 MyReqChecker,REQUIREMENT,Requirements/Module/My_Req,Label,This is the label of the req

The information of type SPECIAL_LABEL will be set as reported by the tool "MyReqChecker".

GenericPerl Reference

===============
= GenericPerl =
===============

The GenericPerl framework is an extension of the Generic framework that starts by running a perl script in order to generate the metrics, findings, information and links files. It is useful if you have an input file whose format needs to be converted to match the one expected by the Generic framework, or if you need to retrieve and modify information exported from a web service on your network.


============
= form.xml =
============

In your form.xml, specify the input parameters you need for your Data Provider.
Our example will use two parameters: a path to a CSV file and another text parameter:

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="CsvPerl" needSources="false">
	<tag type="text" key="csv" defaultValue="/path/to/csv" />
	<tag type="text" key="param" defaultValue="MyValue" />
</tags>


==============
= config.tcl =
==============

Refer to the description of config.tcl for the Generic framework for the basic options.
Additionally, the following options are available for the GenericPerl framework, in order to know which type of information your custom Data Provider should try to import.

# If the data provider needs to specify a different toolName (optional)
#set SpecifyToolName 1

# ImportMetrics
# When set to 1, your custom Data Provider (CustomDP) will try to import
# metrics from a file called CustomDP.mtr.csv that your perl script
# should generate according to the expected format described in the
# documentation of the Generic framework.
set ImportMetrics 1

# ImportInfos
# When set to 1, your custom Data Provider (CustomDP) will try to import
# textual information from a file called CustomDP.inf.csv that your perl script
# should generate according to the expected format described in the
# documentation of the Generic framework.
set ImportInfos 0

# ImportFindings
# When set to 1, your custom Data Provider (CustomDP) will try to import
# findings from a file called CustomDP.fdg.csv that your perl script
# should generate according to the expected format described in the
# documentation of the Generic framework.
set ImportFindings 1

# ImportLinks
# When set to 1, your custom Data Provider (CustomDP) will try to import
# artefact links from a file called CustomDP.lnk.csv that your perl script
# should generate according to the expected format described in the
# documentation of the Generic framework.
set ImportLinks 0

# Ignore findings for artefacts that are not part of the project (orphan findings)
# When set to 1, the findings are ignored
# When set to 0, the findings are imported and attached to the APPLICATION node
# (default: 1)
set IgnoreIfArtefactNotFound 1

# Save the total count of orphan findings as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanArteCountId NB_ORPHANS

====================
=  CSV File Format =
====================

Refer to the examples in the Generic framework.


===============
= Perl Script =
===============

The perl script will receive as arguments:
- all parameters defined in form.xml (as -${key} $value)
- the location of the output directory where temporary files can be generated
- the full path of the metric csv file to be generated (if ImportMetrics is set to 1 in config.tcl)
- the full path of the findings csv file to be generated (if ImportFindings is set to 1 in config.tcl)
- the full path of the textual information csv file to be generated (if ImportInfos is set to 1 in config.tcl)
- the full path of the links csv file to be generated (if ImportLinks is set to 1 in config.tcl)
- the full path to the output directory used by this data provider in the previous analysis

For the form.xml and config.tcl we created earlier in this document, the command line will be:
 perl <configuration_folder>/tools/CustomDP/CustomDP.pl -csv /path/to/csv -param MyValue <output_folder> <output_folder>/CustomDP.mtr.csv <output_folder>/CustomDP.fdg.csv <previous_output_folder>

The following perl functions are made available in the perl environment so you can use them in your script:
- get_tag_value(key) (returns the value for $key parameter from your form.xml)
- get_output_metric()
- get_output_finding()
- get_output_info()
- get_output_link()
- get_output_dir()
- get_input_dir() (returns the folder containing sources if needSources is set to 1)
- get_previous_dir()


Example of perl script:
======================
#!/usr/bin/perl
use strict;
use warnings;
$|=1 ;

	# Parse input CSV file
	my $csvFile = get_tag_value("csv");
	my $param = get_tag_value("param");
	# ...

	# Write metrics to CSV
	open(METRICS_FILE, ">" . get_output_metric()) || die "perl: can not write: $!\n";
	binmode(METRICS_FILE, ":utf8");
	print METRICS_FILE  "REQUIREMENTS;Requirements/All_Requirements;NB_REQ;15";
	close METRICS_FILE;

	# Write findings to CSV
	open(FINDINGS_FILE, ">" . get_output_findings()) || die "perl: can not write: $!\n";
	binmode(FINDINGS_FILE, ":utf8");
	print FINDINGS_FILE  "REQUIREMENTS;Requirements/All_Requirements;R_LOW_REQS;\"The minimum number of requirement should be at least 25.\"";
	close FINDINGS_FILE;

exit 0;

FindingsPerl Reference

================
= FindingsPerl =
================

The FindingsPerl framework is used to import findings and attach them to existing artefacts. Optionally, if an artefact cannot be found in your project, the finding can be attached to the root node of the project instead. When launching a Data Provider based on the FindingsPerl framework, a perl script is run first. This perl script is used to generate a CSV file with the expected format which will then be parsed by the framework.


============
= form.xml =
============

In your form.xml, specify the input parameters you need for your Data Provider.
Our example will use two parameters: a path to a CSV file and another text parameter:

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="CsvPerl" needSources="true">
	<tag type="text" key="csv" defaultValue="/path/to/csv" />
	<tag type="text" key="param" defaultValue="MyValue" />
</tags>

- Since FindingsPerl-based data providers commonly rely on artefacts created by Squan Sources, you can set the needSources attribute to force users to specify at least one repository connector when creating a project.


==============
= config.tcl =
==============


Sample config.tcl file:
=======================
# The separator to be used in the generated CSV file
# Usually \t or ;
set Separator ";"

# The delimiter used in the input CSV file
# This is normally left empty, except when you know that some of the values in the CSV file
# contain the separator itself, for example:
# "A text containing ; the separator";no problem;end
# In this case, you need to set the delimiter to \" in order for the data provider to find 3 values instead of 4.
# To include the delimiter itself in a value, you need to escape it by duplicating it, for example:
# "A text containing "" the delimiter";no problemo;end
# Default: none
set Delimiter \"

# Should the perl script be executed once for each repository node of the project?
# 1 or 0 (default)
# If true an additional parameter is sent to the
# perl script (see below for its position)
set ::NeedSources 0

# Should the violated rules definitions be generated?
# true or false (default)
# This creates a ruleset file with rules that are not already
# part of your analysis model so you can review it and add
# the rules manually if needed.
set generateRulesDefinitions false

# Should the File paths be case-insensitive?
# true or false (default)
# This is used when searching for a matching artefact in already-existing artefacts.
set PathsAreCaseInsensitive false

# Should file artefacts declared in the input CSV file be created automatically?
# true (default) or false
set CreateMissingFile true

# Ignore findings for artefacts that are not part of the project (orphan findings)
# When set to 0, the findings are imported and attached to the APPLICATION node instead of the real artefact
# When set to 1, the findings are not imported at all
# (default: 0)
set IgnoreIfArtefactNotFound 0

# Save the total count of orphan findings as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanArteCountId NB_ORPHANS

# The tool version to specify in the generated rules definitions
# The default value is ""
# Note that the toolName is the name of the folder you created
# for your custom Data Provider
set ToolVersion ""

# FileOrganisation defines the layout of the CSV file that is produced by your perl script:
#     header::column: values are referenced from the column header
#     header::line: NOT AVAILABLE
#     alternate::line: NOT AVAILABLE
#     alternate::column: NOT AVAILABLE
set FileOrganisation header::column

# In order to attach a finding to an artefact of type FILE:
#   - Tool (optional) if present it overrides the name of the tool providing the finding
#   - Path has to be the path of the file
#   - Type has to be set to FILE
#   - Line can be either empty or the line in the file where the finding is located
#   - Rule is the rule identifier, can be used as is or translated using Rule2Key
#   - Descr is the description message, which can be empty
#
# In order to attach a finding to an artefact of type FUNCTION:
#   - Tool (optional) if present it overrides the name of the tool providing the finding
#   - Path has to be the path of the file containing the function
#   - Type has to be FUNCTION
#   - If line is an integer, the system will try to find an artefact function
#		at the given line of the file
#   - If no Line or Line is not an integer, Name is used to find an artefact in
# 		the given file having name and signature as found in this column.
# (Line and Name are optional columns)

# Rule2Key contains a case-sensitive list of paired rule IDs:
#     {RuleID KeyName}
# where:
#   - RuleID is the id of the rule as defined in your analysis model
#   - KeyName is the rule ID as written by your perl script in the produced CSV file
# Note: Rules that are not mapped keep their original name. The list of unmapped rules is in the log file generated by your Data Provider.
set Rule2Key {
	{	ExtractedRuleID_1	MappedRuleId_1	 }
	{	ExtractedRuleID_2	MappedRuleId_2	 }
}


====================
=  CSV File Format =
====================

According to the options defined earlier in config.tcl, a valid csv file would be:

Path;Type;Line;Name;Rule;Descr
/src/project/module1/f1.c;FILE;12;;R1;Rule R1 is violated because variable v1
/src/project/module1/f1.c;FUNCTION;202;;R4;Rule R4 is violated because function f1
/src/project/module2/f2.c;FUNCTION;42;;R1;Rule R1 is violated because variable v2
/src/project/module2/f2.c;FUNCTION;;skip_line(int);R1;Rule R1 is violated because variable v2
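
Since FileOrganisation is header::column, values are referenced by column header, so an optional Tool column can also be present to override the tool name per finding; a hypothetical variant:

Tool;Path;Type;Line;Name;Rule;Descr
MyChecker;/src/project/module1/f1.c;FILE;12;;R1;Rule R1 is violated because variable v1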

Working With Paths:
===================

- Path separators are unified: you do not need to worry about handling differences between Windows and Linux
- With the option PathsAreCaseInsensitive, case is ignored when searching for files in the Squore internal data
- Paths known by Squore are relative paths starting at the root of what was specified in the repository connector during the analysis. This relative path is the one used to match a path in a csv file.

Here is a valid example of file matching:
  1. You provide C:\A\B\C\D as the root folder in a repository connector
  2. If C:\A\B\C\D contains E\e.c, then Squore will know the file as E/e.c

  3. You provide a csv file produced on Linux and containing
    /tmp/X/Y/E/e.c as its path; Squore will be able to match it with the known file.

Squore uses the longest possible match.
In case of conflict, no file is found and a message is sent to the log.


===============
= Perl Script =
===============

The perl script will receive as arguments:
	- all parameters defined in form.xml (as -${key} $value)
	- the input directory to process (only if ::NeedSources is set to 1)
	- the location of the output directory where temporary files can be generated
	- the full path of the findings csv file to be generated

For the form.xml and config.tcl we created earlier in this document, the command line will be:
 perl <configuration_folder>/tools/CustomDP/CustomDP.pl -csv /path/to/csv -param MyValue <output_folder> <output_folder>/CustomDP.fdg.csv

Example of perl script:
======================
#!/usr/bin/perl
use strict;
use warnings;
$|=1 ;

my ($csvKey, $csvValue, $paramKey, $paramValue, $output_folder, $output_csv) = @ARGV;

	# Parse input CSV file
	# ...

	# Write results to CSV
	open(CSVFILE, ">" . ${output_csv}) || die "perl: can not write: $!\n";
	binmode(CSVFILE, ":utf8");
	print CSVFILE "Path;Type;Line;Name;Rule;Descr";
	print CSVFILE "/src/project/module1/f1.c;FILE;12;;R1;Rule R1 is violated because variable v1";
	close CSVFILE;

exit 0;

ExcelMetrics Reference

================
= ExcelMetrics =
================

The ExcelMetrics framework is used to extract information from one or more Microsoft Excel files (.xls or .xlsx). A detailed configuration file allows defining how the Excel document should be read and what information should be extracted. This framework allows importing metrics, findings and textual information to existing artefacts or artefacts that will be created by the Data Provider.


============
= form.xml =
============

You can customise form.xml to either:
- specify the path to a single Excel file to import
- specify a pattern to import all Excel files matching this pattern in a directory

In order to import a single Excel file:
=====================================
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="ExcelMetrics" needSources="false">
	<tag type="text" key="excel" defaultValue="/path/to/mydata.xslx" />
</tags>

Notes:
- The excel key is mandatory.

In order to import all files matching a pattern in a folder:
===========================================================
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="ExcelMetrics" needSources="false">
	<!-- Root directory containing Excel files to import-->
	<tag type="text" key="dir" defaultValue="/path/to/mydata" />
	<!-- Pattern that needs to be matched by a file name in order to import it-->
	<tag type="text" key="ext" defaultValue="*.xlsx" />
	<!-- search for files in sub-folders -->
	<tag type="booleanChoice" defaultValue="true" key="sub" />
</tags>


Notes:
- The dir and ext keys are mandatory
- The sub key is optional (its value is set to false if not specified)


==============
= config.tcl =
==============

Sample config.tcl file:
=======================
# The separator to be used in the generated csv file
# Usually \t or ; or ,
set Separator  ";"

# The delimiter used in the input CSV file
# This is normally left empty, except when you know that some of the values in the CSV file
# contain the separator itself, for example:
# "A text containing ; the separator";no problem;end
# In this case, you need to set the delimiter to \" in order for the data provider to find 3 values instead of 4.
# To include the delimiter itself in a value, you need to escape it by duplicating it, for example:
# "A text containing "" the delimiter";no problemo;end
# Default: none
set Delimiter \"

# The path separator in an artefact's path
# in the generated CSV file.
set ArtefactPathSeparator "/"

# Ignore findings for artefacts that are not part of the project (orphan findings)
# When set to 1, the findings are ignored
# When set to 0, the findings are imported and attached to the APPLICATION node
# (default: 1)
set IgnoreIfArtefactNotFound 1

# Save the total count of orphan findings as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanArteCountId NB_ORPHANS

# The list of the Excel sheets to read, each sheet has the number of the first line to read
# A Perl regexp pattern can be used instead of the name of the sheet (the first sheet matching
# the pattern will be considered)
set Sheets {{Baselines 5} {ChangeNotes 5}}

# ######################
# # COMMON DEFINITIONS #
# ######################
#
# - <value> is a list of column specifications whose values will be concatenated. When no column name is present, the
#         text is taken as it appears. Optional sheet name can be added (with ! char to separate from the column name)
#		Examples:
#            - {C:} the value will be the value in column C on the current row
#            - {C: B:} the value will be the concatenation of values found in column C and B of the current row
#            - {Deliveries} the value will be Deliveries
#            - {BJ: " - " BL:} the value will be the concatenation of value found in column BJ,
#            	 string " - " and the value found in column BL of the current row
#            - {OtherSheet!C:} the value will be the value in column C from the sheet OtherSheet on the current row
#
#  - <condition> is a list of conditions. An empty condition is always true. A condition is a column name followed by a colon,
#              optionally followed by a perl regexp. Optional sheet name can be added (with ! char to separate from the column name)
#		Examples:
#       - {B:} the value in column B must be empty on the current row
#       - {B:.+} the value in column B can not be empty on the current row
#       - {B:R_.+} the value in column B is a word starting by R_ on the current row
#       - {A: B:.+ C:R_.+} the value in column A must be empty and the value in column B must contain something and
#       	the column C contains a word starting with R_  on the current row
#       - {OtherSheet!B:.+} the value in column B from sheet OtherSheet on the current row can not be empty.

# #############
# # ARTEFACTS #
# #############
# The variable is a list of artefact hierarchy specification:
# {ArtefactHierarchySpec1 ArtefactHierarchySpec2 ... ArtefactHierarchySpecN}
# where each ArtefactHierarchySpecx is a list of ArtefactSpec
#
# An ArtefactSpec is a list of items, each item being:
# {<(sheetName!)?artefactType> <condition> <value> <parentType>? <parentValue>?}
# where:
#    - <(sheetName!)?artefactType>: allows specifying the type. Optional sheetName can be added (with ! char to separate from the type) to limit
#                                  the artefact search in one specific sheet. When Sheets are given with regexp, the same regexp has to be used
#                                  for the sheetName.
#                                  If the type is followed by a question mark (?), this level of artefact is optional.
#                                  If the type is followed by a plus char (+), this level is repeatable on the next row
#    - <condition>: see COMMON DEFINITIONS
#    - <value>: the name of the artefact to build, see COMMON DEFINITIONS
#
#   - <parentType>: This element is optional. When present, it means that the current element will be attached to a parent having this type
#   - <parentValue>: This is a list like <value> to build the name of the artefact of type <parentType>. If such artefact is not found,
#                  the current artefact does not match
#
# Note: to add metrics at application level, specify an APPLICATION artefact which will match only one line:
#       e.g. {APPLICATION {A:.+} {}} will recognize as application the line having column A not empty.
set ArtefactsSpecs {
	{
		{DELIVERY {} {Deliveries}}
		{RELEASE {E:.+} {E:}}
		{SPRINT {O:SW_Software} {Q:}}
	}
	{
		{DELIVERY {} {Deliveries}}
		{RELEASE {O:SY_System} {Q:}}
	}
	{
		{WP {BL:.+ AF:.+} {BJ: " - " BL:} SPRINT {AF:}}
		{ChangeNotes!TASK {D:(added|changed|unchanged) T:imes} {W: AD:}}
	}
	{
		{WP {} {{Unplanned imes}} SPRINT {AF:}}
		{TASK {BL: D:(added|changed|unchanged) T:imes W:.+} {W: AD:}}
	}
}

# ###########
# # METRICS #
# ###########
# Specification of metrics to be retrieved
# This is a list where each element is:
#  {<artefactTypeList> <metricId> <condition> <value> <format>}
# Where:
#     - <artefactTypeList>: the list of artefact types for which the metric has to be used
#                       each element of the list is (sheetName!)?artefactType where sheetName is used
#                       to restrict search to only one sheet. sheetName is optional.
#     - <metricId>: the name of the MeasureId to be injected into Squore, as defined in your analysis model
#     - <condition>: see COMMON DEFINITIONS above. This is the condition for the metric to be generated.
#     - <value> : see COMMON DEFINITIONS above. This is the value for the metric (can be built from multi column)
#     - <format> : optional, defaults to NUMBER
#                Possible format are:
#							* DATE_FR, DATE_EN for date stored as string
#							* DATE for cell formatted as date
#                           * NUMBER_FR, NUMBER_EN for number stored as string
#							* NUMBER for cell formatted as number
#                           * LINES for counting the number of text lines in a cell
#     - <formatPattern> : optional
#                Only used by the LINES format.
# 				 This is a pattern (can contain perl regexp) used to filter lines to count
set MetricsSpecs {
	{{RELEASE SPRINT} TIMESTAMP {} {A:} DATE_EN}
	{{RELEASE SPRINT} DATE_ACTUAL_RELEASE {} {S:} DATE_EN}
	{{RELEASE SPRINT} DATE_FINISH {} {T:} DATE_EN}
	{{RELEASE SPRINT} DELIVERY_STATUS {} {U:}}
	{{WP} WP_STATUS {} {BO:}}
	{{ChangeNotes!TASK} IS_UNPLAN {} {BL:}}
	{{TASK WP} DATE_LABEL {} {BP:} DATE_EN}
	{{TASK WP} DATE_INTEG_PLAN {} {BD:} DATE_EN}
	{{TASK} TASK_STATUS {} {AE:}}
	{{TASK} TASK_TYPE {} {AB:}}
}

# ############
# # FINDINGS #
# ############
# This is a list where each element is:
#  {<artefactTypeList> <findingId> <condition> <value> <localisation>}
# Where:
#     - <artefactTypeList>: the list of artefact types for which the finding has to be used
#                       each element of the list is (sheetName!)?artefactType where sheetName is used
#                       to restrict search to only one sheet. sheetName is optional.
#     - <findingId>: the name of the FindingId to be injected into Squore, as defined in your analysis model
#     - <condition>: see COMMON DEFINITIONS above. This is the condition for the finding to be triggered.
#     - <value>: see COMMON DEFINITIONS above. This is the value for the message of the finding (can be built from multi column)
#     - <localisation>: this is a <value> representing the localisation of the finding (free text)
set FindingsSpecs {
	{{WP} {BAD_WP} {BL:.+ AF:.+} {{This WP is not in a correct state } AF:.+} {A:}}
}

# #######################
# # TEXTUAL INFORMATION #
# #######################
# This is a list where each element is:
#  {<artefactTypeList> <infoId> <condition> <value>}
# Where:
#     - <artefactTypeList> the list of artefact types for which the info has to be used
#                       each element of the list is (sheetName!)?artefactType where sheetName is used
#                       to restrict search to only one sheet. sheetName is optional.
#     - <infoId> : is the name of the Information to be attached to the artefact, as defined in your analysis model
#     - <condition> : see COMMON DEFINITIONS above. This is the condition for the info to be generated.
#     - <value> : see COMMON DEFINITIONS above. This is the value for the info (can be built from multi column)
set InfosSpecs {
	{{TASK} ASSIGN_TO {} {XB:}}
}

# ########################
# # LABEL TRANSFORMATION #
# ########################
# This is a list value specification for MeasureId or InfoId:
#    <MeasureId|InfoId> { {<LABEL1> <value1>} ... {<LABELn> <valuen>}}
# Where:
#    - <MeasureId|InfoId> : is either a MeasureId, an InfoId, or * if it is available for every measureid/infoid
#    - <LABELx> : is the label to match (can contain perl regexp)
#    - <valuex> : is the value to replace the label with; it has to match the correct format for metrics (no format for infoid)
#
# Note: only metrics whose values are labels in the excel file, or information which needs to be rewritten, need to be described here.
set Label2ValueSpec {
	{
		STATUS {
			{OPENED 0}
			{ANALYZED 1}
			{CLOSED 2}
			{.* -1}
		}
	}
	{
		* {
			{FATAL 0}
			{ERROR 1}
			{WARNING 2}
			{{LEVEL:\s*0} 1}
			{{LEVEL:\s*1} 2}
			{{LEVEL:\s*[2-9]+} 3}
		}
	}
}

Note that a sample Excel file with its associated config.tcl is available in $SQUORE_HOME/addons/tools/ExcelMetrics in order to further explain available configuration options.