Data Providers

This chapter describes the available Data Providers and the default parameters that they accept via the Command Line Interface.

Axivion

Description

Import Findings from Axivion

For more details, refer to http://www.axivion.com.

Usage

Axivion has the following options:

  • CSV File (csv, mandatory): Specify the CSV file which contains the findings results (MISRA, Coding Style…​)

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

  • csv_separator (default: ,): Specify the CSV separator, required on CSV import.

The full command line syntax for Axivion is:

-d "type=bauhaus,csv=[file],logLevel=[text],csv_separator=[text]"

CANoe

Description

Import data from CANoe XML test results

Usage

CANoe has the following options:

  • Result folder(s) (dir, mandatory): Specify the folder(s) containing XML test results files from CANoe.

    CLI usage: semicolon-separated list of results folder(s).

    For example: folder1;folder2;folder3

  • Test path (testPath, default: Tests): Define test path (for example Test/HIL Test), by default the value is Tests.

  • Import Traceability Matrix from vTESTstudio? (import_traceability_matrix, default: false): A traceability matrix file (vtc-tso file) can be generated from vTESTstudio. Using the matrix file will automatically link tests and requirements.

  • Traceability File (vtc-tso) (traceability_file): Traceability File (vtc-tso)

  • Textual information to extract (infoList, default: LIFECYCLE_STATUS?property=TestCase_Lifecycle&map=[Draft:draft]): Specify the list of textual data to extract from the vTESTstudio properties.

    format:

    <INFO_ID>?property=<PROPERTY_NAME>&map=[<REGEX_1>:<TEXT_1>,…​,<REGEX_N>:<TEXT_N>]

    Examples:

    LIFECYCLE_STATUS?property=TestCase_Lifecycle&map=[Draft:draft]

  • Metric information to extract (metricList, default: TEST_LIFECYCLE_STATUS?property=TestCase_Lifecycle&map=[draft:2,under_review…): Specify the list of metric data to extract from the vTESTstudio properties.

    format:

    <METRIC_ID>?property=<PROPERTY_NAME>&map=[<REGEX_1>:<TEXT_1>,…​,<REGEX_N>:<TEXT_N>]

    Examples:

    TEST_LIFECYCLE_STATUS?property=TestCase_Lifecycle&map=[draft:2,under_review:2,approved:1,retired:0]

  • Import Test Execution Plan from vTESTstudio? (import_test_execution_plan, default: false): A test execution plan (vexecplan file) can be generated from vTESTstudio. Using the test execution plan allows requirements to be flagged as "planned" and tests as "executed".

  • Test Execution Plan file (vexecplan) (execution_plan_file): Test Execution Plan file (vexecplan)

  • Reset all "Review" States? (reset_review, default: false): This option resets all "Review" information which has been filled in the GUI. Resetting the review state allows you to start a new analysis review from scratch.

  • Reset all "Overload" States? (reset_overload, default: false): This option resets all "Overload" information which has been filled in the GUI. Resetting the overload state allows you to start a new analysis review from scratch.

  • Import variant data? (create_variant, default: false): Variant data can be imported beside the test results, making it possible to get an overview of the test results per variant. CAREFUL: a variant key must be defined.

  • Variant key (variant_options): The variant key allows variants to be named according to a relevant variant property. For example, Key=ECU will list all the variants and name them according to the value of the "ECU" field.

  • Advanced options (advanced_options, default: false): Advanced options

  • File suffix (suff, default: .xml): Provide the suffix of CANoe test results files.

  • Consider "None" as "Passed" (consider_none_as_passed, default: true): By default, a test case which is stated as "none" (i.e. without a real evaluation step) is considered as "Passed".

  • "Hardware revision" key (sut_mapping_hw_rev, default: Hardware revision): "Hardware revision" key can be found in the node of the CANoe xml results.

  • "Software revision" key (sut_mapping_sw_rev, default: Software revision): "Software revision" key can be found in the node of the CANoe xml results.

  • "Boot Loader revision" key (sut_mapping_boot_rev): "Boot Loader revision" key can be found in the node of the CANoe xml results.

  • Add "Report File Name" to the test execution name (displayReportFileName, default: false): Add "Report File Name" to the test execution name.

    Use this option if you want to distinguish test executions according to their report file name.

    Adding extra information to the name of the artefact affects its unique id and allows you to explicitly differentiate two test executions which would otherwise have the same unique id.

  • Add "Test Unit" to the test execution name (displayTestUnit, default: false): Add "Test Unit" to the test execution name.

    Use this option if you want to distinguish test executions according to their Test Unit.

    Adding extra information to the name of the artefact affects its unique id and allows you to explicitly differentiate two test executions which would otherwise have the same unique id.

  • Add "Variant" to the test execution name (displayVariant, default: false): Add "Variant" to the test execution name.

    Use this option if you want to distinguish test executions according to their variant.

    Adding extra information to the name of the artefact affects its unique id and allows you to explicitly differentiate two test executions which would otherwise have the same unique id.

  • Use last execution verdict? (useLastExecution, default: false): This option allows all executions from CANoe to be compiled into one aggregated run.

    Only the last execution (based on the date of run) will be imported into Squore, in order to ease the review of the test results.

  • Import Test Reassessment (importReassessment, default: true): Use this option to import the reassessment verdict from CANoe. This option is useful for users who overload the test verdict within CANoe.

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

  • createTests (default: YES): Should Test artefacts be created?

The full command line syntax for CANoe is:

-d "type=CANoe,dir=[directory],logLevel=[text],testPath=[text],createTests=[multipleChoice],import_traceability_matrix=[booleanChoice],traceability_file=[file],infoList=[text],metricList=[text],import_test_execution_plan=[booleanChoice],execution_plan_file=[file],reset_review=[booleanChoice],reset_overload=[booleanChoice],create_variant=[booleanChoice],variant_options=[text],advanced_options=[booleanChoice],suff=[text],consider_none_as_passed=[booleanChoice],sut_mapping_hw_rev=[text],sut_mapping_sw_rev=[text],sut_mapping_boot_rev=[text],displayReportFileName=[booleanChoice],displayTestUnit=[booleanChoice],displayVariant=[booleanChoice],useLastExecution=[booleanChoice],importReassessment=[booleanChoice]"

Cantata

Description

Cantata is a Test Coverage tool. It provides an XML output file which can be imported to generate coverage metrics at function level.

For more details, refer to http://www.qa-systems.com/cantata.html.

Usage

Cantata has the following options:

  • Cantata XML results (xml): Specify the path to the XML results file or directory from Cantata 6.2

  • Regex Files (regexFile, default: .xml): Specify a regular expression to find Cantata xml files

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

The full command line syntax for Cantata is:

-d "type=Cantata,xml=[file_or_directory],regexFile=[text],logLevel=[text]"

CheckStyle

Description

CheckStyle is an open source tool that verifies that Java applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.

For more details, refer to http://checkstyle.sourceforge.net/.

Usage

CheckStyle has the following options:

  • CheckStyle results file(s) (xml, mandatory): Point to the XML file or the directory that contains Checkstyle results. Note that the minimum supported version is Checkstyle 5.3.

  • Regex Files (regexFile, mandatory, default: .xml): Specify a regular expression to find checkstyle files

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

The full command line syntax for CheckStyle is:

-d "type=CheckStyle,xml=[file_or_directory],regexFile=[text],logLevel=[text]"

CheckStyle (plugin)

Description

CheckStyle is an open source tool that verifies that Java applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.

For more details, refer to http://checkstyle.sourceforge.net/.

This data provider requires an extra download to extract the CheckStyle binary in <SQUORE_HOME>/addons/tools/CheckStyle_auto/. In this directory, the pattern of the name of the CheckStyle installation directory is checkstyle-${TheCheckstyleVersion}, for example checkstyle-8.37. If there are multiple installation directories, the most recent version will be chosen. For more information, refer to the Installation and Administration Guide’s 'Third-Party Plugins and Applications' section.

Usage

CheckStyle (plugin) has the following options:

  • Configuration file (configFile): A Checkstyle configuration specifies which modules to plug in and apply to Java source files. Modules are structured in a tree whose root is the Checker module. Specify the absolute path of the configuration file. If no custom configuration file is found, a default configuration will be used (this default configuration is for Checkstyle versions from 8.11).

  • Source code folder (customDirs): Specify the folder containing the source files to analyse. If you want to analyse all of the source repositories specified for the project, select None.

  • Xmx (xmx, default: 1024m): Maximum amount of memory allocated to the java process launching Checkstyle.

  • Excluded directory pattern (excludedDirectoryPattern): Java regular expression of directories to exclude from CheckStyle, for example: ^test|generated-sources|.*-report$ or ^lib$

In addition, the following options are available on the command line:

  • defaultConfFile (default: checkstyle_for_squore.xml): If the default configuration file is not checkstyle_for_squore.xml, you can specify its name here. If there is no configuration file defined, then this file will be used.

The full command line syntax for CheckStyle (plugin) is:

-d "type=CheckStyle_auto,configFile=[file],customDirs=[directory],xmx=[text],excludedDirectoryPattern=[text],defaultConfFile=[text]"

CheckStyle for SQALE (plugin)

Description

CheckStyle is an open source tool that verifies that Java applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.

For more details, refer to http://checkstyle.sourceforge.net/.

This data provider requires an extra download to extract the CheckStyle binary in <SQUORE_HOME>/addons/tools/CheckStyle_auto_for_SQALE/. For more information, refer to the Installation and Administration Guide’s 'Third-Party Plugins and Applications' section.

Usage

CheckStyle for SQALE (plugin) has the following options:

  • Configuration file (configFile): A Checkstyle configuration specifies which modules to plug in and apply to Java source files. Modules are structured in a tree whose root is the Checker module. Specify the name of the configuration file only, and the data provider will try to find it in the CheckStyle_auto folder of your custom configuration. If no custom configuration file is found, a default configuration will be used.

  • Xmx (xmx, default: 1024m): Maximum amount of memory allocated to the java process launching Checkstyle.

The full command line syntax for CheckStyle for SQALE (plugin) is:

-d "type=CheckStyle_auto_for_SQALE,configFile=[file],xmx=[text]"

Cobertura format

Description

Cobertura is a free code coverage library for Java. Its XML report file can be imported to generate code coverage metrics for your Java project.

For more details, refer to http://cobertura.github.io/cobertura/.

Usage

Cobertura format has the following options:

  • XML report (xml, mandatory): Specify the path to the XML report generated by Cobertura (or by a tool able to produce data in this format).

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

The full command line syntax for Cobertura format is:

-d "type=Cobertura,xml=[file],logLevel=[text]"

CodeNarc

Description

CodeNarc is an open source tool that verifies that Groovy applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.

For more details, refer to https://codenarc.org/.

Usage

CodeNarc has the following options:

  • CodeNarc results file(s) (xml, mandatory): Point to the XML file or the directory that contains CodeNarc results. Note that the minimum supported version is CodeNarc 0.22.

  • Regex Files (regexFile, mandatory, default: .xml): Specify a regular expression to find CodeNarc files

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

The full command line syntax for CodeNarc is:

-d "type=CodeNarc,xml=[file_or_directory],regexFile=[text],logLevel=[text]"

CodeSonar

Description

Codesonar is a static analysis tool for C and C++ code designed for zero tolerance defect environments. It provides an XML output file which is imported to generate findings.

For more details, refer to http://www.grammatech.com/codesonar.

Usage

CodeSonar has the following options:

  • XML results files (xml, mandatory): Specify the path to the XML results file generated by Codesonar. The minimum version of Codesonar compatible with this data provider is 3.3.

  • Regex Files (regexFile, mandatory, default: .xml): Specify a regular expression to find CodeSonar files

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

The full command line syntax for CodeSonar is:

-d "type=CodeSonar,xml=[file_or_directory],regexFile=[text],logLevel=[text]"

Configuration Checker

Description

Use this tool to check for duplicated files or XML Elements between a custom configuration and the standard configuration.

Usage

Configuration Checker has the following options:

  • Standard Configuration Path (s):

  • Custom Configurations Path (p):

The full command line syntax for Configuration Checker is:

-d "type=conf-checker,s=[directory],p=[directory]"

Coverity

Description

Coverity is a static analysis tool for C, C++, Java and C#. It provides an XML output which can be imported to generate findings.

For more details, refer to http://www.coverity.com/.

Usage

Coverity has the following options:

  • XML results file (xml, mandatory): Specify the path to the XML file containing Coverity results.

The full command line syntax for Coverity is:

-d "type=Coverity,xml=[file]"

Cppcheck

Description

Cppcheck is a static analysis tool for C/C++ applications. The tool provides an XML output which can be imported to generate findings.

For more details, refer to http://cppcheck.sourceforge.net/.

Usage

Cppcheck has the following options:

  • CPPCheck XML results (xml, mandatory): Specify the path to the XML results file or the directory from CPPCheck. Note that the minimum required version of CPPCheck for this data provider is 1.61.

  • Regex Files (regexFile, mandatory, default: .xml): Specify a regular expression to find CPPCheck files

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

The full command line syntax for Cppcheck is:

-d "type=CPPCheck,xml=[file_or_directory],regexFile=[text],logLevel=[text]"

Cppcheck (plugin)

Description

Cppcheck is a static analysis tool for C/C++ applications. The tool provides an XML output which can be imported to generate findings.

For more details, refer to http://cppcheck.sourceforge.net/.

On Windows, this data provider requires an extra download to extract the Cppcheck binary in <SQUORE_HOME>/addons/tools/CPPCheck_auto/ and the MS Visual C++ 2010 Redistributable Package available from http://www.microsoft.com/en-in/download/details.aspx?id=5555. On Linux, you can install the cppcheck application anywhere you want. For more information, refer to the Installation and Administration Guide’s 'Third-Party Plugins and Applications' section.

Usage

Cppcheck (plugin) has the following options:

  • Source code folder (dir): Specify the folder containing the source files to analyse. If you want to analyse all of the source repositories specified for the project, select None.

  • Ignore List (ignores): Specify a semi-colon-separated list of source files or source file directories to exclude from the check. For example: "lib/;folder2/". Leave this field empty to deactivate this option and analyse all files with no exception.

  • Cppcheck addon (addon): Path to a Cppcheck addon. It will be passed to Cppcheck using the --addon= option.

The full command line syntax for Cppcheck (plugin) is:

-d "type=CPPCheck_auto,dir=[directory],ignores=[text],addon=[file]"

Cppcheck CERT (plugin)

Description

Cppcheck is a static analysis tool for C/C++ applications. Note that this DP requires Cppcheck version 2.1 or higher to run. This tool is configured to find CERT violations. The tool provides an XML output which can be imported to generate findings.

For more details, refer to http://cppcheck.sourceforge.net/.

On Windows, this data provider requires an extra download to extract the Cppcheck binary in <SQUORE_HOME>/addons/tools/CPPCheck_auto/ and the MS Visual C++ 2010 Redistributable Package available from http://www.microsoft.com/en-in/download/details.aspx?id=5555. On Linux, you can install the cppcheck application anywhere you want. For more information, refer to the Installation and Administration Guide’s 'Third-Party Plugins and Applications' section.

Usage

Cppcheck CERT (plugin) has the following options:

  • Source code folder (dir): Specify the folder containing the source files to analyse. If you want to analyse all of the source repositories specified for the project, select None.

  • Cppcheck CERT addon (addon, mandatory): Path to CPPCheck CERT python file.

    Example: $SQUORE_HOME/addons/tools/CPPCheck_auto/addons/cert.py

  • Ignore List (ignores): Specify a semi-colon-separated list of source files or source file directories to exclude from the check. For example: "lib/;folder2/". Leave this field empty to deactivate this option and analyse all files with no exception.

The full command line syntax for Cppcheck CERT (plugin) is:

-d "type=CPPCheck_auto_cert,dir=[directory],addon=[file],ignores=[text]"

Cppcheck MISRA (plugin)

Description

Cppcheck is a static analysis tool for C/C++ applications. Note that this DP requires Cppcheck version 2.1 or higher to run. This tool is configured to find MISRA violations. The tool provides an XML output which can be imported to generate findings.

For more details, refer to http://cppcheck.sourceforge.net/.

On Windows, this data provider requires an extra download to extract the Cppcheck binary in <SQUORE_HOME>/addons/tools/CPPCheck_auto/ and the MS Visual C++ 2010 Redistributable Package available from http://www.microsoft.com/en-in/download/details.aspx?id=5555. On Linux, you can install the cppcheck application anywhere you want. For more information, refer to the Installation and Administration Guide’s 'Third-Party Plugins and Applications' section.

Usage

Cppcheck MISRA (plugin) has the following options:

  • Source code folder (dir): Specify the folder containing the source files to analyse. If you want to analyse all of the source repositories specified for the project, select None.

  • Cppcheck MISRA addon (addon, mandatory): Path to CPPCheck MISRA python file.

    Example: $SQUORE_HOME/addons/tools/CPPCheck_auto/addons/misra.py.

  • Ignore List (ignores): Specify a semi-colon-separated list of source files or source file directories to exclude from the check. For example: "lib/;folder2/". Leave this field empty to deactivate this option and analyse all files with no exception.

The full command line syntax for Cppcheck MISRA (plugin) is:

-d "type=CPPCheck_auto_misra,dir=[directory],addon=[file],ignores=[text]"

CPPTest

Description

Parasoft C/C++test is an integrated solution for automating a broad range of best practices proven to improve software development team productivity and software quality for C and C++. The tool provides an XML output file which can be imported to generate findings and metrics.

For more details, refer to http://www.parasoft.com/product/cpptest/.

Usage

CPPTest has the following options:

  • Directory which contains the XML results files (results_dir, mandatory): Specify the path to the CPPTest results directory. This data provider is compatible with files exported from CPPTest version 7.2.10.34 and up.

  • Results file extensions (pattern, mandatory, default: *.xml): Specify the pattern of the results files

The full command line syntax for CPPTest is:

-d "type=CPPTest,results_dir=[directory],pattern=[text]"

CPU Data Import

Description

CPU Data Import provides a generic import mechanism for CPU data from a CSV or Excel file.

Usage

CPU Data Import has the following options:

  • Choose Excel or CSV import (import_type, default: excel): Specify whether the file to import is an Excel or a CSV file.

  • File or Directory (xls_file, mandatory): Specify the location of the Excel or CSV file or directory containing CPU information.

  • Sheet Name (xls_sheetname): Specify the name of the Excel sheet that contains the CPU list.

  • Excel file regular expression (xlsx_file_pattern, default: *.xlsx$): Specify a regular expression to find Excel files, by default it's *.xlsx$

  • Specify the CSV separator (csv_separator, default: ;): Specify the CSV separator

  • CSV file regular expression (csv_file_pattern, default: *.csv$): Specify a regular expression to find CSV files, by default it's *.csv$

  • CPU Column name (xls_key, mandatory): Specify the header name of the column which contains the CPU key.

  • CPU artefact root path (root_node, default: Resources): Specify the root path in Squore of artefacts extracted from the file.

    By default the root artefact path is Resources

  • Grouping Structure (xls_groups): Artifacts can be grouped by contextual elements of the file, separated by ";".

    For example: "column_name_1=regex1;column_name_2=regex2" results in the Squore path Resources/<value_regex1>/<value_regex2>/MyArt.

  • Filtering (xls_filters): Specify the list of headers used for filtering.

    For example: "column_name_1=regex1;column_name_2=regex2"

  • "CPU Loop Time" Column name (cpu_loop_column_name, default: Total Loop Time [ms]): Specify the column name of the CPU Loop Time (Ex: "Total Loop Time [ms]")

  • "Average Idle Time per loop" Column name (cpu_idle_column_name, default: Average idle Time per loop [ms]): Specify the column name of the Average Idle Time per loop (Ex: "Average idle Time per loop [ms]")

  • "Worst Case Idle Time per loop" Column name (cpu_worst_column_name, default: Worse case idle Time per loop [ms]): Specify the column name of the Worst Case Idle Time per loop (Ex: "Worse case idle Time per loop [ms]")

  • Create an output file (createOutput, default: true): Create an output file

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

The full command line syntax for CPU Data Import is:

-d "type=import_cpu,import_type=[multipleChoice],xls_file=[file_or_directory],xls_sheetname=[text],xlsx_file_pattern=[text],csv_separator=[text],csv_file_pattern=[text],xls_key=[text],root_node=[text],xls_groups=[text],xls_filters=[text],cpu_loop_column_name=[text],cpu_idle_column_name=[text],cpu_worst_column_name=[text],createOutput=[booleanChoice],logLevel=[text]"

CSV Coverage Import

Description

CSV Coverage Import provides a generic import mechanism for coverage results at function level

Usage

CSV Coverage Import has the following options:

  • CSV file (csv, mandatory): Enter the path to the CSV containing the coverage data.

    The expected format of each line contained in the file is PATH;NAME;TESTED_C1;OBJECT_C1;TESTED_MCC;OBJECT_MCC;TESTED_MCDC;OBJECT_MCDC;TCOV_MCC;TCOV_MCDC;TCOV_C1
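
    For illustration, a hypothetical line following this format could be (all names and figures are examples only):

    src/main.c;compute_checksum;8;10;4;6;3;5;66;60;80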

The full command line syntax for CSV Coverage Import is:

-d "type=csv_coverage,csv=[file]"

CSV Findings

Description

CSV Findings is a generic tool that allows importing findings into the project.

Usage

CSV Findings has the following options:

  • CSV File(s) (csv): Specify the path(s) to your CSV file(s) containing findings. To provide multiple files click on '+'. Each line in the file must use the following format and the file should include the following header:

    FILE;FUNCTION;RULE_ID;MESSAGE;LINE;COL;STATUS;STATUS_MESSAGE;TOOL
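
    For illustration, a small hypothetical input file could look like this (the rule id, message and tool name are invented, and the STATUS columns are left empty here):

    FILE;FUNCTION;RULE_ID;MESSAGE;LINE;COL;STATUS;STATUS_MESSAGE;TOOL

    src/main.c;compute_checksum;R_NOGOTO;Use of goto statement;42;7;;;MyChecker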

The full command line syntax for CSV Findings is:

-d "type=csv_findings,csv=[file]"

CSV Import

Description

Imports artefacts, metrics, findings, textual information and links from one or more CSV files. The expected CSV format for each of the input files is described in the user manuals in the csv_import framework reference.

Consult the csv_import Reference section for more details about the expected CSV format.

Usage

CSV Import has the following options:

  • CSV Separator (separator, default: ;): Specify the CSV Separator used in the CSV file.

  • CSV Delimiter (delimiter, default: "): The CSV delimiter is used when the separator appears inside a cell value. If the delimiter itself is used as a character in a cell, it has to be doubled.

    The ' character is not allowed as a delimiter.
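
    For example, with the default separator ';' and the default delimiter '"', a cell value containing both could be written as follows (illustrative content):

    "A message; with a ""quoted"" word";42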

  • Artefact Path Separator (pathSeparator, default: /): Specify the character used as a separator in an artefact's path in the input CSV file.

  • Case-sensitive artefact lookup (pathAreCaseSensitive, default: true): When this option is turned on, artefacts in the CSV file are matched with existing source code artefacts in a case-sensitive manner.

  • Ignore source file path (ignoreSourceFilePath, default: false): When ignoring the source file path, it is your responsibility to ensure that file names are unique in the project.

  • Create missing files (createMissingFile, default: false): Automatically creates the artefacts declared in the CSV file if they do not exist.

  • Ignore finding if artefact not found (ignoreIfArtefactNotFound, default: true): If a finding can not be attached to any artefact then it is either ignored (checked) or it is attached to the project node instead (unchecked).

  • Unknown rule ID (unknownRuleId): this option is deprecated and will be removed in a future release. You should not use it anymore.

  • Measure ID for orphan artifacts count (orphanArteCountId): To save the total count of orphan findings as a metric at application level, specify the ID of the measure to use in your analysis model.

  • Measure ID for unknown rules count (orphanRulesCountId): this option is deprecated and will be removed in a future release. You should not use it anymore.

  • Information ID receiving the list of unknown rules IDs (orphanRulesListId): this option is deprecated and will be removed in a future release. You should not use it anymore.

  • CSV File (csv): Specify the path to the input CSV file containing artefacts, metrics, findings, textual information, links and keys.

  • Metrics CSV File (metrics): Specify the path to the CSV file containing metrics.

  • Infos CSV File (infos): Specify the path to the CSV file containing textual information.

  • Findings CSV File (findings): Specify the path to the CSV file containing findings.

  • Keys CSV File (keys): Specify the path to the CSV file containing artefact keys.

  • Links CSV File (links): Specify the path to the CSV file containing links.

  • Reports artefacts mapping problem as (level, default: info): When an artefact referenced in the CSV file cannot be found in the project, the problem is reported either as an information message or as a warning.

In addition, the following options are available on the command line:

  • inconstantFdgDescr: A list of patterns, separated by ';', corresponding to the inconstant parts of finding descriptions.

The full command line syntax for CSV Import is:

-d "type=csv_import,separator=[text],delimiter=[text],pathSeparator=[text],pathAreCaseSensitive=[booleanChoice],ignoreSourceFilePath=[booleanChoice],createMissingFile=[booleanChoice],ignoreIfArtefactNotFound=[booleanChoice],unknownRuleId=[text],orphanArteCountId=[text],orphanRulesCountId=[text],orphanRulesListId=[text],csv=[file],metrics=[file],infos=[file],findings=[file],keys=[file],links=[file],level=[multipleChoice],inconstantFdgDescr=[text]"

CSV Tag Import

Description

This data provider allows setting values for attributes in the project.

Usage

CSV Tag Import has the following options:

  • CSV file (csv, mandatory): Specify the path to the file containing the metrics.

The full command line syntax for CSV Tag Import is:

-d "type=csv_tag_import,csv=[file]"

ESLint

Description

ESLint is an open source tool that verifies that JavaScript applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.

For more details, refer to https://eslint.org/.

Usage

ESLint has the following options:

  • ESLint results file (xml, mandatory): Point to the XML file that contains ESLint results in Checkstyle format.

The full command line syntax for ESLint is:

-d "type=ESLint,xml=[file]"

FindBugs-SpotBugs

Description

FindBugs (and its successor SpotBugs) is an open source tool that looks for bugs in Java code. It produces an XML result file which can be imported to generate findings.

For more details, refer to http://findbugs.sourceforge.net/.

Usage

FindBugs-SpotBugs has the following options:

  • XML results file (xml, mandatory): Specify the location of the XML file or directory containing FindBugs results. Note that the minimum supported version for FindBugs is 1.3.9, and 3.1.7 to 3.1.12 for SpotBugs.

  • Regex Files (regexFile, mandatory, default: .xml): Specify a regular expression to find FindBugs xml files

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

The full command line syntax for FindBugs-SpotBugs is:

-d "type=Findbugs,xml=[file_or_directory],regexFile=[text],logLevel=[text]"

FindBugs-SpotBugs (plugin)

Description

FindBugs is an open source tool that looks for bugs in Java code. It produces an XML result file which can be imported to generate findings. Note that the data provider requires an extra download to extract the FindBugs binary in [INSTALLDIR]/addons/tools/Findbugs/. You are free to use FindBugs 3.0 or FindBugs 2.0 depending on what your standard is. For more information, refer to the Installation and Administration Manual’s "Third-Party Plugins and Applications" section. This Data Provider also supports SpotBugs (successor to FindBugs), with the same parameters. If you are using SpotBugs, its binary also has to be accessible, in [INSTALLDIR]/addons/tools/Findbugs/.

For more details, refer to http://findbugs.sourceforge.net/.

This data provider requires an extra download to extract the FindBugs or SpotBugs binary in <SQUORE_HOME>/addons/tools/Findbugs_auto/. In this directory, the pattern of the name of the FindBugs-SpotBugs installation directory is findbugs-${TheFindBugsVersion} or spotbugs-${TheSpotBugsVersion}, for example findbugs-3.0.1 or spotbugs-4.2.2. If there are multiple installation directories, the most recent version will be chosen. If there are both FindBugs and SpotBugs installations, the SpotBugs installation will be chosen. For more information, refer to the Installation and Administration Guide’s 'Third-Party Plugins and Applications' section.

Usage

FindBugs-SpotBugs (plugin) has the following options:

  • Classes (class_dir): Specify the folders and/or jar files for your project in classpath format, or point to a text file that contains one folder or jar file per line.

  • Auxiliary Class path (auxiliarypath): Specify a list of folders and/or jars in classpath format, or specify the path to a text file that contains one folder or jar per line. This information will be passed to FindBugs or SpotBugs via the -auxclasspath parameter.

  • Memory Allocation (xmx, default: 1024m): Maximum amount of memory allocated to the java process launching FindBugs or SpotBugs.

The full command line syntax for FindBugs-SpotBugs (plugin) is:

-d "type=Findbugs_auto,class_dir=[file_or_directory],auxiliarypath=[file_or_directory],xmx=[text]"

FxCop

Description

FxCop is an application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements. FxCop generates an XML results file which can be imported to generate findings.

Usage

FxCop has the following options:

  • XML results file (xml, mandatory): Specify the XML file containing FxCop's analysis results. Note that the minimum supported version of FxCop is 1.35.

The full command line syntax for FxCop is:

-d "type=FxCop,xml=[file]"

GCov

Description

GCov is a code coverage program for C applications. GCov generates raw text files which can be imported to generate metrics.

For more details, refer to http://gcc.gnu.org/onlinedocs/gcc/Gcov.html.

Usage

GCov has the following options:

  • Directory containing results files (dir, mandatory): Specify the path of the root directory containing the GCov results files.

  • Results files extension (ext, mandatory, default: *.c.gcov): Specify the file extension of GCov results files.

The full command line syntax for GCov is:

-d "type=GCov,dir=[directory],ext=[text]"

Generic Findings XML Import

Description

Generic Findings XML Import

Usage

Generic Findings XML Import has the following options:

  • XML File (xml, mandatory): Specify the XML file which contains the findings results (MISRA, Coding Style…​)

  • "Issue" mapping definition (issue):

  • "Rule Id" mapping definition (id_rule):

  • "Message" mapping definition (message):

  • "File" mapping definition (file):

  • "Line" mapping definition (line):

  • "Justification" mapping definition (justification):

The full command line syntax for Generic Findings XML Import is:

-d "type=findings_xml,xml=[file],issue=[text],id_rule=[text],message=[text],file=[text],line=[text],justification=[text]"

GitHub Issues

Description

This Data Provider extracts tickets and their attributes from GitHub Issues to create ticket artefacts in your project.

For more details, refer to https://github.com/features/issues.

The extracted JSON from GitHub is then passed to the Ticket Data Import Data Provider (described in Ticket Data Import).

Usage

GitHub Issues has the following options:

  • GitHub REST API URL (url, mandatory): The URL used to connect to your GitHub REST API (e.g. https://api.github.com/repos/OWNER/REPO)

  • GitHub token (token): Specify your GitHub token. This token is used in Bearer authentication type.

  • Filter query (query, default: state=all): Filter query; see the GitHub API documentation for available filters: https://docs.github.com/en/rest/issues/issues. For example, to retrieve open issues with the "bug" and "ui" labels, enter the query "state=open&label=bug,ui". The query must be encoded. The "&" used for parameter separation does not need to be encoded, but spaces and other characters must be encoded.

  • Number of queried tickets (max_results, default: -1): Maximum number of queried tickets returned by the query (default is -1, meaning 'retrieve all tickets').

  • Use a proxy (useProxy, default: false): If Squore is behind a proxy and needs to access the outside, you can configure the different properties by selecting the checkbox.

  • Host name (proxyHost): Configure the host name (it's the same for http or https protocol). Example: http://my-company-proxy.com

  • Port (proxyPort): Configure the port (it's the same for http or https protocol). Example: 2892

  • Proxy User (proxyUser): Configure the user if authentication is required

  • Proxy User Password (proxyPassword): Configure the user password if authentication is required

  • Enhancement Pattern (enhancement_def, default: ["enhancement", "code cleanup", "refactoring"]): Specify the pattern applied to define tickets as enhancements. This field accepts a regular expression to match one or more paths with a list of possible values.

  • Defect Pattern (bug_def, default: ["bug", ">bug", "backport"]): Specify the pattern applied to define tickets as defects. This field accepts a regular expression to match one or more paths with a list of possible values.

  • Open Ticket Pattern (todo_def, default: ["status:approved", "status:confirmed", "status:work-in-progress"]): Specify the pattern applied to define tickets as open. This field accepts a regular expression to match one or more column headers or paths with a list of possible values.

  • Grouping Structure (group_def, default: ["docs", "tests", "metrics", "design", "ci", "automation", "packaging", "se…): Specify the paths for the Grouping Structure, separated by ";".

    For example: "path_1=regex1;path_2=regex2"

  • In Development Ticket Pattern (r_d_progress_def, default: ["status:work-in-progress"]): Specify the pattern applied to define tickets as in development. This field accepts a regular expression to match one or more paths with a list of possible values.

  • Fixed Ticket Pattern (v_v_progress_def, default: ["status:verification"]): Specify the pattern applied to define tickets as fixed. This field accepts a regular expression to match one or more paths with a list of possible values.

In addition, the following options are available on the command line:

  • endPoint (default: /issues): The end point, most probably "issues", which is set by default.

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO.

The full command line syntax for GitHub Issues is:

-d "type=github_issues,url=[text],endPoint=[text],token=[token],query=[text],max_results=[text],useProxy=[booleanChoice],proxyHost=[text],proxyPort=[text],proxyUser=[text],proxyPassword=[password],enhancement_def=[text],bug_def=[text],todo_def=[text],group_def=[text],r_d_progress_def=[text],v_v_progress_def=[text],logLevel=[text]"

Import Data

Description

Import data by using an Excel, CSV, JSON, XML or Text file. For CSV, JSON, XML and Text formats, only UTF-8 encoding is supported. This data provider uses specific operations to find, extract or modify data.

All operations are described in Import Data documentation.

Every data provider created using the Data Provider Editor is derived from this data provider.

This section describes all available operations. At the end of the section, an example of using a root path in an XML or JSON context is given.

<OPERATION> means any operation described in this section.

The following operations are available in the Data Provider Editor wizard.

  • CONSTANT defines a string; it has to be enclosed between apostrophes.

    • Example:

      • txt('MY_ID','FOLDER')

  • loc (<LOC_VALUE>) defines the value of a location.

    • <LOC_VALUE> can be a column name (CSV or Excel), a path (JSON, XML) or a regex group (Text).

    • Examples:

      • loc(name)

      • loc(tests/name/time)

  • concat (<OPERATION>,<OPERATION>,…,use_default_value('KEEP_VALUE'|'DEFAULT_VALUE','<DEFAULT_VALUE>')) allows you to concatenate constant strings and values.

    • The operation use_default_value('KEEP_VALUE'|'DEFAULT_VALUE','<DEFAULT_VALUE>') is optional. By default, if the value of a single operation is empty or null then all other operations will be ignored.

    • Examples:

      • concat(loc(Name),' ',loc(Description))

        • ' ' designates the constant space separation between the two location paths loc(Name) and loc(Description).

      • concat(loc(Name),' ',loc(Description),use_default_value('KEEP_VALUE'))

        • If loc(Name) or loc(Description) are empty, no value will be generated.

      • concat(loc(Name),' ',loc(Description),use_default_value('DEFAULT_VALUE','no empty string'))

        • If loc(Name) or loc(Description) are empty, a new string will be produced with 'no empty string' in place of empty values.

  • map (loc(<LOC_VALUE>),<OPERATION>:<OPERATION>,…,use_default_value('KEEP_VALUE'|'DEFAULT_VALUE','<DEFAULT_VALUE>')) allows you to replace the value of a location with another value resulting from an operation.

    • The operation use_default_value('KEEP_VALUE'|'DEFAULT_VALUE','<DEFAULT_VALUE>') is optional. By default, if the value is not the expected one (null or does not match), it will be ignored.

    • Examples:

      • map(loc(My Date),'2019-02-03':'My new Value')

        • The result will be "My new Value" for all values in "My Date" location that are equal to "2019-02-03".

      • map(loc(My Date),'2019-02-03':'My new Value',use_default_value('KEEP_VALUE'))

        • The result will be "My new Value" for all values in "My Date" location that are equal to "2019-02-03", all others keep their value.

      • map(loc(My Date),'2019-02-03':'My new Value',use_default_value('DEFAULT_VALUE',"For other"))

        • The result will be "My new Value" for all values in "My Date" location that are equal to "2019-02-03", all others will be "For other".

Operations that are not yet available in the Data Provider Editor wizard.

  • extract (<OPERATION>,'<REGEX>') extracts a value from an operation.

    • <REGEX> is a constant and defines the part that will be removed.

    • Examples:

      • txt('MY_ID',extract('2019-02-03','(-[0-9]*)'))

        • The result will be 2019.

  • match (<OPERATION>,'<REGEX>') checks if a value from an operation matches a regular expression.

    • <REGEX> is a constant.

    • Example:

      • match('2019-02-03','(-[0-9]*)')

        • The result will be true so the value 2019-02-03 will be kept.

  • format (<OPERATION>,'<DATE_PATTERN>') transforms a string date into a timestamp.

    • <DATE_PATTERN> is a constant based on SimpleDateFormat Java class specifications.

    • Example:

      • format('2019-02-03','yyyy-MM-dd')

        • The result will be 1549148400000.

Numerical operations can be applied to values. Both operations have to return a numeric value to calculate the new value.

  • add (<OPERATION>,<OPERATION>) is the addition operation.

    • Example:

      • add('18','3')

        • The result will be 21.0.

  • sub (<OPERATION>,<OPERATION>) is the subtraction operation.

    • Example:

      • sub('18','3')

        • The result will be 15.0.

  • div (<OPERATION>,<OPERATION>) is the division operation.

    • Example:

      • div('18','3')

        • The result will be 6.0.

  • mult (<OPERATION>,<OPERATION>) is the multiplication operation.

    • Example:

      • mult('18','3')

        • The result will be 54.0.

All fields can have an error management operation.

  • error_mgmt ('DO_NOTHING'): the data value will be ignored and an INFO level message will be written to the Data Provider log file.

    • Example:

      • art_name(loc(ID),error_mgmt('DO_NOTHING'))

  • error_mgmt ('BUILD_ERROR'): the build will be in error.

    • Example:

      • fdg_loc(loc(line),error_mgmt('BUILD_ERROR'))

  • error_mgmt ('BUILD_WARN'): the build will be in warning.

    • Example:

      • hierarchy(art_type('MY_FOLDER'),art_name(loc(Area)),error_mgmt('BUILD_WARN'))

  • error_mgmt ('NEW_FINDING','<FINDING_ID>'): creates a new finding. <FINDING_ID> is a constant and the finding id.

    • Example:

      • txt('CATEGORY',loc(category),error_mgmt('NEW_FINDING','R_CAT_EMPTY'))

Root path examples:

  • Example XML: alltickets represents the ticket array for the two XML examples below:

<alltickets>

  <ticket id="ID_1" description="This is a description" name="My First Issue" />

  <ticket id="ID_2" description="This is a description" name="My Second Issue" />

</alltickets>

<alltickets>

  <ticket>

    <id>"ID_1"</id>

    <description>"This is a description"</description>

    <name>"My First Issue"</name>

  </ticket>

  <ticket>

    <id>"ID_2"</id>

    <description>"This is a description"</description>

    <name>"My Second Issue"</name>

  </ticket>

</alltickets>

  • Example Json: alltickets represents the ticket array

{"alltickets" : [ {

      "ticket": {

        "id": "ID_1"

        "description": "This is a description",

        "name": "My First Issue" } },

    { "ticket": {

        "id": "ID_2"

        "description": "This is a description",

        "name": "My Second Issue"

      } }] }

  • Example Json: the root_path is empty

[ {"ticket": {

      "id": "ID_1",

      "description": "This is a description",

      "name": My First Issue }},

  { "ticket": {

      "id": "ID_2",

      "description": "This is a description",

      "name": My Second Issue}} ]

Usage

Import Data has the following options:

  • Syntax Version (syntax_version, default: 0.0): The current version is 1.0.

  • Choose file type (import_type, default: excel): Specify whether the file is an Excel, CSV, JSON, XML or Text file.

  • Input file(s) (input_file, mandatory): Specify the location of the file or directory containing Excel, CSV, JSON, XML or Text file(s) to parse.

  • Excel file regular expression (xlsx_file_pattern, default: *.xlsx$): Specify a regular expression to find Excel files.

  • CSV file regular expression (csv_file_pattern, default: *.csv$): Specify a regular expression to find CSV files.

  • JSON file regular expression (json_file_pattern, default: *.json$): Specify a regular expression to find JSON files.

  • XML file regular expression (xml_file_pattern, default: *.xml$): Specify a regular expression to find XML files.

  • Root path (root_path): Defines the root element used to retrieve the list of artifacts in the file; required on JSON or XML import.

    It can be empty for a JSON file if the first element is an array.

    Examples are available in Import Data documentation.

  • Text regex (txt_regex): Define a regular expression that will split the lines of a file. Each group of the expression will correspond to a CSV header. The regular expression must be compatible with Java specifications.
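
    For example (a purely hypothetical pattern), lines such as "2019-02-03 ERROR Disk full" could be split into three columns with the expression: (\S+) (\S+) (.*)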

  • Sheet name (sheetname, mandatory): Sheet name to read data from, required on Excel import.

  • First Line Data index (initial_row): Specify the line index where the first data are defined.

    Note: Indexes start at value '0', e.g. the 4th line has index 3.

  • First Column Data index (initial_column): Specify the column index where the first data are defined.

    Note: Indexes start at value '0', e.g. the 4th column has index 3.

  • Specify the CSV separator (csv_separator, default: ;): Specify the CSV separator, required on CSV import.

  • With header (with_header, default: true): Specify if a CSV or Excel file begins with a header line. By default, it is true.

  • Artifact filters (filter_list): Artifacts complying with the provided filters are kept.

    filter (loc(…​),'<REGEX>')

    • loc(…​) defines the value of a location. For more information, see Import Data documentation.

    • '<REGEX>' is a constant operation.

    • Filters should be separated by semicolon on command line.

    Examples:

    • filter(loc(Name),'^ST*') Only create artifacts for which location 'Name' starts with 'ST'

    • filter(loc(Name),'^ST*');filter(loc(Region),'Europe') Same as before, but restrict to artifacts where location 'Region' equals 'Europe'

    Note: Not available on DP Editor.

  • Artifact type (artefact_type): The artifact type used by the Squore analysis model.

    art_type (<CONSTANT>)

    • <CONSTANT> is a constant operation; the string has to be enclosed between apostrophes.

    Examples:

    • art_type('INTEGRATION_TEST')

    • art_type('RESOURCE')

    • art_type('TICKET')

  • Artifact name (artefact_name): Artifact name as displayed in Squore.

    art_name (<OPERATION>)

    • <OPERATION> means all operations described in Import Data documentation.

    Examples:

    • art_name(loc(ID))

    • art_name(concat('T_',loc(Name)))

    • art_name(concat(loc(ticket/name),' ',loc(ticket/description)))

  • Artifact hierarchy (artefact_hierarchy): Specify the path in Squore of artifacts extracted from the file.

    If not used, artifacts extracted from the file will be directly added to the Application artifact.

    hierarchy (art_type(<CONSTANT>),art_name(<OPERATION>),use_default_value('DEFAULT_VALUE',<CONSTANT_OPTIONAL>),path_sep(<CONSTANT>))

    • art_type(<CONSTANT>) is optional. The configuration is the same as for the "Artifact type" field.

    • art_name(<OPERATION>) is mandatory. The configuration is the same as for the "Artifact name" field.

    • use_default_value('DEFAULT_VALUE',<CONSTANT_OPTIONAL>) is optional. Configure the default hierarchy name if the location operation is empty; by default the value is "Unknown".

    • path_sep(<CONSTANT>) is optional. This field can be used to create artifact hierarchy from the art_name value and a separator.

    • Hierarchies should be separated by semicolon on command line.

    Examples:

    • hierarchy(Area)

    • hierarchy(art_type('MY_FOLDER'),art_name(loc(Area)))

    • hierarchy(art_type('MY_FOLDER'),art_name(loc(Area)),use_default_value('DEFAULT_VALUE','My Unknown Value'))

    • hierarchy(art_name(map(match(Area,'^A'):'Area A',match(Area,'^B'):'Area B')))

    • hierarchy(art_type('MY_FOLDER'),art_name('loc(Area)'),path_sep(':'))

  • Artifact unique ID (artefact_uid): This is the artifact unique ID, to be used by links, from this Data Provider or another Data Provider. Optional unless you want to use links to these artifacts.

    art_id (<OPERATION>)

    • <OPERATION> means all operations described in Import Data documentation.

    Examples:

    • art_id(loc(ticket/id))

    • art_id(concat(loc(ticket/name),' ',loc(ticket/description)))

    • art_id(loc(ID))

  • Artifact keys (artefact_keys): Artifact keys to be used to link artifacts.

    art_key (<OPERATION>)

    • <OPERATION> means all operations described in Import Data documentation.

    • Keys should be separated by semicolon on command line.

    Examples:

    • art_key(loc(ticket/id))

    • art_key(concat(loc(key),'_',loc(name)))

  • Data to extract (data_list): Three measure types are available:

    txt ('<ID>',<OPERATION>) is the textual measure.

    num ('<ID>',<OPERATION>) is the numerical measure.

    date ('<ID>',<OPERATION>) is the date measure.

    • '<ID>' is a constant and the measure id that will be recorded in Squore. The string has to be enclosed between apostrophes.

    • <OPERATION> means all operations described in Import Data documentation.

    • Measures should be separated by semicolon on command line.

    Examples:

    • txt('CRITICAL_FACTOR_STR',loc(Criticality))

    • num('REQ_STATUS',map(loc(Status),'Proposed':'0','Analyzed':'1','Approved':'2','Implemented':'3','Verified':'4','Postponed':'5','Deleted':'6','Rejected':'7'))

    • date('CLOSURE_DATE',computed_ref(cpt3))

  • Computed operation (computed_list): Computed operations can be used in several other operations.

    computed ('<UNIQUE_NAME>',<OPERATION>)

    • '<UNIQUE_NAME>' is a constant and the name to be referenced; it has to be unique.

    • <OPERATION> means all operations described in Import Data documentation.

    • To reference a computed operation within another operation use computed_ref('<COMPUTED_NAME>')

    • Computed should be separated by semicolon on command line.

    Examples

    • In "Computed operation" field: computed('cpt1' ,format(map(match(loc(state),'Closed|Solution Proposed|Invalid'):loc(sys_updated_on)),'dd.MM.yyyy hh:mm:ss'))

    • In "Data to extract" field: date('CLOSURE_DATE',computed_ref(cpt1) )

    • In "Computed operation" field: computed('cpt1' ,match(loc(state),'Closed|Solution Proposed|Invalid'));computed('cpt2' ,map(computed_ref(cpt1) :loc(sys_updated_on)));computed('cpt3' ,format(computed_ref(cpt2) ,'dd.MM.yyyy hh:mm:ss'))

    • In "Data to extract" field: date('CLOSURE_DATE',computed_ref(cpt3) )

  • Artifact links (artefact_link): Specify how to create links between this artifact and other artifacts.

    link (<LINK_ID>,loc(…​),'IN'|'OUT',<LINK_SEPARATOR>)

    • <LINK_ID> is a constant and the link id that will be recorded in Squore. The string has to be enclosed between apostrophes.

    • loc(…​) defines the value of a location. For more information, see Import Data documentation.

    • 'IN'|'OUT' is the link direction; possible values are 'IN' or 'OUT'. 'OUT' is the default value.

    • <LINK_SEPARATOR> can be specified if a location contains multiple values to be linked. Semicolons and commas have to be prefixed with \

    • Links should be separated by semicolon on command line.

    Examples:

    • link('TESTED_BY',loc(Test))

    • link('IMPLEMENTED_BY',loc(Implements),'IN')

    • link('TESTED_BY',loc(Tests),'\,');link('REFINED_BY',loc(DownLinks),'IN','\,')

    Note: Not available on DP Editor.

  • Extract other artifacts (extract_artifacts): Extract other artifacts to be linked with the main artifact

    sub_artefact (art_name(<OPERATION>),art_type(<CONSTANT>),<HIERARCHIES>,<LINKS>)

    • art_name(<OPERATION>) is mandatory and the configuration is the same as "Artifact name" field

    • art_type(<CONSTANT>) is mandatory and the configuration is the same as "Artifact type" field

    • <HIERARCHIES> defines all the hierarchies for the extracted artifacts, its format is hierarchies (hierarchy(…​),…​). Hierarchy operations should be separated by comma. See "Artifact hierarchy" field configuration for hierarchy operations.

    • <LINKS> defines all links between the extracted artifacts and the current artifact; its format is links (link(…),…). Link operations should be separated by a comma. See the "Artifact links" field configuration for link operations.

    • Extract operations should be separated by semicolon on command line.

    Examples:

    • sub_artefact(art_name(loc(fields/reporter/name)),art_type('TICKET'),links(link('FROMREPORTER',loc(fields/reporter/name),'OUT'))) An artifact with name in location 'fields/reporter/name' will be created with type 'TICKET' and a link.

    • sub_artefact(art_name(loc(fields/reporter/key)),art_type('TICKET'),hierarchies(hierarchy(art_type('TICKET_FOLDER'),art_name('Reporter'),use_default_value('DEFAULT_VALUE','My Default')))) An artifact with name in location 'loc(fields/reporter/key)' will be created with type 'TICKET' and it will be in a defined hierarchy.

    • sub_artefact(art_name(loc(fields/labels/0)),art_type('LABEL'),hierarchies(hierarchy(art_type('LABEL_FOLDER'),art_name('Tickets')),hierarchy(art_type('LABEL_FOLDER'),art_name('Labels'))),links(link('TOLABEL',loc(fields/labels/0),'IN'))) An artifact with name in location 'fields/labels/0' will be created with type 'LABEL', with two hierarchies and a link.

    Note: Not available on DP Editor.

  • Other operation (other_op_list): Currently only the array_data operation is available.

    An array_data operation allows you to extract information from an array in a JSON file. Several measures, links or sub-artifacts can thus result from the data in this array.

    array_data (loc(<ARRAY_LOCATION>),<OPERATION>)

    • <ARRAY_LOCATION> this is the array location in JSON file.

    • <OPERATION> possible operations are txt(…​), num(…​), date(…​), link(…​), sub_artefact(…​)

    • To reference child location, use relative_loc(…​) instead of loc(…​) in operation.

    • array_data operations should be separated by semicolon on command line.

    Examples:

    • array_data(fields/subtasks,link('SUBTASK',relative_loc(key) ,'OUT')) The location is not loc but relative_loc here, the path in json file is fields/subtasks/key.

    • array_data(fields/issuelinks,link(map(relative_loc(type/inward) ,'BAD_WORD':'GOOD_WORD',use_default_value('KEEP_VALUE')),relative_loc(inwardIssue/key) ,'IN')) The relative_loc(type/inward) has path in the json file fields/issuelinks/type/inward and relative_loc(inwardIssue/key) has fields/issuelinks/inwardIssue/key

    • array_data(fields/issuelinks,link(relative_loc(type/outward) ,relative_loc(outwardIssue/key) ,'OUT'))

    Not available on DP Editor.

  • Finding rule (finding_rule): It is used to determine the rule of a finding result.

    fdg_rule (<OPERATION>)

    • <OPERATION> means all operations described in Import Data documentation.

    Example:

    • fdg_rule(loc(source))

  • Artifact binding (finding_art_binding): Two operations are possible to find the artifact of the findings.

    fdg_art_binding (by_kind(<CONSTANT>,<OPERATION>),by_line(<OPERATION>))

    • by_kind(<CONSTANT>,<OPERATION>) is a finding which will be attached to an artifact coming from the source code. <CONSTANT> is the kind that Squore Analyzer determines in the input-artifacts.xml file, it can be 'FILE', 'FOLDER', 'CLASS', or 'FUNCTION'. <OPERATION> means all operations described in Import Data documentation.

    • by_line(<OPERATION>) is optional, it can only be combined with by_kind. <OPERATION> means all operations described in Import Data documentation.

    Examples:

    • fdg_art_binding(by_kind('FILE',loc(/checkstyle/file/name)))

    • fdg_art_binding(by_kind('FILE',loc(/checkstyle/file/name)),by_line(loc(line)))

    fdg_art_binding (by_art_ref(<OPERATION>))

    • by_art_ref(<OPERATION>) is the operation which will allow a finding to be linked to an artifact by its reference (a key or an id). <OPERATION> means all operations described in Import Data documentation.

    Example:

    • fdg_art_binding(by_art_ref(loc(ref)))

  • Finding message (finding_msg): It is the finding message.

    fdg_msg (<OPERATION>)

    • <OPERATION> means all operations described in Import Data documentation.

    Example:

    • fdg_msg(loc(message))

  • Finding location (finding_loc): It is the finding location.

    fdg_loc (<OPERATION>)

    • <OPERATION> means all operations described in Import Data documentation.

    Example:

    • fdg_loc(loc(line))

  • Inconstant pattern (finding_inconstant_pattern): This field is a list of patterns used to exclude strings from the message. Each regular expression must be enclosed in single quotes; patterns are separated by a semicolon on the command line.

  • Rules to ignore (finding_ignore_rule): This field is a list of rules to ignore when processing findings. Each rule must be enclosed in single quotes; rules are separated by a semicolon on the command line.

  • Artifact path is case-sensitive (path_is_case_sensitive, default: true): This option defines whether the search for an artifact path is case-sensitive.

  • Artifact name is case-sensitive (name_is_case_sensitive, default: true): This option defines whether the search for an artifact name is case-sensitive.

  • Ignore artifact path (ignore_src_file_path, default: false): This option allows you to search for an artifact only by its name.

  • Global error management (global_error_mgmt): If the data is missing or non-compliant:

    • error_mgmt ('DO_NOTHING'): the data value will be ignored.

    • error_mgmt ('BUILD_ERROR'): the build will be in error

    • error_mgmt ('BUILD_WARN'): the build will be in warning

    • error_mgmt ('NEW_FINDING','<FINDING_ID>'): to create a new finding. <FINDING_ID> is a constant and the finding id.

  • Date format (date_format, default: yyyy-MM-dd): Formatting the date to match the given pattern. This pattern can be used for all Date metrics and "format(<DATE_PATTERN>)" operation is no longer required. If format is not specified, the following is used by default: dd-MMM-yyyy .

    Date patterns are based on SimpleDateFormat Java class specifications.

    Examples:

    • "dd/mm/yyyy"

    • "yyyy-MM-dd'T'hh:mm:ss'Z'"

  • Log level (logLevel, default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

In addition the following options are available on command line:

  • findings_list: To specify the list of findings to extract from the file.

    finding (<FINDING_ID>,<OPERATOR>,<CONDITIONS>,<FINDING_MSG>)

    • <FINDING_ID> is a constant and the Finding rule that will be recorded in Squore. The string has to be enclosed in apostrophes.

    • <OPERATOR> is the operator to apply to conditions. Possible values are in, notIn, greater, grOrEq, lesser, lsOrEq.

    • <CONDITIONS> the format is finding_cond(loc(columnOrPath),regex) . Multiple conditions can be set and separated by commas.

    • <FINDING_MSG> the finding message is an operation, all operations described in Import Data documentation can be used.

    • Findings should be separated by semicolon on command line.

    Examples:

    • finding('R_MAPPED_STATUS','notIn',finding_cond(loc(Status),'Open|Spec Validation|Estimated|Reopened'),finding_cond(loc(Category),'Bug|Evolution'),concat('The status ',loc(Status),' and for category ',loc(Category),' is not mapped with one of the aggregated status: Open|Spec Validation|Estimated|Reopened.')) The finding R_MAPPED_STATUS will be created if in 'Status' location the value doesn't match condition (Open|Spec Validation|Estimated|Reopened) and in 'Category' location the value doesn't match condition 'Bug|Evolution'.

    • finding('R_MAPPED_STATUS','in',finding_cond(loc(Status),'Open|Spec Validation|Estimated|Reopened'),concat('The status ',loc(Status),' and for category ',loc(Category),' is mapped with one of the aggregated status: Open|Spec Validation|Estimated|Reopened.')) The finding R_MAPPED_STATUS will be created if in 'Status' location the value matches condition 'Open|Spec Validation|Estimated|Reopened'.

    Note: Not available on DP Editor

  • csv_delimiter(default: false): Define whether a CSV string is delimited by a " or a '.

  • prettyXml(default: prettyL): Specify the pretty level to write the output file loaded by Squore, by default it is prettyL.

    • pretty: line break and indentation.

    • prettyL: only line break.

    • no: all on one line.

  • elementSeparator(default: ;): DEPRECATED Parameter used in versions prior to 23.0.0. Not available on DP Editor.

  • concatSeparator(default: ,): DEPRECATED Parameter used in versions prior to 23.0.0. Not available on DP Editor.

  • squore_model: Specify the Squore Model used, this field will be displayed in DP editor.

  • artefact_type_container: DEPRECATED Parameter used in versions prior to 23.0.0. Not available on DP Editor.

  • defaultPath(default: Unknown): DEPRECATED Parameter used in versions prior to 23.0.0. Not available on DP Editor.

  • path_list: DEPRECATED Parameter used in versions prior to 23.0.0. Not available on DP Editor.

  • info_list: DEPRECATED Parameter used in versions prior to 23.0.0. Not available on DP Editor.

  • metric_list: DEPRECATED Parameter used in versions prior to 23.0.0. Not available on DP Editor.

  • date_list: DEPRECATED Parameter used in versions prior to 23.0.0. Not available on DP Editor.

  • squan_output_dir: The configuration ${squanOutputDirectory} can be used.

The full command line syntax for Import Data is:

-d "type=import_generic_data,syntax_version=[multipleChoice],import_type=[multipleChoice],input_file=[file_or_directory],xlsx_file_pattern=[text],csv_file_pattern=[text],json_file_pattern=[text],xml_file_pattern=[text],root_path=[text],txt_regex=[text],sheetname=[text],initial_row=[text],initial_column=[text],csv_separator=[text],with_header=[booleanChoice],filter_list=[text],artefact_type=[text],artefact_name=[text],artefact_hierarchy=[text],artefact_uid=[text],artefact_keys=[text],data_list=[text],computed_list=[text],artefact_link=[text],findings_list=[text],extract_artifacts=[text],other_op_list=[text],finding_rule=[text],finding_art_binding=[text],finding_msg=[text],finding_loc=[text],finding_inconstant_pattern=[text],finding_ignore_rule=[text],path_is_case_sensitive=[booleanChoice],name_is_case_sensitive=[booleanChoice],ignore_src_file_path=[booleanChoice],global_error_mgmt=[text],date_format=[text],logLevel=[text],csv_delimiter=[booleanChoice],prettyXml=[text],elementSeparator=[text],concatSeparator=[text],squore_model=[text],artefact_type_container=[text],defaultPath=[text],path_list=[text],info_list=[text],metric_list=[text],date_list=[text],squan_output_dir=[text]"

JaCoCo

JaCoCo jacoco

Description

JaCoCo is a free code coverage library for Java. Its XML report file can be imported to generate code coverage metrics for your Java project.

For more details, refer to http://www.eclemma.org/jacoco/.

To understand JaCoCo measures

*_STAT metrics match lines.

*_BRANCH metrics are calculated, for an if statement, as 2*n, where 2 represents the if and else branches and n the number of conditions. For example:

  • if (a && b && c) {…​} else {…​} gives 2*3 or 6 branches

  • if (a && b) {…​} gives 2*2 or 4 branches

Usage

JaCoCo has the following options:

  • XML report (xml, mandatory): Specify the path to the XML report generated by JaCoCo. Note that the folder containing the XML file must also contain JaCoCo's report DTD file, available from http://www.eclemma.org/jacoco/trunk/coverage/report.dtd. XML report files are supported from version 0.6.5.

In addition the following options are available on command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for JaCoCo is:

-d "type=Jacoco,xml=[file],logLevel=[text]"

Jira

Description

This Data Provider extracts tickets and their attributes from a Jira instance to create ticket artefacts in your project.

For more details, refer to https://www.atlassian.com/software/jira.

The extracted JSON from Jira is then passed to the Ticket Data Import Data Provider (described in Ticket Data Import).

Usage

Jira has the following options:

  • Jira REST API URL (url, mandatory): The URL used to connect to your Jira instance's REST API (e.g. https://jira.domain.com/rest/api/2)

  • Authentication (useAccountCredentials, default: NO_CREDENTIALS): Possible values for authentication are

    • No credentials : Used when authentication is not required

    • Use my Squore credentials : If the login/password are the same between Squore and Jira

    • Define credentials : To be prompted for login/password

    • Use token : To be prompted for the token to authenticate with Jira

  • Jira User login (login): Specify your Jira User login. The login is used in Basic authentication type.

  • Jira User password (pwd, mandatory): Specify your Jira User password. The password is used in Basic authentication type. This password can be encoded by the Jira Server so as not to be provided in readable form.

  • Jira User token (token, mandatory): Specify your Jira User token. This token is used in Bearer authentication type.

  • JQL Request (jql_request): Specify a JQL request (see JIRA documentation) in order to limit the number of elements sent by the JIRA server.

    For example: project=MyProject .

    This parameter is optional.

  • Number of queried tickets (max_results, mandatory, default: -1): Maximum number of queried tickets returned by the query (default is -1, meaning 'retrieve all tickets').

  • Additional Fields (additional_fields, default: environment,votes,issuelinks): List additional fields to be exported from Jira.

    This field accepts a comma-separated list of field names that are added to the export request URL, for example fixVersions,versions

  • Use a proxy (useProxy, default: false): If Squore is behind a proxy and needs to access the outside, you can configure the different properties by selecting the checkbox.

  • Host name (proxyHost): Configure the host name (it's the same for http or https protocol). Example: http://my-company-proxy.com

  • Port (proxyPort): Configure the port (it's the same for http or https protocol). Example: 2892

  • Proxy User (proxyUser): Configure the user if authentication is required

  • Proxy User Password (proxyPassword): Configure the user password if authentication is required

  • Ticket Name (artefact_name, mandatory, default: ${key}): Specify the pattern used to build the name of the ticket. The name can use any information collected from the JSON file as a parameter.

    Example: ${ID} : ${Summary}

  • Grouping Structure (artefact_groups, default: fields/components[0]/name): Specify the paths for Grouping Structure, separated by ";".

    For example: "path_1=regex1;path_2=regex2;

  • Filtering (artefact_filters, default: fields/issuetype/name=(Task|Bug|Improvement|New Feature)): Specify the list of path for filtering

    For example: "path_1=regex1;path_2=regex2;

  • Todo list regex (in_todo_list, default: fields/status/name=.*): Todo list regex (tickets which fit the regex will be considered as part of the TODO list for the analysis)

  • JSON Root Path (root_path, default: issues): Specify the root path in the JSON file to retrieve issues.

  • Ticket ID (artefact_id, default: id): Specify the path to the field containing the ticket ID.

  • Ticket Type (ticket_type, default: fields/issuetype/name): Specify the path to the field containing the type for the ticket.

  • Ticket UID (artefact_uid, default: JR#${id}): Specify the pattern used to build the ticket Unique ID. The UID can use any information collected from the JSON file as a parameter.

    Example: TK#${ID}

  • Ticket Keys (artefact_keys, default: key): Specify the pattern used to find the ticket keys. The keys can use any information collected from the file, separated by ";".

    Example: ${ID};key

  • Ticket Links (artefact_link, default: type/inward?array=fields/issuelinks,/inwardIssue/key&direction=IN;): Specify the pattern used to find the ticket links. The links can have special syntax, see import_generic_data documentation.

    Example: type/inward?array=fields/issuelinks,/inwardIssue/key&direction=IN;

  • Creation Date Field (creation_date, default: fields/created): Enter the path to the field containing the creation date of the ticket.

    For example: path&format="dd/mm/yyyy" .

    If format is not specified, the following is used by default: dd-MMM-yyyy .

  • Closure Date Field (closure_date, default: fields/resolutiondate): Enter the path to the field containing the closure date of the ticket.

    For example: path&format="dd/mm/yyyy" .

    If format is not specified, the following is used by default: dd-MMM-yyyy .

  • Due Date Field (due_date, default: fields/duedate): Enter the path to the field containing the due date of the ticket.

    For example: path&format="dd/mm/yyyy" .

    If format is not specified, the following is used by default: dd-MMM-yyyy .

  • Last Updated Date Field (last_updated_date, default: fields/updated): Enter the path to the field containing the last updated date of the ticket.

    For example: path&format="dd/mm/yyyy" .

    If format is not specified, the following is used by default: dd-MMM-yyyy .

  • Time Spent (time_spent, default: fields/timespent): Specify the path to the field containing time spent on the issue.

  • Remaining Time (remaining_time, default: fields/timeestimate): Specify the path to the field containing the remaining time for the issue.

  • Original Time Estimate (original_time_estimate, default: fields/timeoriginalestimate): Specify the path to the field containing the original time estimate for the issue.

  • Open Ticket Pattern (definition_open, default: fields/status/name=[To Do|Open|Reopened]): Specify the pattern applied to define tickets as open. This field accepts a regular expression to match one or more column headers or path with a list of possible values.

    Example: Status=[Open|New]

  • In Development Ticket Pattern (definition_rd_progress, default: fields/status/name=[In Progress|In Review]): Specify the pattern applied to define tickets as in development. This field accepts a regular expression to match one or more path with a list of possible values.

    Example: Status=Implementing

  • Fixed Ticket Pattern (definition_vv_progress, default: fields/status/name=[Verified]): Specify the pattern applied to define tickets as fixed. This field accepts a regular expression to match one or more path with a list of possible values.

    Example: Status=Verifying;Resolution=[fixed;removed]

  • Closed Ticket Pattern (definition_close, default: fields/status/name=[Resolved|Closed|Done]): Specify the pattern applied to define tickets as closed. This field accepts a regular expression to match one or more path with a list of possible values.

    Example: Status=Closed

  • Defect Pattern (definition_defect, default: fields/issuetype/name=[Bug]): Specify the pattern applied to define tickets as defects. This field accepts a regular expression to match one or more path with a list of possible values.

    Example: Type=Bug

  • Enhancement Pattern (definition_enhancement, default: fields/issuetype/name=[Improvement|New Feature]): Specify the pattern applied to define tickets as enhancements. This field accepts a regular expression to match one or more path with a list of possible values.

    Example: Type=Enhancement

  • Other Pattern (definition_other): Specify the pattern applied to define tickets of 'Other' types. This means not a defect or an enhancement. This field accepts a regular expression to match one or more column headers with a list of possible values.

    Example: Type=Decision

  • Category (category, default: fields/components[0]/name): Specify the path to the field containing the ticket category.

  • Priority (priority, default: fields/priority/name): Specify the path to the field containing the ticket priority.

  • Severity (severity, default: fields/priority/name): Specify the path to the field containing severity data.

  • Severity Mapping (severity_mapping, default: [Lowest:0,Low:1,Medium:2,High:3,Highest:4]): Specify the mapping used to associate a severity value with a position on the severity scale in the model, where 0 is least critical and 4 is most critical.

  • Status (status, default: fields/status/name): Specify the path to the field containing the status of the ticket.

  • Information Fields (informations, default: fields/environment;fields/votes/votes): Specify a semicolon-separated list of paths to fields you want to extract from the Jira JSON export to be added as textual information for the ticket artefacts.

    For example: fields/fixVersions[0]/name;CUSTOM_VERSION?path=fields/versions[0]/name

  • Issue URL (issue_url, default: ${self}/../../../../../browse/${key}): Specify the pattern used to build the ticket URL. The URL can use any information collected from the file as a parameter.

  • Title (title, default: fields/summary): Specify the path to the field containing the title of the ticket.

  • Description (description, default: fields/description): Specify the path to the field containing the full description of the ticket.

  • Reporter (reporter, default: fields/reporter/displayName): Specify the path to the field containing the reporter of the ticket.

  • Handler (handler, default: fields/assignee/displayName): Specify the path to the field containing the handler of the ticket.

  • Extract other artefacts (extract_artifacts): Extract other artefacts to be linked with the main artefact. Use the following format:

    <PATH>?type=<Artefact type>&hierarchy=<All hierarchy element separate by ','>&isArray=<true or false>&link=<name link>&direction=<IN or OUT>

    Example: fields/labels?type=LABEL&hierarchy=Labels&isArray=true&link=HAS_LABEL&direction=IN

    If the type is not defined, the main artefact type will be used.

    If the hierarchy is not defined, the artefact will be under APPLICATION

    If the isArray is not defined, the default value is false.

    If the direction is not defined, the default value is OUT

    If the link is not defined, the artefact will still be created

    If the field is an object array, you have to enter the property of the object to search for

    Example: fields/fixVersions?type=TICKET_VERSION&hierarchy=Tickets,Versions&isArray=true,name&link=TARGET_VERSION&direction=IN;

  • Additional Metrics Fields (custom_metrics): Specify a semicolon-separated list of paths to fields you want to extract from the Jira JSON export and to be added as extra metrics for the ticket artefacts.

    For example: POINTS?path=fields/customfield_10002;fields/customfield_10003

  • Additional Date Fields (custom_metrics_date): Specify a semicolon-separated list of paths to fields you want to extract from the Jira JSON export and to be added as extra date information.

    For example: CUSTOM_CREATED?path=fields/customfield_10004;fields/customfield_10005

In addition the following options are available on command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for Jira is:

-d "type=jira,url=[text],useAccountCredentials=[multipleChoice],login=[text],pwd=[password],token=[token],jql_request=[text],max_results=[text],additional_fields=[text],useProxy=[booleanChoice],proxyHost=[text],proxyPort=[text],proxyUser=[text],proxyPassword=[password],artefact_name=[text],artefact_groups=[text],artefact_filters=[text],in_todo_list=[text],root_path=[text],artefact_id=[text],ticket_type=[text],artefact_uid=[text],artefact_keys=[text],artefact_link=[text],creation_date=[text],closure_date=[text],due_date=[text],last_updated_date=[text],time_spent=[text],remaining_time=[text],original_time_estimate=[text],definition_open=[text],definition_rd_progress=[text],definition_vv_progress=[text],definition_close=[text],definition_defect=[text],definition_enhancement=[text],definition_other=[text],category=[text],priority=[text],severity=[text],severity_mapping=[text],status=[text],informations=[text],issue_url=[text],title=[text],description=[text],reporter=[text],handler=[text],extract_artifacts=[text],custom_metrics=[text],custom_metrics_date=[text],logLevel=[text]"

JSHint

JSHint jshint

Description

JSHint is an open source tool that verifies that JavaScript applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.

For more details, refer to http://jshint.com/.

Usage

JSHint has the following options:

  • JSHint results file (Checkstyle formatted) (xml, mandatory): Point to the XML file that contains JSHint results in Checkstyle format.

The full command line syntax for JSHint is:

-d "type=JSHint,xml=[file]"

JUnit Format

JUnit Format JUnit

Description

JUnit is a simple framework to write repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks. JUnit XML result files are imported as test artefacts and links to tested classes are generated in the project.

For more details, refer to http://junit.org/.

Usage

JUnit Format has the following options:

  • Results folder (resultDir, mandatory): Specify the path to the folder containing the JUnit results (or results produced by a tool able to generate data in this format). The data provider will parse subfolders recursively. Note that the minimum supported version of JUnit is 4.10.

  • File Pattern (filePattern, mandatory, default: TEST-*.xml): Specify the pattern for files to read reports from.

  • Root Artefact (root, mandatory, default: tests[type=TEST_FOLDER]/junit[type=TEST_FOLDER]): Specify the name and type of the artefact under which the test artefacts will be created.

In addition the following options are available on command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for JUnit Format is:

-d "type=JUnit,resultDir=[directory],filePattern=[text],root=[text],logLevel=[text]"

Klocwork

Klocwork Klocwork

Description

Klocwork is a static analysis tool. Its XML result file can be imported to generate findings.

For more details, refer to http://www.klocwork.com.

Usage

Klocwork has the following options:

  • XML results file (xml, mandatory): Specify the path to the XML results file exported from Klocwork. Note that Klocwork version 9.6.1 is the minimum required version.

  • Use specific ruleset (ruleset, default: klocwork): Specify which ruleset to use.

In addition the following options are available on command line:

  • defaultToFile(default: true): Set this field to false to avoid attaching the results to the file when the function is not found. By default, it's true.

  • generateRelaxedFindings(default: false): Set this field to true to produce findings for ignored states or statuses. By default, it's false.

  • stateToIgnore(default: Fixed): List of states for which a finding does not need to be created, separated by commas.

  • statusToIgnore(default: Ignore,Defer,Not a Problem): List of citingStatus values for which a finding does not need to be created, separated by commas.

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for Klocwork is:

-d "type=klocwork,xml=[file_or_directory],ruleset=[multipleChoice],defaultToFile=[booleanChoice],generateRelaxedFindings=[booleanChoice],stateToIgnore=[text],statusToIgnore=[text],logLevel=[text]"

Klocwork MISRA (deprecated)

Klocwork MISRA Klocwork

This Data Provider is deprecated and will be removed in a future version. You can still use it when building existing projects but it cannot be used in new projects. Consult your server administrator to get more information about which Data Provider should be used instead.

Description

Klocwork is a static analysis tool. Its XML result file can be imported to generate findings.

For more details, refer to http://www.klocwork.com.

Usage

Klocwork MISRA has the following options:

  • XML results file (xml, mandatory): Specify the path to the XML results file exported from Klocwork. Note that Klocwork version 9.6.1 is the minimum required version.

In addition the following options are available on command line:

  • defaultToFile(default: true): Set this field to false to avoid attaching the results to the file when the function is not found. By default, it's true.

  • generateRelaxedFindings(default: false): Set this field to true to produce findings for ignored states or statuses. By default, it's false.

  • stateToIgnore(default: Fixed): List of states for which a finding does not need to be created, separated by commas.

  • statusToIgnore(default: Ignore,Defer,Not a Problem): List of citingStatus values for which a finding does not need to be created, separated by commas.

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for Klocwork MISRA is:

-d "type=Klocwork_misra,xml=[file_or_directory],defaultToFile=[booleanChoice],generateRelaxedFindings=[booleanChoice],stateToIgnore=[text],statusToIgnore=[text],logLevel=[text]"

Memory Data Import

Description

Memory Data Import provides a generic import mechanism for memory data from a CSV or Excel file.

Usage

Memory Data Import has the following options:

  • Choose Excel or CSV import (import_type, default: excel): Specify whether the import is from an Excel or a CSV file.

  • File or Directory (xls_file, mandatory): Specify the location of the Excel or CSV file or directory containing Memory information.

  • Sheet Name (xls_sheetname, mandatory): Specify the name of the Excel sheet that contains the Memory list.

  • Excel file regular expression (xlsx_file_pattern, default: *.xlsx$): Specify a regular expression to find Excel files, by default it's *.xlsx$

  • Specify the CSV separator (csv_separator, default: ;): Specify the CSV separator

  • CSV file regular expression (csv_file_pattern, default: *.csv$): Specify a regular expression to find CSV files, by default it's *.csv$

  • Memory Column name (xls_key, mandatory): Specify the header name of the column which contains the Memory key.

  • Memory artefact root path (root_node, default: Resources): Specify the root path in Squore of artefacts extracted from the file.

    By default the root artefact path is Resources

  • Grouping Structure (xls_groups): Artifacts can be grouped by contextual elements of the file, separated by ";".

    For example: "column_name_1=regex1;column_name_2=regex2; the result in Squore Resources/"value_regex1"/"value_regex2"/MyArt

  • Filtering (xls_filters): Specify the list of Header for filtering

    For example: "column_name_1=regex1;column_name_2=regex2;

  • Memory size column name (memory_size_column_name, default: Total): Specify the header name of the column which contains the memory size.

  • Used memory column name (memory_used_column_name, default: Used): Specify the header name of the column which contains the used memory.

  • Memory type column name (memory_type_column_name, default: Type): Specify the header name of the column which contains the memory type.

  • ROM memory type name (memory_type_rom_name, default: ROM): Specify the name used for ROM memory.

  • RAM memory type name (memory_type_ram_name, default: RAM): Specify the name used for RAM memory.

  • NVM memory type name (memory_type_nvm_name, default: NVM): Specify the name used for NVM memory.

  • Create an output file (createOutput, default: true): Create an output file

In addition the following options are available on command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for Memory Data Import is:

-d "type=import_memory,import_type=[multipleChoice],xls_file=[file_or_directory],xls_sheetname=[text],xlsx_file_pattern=[text],csv_separator=[text],csv_file_pattern=[text],xls_key=[text],root_node=[text],xls_groups=[text],xls_filters=[text],memory_size_column_name=[text],memory_used_column_name=[text],memory_type_column_name=[text],memory_type_rom_name=[text],memory_type_ram_name=[text],memory_type_nvm_name=[text],createOutput=[booleanChoice],logLevel=[text]"

MISRA Rule Checking with QAC

MISRA Rule Checking with QAC QAC

Description

QAC identifies problems in C source code caused by language usage that is dangerous, overly complex, non-portable, difficult to maintain, or simply diverges from coding standards. Its CSV results file can be imported to generate findings.

Usage

MISRA Rule Checking with QAC has the following options:

  • Code Folder (logDir): Specify the path to the folder that contains the annotated files to process.

    For the findings to be successfully linked to their corresponding artefact, several requirements have to be met:

    - The annotated file name should be [Original source file name].txt

    e.g. The annotation of file "controller.c" should be called "controller.c.txt"

    - The annotated file location in the annotated directory should match the associated source file location in the source directory.

    e.g. The annotation for source file "[SOURCE_DIR]/subDir1/subDir2/controller.c" should be located in "[ANNOTATIONS_DIR]/subDir1/subDir2/controller.c.txt"

    The previous comment suggests that the source and annotated directory are different.

    However, these directories can of course be identical, which ensures that locations of source and annotated files are the same.

  • Extension (ext, default: html): Specify the extension used by QAC to create annotated files.

  • Force import of all QAC violations (not only MISRA) (force_all_import, default: false): Force the import of all QAC findings (not only the MISRA violations)

The full command line syntax for MISRA Rule Checking with QAC is:

-d "type=qac_misra_by_logs,logDir=[directory],ext=[text],force_all_import=[booleanChoice]"

PC-lint Plus

PC lint Plus vector logo

Description

PC-lint Plus is a static analysis tool that finds defects in software by analyzing C and C++ source code. This Data Provider imports the violations detected by PC-lint Plus into your project by processing your configuration and results files.

For more details, refer to https://pclintplus.com/.

Usage

PC-lint Plus has the following options:

  • Results File Format (file_format, mandatory): PC-lint Plus output file can be generated using the following lnt configuration:

    • Text:

    • -v // turn off verbosity

    • -width(0) // don't insert line breaks (unlimited output width)

    • -"format=%f(%l): %t %n: %m"

    • -hs1 // The height of a message should be 1

    • Xml:

    • -v // turn off verbosity

    • -width(0) // don't insert line breaks (unlimited output width).

    • +xml(?xml version="1.0" ?) // add version information

    • +xml(doc) // turn on xml escapes; the whole is bracketed with the pair …​

    • -"format=<issue><file line='%l'>%f</file><message id='%n'>%m</message></issue>"

    • -hs1 // The height of a message should be 1 (i.e. don't output the line in error); and Space between messages

  • Xml Results File (results): Specify the path to the PC-lint Plus xml results file.

  • Text Results File (text_file): Specify the path to the PC-lint Plus text results file.

  • Synchronize PC-lint Plus Configuration (update_rules, default: false): Providing the lnt files allows synchronizing the Squore ruleset with regard to what was really activated during PC-lint Plus analysis.

    If checked, only the activated rules from PC-lint Plus are considered while computing the "Rule Standard Compliance". Otherwise, all rules from PC-lint Plus are considered.

  • Configuration Files (.lnt) (config_files, mandatory): Path to PC-lint Plus lnt configuration files.

  • Standards (convert_standard, mandatory): Choose the standards to activate among the available Standards defined in the lnt configuration files.

    When PC-lint Plus reports a violation (e.g.: <message id='586'>…​), you can decide to generate:

    • The standard PC-lint Plus violation: R_PCLINT_586

    • The associated standard rules: "Rule_17-0-5 and Rule_18-0-2" (for MISRA) or "EXP43-C and INT05-C" (for CERT)

    • All rules from all standards.

    In that case, several violations could be generated from the original violation

    Mapping between PC-lint Plus checkers and industrial Standards can be found in the PC-lint Plus documentation.

In addition the following options are available on command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE). The default is INFO

The full command line syntax for PC-lint Plus is:

-d "type=pclint_plus,logLevel=[text],file_format=[multipleChoice],results=[file_or_directory],text_file=[file_or_directory],update_rules=[booleanChoice],config_files=[file_or_directory],convert_standard=[multipleChoice]"

PMD

PMD Pmd

Description

PMD scans Java source code and looks for potential problems like possible bugs, dead code, sub-optimal code, overcomplicated expressions, duplicate code…​ The XML results file it generates is read to create findings.

For more details, refer to http://pmd.sourceforge.net.

Usage

PMD has the following options:

  • XML results file or directory (xml, mandatory): Specify the path to the PMD XML or JSON results file(s) or directory. Note that the minimum supported version of PMD for this data provider is 4.2.5.

  • Regex Files (regexFile, mandatory, default: .xml): Specify a regular expression to find PMD XML files

  • PMD SARIF format results (isSarifFormat): Check the box if the PMD result files are in SARIF format. Leave it unchecked to work with XML files instead.

In addition the following options are available on command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE)

The full command line syntax for PMD is:

-d "type=PMD,xml=[file_or_directory],regexFile=[text],isSarifFormat=[booleanChoice],logLevel=[text]"

PMD (plugin)

PMD (plugin) Pmd

Description

PMD scans Java source code and looks for potential problems like possible bugs, dead code, sub-optimal code, overcomplicated expressions, duplicate code …​ The XML results file it generates is read to create findings.

For more details, refer to http://pmd.sourceforge.net.

This data provider requires an extra download to extract the PMD binary in <SQUORE_HOME>/addons/tools/PMD_auto/. In this directory, the pattern of the name of the PMD installation directory is pmd-bin-${ThePMDVersion}, for example pmd-bin-6.34.0. If there are multiple installation directories, the most recent version will be chosen. For more information, refer to the Installation and Administration Guide’s 'Third-Party Plugins and Applications' section.

Usage

PMD (plugin) has the following options:

  • Source code folder (customDirs): Specify the folder containing the source files to analyse. If you want to analyse all of the source repositories specified for the project, select None.

  • Ruleset file (configFile): Specify the path to the PMD XML ruleset you want to use for this analysis. If you do not specify a ruleset, the default one from INSTALLDIR/addons/tools/PMD_auto will be used.

  • Memory Allocation (xmx, default: 1024m): Maximum amount of memory allocated to the java process launching PMD.

  • PMD SARIF format results (outputSarifFormat): Output results in SARIF instead of XML

The full command line syntax for PMD (plugin) is:

-d "type=PMD_auto,customDirs=[directory],configFile=[file],xmx=[text],outputSarifFormat=[booleanChoice]"

pycodestyle (old pep8)

Description

pycodestyle is a tool to check your Python code against some of the style conventions in PEP 8. Results files created with the old pep8 tool are compatible with this data provider.

For more details, refer to https://pypi.org/project/pycodestyle.

Usage

pycodestyle (old pep8) has the following options:

  • Results file (csv, mandatory): Specify the path to the CSV report file created by pep8. The report must use either the default or the pylint output format.

The full command line syntax for pycodestyle (old pep8) is:

-d "type=pep8,csv=[file]"

pylint

pylint pylint

Description

Pylint is a Python source code analyzer which looks for programming errors, helps enforce a coding standard and sniffs for some code smells (as defined in Martin Fowler’s Refactoring book). Pylint results are imported to generate findings for Python code.

For more details, refer to http://www.pylint.org/.

Usage

pylint has the following options:

  • CSV results file (csv, mandatory): Specify the path to the CSV file containing pylint results. Note that the minimum version supported is 1.1.0.

The full command line syntax for pylint is:

-d "type=pylint,csv=[file]"

pylint (plugin)

pylint (plugin) pylint

Description

Coding Guide for Python Code. Pylint results are imported to produce findings on Python code. This data provider requires having pylint installed on the machine running the analysis and the pylint command to be available in the path. It is known to work with pylint 1.7.0 and may also work with older versions.

Prior to using this repository connector, the path to the pylint command has to be configured in the config.xml file located in the root directory of the Squore server or the Squore client:

<path name="pylint" path="path_to_pylint_executable"/>

Usage

pylint (plugin) has the following options:

  • Source code directory to analyse (dir): Leave this field empty to analyse all sources.

The full command line syntax for pylint (plugin) is:

-d "type=pylint_auto,dir=[directory]"

QAC 8.2

QAC 8.2 qac

Description

QA-C is a static analysis tool for MISRA, CERT, HIS or CWE checking.

Usage

QAC 8.2 has the following options:

  • QAC output file(s) (txt, mandatory): Specify the path(s) to the .tab file(s) to extract findings from. To provide multiple files click on '+'

  • Eliminate duplicated findings (eliminateDuplicate, default: false): When 2 occurrences of the same finding (same rule, same file, same line, same description) are found, only one is reported.

  • C Coding Standards (cCodingStandard, default: misra): Specify the C Coding Standard used

In addition the following options are available on command line:

  • fileName(default: Filename with path): Specify the file name header in the file, by default it is "Filename with path".

  • line(default: Line): Specify the line number header in the file, by default it is "Line".

  • description(default: Message text): Specify the description header in the file, by default it is "Message text".

  • rule(default: Rule): Specify the rule header in the file, by default it is "Rule".

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

  • regexFile(default: .tab$): Specify a regular expression to find QAC tab files

  • fileSeparator(default: \t): Specify the File separator, required on File import

  • default2application(default: true): If this field is "true": when a file is not found for a rule, a finding is added to the Artefact application. By default it is true.

The full command line syntax for QAC 8.2 is:

-d "type=qac,txt=[file_or_directory],eliminateDuplicate=[booleanChoice],cCodingStandard=[multipleChoice],fileName=[text],line=[text],description=[text],rule=[text],logLevel=[text],regexFile=[text],fileSeparator=[text],default2application=[booleanChoice]"

QAC CERT (8.2)

QAC CERT (8.2) qac

Description

QA-C is a static analysis tool for MISRA, CERT, HIS or CWE checking.

Usage

QAC CERT (8.2) has the following options:

  • QAC CERT output file(s) (txt, mandatory): Specify the path(s) to the .tab file(s) to extract findings from. To provide multiple files click on '+'

  • Eliminate duplicated findings (eliminateDuplicate, default: false): When 2 occurrences of the same finding (same rule, same file, same line, same description) are found, only one is reported.

In addition the following options are available on command line:

  • fileName(default: Filename with path): Specify the file name header in the file, by default it is "Filename with path".

  • line(default: Line): Specify the line number header in the file, by default it is "Line".

  • description(default: Message text): Specify the description header in the file, by default it is "Message text".

  • rule(default: Rule): Specify the rule header in the file, by default it is "Rule".

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

  • regexFile(default: .tab$): Specify a regular expression to find QAC tab files

  • fileSeparator(default: \t): Specify the File separator, required on File import

  • default2application(default: true): If this field is "true": when a file is not found for a rule, a finding is added to the Artefact application. By default it is true.

The full command line syntax for QAC CERT (8.2) is:

-d "type=qac_cert,txt=[file_or_directory],eliminateDuplicate=[booleanChoice],fileName=[text],line=[text],description=[text],rule=[text],logLevel=[text],regexFile=[text],fileSeparator=[text],default2application=[booleanChoice]"

QAC CWE

QAC CWE qac

Description

QA-C is a static analysis tool for MISRA, CERT, HIS or CWE checking.

Usage

QAC CWE has the following options:

  • QAC CWE output file(s) (txt, mandatory): Specify the path(s) to the .tab file(s) to extract findings from. To provide multiple files click on '+'

  • Eliminate duplicated findings (eliminateDuplicate, default: false): When 2 occurrences of the same finding (same rule, same file, same line, same description) are found, only one is reported.

In addition the following options are available on command line:

  • fileName(default: Filename with path): Specify the file name header in the file, by default it is "Filename with path".

  • line(default: Line): Specify the line number header in the file, by default it is "Line".

  • description(default: Message text): Specify the description header in the file, by default it is "Message text".

  • rule(default: Rule): Specify the rule header in the file, by default it is "Rule".

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

  • regexFile(default: .tab$): Specify a regular expression to find QAC tab files

  • fileSeparator(default: \t): Specify the File separator, required on File import

  • default2application(default: true): If this field is "true": when a file is not found for a rule, a finding is added to the Artefact application. By default it is true.

The full command line syntax for QAC CWE is:

-d "type=qac_cwe,txt=[file_or_directory],eliminateDuplicate=[booleanChoice],fileName=[text],line=[text],description=[text],rule=[text],logLevel=[text],regexFile=[text],fileSeparator=[text],default2application=[booleanChoice]"

QAC HIS

QAC HIS qac

Description

QA-C is a static analysis tool for MISRA, CERT, HIS or CWE checking.

Usage

QAC HIS has the following options:

  • QAC HIS output file(s) (txt, mandatory): Specify the path(s) to the .tab file(s) to extract findings from. To provide multiple files click on '+'

  • Eliminate duplicated findings (eliminateDuplicate, default: false): When 2 occurrences of the same finding (same rule, same file, same line, same description) are found, only one is reported.

In addition the following options are available on command line:

  • fileName(default: Filename with path): Specify the file name header in the file, by default it is "Filename with path".

  • line(default: Line): Specify the line number header in the file, by default it is "Line".

  • description(default: Message text): Specify the description header in the file, by default it is "Message text".

  • rule(default: Rule): Specify the rule header in the file, by default it is "Rule".

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

  • regexFile(default: .tab$): Specify a regular expression to find QAC tab files

  • fileSeparator(default: \t): Specify the File separator, required on File import

  • default2application(default: true): If this field is "true": when a file is not found for a rule, a finding is added to the Artefact application. By default it is true.

The full command line syntax for QAC HIS is:

-d "type=qac_his,txt=[file_or_directory],eliminateDuplicate=[booleanChoice],fileName=[text],line=[text],description=[text],rule=[text],logLevel=[text],regexFile=[text],fileSeparator=[text],default2application=[booleanChoice]"

QAC MISRA (8.2)

QAC MISRA (8.2) qac

Description

QA-C is a static analysis tool for MISRA, CERT, HIS or CWE checking.

Usage

QAC MISRA (8.2) has the following options:

  • QAC MISRA output file(s) (txt, mandatory): Specify the path(s) to the .tab file(s) to extract findings from. To provide multiple files click on '+'

  • Eliminate duplicated findings (eliminateDuplicate, default: false): When 2 occurrences of the same finding (same rule, same file, same line, same description) are found, only one is reported.

In addition the following options are available on command line:

  • fileName(default: Filename with path): Specify the file name header in the file, by default it is "Filename with path".

  • line(default: Line): Specify the line number header in the file, by default it is "Line".

  • description(default: Message text): Specify the description header in the file, by default it is "Message text".

  • rule(default: Rule): Specify the rule header in the file, by default it is "Rule".

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

  • regexFile(default: .tab$): Specify a regular expression to find QAC tab files

  • fileSeparator(default: \t): Specify the File separator, required on File import

  • default2application(default: true): If this field is "true": when a file is not found for a rule, a finding is added to the Artefact application. By default it is true.

The full command line syntax for QAC MISRA (8.2) is:

-d "type=qac_misra,txt=[file_or_directory],eliminateDuplicate=[booleanChoice],fileName=[text],line=[text],description=[text],rule=[text],logLevel=[text],regexFile=[text],fileSeparator=[text],default2application=[booleanChoice]"

Rational Logiscope

Rational Logiscope Logiscope

Description

The Logiscope suite allows the evaluation of source code quality in order to reduce maintenance cost, error correction or test effort. It can be applied to verify C, C++, Java and Ada languages and produces a CSV results file that can be imported to generate findings.

For more details, refer to http://www.kalimetrix.com/en/logiscope.

Usage

Rational Logiscope has the following options:

  • RuleChecker results file (csv, mandatory): Specify the path to the CSV results file from Logiscope.

The full command line syntax for Rational Logiscope is:

-d "type=Logiscope,csv=[file]"

Requirement ASIL via Excel Import

Description

Requirement ASIL via Excel Import

Usage

Requirement ASIL via Excel Import has the following options:

  • Input file (input_file, mandatory): Specify the location of the Excel file or directory containing information.

  • Sheetname (sheetname, mandatory): Sheetname to read data from

  • Artefact name (artefact_name, mandatory): Artefact name as displayed in Squore. Examples:

    • ${ID}

    • T_${Name}

    • ${Name} ${Descr}

    Note:${NAME} designates the column called NAME

  • Path to the artefact (path_list): Optional. If not used, artefacts extracted from the Excel file will be directly added to the Squore root.

    To specify the path in Squore of artefacts extracted from the Excel file, using the following format:

    <COLUMN_NAME>?map=[<REGEX_1>:<GROUP_NAME_1>,…​,<REGEX_N>:<GROUP_NAME_N>]&groupByDate=<YES>&format=<dd-mm-YYYY> Examples:

    • Area

    Artefacts will be regrouped by the value found in the 'Area' column

    • Area?map=[A*:Area A,B*:Area B]

    Artefacts will be regrouped into two groups:'Area A', for all values of 'Area' column starting with letter 'A', and 'Area B' for letter 'B'.

    • Started on?groupByDate=Yes&format=YYYY/mm/dd

    Artefacts will be regrouped by the date found in column 'Started on', using the format 'YYYY/mm/dd'

    Note:Date patterns are based on SimpleDateFormat Java class specifications.

  • Textual data to extract (info_list): Optional.

    To specify the list of textual data to extract from the Excel file, using the following format:

    <METRIC_ID>?column=<COLUMN_NAME>&map=[<REGEX_1>:<TEXT_1>,…​,<REGEX_N>:<TEXT_N>] Examples:

    • ZONE_ID?column=Zone

    Textual data found in column 'Zone' will be associated to metric ZONE_ID

    • ZONE_ID?column=Zone;OWNER?column=Belongs to

    Textual data found in columns 'Zone' and 'Belongs to' will be associated to metric ZONE_ID and OWNER respectively

    • ORIGIN?column=Comes from,map=[Cust*:External,Sub-contractor*:External,Support:Internal,Dev:Internal]

    Textual data found in column 'Comes from' will be associated to metric ORIGIN:

    • With value 'External' if the column starts with 'Cust' or 'Sub-contractor'

    • With value 'Internal' if the column equals 'Support' or 'Dev'

    • Started on?groupByDate=Yes&format=YYYY/mm/dd

    Artefacts will be regrouped by the date found in column 'Started on', using the format 'YYYY/mm/dd'

  • Numerical metrics to extract (metric_list): Optional.

    To specify the list of numerical data to extract from the Excel file, using the following format:

    <METRIC_ID>?column=<COLUMN_NAME>&extract=<REGEX_EXTRACT>&map=[<REGEX_1>:<VALUE_1>,…​,<REGEX_N>:<VALUE_N>] Examples:

    • PRIORITY?column=Priority level

    Numerical values found in column 'Priority level' will be associated to metric PRIORITY

    • SEVERITY?column=Severity level,extract=S_

    Numerical values found in column 'Severity level' will be associated to metric SEVERITY, after having extracted (removed) the string 'S_', because in this example, column 'Severity level' contains for example 'S_1', 'S_4', etc., and we want to obtain '1', '4', etc.

    • STATUS?column=State&map=[passed:0,Passed:0,Pass:0,*nconclusive*:1,failed:2,Failed:2,FAIL:2]

    Textual values found in column 'State' will be mapped to numerical values using these rules:

    • 0 for values containing 'passed', 'Passed', 'Pass'

    • 1 for values containing 'nconclusive'

    • 2 for values containing 'failed', 'Failed', 'FAIL'

  • Artefact unique ID (artefact_uid): Optional unless you want to use links to these artefacts.

    This is the artefact unique ID, to be used by links, from this Data Provider, or another Data Provider. Examples:

    • ${ID}

    • T_${Name}

    • ${Name} ${Descr}

    Note:${NAME} designates the column called NAME

  • Excel file regular expression (xlsx_file_pattern, default: *.xlsx$): Specify a regular expression to find Excel files, by default it's *.xlsx$

In addition the following options are available on command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for Requirement ASIL via Excel Import is:

-d "type=import_req_asil,input_file=[file_or_directory],sheetname=[text],artefact_name=[text],path_list=[text],info_list=[text],metric_list=[text],artefact_uid=[text],xlsx_file_pattern=[text],logLevel=[text]"

Requirement Data Import

Description

Requirement Data Import provides a generic import mechanism for requirements from a file.

Requirement Data Import provides fields so you can map all your requirements and spread them over the following statuses: Proposed, Analyzed, Approved, Implemented, Verified, Postponed, Deleted, Rejected. Overlapping statuses will cause an error, but if a requirement’s status is not declared in the definition, the requirement will still be imported, and a finding will be created.

Usage

Requirement Data Import has the following options:

  • Choose Excel, CSV, JSON or XML import (import_type, default: excel): Specify whether the import is from an Excel, CSV, JSON or XML file.

  • Data File (input_file, mandatory): Specify the location of the file or directory containing Excel, CSV, JSON or XML files with requirements.

  • Sheet Name (xls_sheetname): Specify the sheet name that contains the requirement list.

  • Excel file regular expression (xlsx_file_pattern, default: *.xlsx$): Specify a regular expression to find Excel files, by default it's *.xlsx$

  • CSV Separator (csv_separator, default: ;): Specify the character used in the CSV file to separate columns.

  • CSV file regular expression (csv_file_pattern, default: *.csv$): Specify a regular expression to find CSV files, by default it's *.csv$

  • List of future artifacts (root_path): Define the root element used to retrieve the list of artifacts in the file; required for JSON or XML import.

    • Example XML: allrequirements represents the requirement array for the two XML examples

    <allrequirements>

      <requirement id="ID_1" description="This is a description" name="My First Requirement" />

      <requirement id="ID_2" description="This is a description" name="My Second Requirement" />

    </allrequirements>

    <allrequirements>

      <requirement>

        <id>"ID_1"</id>

        <description>"This is a description"</description>

        <name>"My First Requirement"</name>

      </requirement>

      <requirement>

        <id>"ID_2"</id>

        <description>"This is a description"</description>

        <name>"My Second Requirement"</name>

      </requirement>

    </allrequirements>

    • Example Json: allrequirements represents the requirement array

    {"allrequirements ": [ {

          "requirement": {

            "id": "ID_1"

            "description": "This is a description",

            "name": "My First Requirement" } },

        { "requirement": {

            "id": "ID_2"

            "description": "This is a description",

            "name": "My Second Requirement"

          } }] }

    • Example Json: the root_path is empty

    [ {"requirement": {

          "id": "ID_1",

          "description": "This is a description",

          "name": My First Requirement }},

      { "requirement": {

          "id": "ID_2",

          "description": "This is a description",

          "name": My Second Requirement}} ]

  • JSON file regular expression (json_file_pattern, default: *.json$): Specify a regular expression to find JSON files, by default it's *.json$

  • XML file regular expression (xml_file_pattern, default: *.xml$): Specify a regular expression to find XML files, by default it's *.xml$

  • Requirement Name (artefact_name, mandatory): Specify the pattern used to build the name of the requirement. The name can use any information collected from the file as a parameter.

    Example: ${ID} : ${Summary}

  • Requirement artefact path (root_node, default: Requirements): Specify the root path in Squore of artefacts extracted from the file.

    By default the root artefact path is Requirements

  • Requirement ID (artefact_id, mandatory): Specify the column name or path which contains the requirement ID.

    Examples:

    • ID

    Note: ID is the column name in Excel or CSV file

    • requirement/id

    Note: requirement/id is the path in Json or Xml File

  • Requirement version (version): Specify the column name or path which contains the requirement version.

    Examples:

    • Version

    Note: Version is the column name in Excel or CSV file

    • requirement/version

    Note: requirement/version is the path in Json or Xml File

  • Linked Requirements IDs which satisfy this requirement (link_satisfied_by): Specify the column name or path which contains the requirements IDs which satisfy this requirement.

    Examples:

    • Satisfied by

    Note: Satisfied by is the column name in Excel or CSV file

    • requirement/satisfiedby

    Note: requirement/satisfiedby is the path in Json or Xml File

  • Linked Test ID verifying this requirement (link_tested_by): Specify the column name or path which contains the linked test ID verifying this requirement.

    Examples:

    • Tested by

    Note: Tested by is the column name in Excel or CSV file

    • requirement/testedby

    Note: requirement/testedby is the path in Json or Xml File

  • Linked Ticket ID associated to this requirement (link_ticket): Specify the column name or path which contains the linked Ticket ID corresponding to an issue or enhancement request.

  • Requirement UID (artefact_uid): Specify the pattern used to build the requirement Unique ID. The UID can use any information collected from the file as a parameter.

    Example: TK#${ID}

  • Grouping Structure (artefact_groups): Artifacts can be grouped by contextual elements of the file, separated by ";".

    Examples: "column_name_or_path_1=regex1;column_name_or_path_2=regex2; the result in Squore Requirements/"value_regex1"/"value_regex2"/MyArt

  • Filtering (artefact_filters): If specified: only artefacts complying with the provided filters are kept. Use the following format:

    <COLUMN_NAME_OR_PATH>?regex=<REGEX>

    Examples:

    • Name?regex=^ST*

    Only create artefacts for which column 'Name' starts with 'ST'

    • requirement/name?regex=^ST*;Region?regex=Europe

    Only create artefacts for which path 'requirement/name' starts with 'ST' and column 'Region' is 'Europe'

    • Name?regex=^ST*;Region?regex=Europe

    Same as before, but restrict to artefacts where column 'Region' is 'Europe'

  • Status (status, default: Status): Specify the header of the column or path containing the requirement status.

  • Applicable Requirement Pattern (definition_applicable): Specify the pattern applied to define requirements as Applicable. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Applicable=Yes

  • Proposed Requirement Pattern (definition_proposed): Specify the pattern applied to define requirements as proposed. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Status=Proposed

  • Analyzed Requirement Pattern (definition_analyzed): Specify the pattern applied to define requirements as analyzed. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Examples:

    • Status=Analyzed

    • Status=[Analyzed|Introduced]

    • Status=Analyzed;Decision=[final;revised]

  • Approved Requirement Pattern (definition_approved): Specify the pattern applied to define requirements as approved. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Status=Approved

  • Implemented Pattern (definition_implemented): Specify the pattern applied to define requirements as Implemented. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Status=Implemented

  • Verified Requirement Pattern (definition_verified): Specify the pattern applied to define requirements as Verified. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Status=Verified

  • Postponed Requirement Pattern (definition_postponed): Specify the pattern applied to define requirements as Postponed. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Status=postponed

  • Deleted Requirement Pattern (definition_deleted): Specify the pattern applied to define requirements as deleted. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Status=Deleted

  • Rejected Requirement Pattern (definition_rejected): Specify the pattern applied to define requirements as rejected. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Status=Rejected

  • Priority Column (priority): Specify the header of the column containing priority data.

  • 'Very high' Requirement priority Pattern (definition_priority_very_high): Specify the pattern applied to define requirements priority as 'Very High' (usually associated to value '1'). This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Priority=1

  • 'High' Requirement priority Pattern (definition_priority_high): Specify the pattern applied to define requirements priority as 'High' (usually associated to value '2'). This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Priority=2

  • 'Medium' Requirement priority Pattern (definition_priority_medium): Specify the pattern applied to define requirements priority as 'Medium' (usually associated to value '3'). This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Priority=3

  • 'Low' Requirement priority Pattern (definition_priority_low): Specify the pattern applied to define requirements priority as 'Low' (usually associated to value '4'). This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Priority=4

  • Compliance (compliance, default: Compliance): Specify the header of the column or path containing the requirement compliance.

  • 'Met' Compliance Pattern (definition_met): Specify the pattern applied to define requirement Compliance as 'Met'. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Compliance=Met

  • 'Partially Met' Compliance Pattern (definition_partially_met): Specify the pattern applied to define requirement Compliance as 'Partially Met'. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Compliance=Partially Met

  • 'Not Met' Compliance Pattern (definition_not_met): Specify the pattern applied to define requirement Compliance as 'Not Met'. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Compliance=Not Met

  • IADT (iadt, default: IADT Method): Specify the header of the column or path containing the requirement IADT method.

  • 'Inspection' Test Method Pattern (definition_inspection): Specify the pattern applied to define requirement Test method as 'Inspection'. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: IADT Method=Inspection

  • 'Analysis' Test Method Pattern (definition_analysis): Specify the pattern applied to define requirement Test method as 'Analysis'. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: IADT Method=Analysis

  • 'Demonstration' Test Method Pattern (definition_demonstration): Specify the pattern applied to define requirement Test method as 'Demonstration'. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: IADT Method=Demonstration

  • 'Test' Test Method Pattern (definition_test): Specify the pattern applied to define requirement Test method as 'Test'. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: IADT Method=Test

  • Creation Date (creation_date): Specify the column name or path which contain the creation date of the requirement.

    If the pattern to format date isn't the same as the "Date format" field, the parameter format="date_pattern" can be used.

    Examples:

    • Creation Date&format="yyyy-MM-dd"

    If format parameter and "Date format" field are not specified, the following is used by default: dd-MMM-yyyy

    Note: Date patterns are based on SimpleDateFormat Java class specifications

  • Last Update (last_updated): Specify the column name or path which contain the last modification date of the requirement.

    If the pattern to format date isn't the same as the "Date format" field, the parameter format="date_pattern" can be used.

    Examples:

    • Last Date&format="yyyy-MM-dd"

    If format parameter and "Date format" field are not specified, the following is used by default: dd-MMM-yyyy

    Note: Date patterns are based on SimpleDateFormat Java class specifications

  • URL (url): Specify the pattern used to build the requirement URL. The URL can use any information collected from the CSV file as a parameter.

  • Description Column (description): Specify the header of the column containing the description of the requirement.

  • Criticity (criticity, default: Criticity): Specify the header of the column or path containing the requirement criticity.

  • 'A' critical factor Pattern (definition_crit_factor_A): Specify the pattern applied to define requirement critical factor as 'A' (low). This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Criticity=A

  • 'B' critical factor Pattern (definition_crit_factor_B): Specify the pattern applied to define requirement critical factor as 'B' (medium). This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Criticity=B

  • 'C' critical factor Pattern (definition_crit_factor_C): Specify the pattern applied to define requirement critical factor as 'C' (high). This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Criticity=C

  • 'D' critical factor Pattern (definition_crit_factor_D): Specify the pattern applied to define requirement critical factor as 'D' (highest). This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Criticity=D

  • Information Fields (informations): Specify the list of extra textual information to import from the file. This parameter expects a list of column name or path separated by ";" characters.

    For example: Company;Country;Resolution

  • Date format (date_format, default: yyyy-MM-dd): Formatting the date to match the given pattern. This pattern can be used for the "Last Update" and "Creation Date" fields, and the "&format" parameter is then no longer required. For example: "dd/mm/yyyy" or "yyyy-MM-dd'T'hh:mm:ss'Z'". Note: Date patterns are based on SimpleDateFormat Java class specifications.

  • Save Output (createOutput):

In addition, the following options are available on the command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for Requirement Data Import is:

-d "type=import_req,import_type=[multipleChoice],input_file=[file_or_directory],xls_sheetname=[text],xlsx_file_pattern=[text],csv_separator=[text],csv_file_pattern=[text],root_path=[text],json_file_pattern=[text],xml_file_pattern=[text],artefact_name=[text],root_node=[text],artefact_id=[text],version=[text],link_satisfied_by=[text],link_tested_by=[text],link_ticket=[text],artefact_uid=[text],artefact_groups=[text],artefact_filters=[text],status=[text],definition_applicable=[text],definition_proposed=[text],definition_analyzed=[text],definition_approved=[text],definition_implemented=[text],definition_verified=[text],definition_postponed=[text],definition_deleted=[text],definition_rejected=[text],priority=[text],definition_priority_very_high=[text],definition_priority_high=[text],definition_priority_medium=[text],definition_priority_low=[text],compliance=[text],definition_met=[text],definition_partially_met=[text],definition_not_met=[text],iadt=[text],definition_inspection=[text],definition_analysis=[text],definition_demonstration=[text],definition_test=[text],creation_date=[text],last_updated=[text],url=[text],description=[text],criticity=[text],definition_crit_factor_A=[text],definition_crit_factor_B=[text],definition_crit_factor_C=[text],definition_crit_factor_D=[text],informations=[text],date_format=[text],createOutput=[booleanChoice],logLevel=[text]"

SARIF Format

Description

Import Findings from a JSON file in SARIF format

Usage

SARIF Format has the following options:

  • JSon File(s) (sarifFile, mandatory): Specify the JSON file (or directory) which contains the findings results

  • Finding description pattern (findingPattern): The list of patterns to apply to the finding description.

    For example, given the finding description:

    • "Line is longer than 120 characters (found 126)."

    • the regular expression can be \\(found [0-9]+

    • the matched part "(found 126" then becomes a parameter of the finding description.

    In this example, if the next build produces the description "Line is longer than 120 characters (found 127)", the finding will not be detected as new despite the difference between 126 and 127.

  • Rules exclusion pattern (excludedRules): Specify rules to be excluded by a pattern. The syntax will be ruleField=pattern.

    For example

    • properties.family=Violation

    • id=OTHER|ANOTHER

  • Search by file name only (ignoreArtPath): Checking this box allows matching files to be searched for by file name only, instead of using the entire path

In addition, the following options are available on the command line:

  • tool(default: true): Checking this box saves the tool name in input-data.xml.

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for SARIF Format is:

-d "type=sarif,sarifFile=[file_or_directory],findingPattern=[text],excludedRules=[text],ignoreArtPath=[booleanChoice],tool=[booleanChoice],logLevel=[text]"

Squore Analyzer

Squore Analyzer Squore

Description

Squore Analyzer provides basic-level analysis of your source code.

For more details, refer to https://www.vector.com/squore.

The analyser can output info and warning messages in the build logs. Recent additions to those logs include better handling of structures in C code, which will produce these messages:

  • [Analyzer] Unknown syntax declaration for function XXXXX at line yyy: indicates that we should have found a function but, probably due to preprocessing directives, we are not able to parse it.

  • [Analyzer] Unbalanced () blocks found in the file. Probably due to preprocessing directives, parentheses in the file are not well balanced.

  • [Analyzer] Unbalanced {} blocks found in the file. Probably due to preprocessing directives, curly brackets in the file are not well balanced.

You can specify the languages for your source code by passing pairs of language and extensions to the languages parameter. Extensions are case-sensitive and cannot be used for two different languages. For example, a project mixing php and javascript files can be analysed with:

--dp "type=SQuORE,languages=php:.php;javascript:.js,.JS"

In order to launch an analysis using all the available languages by default, do not specify the languages parameter in your command line.

Usage

Squore Analyzer has the following options:

  • Languages (languages, default: ada;c;cpp;csharp;cobol;java;fortran77;fortran90;groovy;php;python;swift;vbn…​): Check the boxes for the languages used in the specified source repositories. Adjust the list of file extensions as necessary. Note that two languages cannot use the same file extension, and that the list of extensions is case-sensitive. Tip: Leave all the boxes unchecked and Squore Analyzer will auto-detect the language parser to use.

  • Force full analysis (rebuild_all, default: false): Analyses are incremental by default. Check this box if you want to force the source code parser to analyse all files instead of only the ones that have changed since the previous analysis. This is useful if you added new rule files or text parsing rules and you want to re-evaluate all files based on your modifications.

  • Generate control graphs (genCG, default: true): this option is deprecated and will be removed in a future release. You should not use it anymore.

  • Use qualified names (qualified, default: false): Note: This option cannot be modified in subsequent runs after you create the first version of your project.

  • Limit analysis depth (depth, default: false): this option is deprecated and will be removed in a future release. You should not use it anymore.

  • Add a 'Source Code' node (scnode, default: false): Using this option groups all source nodes under a common source code node instead of directly under the APPLICATION node. This is useful if other data providers group non-code artefacts like tests or requirements together under their own top-level node. This option can only be set when you create a new project and cannot be modified when creating a new version of your project.

  • 'Source Code' node label (scnode_name, default: Source Code): Specify a custom label for your main source code node. Note: this option is not modifiable. It only applies to projects where you use the "Add a 'Source Code' node" option. When left blank, it defaults to "Source Code".

  • Compact folders (compact_folder, default: true): this option is deprecated and will be removed in a future release. You should not use it anymore.

  • Content exclusion via regexp (pattern): Specify a PERL regular expression to automatically exclude files from the analysis if their contents match the regular expression. Leave this field empty to disable content-based file exclusion.

  • File Filtering (files_choice, default: Exclude): Specify a pattern and an action to take for matching file names. Leave the pattern empty to disable file filtering.

  • pattern (pattern_files): Use a shell-like wildcard e.g. '*-test.c'.

    • * Matches any sequence of characters in string, including a null string.

    • ? Matches any single character in string.

    • [chars] Matches any character in the set given by chars. If a sequence of the form x-y appears in chars, then any character between x and y, inclusive, will match. On Windows, this is used with the -nocase option, meaning that the end points of the range are converted to lower case first. Whereas [A-z] matches '_' when matching case-sensitively ('_' falls between the 'Z' and 'a'), with -nocase this is considered like [A-Za-z].

    • \x Matches the single character x. This provides a way of avoiding the special interpretation of the characters *?[] in pattern.

    Tip : Use ';' to separate multiple patterns.

    How to specify a file:

    • By providing its name, containing or not a pattern

    • By providing its name and its path, both containing or not a pattern

    e.g.

    • *D??l?g.* : will match MyDialog.java, WinDowlog.c, …​ anywhere in the project

    • */[Dd]ialog/*D??l?g.* : will match src/java/Dialog/MyDialog.java, src/c/dialog/WinDowlog.c, but not src/Dlg/c/WinDowlog.c

  • Folder Filtering (dir_choice, default: Exclude): Specify a pattern and an action to take for matching folder names. Leave the pattern empty to disable folder filtering.

  • pattern (pattern_dir): Use a shell-like wildcard e.g. 'Test_*'.

    • * Matches any sequence of characters in string, including a null string.

    • ? Matches any single character in string.

    • [chars] Matches any character in the set given by chars. If a sequence of the form x-y appears in chars, then any character between x and y, inclusive, will match. On Windows, this is used with the -nocase option, meaning that the end points of the range are converted to lower case first. Whereas [A-z] matches '_' when matching case-sensitively ('_' falls between the 'Z' and 'a'), with -nocase this is considered like [A-Za-z].

    • \x Matches the single character x. This provides a way of avoiding the special interpretation of the characters *?[] in pattern.

    Tip : Use ';' to separate multiple patterns.

    A directory can be specified:

    • By providing its name, containing or not a pattern

    • By providing its name and its path, both containing or not a pattern. In that case the full path has to match.

    e.g.

    • source? : will match directories source, sources, …​ anywhere in the project

    • src/tests : will not match any directory because the full path can not match

    • */src/tests : will match java/src/tests, native/c/src/tests, …​

    To get the root path of the project, it is possible to use the node variables ($src, $Node1, …​). Refer to "Using Data Provider Input Files From Version Control" in the Getting Started to learn more.

    e.g. $src/source/tests will match only the directory source/tests if it is a root directory of the project.

  • Exclude files whose size exceeds (size_limit, default: 500000): Provide the size in bytes above which files are excluded automatically from the Squore project (Big files are usually generated files or test files). Leave this field empty to deactivate this option.

  • Detect algorithmic cloning (clAlg, default: true): When checking this box, Squore Analyzer launches a cloning detection tool capable of finding algorithmic cloning in your code.

  • Detect text cloning (clTxt, default: true): When checking this box, Squore Analyzer launches a cloning detection tool capable of finding text duplication in your code.

  • Ignore blank lines (clIgnBlk, default: true): When checking this box, blank lines are ignored when searching for text duplication

  • Ignore comment blocks (clIgnCmt, default: true): When checking this box, blocks of comments are ignored when searching for text duplication

  • Minimum size of duplicated blocks (clRSlen, default: 10): This threshold defines the minimum size (number of lines) of blocks that can be reported as cloned.

  • Textual Cloning fault ratio (clFR, default: 0.1): This threshold defines how much cloning between two artefacts is necessary for them to be considered as clones by the text duplication tool. For example, a fault ratio of 0.1 means that two artefacts are considered clones if less than 10% of their contents differ.

  • Algorithmic cloning fault ratio (clAlgFR, default: 0.1): This threshold defines how much cloning between two artefacts is necessary for them to be considered as clones by the algorithmic cloning detection tool.

  • Compute Textual stability (genTs, default: true): this option is deprecated and will be removed in a future release. You should not use it anymore.

  • Compute Algorithmic stability (genAs, default: true): this option is deprecated and will be removed in a future release. You should not use it anymore.

  • Detect artefact renaming (clRen, default: true): this option is deprecated and will be removed in a future release. You should not use it anymore.

  • Mark relaxed or confirmed findings as suspicious (susp, default: MODIFIED_BEFORE): Depending on the chosen option, relaxed findings can be flagged as suspicious in case of changes in and around the finding. In all cases, the following is to be considered:

    • Only changes on effective code are considered, comments are ignored.

    • Only changes inside the scope of the artefact containing the finding are considered.

  • Accept Relaxation from source code comment (relax, default: true): Relaxing Violations in Code

    Squore interprets comments formatted in one of these three ways:

    • Inline Relaxation

    This syntax is used to relax violations on the current line.

    some code; /* %RELAX<keys> : Text to justify the relaxation */

     

    • Relax Next Line

    This syntax is used to relax a violation on the first following line that is not a comment. In the example the text of the justification will be: "Text to justify the relaxation the text of the justification continues while lines are made of comments only"

    /* >RELAX<keys> : Text to justify the relaxation */

    /* the text of the justification continues while */

    /* lines are made of comments only */

    some code;

     

    • Block Relaxation

    This syntax is used to relax violations in an entire block of code.

    /* {{ RELAX<keys> : Text to justify the relaxation */

    /* like for format 2 text can be on more than one line */

    int my_func() {

       /* contains many violations */

       …​

    }

    /* }} RELAX<keys> */

    <keys> can be one of the following:

    • <*>: relax all violations

    • <MNEMO>: relax violations of the rule MNEMO

    • <MNEMO1,MNEMO2,…​,MNEMOn>: relax violations of rules MNEMO1 and MNEMO2 …​ and MNEMOn

    It is possible to relax using a status different from derogation. In that case the keyword RELAX has to be followed by :RELAXATION_STATUS

     

    e.g. RELAX:APPROVED if the status RELAXED_APPROVED is defined in the model.

     

  • Function calls as links (C language only) (calls, default: false): When this option is selected, a link between the caller and the called function will be imported (link of type CALLS).

  • Additional parameters (additional_param): These additional parameters can be used to pass instructions to external processes started by this data provider. This value is generally left empty in most cases.

The full command line syntax for Squore Analyzer is:

-d "type=SQuORE,languages=[multipleChoice],rebuild_all=[booleanChoice],genCG=[booleanChoice],qualified=[booleanChoice],depth=[booleanChoice],scnode=[booleanChoice],scnode_name=[text],compact_folder=[booleanChoice],pattern=[text],files_choice=[multipleChoice],pattern_files=[text],dir_choice=[multipleChoice],pattern_dir=[text],size_limit=[text],clAlg=[booleanChoice],clTxt=[booleanChoice],clIgnBlk=[booleanChoice],clIgnCmt=[booleanChoice],clRSlen=[text],clFR=[text],clAlgFR=[text],genTs=[booleanChoice],genAs=[booleanChoice],clRen=[booleanChoice],susp=[multipleChoice],relax=[booleanChoice],calls=[booleanChoice],additional_param=[text]"

Squore Import

Squore Import logo

Description

Squore Import is a data provider used to import the results of another data provider analysis. It is generally only used for debugging purposes.

For more details, contact support@vector.com.

Usage

Squore Import has the following options:

  • XML folder (inputDir, mandatory): Specify the folder that contains the squore_data_*.xml files that you want to import.

The full command line syntax for Squore Import is:

-d "type=SQuOREImport,inputDir=[directory]"

Stack Data Import

Description

Stack Data Import provides a generic import mechanism for stack data from a CSV or Excel file.

Usage

Stack Data Import has the following options:

  • Choose Excel or CSV import (import_type, default: excel): Specify whether the imported file is an Excel or CSV file.

  • File or Directory (xls_file, mandatory): Specify the location of the Excel or CSV file or directory containing Stack information.

  • Sheet Name (xls_sheetname): Specify the sheet name that contains the Stack list.

  • Excel file regular expression (xlsx_file_pattern, default: *.xlsx$): Specify a regular expression to find Excel files, by default it's *.xlsx$

  • Specify the CSV separator (csv_separator, default: ;): Specify the CSV separator

  • CSV file regular expression (csv_file_pattern, default: *.csv$): Specify a regular expression to find CSV files, by default it's *.csv$

  • Stack Column name (xls_key, mandatory): Specify the header name of the column which contains the Stack key.

  • Stack artefact root path (root_node, default: Resources): Specify the root path in Squore of artefacts extracted from the file.

    By default the root artefact path is Resources

  • Grouping Structure (xls_groups): Artifacts can be grouped by contextual elements of the file, separated by ";".

    For example: "column_name_1=regex1;column_name_2=regex2; the result in Squore Resources/"value_regex1"/"value_regex2"/MyArt

  • Filtering (xls_filters): Specify the list of headers used for filtering.

    For example: "column_name_1=regex1;column_name_2=regex2"

  • Stack size column (stack_size_column_name, default: Stack Size [Bytes]): Specify the name of the column containing the stack size.

  • Stack Average column (stack_average_column_name, default: Average Stack Size used [Bytes]): Specify the name of the column containing the average stack size used.

  • Stack Worst column (stack_worst_column_name, default: Worse Case Stack Size used [Bytes]): Specify the name of the column containing the worst-case stack size used.

  • Create an output file (createOutput, default: true): Create an output file

In addition, the following options are available on the command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for Stack Data Import is:

-d "type=import_stack,import_type=[multipleChoice],xls_file=[file_or_directory],xls_sheetname=[text],xlsx_file_pattern=[text],csv_separator=[text],csv_file_pattern=[text],xls_key=[text],root_node=[text],xls_groups=[text],xls_filters=[text],stack_size_column_name=[text],stack_average_column_name=[text],stack_worst_column_name=[text],createOutput=[booleanChoice],logLevel=[text]"

StyleCop

StyleCop StyleCop

Description

StyleCop is a C# code analysis tool. Its XML output is imported to generate findings.

For more details, refer to https://stylecop.codeplex.com/.

Usage

StyleCop has the following options:

  • XML results file (xml, mandatory): Specify the path to the StyleCop XML results file. The minimum version compatible with this data provider is 4.7.

The full command line syntax for StyleCop is:

-d "type=StyleCop,xml=[file]"

Tessy

Tessy logo

Description

Tessy is a tool automating module/unit testing of embedded software written in dialects of C/C++. Tessy generates an XML results file which can be imported to generate metrics. This data provider supports importing files generated by Tessy version 4.3.x.

For more details, refer to https://www.hitex.com/en/tools/tessy/.

Usage

Tessy has the following options:

  • Results folder (resultDir, mandatory): Specify the top folder containing XML result files from Tessy. Note that this data provider will recursively scan sub-folders looking for *.xml files to aggregate results.

In addition, the following options are available on the command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE)

The full command line syntax for Tessy is:

-d "type=Tessy,resultDir=[file_or_directory],logLevel=[text]"

Test Data Import

Description

Test Data Import provides a generic import mechanism for tests from a CSV, Excel or JSON file. Additionally, it generates findings when the imported tests have an unknown status or type.

This Data Provider provides fields so you can map all your tests and spread them over the following statuses: Failed, Inconclusive, Passed. Overlapping statuses and types will cause an error, but if a test status is not declared in the definition, the test will still be imported, and a finding will be created.

Usage

Test Data Import has the following options:

  • Choose Excel, CSV, JSON or XML import (import_type, default: excel): Specify whether the imported file is an Excel, CSV, JSON or XML file.

  • Data File (input_file, mandatory): Specify the location of the Excel, CSV, JSON or XML file or directory containing tests.

  • Excel Sheet Name (xls_sheetname): Specify the sheet name that contains the test list if your import file is in Excel format.

  • Excel file regular expression (xlsx_file_pattern, default: *.xlsx$): Specify a regular expression to find Excel files, by default it's *.xlsx$

  • CSV Separator (csv_separator, default: ;): Specify the character used in the CSV file to separate columns.

  • CSV file regular expression (csv_file_pattern, default: *.csv$): Specify a regular expression to find CSV files, by default it's *.csv$

  • JSON file regular expression (json_file_pattern, default: *.json$): Specify a regular expression to find JSON files, by default it's *.json$

  • XML file regular expression (xml_file_pattern, default: *.xml$): Specify a regular expression to find XML files, by default it's *.xml$

  • Test Name (artefact_name, mandatory): Specify the pattern used to build the name of the test. The name can use any information collected from the file as a parameter.

    Example: ${ID} : ${Summary}

  • List of future artifacts (root_path): Define the root element used to retrieve the list of artifacts in the file; required for JSON or XML import.

    • Example XML: alltests represents the test array for the two XML examples

    <alltests>

      <test id="ID_1" description="This is a description" name="My First Test" />

      <test id="ID_2" description="This is a description" name="My Second Test" />

    </alltests>

    <alltests>

      <test>

        <id>"ID_1"</id>

        <description>"This is a description"</description>

        <name>"My First Test"</name>

      </test>

      <test>

        <id>"ID_2"</id>

        <description>"This is a description"</description>

        <name>"My Second Test"</name>

      </test>

    </alltests>

    • Example Json: alltests represents the test array

    {"alltests ": [ {

          "test": {

            "id": "ID_1"

            "description": "This is a description",

            "name": "My First Test" } },

        { "test": {

            "id": "ID_2"

            "description": "This is a description",

            "name": "My Second Test"

          } }] }

    • Example Json: the root_path is empty

    [ {"test": {

          "id": "ID_1",

          "description": "This is a description",

          "name": My First Test }},

      { "test": {

          "id": "ID_2",

          "description": "This is a description",

          "name": My Second Test}} ]

  • Test artefact path (root_node, default: Tests): Specify the root path in Squore of artefacts extracted from the file.

    By default the root artefact path is Tests

  • TestID (artefact_id): Specify the column name or path which contains the test ID.

    Examples:

    • ID

    Note: ID is the column name in Excel or CSV file

    • test/id

    Note: test/id is the path in Json or Xml File

  • Linear Index (linear_idx): Specify the column name or path of the Linear Index (the Linear Index is used to order unit or integration tests in the matrix graph).

  • Test UID (artefact_uid): Specify the pattern used to build the test Unique ID. The UID can use any information collected from the file as a parameter.

    Example: TST#${ID}

  • Grouping Structure (artefact_groups): Artifacts can be grouped by contextual elements of the file, separated by ";".

    Examples: "column_name_1=regex1;column_name_2=regex2; the result in Squore Tests/"value_regex1"/"value_regex2"/MyArt

  • Filtering (artefact_filters): If specified: only artefacts complying with the provided filters are kept. Use the following format:

    <COLUMN_NAME_OR_PATH>?regex=<REGEX>

    Examples:

    • Name?regex=^ST*

    Only create artefacts for which column 'Name' starts with 'ST'

    • ticket/name?regex=^ST*;Region?regex=Europe

    Only create artefacts for which path 'ticket/name' starts with 'ST' and column 'Region' is 'Europe'

    • Name?regex=^ST*;Region?regex=Europe

    Same as before, but restrict to artefacts where column 'Region' is 'Europe'

  • Failed Test Pattern (definition_failed): Specify the pattern applied to define tests as failed. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Status=Failed

  • Inconclusive Test Pattern (definition_inconclusive): Specify the pattern applied to define tests as inconclusive. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Status=[Inconclusive|Unfinished]

  • Passed Test Pattern (definition_passed): Specify the pattern applied to define tests as passed. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Status=Passed

  • Status (status, default: status): Specify the column name or path containing the test status.

  • Date when the test was executed (execution_date): Enter column name or path containing the execution date of the test.

    If the pattern to format date isn't the same as the "Date format" field, the parameter format="date_pattern" can be used.

    Examples:

    • Execution Date&format="yyyy-MM-dd"

    If format parameter and "Date format" field are not specified, the following is used by default: dd-MMM-yyyy

    Note: Date patterns are based on SimpleDateFormat Java class specifications

  • Unit of test duration (execution_duration_unit, default: 1): Enter the unit used for the test duration. Possible values are 's' (seconds) or 'ms' (milliseconds); the default is 'ms'.

  • Duration of the test (execution_duration): Enter column name or path containing the execution duration of the test, in milliseconds.

  • TODO Pattern (in_todo_list): Specify the pattern applied to include tests in the TODO list. This field accepts a regular expression to match one or more column name or path with a list of possible values.

    Example: Active=Yes

  • Creation Date (creation_date): Enter column name or path containing the creation date of the test.

    If the pattern to format date isn't the same as the "Date format" field, the parameter format="date_pattern" can be used.

    Examples:

    • Creation Date&format="yyyy-MM-dd"

    If format parameter and "Date format" field are not specified, the following is used by default: dd-MMM-yyyy

    Note: Date patterns are based on SimpleDateFormat Java class specifications

  • Last Updated Date (last_updated_date): Enter column name or path containing the last updated date of the test.

    If the pattern to format date isn't the same as the "Date format" field, the parameter format="date_pattern" can be used.

    Examples:

    • Last Date&format="yyyy-MM-dd"

    If format parameter and "Date format" field are not specified, the following is used by default: dd-MMM-yyyy

    Note: Date patterns are based on SimpleDateFormat Java class specifications

  • URL (url): Specify the pattern used to build the test URL. The URL can use any information collected from the file as a parameter.

  • Description (description): Specify column name or path containing the description of the test.

  • Category (category): Specify column name or path containing the category of the test.

  • Priority (priority): Specify column name or path containing priority data.

  • Information Fields (informations): Specify the list of extra textual information to import from the file. This parameter expects a list of headers separated by ";" characters.

    For example: Architecture;Responsible;Target

  • Date format (date_format, default: yyyy-MM-dd): Formatting the date to match the given pattern. This pattern can be used for the "Last Updated Date", "Creation Date" and "Date when the test was executed" fields, and the "&format" parameter is then no longer required. For example: "dd/mm/yyyy" or "yyyy-MM-dd'T'hh:mm:ss'Z'". Note: Date patterns are based on SimpleDateFormat Java class specifications.

  • Save Output (createOutput):

In addition, the following options are available on the command line:

  • config_file: Specify the path to the configuration file containing Test import parameters.

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for Test Data Import is:

-d "type=import_test,import_type=[multipleChoice],input_file=[file_or_directory],xls_sheetname=[text],xlsx_file_pattern=[text],csv_separator=[text],csv_file_pattern=[text],json_file_pattern=[text],xml_file_pattern=[text],config_file=[text],artefact_name=[text],root_path=[text],root_node=[text],artefact_id=[text],linear_idx=[text],artefact_uid=[text],artefact_groups=[text],artefact_filters=[text],definition_failed=[text],definition_inconclusive=[text],definition_passed=[text],status=[text],execution_date=[text],execution_duration_unit=[multipleChoice],execution_duration=[text],in_todo_list=[text],creation_date=[text],last_updated_date=[text],url=[text],description=[text],category=[text],priority=[text],informations=[text],date_format=[text],createOutput=[booleanChoice],logLevel=[text]"

Test Excel Import

Description

Test Excel Import

Usage

Test Excel Import has the following options:

  • Input file (input_file): Specify the location of the Excel file or directory containing test information.

  • Sheetname (sheetname, mandatory): Sheetname to read data from

  • Artefact name (artefact_name, mandatory): Artefact name as displayed in Squore. Examples:

    • ${ID}

    • T_${Name}

    • ${Name} ${Descr}

    Note: ${NAME} designates the column called NAME

  • Path to the artefact (path_list): Optional. If not used, artefacts extracted from the Excel file will be directly added to the Squore root.

    To specify the path in Squore of artefacts extracted from the Excel file, using the following format:

    <COLUMN_NAME>?map=[<REGEX_1>:<GROUP_NAME_1>,…​,<REGEX_N>:<GROUP_NAME_N>]&groupByDate=<YES>&format=<dd-mm-YYYY> Examples:

    • Area

    Artefacts will be regrouped by the value found in the 'Area' column

    • Area?map=[A*:Area A,B*:Area B]

    Artefacts will be regrouped into two groups:'Area A', for all values of 'Area' column starting with letter 'A', and 'Area B' for letter 'B'.

    • Started on?groupByDate=Yes&format=YYYY/mm/dd

    Artefacts will be regrouped by the date found in column 'Started on', using the format 'YYYY/mm/dd'

    Note: Date patterns are based on SimpleDateFormat Java class specifications.

  • Textual data to extract (info_list): Optional.

    To specify the list of textual data to extract from the Excel file, using the following format:

    <METRIC_ID>?column=<COLUMN_NAME>&map=[<REGEX_1>:<TEXT_1>,…​,<REGEX_N>:<TEXT_N>] Examples:

    • ZONE_ID?column=Zone

    Textual data found in column 'Zone' will be associated to metric ZONE_ID

    • ZONE_ID?column=Zone;OWNER?column=Belongs to

    Textual data found in columns 'Zone' and 'Belongs to' will be associated to metric ZONE_ID and OWNER respectively

    • ORIGIN?column=Comes from,map=[Cust*:External,Sub-contractor*:External,Support:Internal,Dev:Internal]

    Textual data found in column 'Comes from' will be associated to metric ORIGIN:

    • With value 'External' if the column starts with 'Cust' or 'Sub-contractor'

    • With value 'Internal' if the column equals 'Support' or 'Dev'

  • Numerical metrics to extract (metric_list): Optional.

    To specify the list of numerical data to extract from the Excel file, using the following format:

    <METRIC_ID>?column=<COLUMN_NAME>&extract=<REGEX_EXTRACT>&map=[<REGEX_1>:<VALUE_1>,…​,<REGEX_N>:<VALUE_N>] Examples:

    • PRIORITY?column=Priority level

    Numerical values found in column 'Priority level' will be associated to metric PRIORITY

    • SEVERITY?column=Severity level,extract=S_

    Numerical values found in column 'Severity level' will be associated to metric SEVERITY, after having extracted (removed) the string 'S_', because in this example, column 'Severity level' contains for example 'S_1', 'S_4', etc., and we want to obtain '1', '4', etc.

    • STATUS?column=State&map=[passed:0,Passed:0,Pass:0,*nconclusive*:1,failed:2,Failed:2,FAIL:2]

    Textual values found in column 'State' will be mapped to numerical values using these rules:

    • Values containing 'passed', 'Passed' or 'Pass' are mapped to 0

    • Values containing 'nconclusive' are mapped to 1

    • Values containing 'failed', 'Failed' or 'FAIL' are mapped to 2

  • Date metrics to extract (date_list): Optional.

    To specify the list of date data to extract from the Excel file, using the following format:

    <METRIC_ID>?column=<COLUMN_NAME>&format=<DATE_FORMAT> Examples:

    • CREATION_DATE?column=Created on

    Date values found in column 'Created on' will be associated to metric CREATION_DATE, using the default dd-MMM-yyyy format

    • LAST_UPDATE?column=Updated on&format=yyyy/mm/dd

    Date values found in column 'Updated on' will be associated to metric LAST_UPDATE, using the yyyy/mm/dd format

    Note: Date patterns are based on SimpleDateFormat Java class specifications.

  • Filters to set the list of artefacts to keep (filter_list): Optional.

    If specified only artefacts complying with the provided filters are kept. Use the following format:

    <COLUMN_NAME>?regex=<REGEX> Examples:

    • Name?regex=^ST*

    Only create artefacts for which column 'Name' starts with 'ST'

    • Name?regex=^ST*;Region?regex=Europe

    Same as before, but restrict to artefacts where column 'Region' is 'Europe'

  • Artefact unique ID (artefact_uid): Optional unless you want to use links to these artefacts.

    This is the artefact unique ID, to be used by links, from this Data Provider, or another Data Provider. Examples:

    • ${ID}

    • T_${Name}

    • ${Name} ${Descr}

    Note: ${NAME} designates the column called NAME

  • Links to this artefact (artefact_link): Specify how to create links between this artefact and other artefacts with the following format:

    <LINK_TYPE>?direction=<IN OR OUT>&column=<COLUMN_NAME>&separator=<SEPARATOR> Examples:

    • TESTED_BY?column=Test

    A 'TESTED_BY' link will be created with the UID found in column 'Test'

    • IMPLEMENTED_BY?direction=IN&column=Implements

    An 'IMPLEMENTED_BY' link will be created with the UID found in column 'Implements'. Since the optional 'direction' attribute is provided, it will be set as 'IN' (default value is 'OUT')

    • TESTED_BY?column=Tests&separator=','

    'TESTED_BY' links will be created with all UIDs found in column 'Tests', separated by a comma

    • TESTED_BY?column=Tests&separator=',';REFINED_BY?column=DownLinks&separator=','

    'TESTED_BY' and 'REFINED_BY' links will be created with UIDs found in columns 'Tests' and 'DownLinks' respectively

  • Excel file regular expression (xlsx_file_pattern, default: *.xlsx$): Specify a regular expression to find Excel files, by default it's *.xlsx$

In addition, the following options are available on the command line:

  • logLevel(default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE) the default is INFO

The full command line syntax for Test Excel Import is:

-d "type=import_test_excel,input_file=[file_or_directory],sheetname=[text],artefact_name=[text],path_list=[text],info_list=[text],metric_list=[text],date_list=[text],filter_list=[text],artefact_uid=[text],artefact_link=[text],xlsx_file_pattern=[text],logLevel=[text]"

Testwell CTC++

Testwell CTC++ testwell ctc

Description

Import data from Testwell CTC++ XML results

For more details, refer to http://www.testwell.fi/ctcdesc.html.

Usage

Testwell CTC++ has the following options:

  • Results folder (dir, mandatory): Specify the folder containing XML test results files from Testwell CTC++.

  • Instrumented files extension (extension, mandatory, default: .test.runner.c): Extension of the instrumented files generated by CTC++.

The full command line syntax for Testwell CTC++ is:

-d "type=testwell_ctc,dir=[directory],extension=[text]"

Ticket Data Import

Description

Ticket Data Import provides a generic import mechanism for tickets from a CSV, Excel or JSON file. Additionally, it generates findings when the imported tickets have an unknown status or type.

This Data Provider provides fields so you can map all your tickets as Enhancements and defects and spread them over the following statuses: Open, In Implementation, In Verification, Closed. Overlapping statuses and types will cause an error, but if a ticket’s type or status is not declared in the definition, the ticket will still be imported, and a finding will be created.

Usage

Ticket Data Import has the following options:

  • Choose Excel, CSV, JSON or XML import (import_type, default: excel): Specify whether the imported file is an Excel, CSV, JSON or XML file.

  • Data File (input_file, mandatory): Specify the location of the Excel, CSV, JSON or XML file or directory containing tickets.

  • Excel Sheet Name (xls_sheetname): Specify the sheet name that contains the ticket list if your import file is in Excel format.

  • Excel file regular expression (xlsx_file_pattern, default: *.xlsx$): Specify a regular expression to find Excel files, by default it's *.xlsx$

  • CSV Separator (csv_separator, default: ;): Specify the character used in the file to separate columns. This must be specified if the imported file is a CSV file.

  • CSV file regular expression (csv_file_pattern, default: *.csv$): Specify a regular expression to find CSV files, by default it's *.csv$

  • JSON/XML Root Path (root_path): Specify the root path in the JSON or XML file to retrieve issues. This must be specified if the imported file is a JSON or XML file.

    • Example XML: alltickets represents the ticket array for the two XML examples

    <alltickets>

      <ticket id="ID_1" description="This is a description" name="My First Issue" />

      <ticket id="ID_2" description="This is a description" name="My Second Issue" />

    </alltickets>

    <alltickets>

      <ticket>

        <id>"ID_1"</id>

        <description>"This is a description"</description>

        <name>"My First Issue"</name>

      </ticket>

      <ticket>

        <id>"ID_2"</id>

        <description>"This is a description"</description>

        <name>"My Second Issue"</name>

      </ticket>

    </alltickets>

    • Example Json: alltickets represents the ticket array

    {"alltickets ": [ {

          "ticket": {

            "id": "ID_1"

            "description": "This is a description",

            "name": "My First Issue" } },

        { "ticket": {

            "id": "ID_2"

            "description": "This is a description",

            "name": "My Second Issue"

          } }] }

    • Example Json: the root_path is empty

    [ {"ticket": {

          "id": "ID_1",

          "description": "This is a description",

          "name": My First Issue }},

      { "ticket": {

          "id": "ID_2",

          "description": "This is a description",

          "name": My Second Issue}} ]

  • JSON file regular expression (json_file_pattern, default: *.json$): Specify a regular expression to find JSON files, by default it's *.json$

  • XML file regular expression (xml_file_pattern, default: *.xml$): Specify a regular expression to find XML files, by default it's *.xml$

  • Ticket Name (artefact_name, mandatory): Specify the pattern used to build the name of the ticket. The name can use any information collected from the file as a parameter.

    Example: ${ID} : ${Summary}

  • Root Node (root_node, default: Tickets): Specify the name of the node to attach tickets to.

  • Ticket ID (artefact_id, mandatory): Specify the header name of the column or path which contains the ticket ID.

  • Ticket Type (ticket_type): Specify the header of the column or path containing the type for the issue.

  • Ticket Keys (artefact_keys): Specify the list of keys to import from the file. This parameter expects a list of headers or path separated by ";" characters.

    For example: ${key};ID

  • Ticket Links (artefact_link): Specify the pattern used to find the ticket links. The links can have special syntax, see import_generic_data documentation.

    Example: type/inward?array=fields/issuelinks,/inwardIssue/key&direction=IN;

  • Ticket UID (artefact_uid): Specify the pattern used to build the ticket Unique ID. The UID can use any information collected from the file as a parameter.

    Example: TK#${ID}

  • Grouping Structure (artefact_groups): Specify the headers for Grouping Structure, separated by ";".

    For example: "column_name_1=regex1;column_name_2=regex2;

  • Filtering (artefact_filters): Specify the list of headers or paths used for filtering.

    For example: "column_name_1=regex1;column_name_2=regex2"

  • Open Ticket Pattern (definition_open): Specify the pattern applied to define tickets as open. This field accepts a regular expression to match one or more column headers or path with a list of possible values.

    Example: Status=[Open|New]

  • In Development Ticket Pattern (definition_rd_progress): Specify the pattern applied to define tickets as in development. This field accepts a regular expression to match one or more column headers or path with a list of possible values.

    Example: Status=Implementing

  • Fixed Ticket Pattern (definition_vv_progress): Specify the pattern applied to define tickets as fixed. This field accepts a regular expression to match one or more column headers or path with a list of possible values.

    Example: Status=Verifying,Resolution=[fixed|removed]

  • Closed Ticket Pattern (definition_close): Specify the pattern applied to define tickets as closed. This field accepts a regular expression to match one or more column headers or path with a list of possible values.

    Example: Status=Closed

  • Defect Pattern (definition_defect): Specify the pattern applied to define tickets as defects. This field accepts a regular expression to match one or more column headers or path with a list of possible values.

    Example: Type=Bug

  • Enhancement Pattern (definition_enhancement): Specify the pattern applied to define tickets as enhancements. This field accepts a regular expression to match one or more column headers or path with a list of possible values.

    Example: Type=Enhancement

  • Other Pattern (definition_other): Specify the pattern applied to define tickets of 'Other' types, i.e. tickets that are neither defects nor enhancements. This field accepts a regular expression to match one or more column headers with a list of possible values.

    Example: Type=Decision

  • TODO Pattern (in_todo_list): Specify the pattern applied to include tickets in the TODO list. This field accepts a regular expression to match one or more column headers or path with a list of possible values.

    Example: Sprint=2018-23

  • Creation Date Column or Path (creation_date): Enter the name of the column or path containing the creation date of the ticket. If the pattern to format date isn't the same as the "Date format" field, the parameter format="date_pattern" can be used.

    For example: column_name&format="dd/mm/yyyy" or column_name&format="yyyy-MM-dd'T'hh:mm:ss'Z'" .

  • Due Date Column or Path (due_date): Enter the name of the column or path containing the due date of the ticket. If the pattern to format date isn't the same as the "Date format" field, the parameter format="date_pattern" can be used.

    For example: column_name&format="dd/mm/yyyy" or column_name&format="yyyy-MM-dd'T'hh:mm:ss'Z'" .

  • Last Updated Date Column or Path (last_updated_date): Enter the name of the column or path containing the last updated date of the ticket. If the pattern to format date isn't the same as the "Date format" field, the parameter format="date_pattern" can be used.

    For example: column_name&format="dd/mm/yyyy" or column_name&format="yyyy-MM-dd'T'hh:mm:ss'Z'" .

  • Closure Date Column or Path (closure_date): Enter the name of the column or path containing the closure date of the ticket. If the pattern to format date isn't the same as the "Date format" field, the parameter format="date_pattern" can be used.

    For example: column_name&format="dd/mm/yyyy" or column_name&format="yyyy-MM-dd'T'hh:mm:ss'Z'" .

  • Time Spent (time_spent): Specify the header of the column or path containing time spent on the issue.

  • Remaining Time (remaining_time): Specify the header of the column or path containing the remaining time for the issue.

  • Original Time Estimate (original_time_estimate): Specify the header of the column or path containing the original time estimate for the issue.

  • Status Column or Path (status, default: Status): Specify the header of the column or path containing the status of the ticket.

  • URL (url): Specify the pattern used to build the ticket URL. The URL can use any information collected from the file as a parameter.

  • Title Column or Path (title): Specify the header of the column or path containing the title of the ticket.

  • Description Column or Path (description): Specify the header of the column or path containing the description of the ticket.

  • Category Column or Path (category): Specify the header of the column or path containing the category of the ticket.

  • Reporter Column or Path (reporter): Specify the header of the column or path containing the reporter of the ticket.

  • Handler Column or Path (handler): Specify the header of the column or path containing the handler of the ticket.

  • Priority Column or Path (priority): Specify the header of the column or path containing priority data.

  • Severity Column or Path (severity): Specify the header of the column or path containing severity data.

  • Severity Mapping (severity_mapping): Specify the mapping used to associate severity values from the file to the severity scale in the model, where 0 is least critical and 4 is most critical.

  • Ticket Type (type, default: Type): The ticket type, for example defect or enhancement.

  • Date format (date_format, default: yyyy-MM-dd): Specify the pattern used to parse dates. This pattern will be applied to all date fields.

    For example: "dd/mm/yyyy" or "yyyy-MM-dd'T'hh:mm:ss'Z'" .

    If format is not specified, the following is used by default: dd-MMM-yyyy .

  • Information Fields (informations): Specify the list of extra textual information to import from the file. This parameter expects a list of headers or path separated by ";" characters.

    For example: Company;Country;Resolution

  • Additional Metrics Fields (custom_metrics): Specify the list of extra metrics to import from the file. This parameter expects a list of headers or path separated by ";" characters.

    For example: points;custom_time_spent or POINTS?path=fields/customfield_10002;CUSTOM_TIME_SPENT?path=fields/customfield_10003

  • Additional Date Fields (custom_metrics_date): Specify the list of extra date information to import from the file. This parameter expects a list of headers or path separated by ";" characters.

    For example: custom_created;custom_updated or CUSTOM_CREATED?path=fields/customfield_10004;CUSTOM_UPDATED?path=fields/customfield_10005

  • Extract other artifacts (extract_artifacts): Extract other artifacts to be linked with the main artifact. Use the following format:

    <COLUMN_NAME>?type=<Artifact type>&hierarchy=<All hierarchy elements separated by ','>&isArray=<true or false>&link=<link name>&direction=<IN or OUT>

    Example: fields/labels?type=LABEL&hierarchy=Labels&isArray=true&link=HAS_LABEL&direction=IN

    If the type is not defined, the main artifact type will be used.

    If the hierarchy is not defined, the artifact will be under APPLICATION.

    If isArray is not defined, the default value is false.

    If the direction is not defined, the default value is OUT.

    If the link is not defined, the artifact will still be created.

    If the field is an object array, you have to enter the property of the object to search for.

    Example: fields/fixVersions?type=TICKET_VERSION&hierarchy=Tickets,Versions&isArray=true,name&link=TARGET_VERSION&direction=IN;

  • Save Output (createOutput):

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE).

The full command line syntax for Ticket Data Import is:

-d "type=import_ticket,import_type=[multipleChoice],input_file=[file_or_directory],xls_sheetname=[text],xlsx_file_pattern=[text],csv_separator=[text],csv_file_pattern=[text],root_path=[text],json_file_pattern=[text],xml_file_pattern=[text],artefact_name=[text],root_node=[text],artefact_id=[text],ticket_type=[text],artefact_keys=[text],artefact_link=[text],artefact_uid=[text],artefact_groups=[text],artefact_filters=[text],definition_open=[text],definition_rd_progress=[text],definition_vv_progress=[text],definition_close=[text],definition_defect=[text],definition_enhancement=[text],definition_other=[text],in_todo_list=[text],creation_date=[text],due_date=[text],last_updated_date=[text],closure_date=[text],time_spent=[text],remaining_time=[text],original_time_estimate=[text],status=[text],url=[text],title=[text],description=[text],category=[text],reporter=[text],handler=[text],priority=[text],severity=[text],severity_mapping=[text],type=[text],date_format=[text],informations=[text],custom_metrics=[text],custom_metrics_date=[text],extract_artifacts=[text],createOutput=[booleanChoice],logLevel=[text]"

Vector Trace Items

Vector Trace Items vector logo

Description

Import Trace Items in Vector generic format as Requirements in Squore

Usage

Vector Trace Items has the following options:

  • Trace Items folder (dir, mandatory): Specify the folder containing Trace Items (Requirements) files

  • Trace Items file suffix (suff, mandatory, default: .vti-tso): Provide the suffix of Trace Items (Requirements) files.

  • Add "Unique Id" to name (addIdToName, default: false): Add Unique Id to name (the unique id will be added at the end of the artefact name).

    This option may be mandatory if you have requirements with the same path (i.e., same name AND same location).

  • Add Readable ID (addReadableId, default: false): Add Readable ID (the readable id will be added at the beginning of the artefact name)

  • Planned Trace Items folder (dirPlanned): Specify the folder containing Planned Trace Items files.

  • Filter on Requirements (filter): The filter is a way to keep only the Requirements whose properties match a certain pattern.

    Syntax: <PROPERTY_NAME>?regex=<REGEX>

    Examples:

    • No filters are provided …​ If no filters are provided, all Requirements from vTESTstudio are shown in Squore (default behavior)

    • Property 1?regex=V_.* …​ Only keep Requirements where 'Property 1' starts with 'V_'

    • Property 1?regex=V_.*;Property 2?regex=.*VALID.* …​ Only keep Requirements where 'Property 1' starts with 'V_', AND 'Property 2' contains 'VALID'

  • Requirements grouping (grouping): Grouping is a way to structure Requirements in Squore by the value of given properties, in the order they are provided.

    Examples: Suppose Requirements have:

    • an 'Origin' property ('Internal', 'External')

    • and a 'Criticity' property ('A', 'B', 'C', 'D')

    Possible values for grouping:

    • grouping is empty …​ If no grouping is provided, Requirements will be shown in Squore with the same structure as in vTESTstudio (default behavior)

    • grouping='Origin' …​ In addition to the original structure, Requirements will be broken down by origin ('Internal', 'External', or 'Unknown' if the 'Origin' property is absent or empty)

    • grouping='Origin;Criticity' …​ Same as before, but the Requirements will be broken down by Origin, THEN by Criticity ('A', 'B', 'C', 'D', or 'Unknown' if the 'Criticity' property is absent or empty)

  • Textual information to extract (infoList, default: STATUS?property=Status&map=[In progress:under review];VERIFICATION_METHOD?p…​): To specify the list of textual data to extract from the vTraceitem properties.

    format:

    <INFO_ID>?property=<PROPERTY_NAME>&map=[<REGEX_1>:<TEXT_1>,…​,<REGEX_N>:<TEXT_N>]

    Examples:

    STATUS?property=Status&map=[In progress:under review];VERIFICATION_METHOD?property=SW_Verification_Level;SAFETY_CLASSIFICATION?property=Safety_Classification

  • Metric information to extract (metricList, default: REQ_STATUS?property=Status&map=[rejected:0,obsolete:0,draft:2,accept:1];RE…​): To specify the list of metric data to extract from the vTraceitem properties.

    format:

    <METRIC_ID>?property=<PROPERTY_NAME>&map=[<REGEX_1>:<TEXT_1>,…​,<REGEX_N>:<TEXT_N>]

    Examples:

    REQ_STATUS?property=Status&map=[rejected:0,obsolete:0,draft:2,accept*:1];REQ_IADT?property=SW_Verification_Level&map=[(T|t)est*:3,(R|r)eview:1];CRITICAL_FACTOR?property=Safety_Classification&map=[QM: 0,ASIL A:1,ASIL B:2,ASIL C:3,ASIL D:4]

  • Reset Review flags (reset_review, default: false): Reset Review flags is used to initialize all review flags to "false".

    This option can be used when the team is starting a new delivery cycle.

    Using this option will set all requirements to "not reviewed".

  • Reset Overload flags (reset_overload, default: false): Reset Overload flags is used to initialize all overload flags to "default status".

    This option can be used when the team is starting a new delivery cycle.

    Using this option means the default verdicts of all requirements will be used.

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE).

The full command line syntax for Vector Trace Items is:

-d "type=Vector_TraceItems,dir=[directory],suff=[text],logLevel=[text],addIdToName=[booleanChoice],addReadableId=[booleanChoice],dirPlanned=[directory],filter=[text],grouping=[text],infoList=[text],metricList=[text],reset_review=[booleanChoice],reset_overload=[booleanChoice]"

VectorCAST

VectorCAST vector logo

Description

The VectorCAST Data Provider extracts coverage results, as well as tests and their status

For more details, refer to https://www.vectorcast.com/.

Usage

VectorCAST has the following options:

  • Retrieve the coverage data via vectorCAST API? (generate_report, mandatory): Squore imports vectorCAST data via "report.xml" files.

    The XML report file is extracted via the vectorCAST API.

    If vectorCAST is installed on the Squore server, you can select "Yes" to ask Squore to generate the XML report file.

    In that case, make sure the Squore server can access the vectorCAST results directory.

    If vectorCAST is not available on the Squore server, you have to select "No" to import the test and coverage data via XML report files.

  • VectorCAST project configuration files (Path to vcm, vce or vcp) (project_file_list): Specify the path to your project configuration file.

    The path should be either:

    1) Path to your project ".vcm" file

    2) Path to the directory which contains all the .vce or .vcp files (Squore will search folders recursively)

  • Folder of vectorCAST data files (i.e., vectorcast_report.xml) (squore_report_folder): Specify the folder which contains all the vectorCAST data files (i.e., vectorcast_report.xml).

    The vectorcast_report.xml file is generated via the vectorCAST API for Squore.

  • Import variant data? (create_variant, default: false): Variant data can be imported alongside the test results. It is possible to get an overview of the test results per variant. CAREFUL: a variant key must be defined.

  • Variant key (variant_options, default: compiler;testsuite;group): The variant key allows naming the variant according to the relevant variant property. Key=compiler;testsuite will list all the variants and name them according to the value of the field "Compiler/TestSuite".

  • Advanced options (advanced_options, default: false):

  • Root Artefact (sub_root, default: Tests): Specify the name of the artefact under which the test artefacts will be created.

  • Unit Tests folder (sub_root_for_unit, default: Unit Tests): Specify the name of the artefact under which the unit tests artefacts will be created.

  • System Tests folder (sub_root_for_system, default: System Tests): Specify the name of the artefact under which the system tests artefacts will be created.

  • Don't be "case sensitive" (case_sensitve_option, default: true): Don't be "case sensitive".

  • Generate a testcase unique id (create_path_unique_id, default: false): Generate a testcase unique id based on "path + test name"

    This option is needed if you want to link test objects with external requirements.

  • VectorCAST Dir (vectorcast_dir): If you want to use a specific version of vectorCAST, you can specify the value of %VECTORCAST_DIR%.

    This is particularly useful when Squore is installed on Linux.

  • VectorCAST version (vectorcast_version, default: Version 20): This option is useful when the vectorcast_dir option is set.

    The value is the result of the following command:

    - %VECTORCAST_DIR%/manage -V

  • Ignore Missing Files (ignoreMissingFiles, default: true): This option must be used to ignore files which are found in the vectorCAST database but are not in the scope of the Squore analysis.

In addition, the following options are available on the command line:

  • logLevel (default: INFO): Specify the log level to be used (ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE).

The full command line syntax for VectorCAST is:

-d "type=VectorCAST_API,logLevel=[text],generate_report=[multipleChoice],project_file_list=[file_or_directory],squore_report_folder=[file_or_directory],create_variant=[booleanChoice],variant_options=[text],advanced_options=[booleanChoice],sub_root=[text],sub_root_for_unit=[text],sub_root_for_system=[text],case_sensitve_option=[booleanChoice],create_path_unique_id=[booleanChoice],vectorcast_dir=[text],vectorcast_version=[text],ignoreMissingFiles=[booleanChoice]"

Adding More Languages to Squore Analyzer

With a bit of extra configuration, Squore Analyzer can handle files written in languages that are not officially supported. In this mode, only a basic analysis of the file is carried out so that an artefact is created in the project and findings can be attached to it. A subset of the base metrics from Squore Analyzer is optionally recorded for the artefact so that line counting, stability and text duplication metrics are available at file level for the new language.

The example below shows how you can add TypeScript files to your analysis:

  1. Copy <SQUORE_HOME>/configuration/tools/SQuORE/form.xml and its .properties files into your own configuration

  2. Edit form.xml to add a new language key and associated file extensions:

    <?xml version="1.0" encoding="UTF-8"?>
    <tags baseName="SQuORE" ...>
    	<tag type="multipleChoice" key="languages" ... defaultValue="...;typescript">
    		...
    		<value key="typescript" option=".ts,.TS" />
    	</tag>
    </tags>

    Files with extensions matching the typescript language will be added to your project as TYPESCRIPT_FILE artefacts

  3. Edit the defaultValue of the additional_param field to specify how Squore Analyzer should count source code lines and comment lines in the new language, based on another language officially supported by Squore. This step is optional, and is only needed if you want to record basic line counting metrics for the artefacts.

    <?xml version="1.0" encoding="UTF-8"?>
    <tags baseName="SQuORE" ...>
    	...
    	<tag type="text" key="additional_param" defaultValue="typescript=javascript" />
    	...
    </tags>

    Lines in TypeScript files will be counted as they would be for JavaScript code.

  4. Add translations for the new language key to show in the web UI in Squore Analyzer’s form_en.properties

    OPT.typescript.NAME=TypeScript
  5. Add translations for the new artefact type and new LANGUAGE information value in one of the properties files imported by your Description Bundle:

    T.TYPESCRIPT_FILE.NAME=TypeScript File
    
    INFO_VALUE.LANGUAGE.TYPESCRIPT.NAME=Typescript
    INFO_VALUE.LANGUAGE.TYPESCRIPT.COLOR=#2b7489
  6. The new artefact type should also be declared as a type in your model. The easiest way to do this is to add it to the GENERIC_FILE alias in your analysis model, which is pre-configured to record the line counting metrics for new artefacts. You should also define a root indicator for your new artefact type. The following snippet shows a minimal configuration using a dummy indicator:

    <!-- <configuration>/MyModel/Analysis/Bundle.xml -->
    <?xml version="1.0" encoding="UTF-8"?>
    <Bundle>
    ...
    	<ArtefactType id="GENERIC_FILE" heirs="TYPESCRIPT_FILE" />
    
    	<RootIndicator artefactTypes="TYPESCRIPT_FILE" indicatorId="DUMMY" />
    	<Indicator indicatorId="DUMMY" scaleId="SCALE_INFO" targetArtefactTypes="TYPESCRIPT_FILE" displayTypes="IMAGE" />
    
    	<Measure measureId="DUMMY">
    		<Computation targetArtefactTypes="TYPESCRIPT_FILE" result="0" />
    	</Measure>
    ...
    </Bundle>

    Make sure that this declaration appears in your analysis model before the inclusion of import.xml, so it overrides the default analysis model.

    Don’t forget to add translations for your dummy indicator to avoid warnings in the Model Validator:

    DUMMY.NAME= Generic Indicator
    DUMMY.DESCR= This is an indicator for additional languages in Squore Analyzer. It does not rate files in any way.
  7. Reload your configuration and analyse a project, checking the box for TypeScript in Squore Analyzer’s options to get TypeScript artefacts in your project.

    ALL squan new language
    Figure 1. The new option for TypeScript files in Squore Analyzer

    If you are launching an analysis from the command line, use the language key defined in step 2 to analyse TypeScript files:

    -d "type=SQuORE,languages=typescript,additional_param=typescript=javascript"
  8. After the analysis finishes and you can see your artefacts in the tree, use the Dashboard Editor to build a dashboard for your new artefact type.

  9. Finally, create a handler for the source code viewer to display your new file type, by copying <SQUORE_HOME>/configuration/sources/javascript_file.properties into your own configuration as <SQUORE_HOME>/configuration/sources/typescript_file.properties.

Advanced COBOL Parsing

By default, Squore Analyzer generates artefacts for all PROGRAM(s) in COBOL source files. It is possible to configure the parser to also generate artefacts for all SECTION(s) and PARAGRAPH(s) in your source code. This feature can be enabled with the following steps:

  1. Open <SQUORE_HOME>/configuration/tools/SQuORE/Analyzer/artifacts/cobol/ArtifactsList.txt

  2. Edit the list of artefacts to generate and add the section and paragraph types:

    program
    section
    paragraph
  3. Save your changes

If you create a new project, you will see the new artefacts straight away. For already-existing projects, make sure to launch a new analysis and check Squore Analyzer’s Force full analysis option to parse the entire code again and generate the new artefacts.

Using Data Provider Input Files From Version Control

Input files for Squore’s Data Providers, like source code, can be located in your version control system. When this is the case, you need to specify a variable in the input field for the Data Provider instead of an absolute path to the input file.

SUM dp alias
Figure 2. A Data Provider using an input file extracted from a remote repository

The variable to use varies depending on your scenario:

  • You have only one node of source code in your project

    In this case, the variable to use is $src.

  • You have more than one node of source code in your project

    In this case, you need to tell Squore in which node the input file is located. This is done using a variable that has the same name as the alias you defined for the source code node in the previous step of the wizard. For example, if your nodes are labelled Node1 and Node2 (the default names), then you can refer to them using the $Node1 and $Node2 variables.

When using these variables from the command line on a Linux system, the $ symbol must be escaped:

-d "type=PMD,configFile=\$src/pmd_data.xml"