Copyright © 2019 Squoring Technologies
Licence
No part of this publication may be reproduced, transmitted, stored in a retrieval system, nor translated into any human or computer language, in any form or by any means, electronic, mechanical, magnetic, optical, chemical, manual or otherwise, without the prior written permission of the copyright owner, Squoring Technologies.
Squoring Technologies reserves the right to revise this publication and to make changes from time to time without obligation to notify authorised users of such changes. Consult Squoring Technologies to determine whether any such changes have been made.
The terms and conditions governing the licensing of Squoring Technologies software consist solely of those set forth in the written contracts between Squoring Technologies and its customers.
All third-party products are trademarks or registered trademarks of their respective companies.
Warranty
Squoring Technologies makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Squoring Technologies shall not be liable for errors contained herein nor for incidental or consequential damages in connection with the furnishing, performance or use of this material.
Abstract
This edition of the Command Line Interface applies to Squore 18.0.18 and to all subsequent releases and modifications until otherwise indicated in new editions.
The following conventions are used in this manual.
Typeface or Symbol | Meaning |
Bold | Book titles, important items, or items that can be selected, including buttons and menu choices. For example: Click the Next button to continue |
Italic | The name of a user-defined textual element. For example: Username: admin |
Courier New | Files and directories; file extensions; computer output. For example: Edit the config.xml file |
Courier Bold | Commands and screen messages requiring user action. For example: Username: admin |
> | Menu choices. For example: Select File > Open. This means select the File menu, then select the Open command from it. |
<...> | Generic terms. For example: <SQUORE_HOME> refers to the Squore installation directory. |
Notes
Screenshots displayed in this manual may differ slightly from the ones in the actual product.
The following acronyms and abbreviations are used in this manual.
CI | Continuous Integration |
CLI | Command Line Interface |
DP | Data Provider, a Squore module capable of handling input from various other systems and importing information into Squore |
RC | Repository Connector, a Squore module capable of extracting source code from source code management systems. |
This document was released by Squoring Technologies.
It is part of the user documentation of the Squore software product edited and distributed by Squoring Technologies.
This document is the Command Line Interface Guide for Squore.
It is intended as a follow-up to the Squore Getting Started Guide and will help you understand how to use Squore CLI to create and update projects. It is divided into several chapters, as detailed below:
Chapter 2, Getting Started With the Squore CLI provides a basic introduction to Squore CLI and the examples provided with your Squore installation.
Chapter 3, Command Line Reference provides a complete reference of all the command line options and parameters for creating projects.
Chapter 4, Repository Connectors covers the default Repository Connectors and the parameters to pass to Squore to use them.
Chapter 5, Data Providers is a reference guide to all the Data Providers shipped with Squore.
If you are already familiar with Squore, you can navigate this manual by looking for what has changed since the previous version. New functionality is tagged with (new in 18.0) throughout this manual. A summary of the new features described in this manual is available in the entry What's New in Squore 18.0? of this manual's Index.
For information on how to use and configure Squore, the full suite of manuals includes:
Squore Installation Checklist
Squore Installation and Administration Guide
Squore Getting Started Guide
Squore Command Line Interface
Squore Configuration Guide
Squore Eclipse Plugin Guide
Squore Reference Manual
If the information provided in this manual is erroneous or inaccurate, or if you encounter problems during your installation, contact Squoring Technologies Product Support: https://support.squoring.com/
You will need a valid Squore customer account to submit a support request. You can create an account on the support website if you do not have one already.
For any communication:
support@squoring.com
Squoring Technologies Product Support
76, allées Jean Jaurès / 31000 Toulouse - FRANCE
Approval of this version of the document and any further updates are the responsibility of Squoring Technologies.
The version of this manual included in your Squore installation may have been updated. To check for updated user guides, consult or download the latest Squore manuals from the Squoring Technologies documentation site at https://support.squoring.com/documentation/18.0.18. Manuals are updated and published as soon as they are available.
Squore CLI is a package that is installed on every client computer that needs to perform local code analyses or trigger a remote analysis on Squore Server. It contains the client (squore-engine.jar), its libraries, configuration files and some sample job files to help you get started. In this section, you will learn more about the different setup configurations supported by the CLI, its installation and integration into a Continuous Integration environment.
Squore CLI accepts commands and parameters to communicate with Squore Server. Inside the installation folder, some scripts are provided as examples to create projects, save encrypted credentials to disk, and synchronise the client's configuration with the server.
There are two ways to approach the deployment of Squore CLI:
As a way to analyse code and process data on a client machine and send the results to the server.
As a way to instruct the server to carry out an analysis of code and other input data.
Squore CLI and Squore Server must always be the same version in order to work together.
The following is a list of the officially supported and tested operating systems:
CentOS 6
CentOS 7
Fedora 19
Ubuntu Server 16.04
Windows 8
Windows 10
Windows Server 2012 R2
The following is a list of the operating systems that are not regularly tested but are known to be working:
RedHat EL 6
RedHat EL 7
SuSe Linux 11.1
Ubuntu Server 10.04
Ubuntu Server 14.04
Windows 7
Windows Server 2008 R2
For a successful installation of Squore, you will need:
The latest version of the Squore CLI installer, which can be downloaded from https://support.squoring.com/download_area.php
The Oracle Java Runtime Environment version 8 (other versions are not supported)
At least 4 GB of space available on the disk for a full installation with demo projects
The java executable should be in the machine's PATH environment variable for Squore CLI to run successfully.
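A quick way to confirm that the java executable is reachable is to run it from a shell; this check is a convenience sketch, not part of the product:

```shell
# Prints the JRE version if java is in PATH, or a hint otherwise.
if command -v java >/dev/null 2>&1; then
  java -version 2>&1
else
  echo "java is not in PATH"
fi
```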
A JRE is required for Squore CLI. The Windows installer contains the tcl and perl runtimes needed. It will allow you to obtain the configuration needed to create projects from the server.
On Linux platforms, the following must be installed before installing Squore:
Perl version 5.10.1 or greater including the following extra-modules:
Mandatory packages:
Date::Calc [module details]
Digest::SHA [module details]
HTTP::Request [module details]
JSON [module details]
LWP [module details]
LWP::UserAgent [module details]
Time::HiRes [module details]
XML::Parser [module details]
Optional packages for working with RTRT (new in 18.0):
XML::Simple [module details]
Optional packages for working with Microsoft Excel:
HTML::Entities [module details]
Spreadsheet::BasicRead [module details]
Optional packages for working with OSLC systems:
Date::Parse [module details]
WWW::Mechanize [module details]
XML::LibXML [module details]
Optional packages for Advanced CSV Export Management:
Text::CSV [module details]
Optional packages for working with Mantis, Jira and other ticket management software:
Date::Parse [module details]
JSON::XS [module details]
Spreadsheet::ParseExcel [module details]
Spreadsheet::BasicRead [module details]
Text::CSV [module details]
WWW::Mechanize [module details]
XML::LibXML [module details]
If some of these modules are not available as packages on your operating system, use your perl installation's cpan to install the modules. Using the OS packages is recommended, as it avoids having to reinstall via cpan after upgrading your version of perl.
Tcl version 8.5 or greater.
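You can check which of the mandatory Perl modules are already present before installing anything; each module either loads with perl's -M switch or is reported as missing. This loop is a convenience sketch, not part of the product:

```shell
# Try to load each mandatory module; print ok/missing for each.
for mod in Date::Calc Digest::SHA HTTP::Request JSON LWP LWP::UserAgent Time::HiRes XML::Parser; do
  if perl -M"$mod" -e 1 >/dev/null 2>&1; then
    echo "$mod: ok"
  else
    echo "$mod: missing"
  fi
done
```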
On Red Hat Enterprise Linux and CentOS (6.5 and 7.1), the dependencies are satisfied by the following packages:
Mandatory packages:
java-1.8.0-openjdk
perl
perl-Date-Calc
perl-Digest-SHA
perl-JSON
perl-libwww-perl
perl-Time-HiRes
perl-XML-Parser
tcl
Optional packages for working with RTRT (new in 18.0):
perl-XML-Simple
Optional packages for working with Microsoft Excel:
perl-HTML-Parser
perl-CPAN (CPAN utility requirement)
perl-Spreadsheet-ParseExcel (available in the EPEL repository)
perl-Spreadsheet-XLSX (available in the EPEL repository)
The module Spreadsheet::BasicRead is not available as a package and must therefore be installed using cpan (make sure cpan is properly configured, by running cpan without arguments first):
sudo cpan -i Spreadsheet::BasicRead
Optional packages for working with OSLC systems:
perl-TimeDate
perl-WWW-Mechanize (available in the EPEL repository)
perl-XML-LibXML
Optional packages for Advanced CSV Export Management:
perl-Text-CSV
Optional packages for working with Mantis, Jira and other ticket management software:
perl-TimeDate
perl-JSON-XS
perl-Spreadsheet-ParseExcel (available in the EPEL repository)
perl-Text-CSV
perl-WWW-Mechanize (available in the EPEL repository)
perl-XML-LibXML
The module Spreadsheet::BasicRead is not available as a package and must therefore be installed using cpan (make sure cpan is properly configured, by running cpan without arguments first):
sudo cpan -i Spreadsheet::BasicRead
For more information about how to install the Extra Packages for Enterprise Linux (EPEL) repository, consult https://fedoraproject.org/wiki/EPEL.
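On CentOS, the mandatory packages above can be installed in one command. The sketch below only prints the command unless you opt in by setting DO_INSTALL=yes (the actual installation requires root and network access to the repositories):

```shell
# Mandatory packages from the list above; EPEL provides some optional ones.
PKGS="java-1.8.0-openjdk perl perl-Date-Calc perl-Digest-SHA perl-JSON \
perl-libwww-perl perl-Time-HiRes perl-XML-Parser tcl"
if [ "${DO_INSTALL:-no}" = "yes" ]; then
  yum install -y epel-release   # enables the EPEL repository on CentOS
  yum install -y $PKGS
else
  echo "Would run: yum install $PKGS"
fi
```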
On Ubuntu 16.04.3 LTS, the dependencies are satisfied by the following packages:
Mandatory packages:
libdate-calc-perl
libhttp-message-perl
libjson-perl
libwww-perl
libxml-parser-perl
openjdk-8-jre
perl
tcl
Optional packages for working with RTRT (new in 18.0):
libxml-simple-perl
Optional packages for working with Microsoft Excel:
make (CPAN utility requirement)
libhtml-parser-perl
libspreadsheet-parseexcel-perl
libspreadsheet-xlsx-perl
The module Spreadsheet::BasicRead is not available as a package and must therefore be installed using cpan (make sure cpan is properly configured, by running cpan without arguments first):
sudo cpan -i Spreadsheet::BasicRead
Optional packages for working with OSLC systems:
libtimedate-perl
libwww-mechanize-perl
libxml-libxml-perl
Optional packages for Advanced CSV Export Management:
libtext-csv-perl
Optional packages for working with Mantis, Jira and other ticket management software:
libtimedate-perl
libjson-perl
libspreadsheet-parseexcel-perl
libtext-csv-perl
libwww-mechanize-perl
libxml-libxml-perl
The module Spreadsheet::BasicRead is not available as a package and must therefore be installed using cpan (make sure cpan is properly configured, by running cpan without arguments first):
sudo cpan -i Spreadsheet::BasicRead
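On Ubuntu, the mandatory packages above can likewise be installed in one command. As with the CentOS sketch, the command is only printed unless you set DO_INSTALL=yes (the actual installation requires root):

```shell
# Mandatory packages from the list above.
PKGS="libdate-calc-perl libhttp-message-perl libjson-perl libwww-perl \
libxml-parser-perl openjdk-8-jre perl tcl"
if [ "${DO_INSTALL:-no}" = "yes" ]; then
  apt-get install -y $PKGS
else
  echo "Would run: apt-get install $PKGS"
fi
```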
Note that Oracle's Java Runtime Environment 8 (other versions are not supported) is required on the client machine for the CLI to run.
There is currently no Squore CLI installation package for Squore 18.0. If you need to install Squore CLI, download the latest version of the previous release and perform an upgrade after installing, following the steps in the section called “Upgrading Squore CLI”.
After verifying that you meet the prerequisites detailed in the section called “Installation Prerequisites”, log on with an account that has administrator privileges and launch Squore CLI installer. Each of the wizard screens is documented below in the order that you will see them.
The data and temporary folders must be excluded from the scope of virus scanners, malware protectors and search indexers to avoid any errors during an analysis.
Squore CLI installer Welcome screen
On the Welcome screen, click the Next button to start the installation.
Squore CLI licence agreement screen
Click the I Agree button after reviewing the terms of the licence to continue the installation.
Squore CLI components screen
Select the components you want to install and click the Next button to proceed to the next step of the installation.
Squore CLI destination folder screen
Browse for the folder where you want to deploy Squore CLI and click the Next button to proceed to the next step of the installation.
Squore CLI installation parameters screen
Specify the path of the Java installation on your system, as well as the details of the Squore Server that the client should connect to. If you check the Update and synchronise with server box, the installer will attempt to retrieve the up-to-date client binaries and configuration from the server. Click the Next button to start copying the installation files onto your hard disk.
If an error happens during the installation process, a log file is available in the destination folder you selected during the installation.
Before installing Squore CLI on a Linux platform, verify that all prerequisites are met, as described in the section called “Installation Prerequisites”.
Copy the installation package (a compressed tar.bz2 archive) into the location where you want to install Squore CLI (for example: /opt/squore/).
Extract the contents of the archive into the selected installation directory. The folder now contains a new folder called squore-cli, which we will refer to as <SQUORE_HOME>.
Run the installation script in a command shell:
<SQUORE_HOME>/bin/install -v -s http://localhost:8180/SQuORE_Server -u user -p password
For more details on install options, refer to install(1).
When installing Squore CLI, a connection to Squore Server is automatically attempted to retrieve the most up-to-date client binaries and configuration. You can disable this synchronisation attempt by passing -N to the installation script.
If you have deployed some third-party tools on Squore Server, they will automatically be downloaded to your client when you launch the client synchronisation script.
AntiC and Cppcheck on Linux also require special attention: Cppcheck must be installed and available in the path, and antiC must be compiled with the command:
cd <SQUORE_HOME>/addons/Antic_auto/bin/ && gcc antic.c -o antic
For more information, refer to the Command Line Interface Manual, which contains the full details about special installation procedures for Data Providers and Repository Connectors.
After the CLI installation is successful, you can familiarise yourself with the structure of the installation directory:
<SQUORE_HOME>/addons A folder containing the Data Providers of the product.
<SQUORE_HOME>/bin A folder containing sample projects creation scripts and utilities.
<SQUORE_HOME>/configuration A configuration of the product containing the tools, wizards and analysis models.
<SQUORE_HOME>/docs A folder containing the Command Line Interface manual.
<SQUORE_HOME>/lib A folder containing the main engine and its client libraries.
<SQUORE_HOME>/samples A folder containing sample source code to be used with the sample launchers supplied in <SQUORE_HOME>/bin.
<SQUORE_HOME>/share A folder containing specific perl libraries used by the CLI to launch jobs.
<SQUORE_HOME>/tools A folder containing the perl and tclsh distributions on Windows. This folder does not exist in the Linux version, since the system installations of perl and tclsh are used.
<SQUORE_HOME>/config.xml An XML configuration file that the CLI uses to find its configuration.
After installing Squore CLI, the credentials for the user you specified during the installation have been saved, and the scripts in <SQUORE_HOME>/bin will use this username and password.
The file config.xml contains information about the Squore CLI installation. Here is the default config.xml:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<squore type="client" version="1.3">
  <paths>
    <path name="perl.dir" path="path/to/perl"/>
    <path name="tclsh.dir" path="path/to/tclsh"/>
  </paths>
  <configuration>
    <path directory="<SQUORE_HOME>/configuration"/>
  </configuration>
  <addons>
    <path directory="<SQUORE_HOME>/addons"/>
  </addons>
</squore>
You can extend your config.xml by specifying where you want the temporary and data files to be stored on your system, as shown below:
Folder used to store temporary log files: <tmp directory="${java.io.tmpdir}/squore-${user.name}"/>
Folder used to run analyses and store project files before they are sent to the server: <projects directory="${user.home}/.squore/projects"/>
Folder used when extracting files from SCM systems: <sources directory="${java.io.tmpdir}/sources"/>
Using Java system properties to specify the paths to the tmp, projects and sources folders is useful if you want the Squore CLI installation to work for multiple users. Note that all three elements are optional, and use the values shown above by default if you do not specify them in config.xml.
Here is an example of a full config.xml:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<squore type="client" version="1.3">
  <paths>
    <path name="perl.dir" path="path/to/perl"/>
    <path name="tclsh.dir" path="path/to/tclsh"/>
  </paths>
  <configuration>
    <path directory="<INSTALLDIR>/configuration"/>
  </configuration>
  <addons>
    <path directory="<INSTALLDIR>/addons"/>
  </addons>
  <tmp directory="${java.io.tmpdir}/squore-${user.name}"/>
  <projects directory="${user.home}/.squore/projects"/>
  <sources directory="${java.io.tmpdir}/sources"/>
</squore>
In order to upgrade Squore CLI to a new version, simply run the <SQUORE_HOME>\bin\synchronise.bat script (on Windows) or the <SQUORE_HOME>/bin/synchronise script (on Linux) to retrieve the latest version of the binaries from Squore Server.
You can remove Squore CLI from your machine by going through the uninstaller wizard, as described below:
Launch the uninstaller wizard from the Add/Remove Programs dialog in the control panel, or directly by double-clicking <SQUORE_HOME>/Squore_CLI_Uninst.exe. The wizard opens:
Click Uninstall to proceed with the removal of the software. This operation cannot be interrupted or rolled back.
The wizard will notify you when the uninstallation finishes, as shown below:
Click Finish to exit the wizard.
Squore CLI includes a small utility called add-credentials that can save your credentials to disk. This avoids typing your password every time you create a project, and also avoids having to save the password in your script files.
add-credentials is located in <SQUORE_HOME>/bin and allows saving passwords for Squore users and the various Repository Connectors known to Squore. To start saving credentials, simply run add-credentials.sh on Linux or add-credentials.bat on Windows. You are presented with a choice of several types of credentials you can save:
In order to save user credentials for Squore Server, select 1, then type the login and associated password.
In order to save credentials for an SVN server, select 2. add-credentials.sh will prompt you for the URL of the SVN repository, for example https://svnserver/var/svn. Upon confirming, you will be prompted for your username and password to access this SVN URL.
Note that the saved credentials are only used by Squore CLI. When you use Squore's web interface, you will need to enter your password again to log in or browse source code.
Credentials are only saved for the current user. If you want to clear the credentials saved for a user profile, remove the file $HOME/.squorerc on Linux or %USERPROFILE%\.squorerc on Windows.
Adding credentials can be done from the command line by running the following command:
java -cp /path/to/squore-engine.jar -Dsquore.home.dir=$SQUORE_HOME com.squoring.squore.client.credentials.MakeCredentials --type squore --login demo --password demo --url http://localhost:8180/SQuORE_Server
The <SQUORE_HOME>/bin folder contains scripts that use the source code in the folder <SQUORE_HOME>/samples to create demo projects. You can also copy the command lines in these scripts to start creating your own projects.
A sample job instruction is a call to squore-engine.jar with some arguments and parameters to specify the Data Providers, Repository Connectors and attribute values you want to use, for example:
java -Dsquore.home.dir="<SQUORE_HOME>" -jar squore-engine.jar --url="<server_url>" --login="<LOGIN>" --password="<password>" --name="myProject" --wizardId="ANALYTICS" -r "type=FROMPATH,path=/path/to/java/sources" --commands "DELEGATE_CREATION"
To learn more about command line parameters, refer to Chapter 3, Command Line Reference.
Squore can be used in a continuous integration environment using the commands detailed in Chapter 3, Command Line Reference.
Below is an example of a native call to the client using the ant exec task:
<project name="CIproject" default="build" basedir=".">
  <property name="server.url" value="http://localhost:8180/SQuORE_Server"/>
  <property name="cli.home" value="D:\CLI"/>
  <target name="build">
    <exec executable="java">
      <arg value="-Dsquore.home.dir=${cli.home}"/>
      <arg value="-jar"/>
      <arg value="${cli.home}\lib\squore-engine.jar"/>
      <arg value="--url=${server.url}"/>
      <arg value="--version=${label}"/>
      <arg value="--repository type=FROMPATH,path=${source.dir}"/>
      <arg value="--color=rgb(255,0,0)"/>
      <arg value="--name=${project.name}"/>
      <arg value="--login=demo"/>
      <arg value="--password=demo"/>
      <arg value="--wizardId=Software Analytics"/>
      <arg value="--tag APPLY_TO_BE_TESTED=0"/>
      <arg value="--tag VOCF_THRESHOLD=10"/>
      <arg value="--commands=PROCESS_CREATION"/>
    </exec>
  </target>
</project>
You can also use java calls to squore-engine.jar in any automation server software.
There are two ways to build direct links to projects in Squore:
Each method supports different parameters to build direct links to a tab of the Explorer for the specified project, as explained below.
Links to the Squore Explorer using IDs. The URL accepts the following parameters:
modelId to link to a model node in the portfolio
projectId to link to the latest version of the specified project
versionId to link to a specific version of a project
artefactId to link to a specific artefact in a project (can be combined with projectId and versionId)
tabName to display a specific tab of the Explorer. This parameter can be combined with any of the ones above and must be one of:
Users can copy a RestoreContext link from the Home page, the Projects page, or generate one using the Share... dialog in an artefact's context menu, which is the only way to find an artefactId. Model IDs are not exposed anywhere in the web interface.
Project and version IDs are printed in the project's output XML file, making it easy to parse and build a URL dynamically when using continuous integration to launch analyses.
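For example, the IDs can be pulled out of the output file with standard shell tools. The XML excerpt below is hypothetical — the real element and attribute names may differ in your version, so inspect the output file generated by your own analysis first:

```shell
# Hypothetical output excerpt; adapt the sed patterns to your real file.
LINE='<result projectId="42" versionId="7"/>'
PROJECT_ID=$(printf '%s\n' "$LINE" | sed -n 's/.*projectId="\([0-9]*\)".*/\1/p')
VERSION_ID=$(printf '%s\n' "$LINE" | sed -n 's/.*versionId="\([0-9]*\)".*/\1/p')
echo "projectId=$PROJECT_ID versionId=$VERSION_ID"
```

The extracted values can then be appended as projectId/versionId parameters when assembling the direct link.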
Links to the Squore Explorer using names instead of IDs. The URL accepts the following parameters:
application (mandatory) to specify the project to link to
version (optional) to specify which version of the project to display. When not specified, the latest version of the project is displayed
artefactId (optional) to link to a specific artefact in the project
tabName to display a specific tab of the Explorer. This parameter can be combined with any of the ones above and must be one of:
The following is a URL that links to the version called V5 in the project called Earth. Since no artefactId and tabName are specified, the Dashboard tab will be displayed for the root node of the project: http://localhost:8180/SQuORE_Server/XHTML/MyDashboard/dashboard/LoadDashboard.xhtml?application=Earth&version=V5.
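The same link can be assembled in a script, for instance when publishing a dashboard link at the end of a continuous integration build; the server URL and names below are the ones from the example above:

```shell
SERVER=http://localhost:8180/SQuORE_Server
APP=Earth
VERSION=V5
# Prints the direct link to version V5 of the Earth project.
echo "$SERVER/XHTML/MyDashboard/dashboard/LoadDashboard.xhtml?application=$APP&version=$VERSION"
```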
In this chapter, you will learn about the commands and options you can use with squore-engine.jar.
In order to run a command, you always need to specify at least:
-Dsquore.home.dir=<SQUORE_HOME> to tell Java where Squore CLI is installed
--url=http://localhost:8180/SQuORE_Server to tell Squore CLI which Squore Server to connect to.
--login=demo to tell Squore CLI which user to connect with.
--commands="..." to tell Squore CLI what action you need it to perform.
squore.home.dir is used to set the location of Squore CLI's config.xml to ${squore.home.dir}/config.xml. If your config.xml is in a different location, you can specify it on the command line with the option -Dsquore.configuration=/path/to/config.xml.
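Putting these together, a minimal invocation looks like the sketch below. The installation path and the demo login are placeholders, and the command is only echoed so you can review it; replace echo with direct execution once the values match your environment:

```shell
# Placeholder paths and demo credentials — adapt before running for real.
SQUORE_HOME=/opt/squore/squore-cli
set -- java -Dsquore.home.dir="$SQUORE_HOME" \
  -Dsquore.configuration=/etc/squore/config.xml \
  -jar "$SQUORE_HOME/lib/squore-engine.jar" \
  --url=http://localhost:8180/SQuORE_Server \
  --login=demo \
  --commands="GET_COMMANDS_LIST"
echo "$@"   # replace echo with "$@" to actually run the command
```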
This section details the list of commands you can use with Squore CLI and their meaning.
You will generally use a combination of these commands rather than a single command at a time.
If you intend to use the client as a remote control to trigger project creations on the server, use -c='DELEGATE_CREATION'.
A more common configuration is for the client to carry out the analysis and send the results to the server to create the project. This can be done by passing the commands -c='SYNCHRONISE;PROCESS_CREATION'.
Using the SYNCHRONISE command is optional but ensures that the client and the server are using the same model to produce analysis results.
Retrieves the full up-to-date package of the Engine and its libraries from the server.
Retrieves the up-to-date configuration from the server.
Generates the command line options associated to all parameters found in the specified configuration file. It requires the 'projectConfFile' option to be defined.
Checks the validity of a model's configuration. It requires the 'outputCheckModelsFile' option.
Processes project creation on the client side; it is a shortcut for PROCESS_TOOLS;GENERATE_TOOLS_DATA_ZIP;SEND_TOOLS_DATA.
Generates data for the Data Providers specified in a project. It should always be called before any other generation command is called.
Creates a zip archive of the data generated by the PROCESS_TOOLS command. It should be called after the PROCESS_TOOLS command.
Sends the zip archive generated by the GENERATE_TOOLS_DATA_ZIP command and the project settings to the server, to request a project creation (analysis model computation and database update). It should be called after the GENERATE_TOOLS_DATA_ZIP command.
Sends the project settings to the server to request a project creation.
Performs the analysis model and the decision model computation on the data generated by the PROCESS_TOOLS command. It should be called after the PROCESS_TOOLS command.
Generates output data and statistics of the project's creation. It should always be called after all other commands.
"project_name"
Deletes the project project_name. This operation cannot be undone and must be called separately from any other command.
"project_name" --version="version_to_delete_from"
Deletes the versions of project_name from version_to_delete_from until the latest version. This operation cannot be undone and must be called separately from any other command.
Parameters are used to define the environment in which commands are processed. The list of parameters is as follows:
"COMMAND", -c "COMMAND"
optional, default=''
The list of commands to launch, as a semicolon-separated string. Use --commands="GET_COMMANDS_LIST" to obtain the list of available commands. For more information about the available commands, refer to the section called “Squore CLI Commands”.
"url", -s "url"
optional, default='http://localhost:8180/SQuORE_Server'
The URL of the Squore Server to interact with.
"path/to/output.xml", -o "path/to/output.xml"
optional, default='null'
The absolute path to the output file generated by the analysis.
"path/to/validator.xml", -m "path/to/validator.xml"
optional, default='null'
Defines the absolute path to the output check models file generated by the CHECK_MODELS command.
"true|false", -print "true|false"
optional, default='false'
Redirects the engine's output to the standard output.
optional, default='false'
Displays help and exits.
optional, default='false'
Displays help about the available commands.
"true|false", -sub "true|false"
optional, default='false'
Loops on the repository path to create a version for each sub-folder, using the sub-folder name as the version name. This option is only supported when using the FROMPATH Repository Connector.
"path/to/project_conf.xml", -x "path/to/project_conf.xml"
optional, default='null'
The XML file defining the project settings. When using a combination of a project file and some parameters passed from the command line, the command line parameters override the project file ones.
"path/to/ruleset.xml", -um "path/to/ruleset.xml"
optional, default='null'
The XML file listing the changes to be applied to the standard analysis model for this analysis. The XML file contains a list of rules with their status and categories, as shown below:
<UpdateRules>
  <UpdateRule measureId="R_NOGOTO" disabled="true" categories="SCALE_SEVERITY.CRITICAL"/>
</UpdateRules>
This parameter is only read and applied when creating the first version of a project, for models where editing the ruleset is allowed. You may find it more flexible to work with named templates created in the Analysis Model Editor and specified on the command line with the --rulesetTemplate parameter, as described in the section called “Project Parameters”.
In order to create a project, you need to pass project parameters to Squore CLI. The following is a list of the parameters and their meaning:
"demo", -u "demo"
mandatory
The ID of the user requesting the project creation
"demo", -k "demo"
optional, default: ''
The password of the user requesting the project creation. If you do not want to specify a password in your command line, refer to the section called “Saving Credentials to Disk”.
"MyProject", -n "MyProject"
mandatory
Defines the name of the project that will be created.
"ANALYTICS", -w "ANALYTICS"
mandatory
The ID of the wizard used to create the project.
"MyGroup"
optional, default: ''
Defines the group that the project belongs to. Projects from the same group are displayed together in the project portfolios and the group can optionally be rated as a whole. Note that you can specify subgroups by adding a / in your group name: --group="prototype/phase1" will create a phase1 group under a prototype group.
"rgb(130,196,240)"
optional, default: randomly assigned
Defines the colour used to identify the project in the Squore user interface after its creation. The numbers define the values for red, green and blue respectively. Note that if you do not specify a colour on the command line, a random colour will be picked.
"true|false", -b "true|false"
optional, default: true
Instructs Squore CLI to build a baseline version that will not be overwritten by a subsequent analysis. When set to false, every analysis overwrites the previous one, until a new baseline is created. If not set, this parameter defaults to true.
"true|false"
optional, default: false
Instructs Squore to keep or discard analysis files from old versions. Note that this behaviour only affects disk space on the server, not the analysis results.
"V1", -v "V1"
optional, default: null
Defines the label used for the version of this project. If not specified, the version pattern parameter is used to generate the version label instead.
"V#.N#"
optional, default: null
Defines the pattern used to label the version automatically if no version parameter was passed.
The versionPattern parameter allows specifying a pattern to create the version name automatically for every analysis. It supports the following syntax:
#N#: A number that is automatically incremented
#Nn#: A number that is automatically incremented using n digits
#Y2#: The current year in 2-digit format
#Y4#: The current year in 4-digit format
#M#: The current month in two digit format
#D#: The current day in two digit format
#H#: The current hour in 24 hour format
#MN#: The current minute in two digit format
#S#: The current second in two digit format
Any character other than # is allowed in the pattern. As an example, if you want to
produce versions labelled build-198.2013-07-28_13h07m
(where 198 is an auto-incremented number and the date and time are the
timestamp of the project creation), you would use the pattern: build-#N3#.#Y4#-#M#-#D#_#H#h#MN#m
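As an illustration only (not the Squore implementation), the substitution performed on this pattern can be sketched in shell, with the counter value and the timestamp fixed to the example above:

```shell
# Illustrative expansion of build-#N3#.#Y4#-#M#-#D#_#H#h#MN#m
# with fixed example values; Squore maintains the auto-incremented
# counter and uses the project creation timestamp itself.
N3=198; Y4=2013; M=07; D=28; H=13; MN=07
echo "build-${N3}.${Y4}-${M}-${D}_${H}h${MN}m"
```

This prints build-198.2013-07-28_13h07m, matching the example version label above.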
"YYYY-MM-DDTHH:MM:SS"
optional, default: actual analysis time
Allows specifying a date for the version that is different from the current date. This is useful when the charts on your dashboard have axes or intervals that show dates instead of version names. Note that for every new analysis, the date must be after the date of the previous analysis.
"mike,DEVELOPER;john,TESTER;peter,PROJECT_MANAGER"
,
-q "mike,DEVELOPER;john,TESTER;peter,PROJECT_MANAGER"
optional, default: ''
This semicolon-separated list of login,roleID
pairs is used to define a list of users who will be able to access the project when it is created.
Note that this option is taken into account when creating a new project but is ignored when creating a new version. In order to edit the list of users in a project team, you must use the Squore web interface.
Refer to the list of available roleIDs in Squore by clicking Administration > Roles. This option can be combined with the teamGroup parameter if needed.
"devUsers,DEVELOPER;management,GUEST"
,
-g "devUsers,DEVELOPER;management,GUEST"
optional, default: ''
This semicolon-separated list of group,roleID
pairs is used to define a list of groups who will be able to access the project when it is created.
Note that this option is taken into account when creating a new project but is ignored when creating a new version. In order to edit the list of groups in a project team, you must use the Squore web interface.
Refer to the list of available roleIDs in Squore by clicking Administration > Roles. This option can be combined with the teamUser parameter if needed.
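As a side note, the pair format shared by the teamUser and teamGroup options can be split with standard shell tools; a minimal sketch using the example values above:

```shell
# Split a teamUser-style value into its login,roleID pairs (example values)
TEAM_USER="mike,DEVELOPER;john,TESTER;peter,PROJECT_MANAGER"
IFS=';'
for pair in $TEAM_USER; do
  login=${pair%%,*}   # text before the first comma
  role=${pair#*,}     # text after the first comma
  echo "$login has role $role"
done
```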
"my template"
optional, default: null
The name of the ruleset template created in the Analysis Model Editor that should be used for this analysis. For more information about ruleset templates, consult the Getting Started Guide.
"TAGNAME=tagValue"
,
-t TAGNAME="tagValue"
optional, multiple
If the wizard allows tags (i.e. project attributes), then use this parameter to inform the CLI of the tag values to use for this project.
"type=REPOTYPE,opt1=value1,opt2=value2"
,
-r "type=REPOTYPE,opt1=value1,opt2=value2"
optional, multiple
Used to specify repositories for sources. For more information about repository syntax, refer to Chapter 4, Repository Connectors. When using multiple source code repositories, each one must have an alias=NodeName parameter that is used to create a folder containing the source code for the repository in the Artefact Tree.
"type=DPName,dp_opt=dp_opt_value"
,
-d "type=DPName,dp_opt=dp_opt_value"
optional, multiple
Used to specify information for Data Providers. For more information about individual Data Provider syntax, refer to Chapter 5, Data Providers.
"FILTER_OPTS"
,
-f "FILTER_OPTS"
optional, default: ''
This parameter is a semicolon-separated string of {artefactType,filterType,filterValue} triplets. For example, in order to export the measure LC, the info DESCR and the indicator level MAIN at application level, pass -f "APPLICATION,MEASURE,LC;APPLICATION,INFO,DESCR;APPLICATION,INDICATOR_LEVEL,MAIN;".
The artefact type ALL_TYPES and the filter types ALL_DEFECT_REPORTS, ALL_MEASURES, ALL_INDICATORS_LEVELS, ALL_INDICATORS_RANKS, ALL_BASE_FINDINGS, ALL_BASE_LINKS, ALL_COMPUTED_LINKS and ALL_INFOS (new in 18.0) can also be used, followed by an empty filter value. In order to export all measures at application level in the output file, pass the parameter --filter="APPLICATION,ALL_MEASURES,;". In order to export all indicators for all artefact types in the output file, pass the parameter --filter="ALL_TYPES,ALL_INDICATORS_LEVELS,;"
"/path/to/project/data"
,
-ez "/path/to/project/data"
optional, default: null
This parameter can be used to import a debug zip file as a new project. When this parameter is used, a new project is created using the parameters in the conf.xml
file inside the debug package, with any other parameter passed on the command line overriding the ones from the configuration file. Instead of a debug zip file, you can pass an absolute path to a project data folder to launch an analysis of all the version folders contained in that folder.
When using this option, you should pass --strictMode="false" (-S false) to disable some internal data integrity checks, and change the project owner if needed using --owner="admin" (-O "admin") to set the new project owner (requires admin privileges).
This functionality is mostly used to test how a project is rated in a different model. However, it is not recommended for use in production, since it will not replicate the following data from the old project to the new project:
All comments and discussion threads
Action Item statuses
History of changes in forms and relaxation comments
Relaxations and exclusions statuses of artefacts and findings
"id=BETA_RELEASE,date=2015/05/31,PROGRESS=95"
optional, multiple
Allows you to define a milestone in the project. This parameter accepts a date and a series of metrics with their values to specify the goals for this milestone. Note that this parameter allows you to add milestones or modify existing ones (if the ID provided already exists), but removing a milestone from a project can only be done from the web interface.
The rest of the parameters that you will pass to the Engine to create projects are specific to Repository Connectors and Data Providers and are detailed respectively in the Chapter 4, Repository Connectors and Chapter 5, Data Providers.
After a successful or unsuccessful run, the CLI returns an exit code from this list:
OK - The operation completed successfully.
Client creation error - There was an error launching the client process.
Configuration error - This could be due to an unreachable configuration file or a parameter set to an invalid value.
Problem while launching one of the commands - One of the commands failed to complete successfully. The console should provide information about what exactly failed.
Engine error - The client failed to launch the analysis. More details about this error are available in the client console and in the server logs.
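In an automation script, a failed run can be caught by checking the exit code after the CLI returns. A minimal sketch, with the real CLI invocation replaced here by a stub that exits with code 2:

```shell
# Stand-in for the real Squore CLI launch; replace with your actual command.
sh -c 'exit 2'
rc=$?
case $rc in
  0) echo "analysis completed successfully" ;;
  *) echo "CLI failed with exit code $rc" >&2 ;;
esac
```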
Table of Contents
The simplest method to analyse source code in Squore is to provide a path to a folder containing your code.
Remember that the path supplied for the analysis is a path local to the machine running the analysis, which may be different from your
local machine. If you analyse source code on your local machine and then send results to the server, you will not be able to view the
source code directly in Squore, since it will not have access to the source code on the other machine. A common workaround to
this problem is to use UNC paths (\\Server\Share
, smb://server/share
...) or a mapped server drive
in Windows.
Folder Path has the following options:
Datapath (path, mandatory) Specify the absolute path to the folder containing the files you want to include in the analysis. The path specified must be accessible from the server.
The full command line syntax for Folder Path is:
-r "type=FROMPATH,path=[text]"
This Repository Connector allows you to upload a zip file containing your sources to analyse. Select a file to upload in the project wizard and it will be extracted and analysed on the server.
The contents of the zip file are extracted into Squore Server's temp folder. If you want uploaded files to persist, contact your Squore administrator so that the uploaded zip files and extracted sources are moved to a location that is not deleted at each server restart.
The Concurrent Versions System (CVS) is a client-server free software revision control system used in the field of software development.
For more details, refer to http://savannah.nongnu.org/projects/cvs.
The following is a list of commands used by the CVS Repository Connector to retrieve sources:
cvs -d $repository export [-r $branch] $project
cvs -d $repository co -r $artefactPath -d $tmpFolder
CVS has the following options:
The full command line syntax for CVS is:
-r "type=CVS,repository=[text],project=[text],branch=[text]"
IBM Rational ClearCase is a software configuration management solution that provides version control, workspace management, parallel development support, and build auditing. The command executed on the server to check out source code is: $cleartool $view_root_path $view $vob_root_path.
For more details, refer to http://www-03.ibm.com/software/products/en/clearcase.
The ClearCase tool is configured for Linux by default. It is possible to make it work for Windows by editing the configuration file
ClearCase has the following options:
View root path ( view_root_path
, mandatory, default: /view) Specify the absolute path of the ClearCase view.
Vob Root Path ( vob_root_path
, mandatory, default: /projets) Specify the absolute path of the ClearCase vob.
View ( view
) Specify the label of the view to analyse sources from. If no view is specified, the current ClearCase view will be used automatically, as retrieved by the command cleartool pwv -s.
Server Display View ( server_display_view
) When viewing source code from the Explorer after building the project, this parameter is used instead of the view parameter specified earlier. Leave this field empty to use the same value as for view.
Sources Path ( sub_path
) Specify a path in the view to restrict the scope of the source code to analyse. The value of this field must not contain the vob nor the view. Leave this field empty to analyse the code in the entire view. This parameter is only necessary if you want to restrict to a directory lower than root.
The full command line syntax for ClearCase is:
-r "type=ClearCase,view_root_path=[text],vob_root_path=[text],view=[text],server_display_view=[text],sub_path=[text]"
The Perforce server manages a central database and a master repository of file versions. Perforce supports both Git clients and clients that use Perforce's own protocol.
For more details, refer to http://www.perforce.com/.
The Perforce repository connector assumes that the specified depot exists on the specified Perforce server, that Squore can access this depot and that the Perforce user defined has the right to access it. The host where the analysis takes place must have a Perforce command-line client (p4) installed and fully functional. The P4PORT environment variable is not read by Squore. You have to set it in the form. The path to the p4 command can be configured in the perforce_conf.tcl file located in the configuration/repositoryConnectors/Perforce folder. The following is a list of commands used by the Perforce Repository Connector to retrieve sources:
p4 -p $p4port [-u username] [-P password] client -i <$tmpFolder/p4conf.txt
p4 -p $p4port [-u username] [-P password] -c $clientName sync "$depot/...@$label"
p4 -p $p4port [-u username] [-P password] client -d $clientName
p4 -p $p4port [-u username] [-P password] print -q -o $outputFile $artefactPath
The format of the p4conf.txt file is:
Client: $clientName
Root: $tmpFolder
Options: noallwrite noclobber nocompress unlocked nomodtime normdir
SubmitOptions: submitunchanged
view:
    $depot/... //$clientName/...
Perforce has the following options:
P4PORT ( p4port
, mandatory) Specify the value of P4PORT using the format [protocol:]host:port (the protocol is optional). This parameter is necessary even if you have specified an environment variable on the machine where the analysis is running.
Depot ( depot
, mandatory) Specify the name of the depot (and optionally subfolders) containing the sources to be analysed.
Revision ( label
) Specify a label, changelist or date to retrieve the corresponding revision of the sources. Leave this field empty to analyse the most recent revision of the sources.
Authentication ( useAccountCredentials
, default: NO_CREDENTIALS)
The full command line syntax for Perforce is:
-r "type=Perforce,p4port=[text],depot=[text],label=[text],useAccountCredentials=[multipleChoice],username=[text],password=[password]"
Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
For more details, refer to http://git-scm.com/.
The following is a list of commands used by the Git Repository Connector to retrieve sources:
git clone [$username:$password@]$url $tmpFolder
git checkout $commit
git log -1 "--format=%H"
git config --get remote.origin.url
git clone [$username:$password@]$url $tmpFolder
git checkout $commit
git fetch
git --git-dir=$gitRoot show $artefactPath
Git 1.7.1 is known to fail with a fatal: HTTP request failed
error on CentOS 6.9. For this OS, it is recommended to upgrade to git 2.9 as provided by software collections on https://www.softwarecollections.org/en/scls/rhscl/rh-git29/ and point to the new binary in git_config.tcl
or make the change permanent as described on https://access.redhat.com/solutions/527703.
Git has the following options:
URL ( url
, mandatory) URL of the git repository to get files from. The local, HTTP(s), SSH and Git protocols are supported.
Branch or commit ( commit
) This field allows specifying the SHA1 of a commit or a branch name. If a SHA1 is specified, it will be retrieved from the default branch. If a branch label is specified, then its latest commit is analysed. Leave this field empty to analyse the latest commit of the default branch.
Sub-directory ( subDir
) Specify a subfolder name if you want to restrict the analysis to a subpath of the repository root.
Authentication ( useAccountCredentials
, default: NO_CREDENTIALS)
The full command line syntax for Git is:
-r "type=Git,url=[text],commit=[text],subDir=[text],useAccountCredentials=[multipleChoice],username=[text],password=[password]"
This Repository Connector allows analysing sources hosted in PTC Integrity, a software system lifecycle management and application lifecycle management platform developed by PTC.
For more details, refer to http://www.ptc.com/products/integrity/.
You can modify some of the settings of this repository connector if the si.exe and mksAPIViewer.exe binaries are not in your path. For versions that do not support the --xmlapi option, you can also turn off this method of retrieving file information. These settings are available by editing mks_conf.tcl in the repository connector's configuration folder.
PTC Integrity has the following options:
Server Hostname ( hostname
, mandatory) Specify the name of the Integrity server. This value is passed to the command line using the parameter --hostname.
Port ( port
) Specify the port used to connect to the Integrity server. This value is passed to the command line using the parameter --port.
Project ( project
) Specify the name of the project containing the sources to be analysed. This value is passed to the command line using the --project parameter.
Revision ( revision
) Specify the revision number for the sources to be analysed. This value is passed to the command line using the --projectRevision parameter.
Scope ( scope
, default: name:*.c,name:*.h) Specifies the scope (filter) for the Integrity sandbox extraction. This value is passed to the command line using the --scope parameter.
Authentication ( useAccountCredentials
, default: NO_CREDENTIALS)
The full command line syntax for PTC Integrity is:
-r "type=MKS,hostname=[text],port=[text],project=[text],revision=[text],scope=[text],useAccountCredentials=[multipleChoice],username=[text],password=[password]"
Team Foundation Server (TFS) is a Microsoft product which provides source code management, reporting, requirements management, project management, automated builds, lab management, testing and release management capabilities. This Repository Connector provides access to the sources hosted in TFS's revision control system.
For more details, refer to https://www.visualstudio.com/products/tfs-overview-vs.
The TFS repository connector (Team Foundation Server - Team Foundation Version Control) assumes that a TFS command-line client (Visual Studio Client or Team Explorer Everywhere) is installed on the Squore server and fully functional. The configuration of this client must be set up in the tfs_conf.tcl file. The repository connector form must be filled in according to the TFS standard (e.g. the Project Path must begin with the '$' character). Note that this repository connector works with a temporary workspace that is deleted at the end of the analysis. The following is a list of commands used by the TFS Repository Connector to retrieve sources:
tf.exe workspace [/login:$username,$password] /server:$url /noprompt /new $workspace
tf.exe workfold [/login:$username,$password] /map $path $tempFolder /workspace:$workspace
tf.exe get [/login:$username,$password] /version:$version /recursive /force $path
tf.exe workspace [/login:$username,$password] /delete $workspace
tf.exe view [/login:$username,$password] /server:$artefactPath
When using the Java Team Explorer Everywhere client, / is replaced by - and the view command is replaced by print.
TFS has the following options:
Path ( path
, mandatory) Path of the project to be analysed. This path usually starts with $.
Version ( version
) Specify the version of the sources to analyse. This field accepts a changeset number, date, or label. Leave the field empty to analyse the most recent revision of the sources.
Authentication ( useAccountCredentials
, default: NO_CREDENTIALS)
The full command line syntax for TFS is:
-r "type=TFS,URL=[text],path=[text],version=[text],useAccountCredentials=[multipleChoice],username=[text],password=[password]"
Rational Synergy is a software tool that provides software configuration management (SCM) capabilities for all artifacts related to software development including source code, documents and images as well as the final built software executable and libraries.
For more details, refer to http://www-03.ibm.com/software/products/en/ratisyne.
The Synergy repository connector assumes that a project already exists and that the Synergy user defined has the right to access it. The host where the analysis takes place must have Synergy installed and fully functional. Note that, as stated in IBM's documentation on http://pic.dhe.ibm.com/infocenter/synhelp/v7m2r0/index.jsp?topic=%2Fcom.ibm.rational.synergy.manage.doc%2Ftopics%2Fsc_t_h_start_cli_session.html, using credentials is only supported on Windows, so use the NO_CREDENTIALS option when Synergy runs on a Linux host. The following is a list of commands used by the Synergy Repository Connector to retrieve sources:
ccm start -d $db -nogui -m -q [-s $server] [-pw $password] [-n $user -pw password]
ccm prop "$path@$projectSpec"
ccm copy_to_file_system -path $tempFolder -recurse $projectSpec
ccm cat "$artefactPath@$projectSpec"
ccm stop
Synergy has the following options:
Server URL ( server
) Specify the Synergy server URL, if using a remote server. If specified, the value is used by the Synergy client via the -s parameter.
Database ( db
, mandatory) Specify the database path to analyse the sources it contains.
Project Specification ( projectSpec
, mandatory) Specify the project specification for the analysis. Source code contained in this project specification will be analysed recursively.
Subfolder ( subFolder
) Specify a subfolder name if you want to restrict the scope of the analysis to a particular folder.
Authentication: ( useAccountCredentials
, default: NO_CREDENTIALS) Note that, as stated in IBM's documentation, using credentials is only supported on Windows. The "No Credentials" option must be used when Synergy runs on a Linux host. For more information, consult http://pic.dhe.ibm.com/infocenter/synhelp/v7m2r0/index.jsp?topic=%2Fcom.ibm.rational.synergy.manage.doc%2Ftopics%2Fsc_t_h_start_cli_session.html.
The full command line syntax for Synergy is:
-r "type=Synergy,server=[text],db=[text],projectSpec=[text],subFolder=[text],useAccountCredentials=[multipleChoice],name=[text],password=[password]"
Connecting to an SVN server is supported using svn over ssh, or by using a username and password.
For more details, refer to https://subversion.apache.org/.
The following is a list of commands used by the SVN Repository Connector to retrieve sources (you can edit the common command base or the path to the executable in <SQUORE_HOME>/configuration/repositoryConnectors/SVN/svn_conf.tcl
if needed):
svn info --xml --non-interactive --trust-server-cert --no-auth-cache [--username $username] [--password $password] [-r $revision] $url
svn export --force --non-interactive --trust-server-cert --no-auth-cache [--username $username] [--password $password] [-r $revision] $url
This Repository Connector now includes a hybrid SVN mode that saves you an extra checkout of your source tree when using the local_path
attribute (new in 18.0). Consult the reference below for more details.
SVN has the following options:
URL ( url
, mandatory) Specify the URL of the SVN repository to export and analyse. The following protocols are supported: svn://, svn+ssh://, http://, https://.
Revision ( rev
) Specify a revision number in this field, or leave it blank to analyse files at the HEAD revision.
External references ( externals
, default: exclude) Specify whether the system should also extract external references when extracting sources from SVN.
Sources are already extracted in ( local_path
) Specify a path to a folder where the sources have already been extracted. When using this option, sources are analysed in the specified folder instead of being checked out from SVN. At the end of the analysis, the url and revision numbers are attached to the analysed sources, so that any source code access from the web interface always retrieves files from SVN. This mode is mostly used to save an extra checkout in some continuous integration scenarios.
Authentication ( useAccountCredentials
, default: NO_CREDENTIALS)
The full command line syntax for SVN is:
-r "type=SVN,url=[text],rev=[text],externals=[multipleChoice],local_path=[text],useAccountCredentials=[multipleChoice],username=[text],password=[password]"
Retrieves sources from a folder on the server, using GNAThub to limit the files (compatible with GNAT Pro versions 7.4.2 up to 18.2).
This Repository Connector will only be available after you configure your server or client config.xml with the path to your gnathub executable, using a <path name="gnathub" path="C:\tools\GNAThub\gnathub.exe" /> definition. Consult the Configuration Manual for more information about referencing external executables.
Folder (use GNATHub) has the following options:
Path of the source files ( path
) Specify the absolute path to the files you want to include in the analysis. The path specified must be accessible from the server.
Path of the gnathub.db file ( gnatdb
) Specify the absolute path of the gnathub.db file.
Root path for sources in the GNAT DB ( gnat_root
) Specify the root path for sources in the GNAT DB.
The full command line syntax for Folder (use GNATHub) is:
-r "type=GNAThub,path=[text],gnatdb=[text],gnat_root=[text]"
Squore allows using multiple repositories in the same analysis. If your project consists of some code that is spread over two distinct servers or SVN repositories, you can set up your project so that it includes both locations in the project analysis. This is done by labelling each source code node before specifying parameters, as shown below:
-r "type=FROMPATH,alias=Node1,path=/home/projects/client-code" -r "type=FROMPATH,alias=Node2,path=/home/projects/common/lib"
Note that only alpha-numeric characters are allowed to be used as labels. In the artefact tree, each node will appear as a separate top-level folder with the label provided at project creation.
Using multiple nodes, you can also analyse sources using different Repository Connectors in the same analysis:
-r "type=FROMPATH,alias=Node1,path=/home/projects/common-config" -r "type=SVN,alias=Node2,url=svn+ssh://10.10.0.1/var/svn/project/src,rev=HEAD"
Table of Contents
This chapter describes the available Data Providers and the default parameters that they accept via the Command Line Interface.
AntiC is a part of the jlint static analysis suite and is launched to analyse C and C++ source code and produce findings.
For more details, refer to http://jlint.sourceforge.net/.
On Linux, the antiC executable must be compiled manually before you run it for the first time by running the command:
# cd <SQUORE_HOME>/addons/tools/Antic_auto/bin/ && gcc antic.c -o antic
Automotive Coverage Import provides a generic import mechanism for coverage results at function level.
Automotive Coverage Import has the following options:
The full command line syntax for Automotive Coverage Import is:
-d "type=Automotive_Coverage,csv=[text]"
BullseyeCoverage is a code coverage analyzer for C++ and C. The coverage report file is used to generate metrics.
For more details, refer to http://www.bullseye.com/.
CPD is an open source tool which generates Copy/Paste metrics. The detection of duplicated blocks is set to 100 tokens. CPD provides an XML file which can be imported to generate metrics as well as findings.
For more details, refer to http://pmd.sourceforge.net/pmd-5.3.0/usage/cpd-usage.html.
Cppcheck is a static analysis tool for C/C++ applications. The tool provides an XML output which can be imported to generate findings.
For more details, refer to http://cppcheck.sourceforge.net/.
Cppcheck is a static analysis tool for C/C++ applications. The tool provides an XML output which can be imported to generate findings.
For more details, refer to http://cppcheck.sourceforge.net/.
On Windows, this data provider requires an extra download to extract the Cppcheck binary in <SQUORE_HOME>/addons/tools/CPPCheck_auto/
and the MS Visual C++ 2010 Redistributable Package available from Microsoft. On Linux, you can install the cppcheck application anywhere you want. The path to the Cppcheck binary for Linux can be configured in config.tcl. For more information, refer to the Installation and Administration Guide's Third-Party Plugins and Applications section.
Cppcheck (plugin) has the following options:
Source code folder ( dir
) Specify the folder containing the source files to analyse. If you want to analyse all of the source repositories specified for the project, leave this field empty.
Ignore List ( ignores
) Specify a semicolon-separated list of source files or source file directories to exclude from the check. For example: "lib/;folder2/". Leave this field empty to deactivate this option and analyse all files with no exception.
The full command line syntax for Cppcheck (plugin) is:
-d "type=CPPCheck_auto,dir=[text],ignores=[text]"
Parasoft C/C++test is an integrated solution for automating a broad range of best practices proven to improve software development team productivity and software quality for C and C++. The tool provides an XML output file which can be imported to generate findings and metrics.
For more details, refer to http://www.parasoft.com/product/cpptest/.
Cantata is a Test Coverage tool. It provides an XML output file which can be imported to generate coverage metrics at function level.
For more details, refer to http://www.qa-systems.com/cantata.html.
CheckStyle is an open source tool that verifies that Java applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.
For more details, refer to http://checkstyle.sourceforge.net/.
CheckStyle is an open source tool that verifies that Java applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.
For more details, refer to http://checkstyle.sourceforge.net/.
This data provider requires an extra download to extract the CheckStyle binary in <SQUORE_HOME>/addons/tools/CheckStyle_auto/
. For more information, refer to the Installation and Administration Guide's Third-Party Plugins and Applications section. You may also deploy your own version of CheckStyle and force the Data Provider to use it by editing <SQUORE_HOME>/configuration/tools/CheckStyle_auto/config.tcl
.
CheckStyle (plugin) has the following options:
Configuration file ( configFile
) A Checkstyle configuration specifies which modules to plug in and apply to Java source files. Modules are structured in a tree whose root is the Checker module. Specify the name of the configuration file only, and the data provider will try to find it in the CheckStyle_auto folder of your custom configuration. If no custom configuration file is found, a default configuration will be used.
Xmx ( xmx
, default: 1024m) Maximum amount of memory allocated to the java process launching Checkstyle.
Excluded directory pattern ( excludedDirectoryPattern
) Java regular expression of directories to exclude from CheckStyle, for example: ^test|generated-sources|.*-report$ or ^lib$
The full command line syntax for CheckStyle (plugin) is:
-d "type=CheckStyle_auto,configFile=[text],xmx=[text],excludedDirectoryPattern=[text]"
CheckStyle is an open source tool that verifies that Java applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.
For more details, refer to http://checkstyle.sourceforge.net/.
This data provider requires an extra download to extract the CheckStyle binary in <SQUORE_HOME>/addons/tools/CheckStyle_auto_for_SQALE/
. For more information, refer to the Installation and Administration Guide's Third-Party Plugins and Applications section.
CheckStyle for SQALE (plugin) has the following options:
Configuration file ( configFile
, default: config_checkstyle_for_sqale.xml) A Checkstyle configuration specifies which modules to plug in and apply to Java source files. Modules are structured in a tree whose root is the Checker module. Specify the name of the configuration file only, and the data provider will try to find it in the CheckStyle_auto folder of your custom configuration. If no custom configuration file is found, a default configuration will be used.
Xmx ( xmx
, default: 1024m) Maximum amount of memory allocated to the java process launching Checkstyle.
The full command line syntax for CheckStyle for SQALE (plugin) is:
-d "type=CheckStyle_auto_for_SQALE,configFile=[text],xmx=[text]"
Cobertura is a free code coverage library for Java. Its XML report file can be imported to generate code coverage metrics for your Java project.
For more details, refer to http://cobertura.github.io/cobertura/.
Codesonar is a static analysis tool for C and C++ code designed for zero tolerance defect environments. It provides an XML output file which is imported to generate findings.
For more details, refer to http://www.grammatech.com/codesonar.
Compiler has the following options:
The full command line syntax for Compiler is:
-d "type=Compiler,txt=[text]"
Coverity is a static analysis tool for C, C++, Java and C#. It provides an XML output which can be imported to generate findings.
For more details, refer to http://www.coverity.com/.
ESLint is an open source tool that verifies that JavaScript applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.
For more details, refer to https://eslint.org/.
Findbugs is an open source tool that looks for bugs in Java code. It produces an XML result file which can be imported to generate findings.
For more details, refer to http://findbugs.sourceforge.net/.
Findbugs is an open source tool that looks for bugs in Java code. It produces an XML result file which can be imported to generate findings. You are free to use FindBugs 3.0 or FindBugs 2.0, depending on what your standard is.
For more details, refer to http://findbugs.sourceforge.net/.
This data provider requires an extra download to extract the Findbugs binary in <SQUORE_HOME>/addons/tools/Findbugs_auto/
. For more information, refer to the Installation and Administration Guide's Third-Party Plugins and Applications section.
FindBugs (plugin) has the following options:
Classes ( class_dir
, mandatory) Specify the folders and/or jar files for your project in classpath format, or point to a text file that contains one folder or jar file per line.
Auxiliary Class path ( auxiliarypath
) Specify a list of folders and/or jars in classpath format, or specify the path to a text file that contains one folder or jar per line. This information will be passed to FindBugs via the -auxclasspath parameter.
Memory Allocation ( xmx
, default: 1024m) Maximum amount of memory allocated to the java process launching FindBugs.
The full command line syntax for FindBugs (plugin) is:
-d "type=Findbugs_auto,class_dir=[text],auxiliarypath=[text],xmx=[text]"
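A sketch of the syntax above with hypothetical values filled in (a compiled-classes folder for class_dir and a text file listing third-party jars for auxiliarypath, which is passed to FindBugs via -auxclasspath):

```shell
# Hypothetical project layout: classes in build/classes, jars listed
# one per line in libs.txt; xmx raises the JVM heap.
echo "-d \"type=Findbugs_auto,class_dir=build/classes,auxiliarypath=libs.txt,xmx=2048m\""
```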
FxCop is an application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements. FxCop generates an XML results file which can be imported to generate findings.
For more details, refer to https://msdn.microsoft.com/en-us/library/bb429476(v=vs.80).aspx.
GCov is a code coverage program for C applications. GCov generates raw text files which can be imported to generate metrics.
For more details, refer to http://gcc.gnu.org/onlinedocs/gcc/Gcov.html.
GCov has the following options:
The full command line syntax for GCov is:
-d "type=GCov,dir=[text],ext=[text]"
GNATcheck is an extensible rule-based tool that allows developers to completely define a coding standard. The results are output to a log file or an xml file that can be imported to generate findings.
For more details, refer to http://www.adacore.com/gnatpro/toolsuite/gnatcheck/.
GNATCompiler is a free-software compiler for the Ada programming language which forms part of the GNU Compiler Collection. It supports all versions of the language, i.e. Ada 2012, Ada 2005, Ada 95 and Ada 83. It creates a log file that can be imported to generate findings.
For more details, refer to http://www.adacore.com/gnatpro/toolsuite/compilation/.
JSHint is an open source tool that verifies that JavaScript applications adhere to certain coding standards. It produces an XML file which can be imported to generate findings.
For more details, refer to http://jshint.com/.
JUnit is a simple framework to write repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks. JUnit XML result files are imported as test artefacts and links to tested classes are generated in the project.
For more details, refer to http://junit.org/.
JUnit Format has the following options:
Results folder ( resultDir
, mandatory) Specify the path to the folder containing the JUnit results (or results produced by any tool able to write data in this format). The data provider parses subfolders recursively. Note that the minimum supported version of JUnit is 4.10.
File Pattern ( filePattern
, mandatory, default: TEST-*.xml) Specify the pattern for files to read reports from.
Root Artefact ( root
, mandatory, default: tests[type=TEST_FOLDER]/junit[type=TEST_FOLDER]) Specify the name and type of the artefact under which the test artefacts will be created.
The full command line syntax for JUnit Format is:
-d "type=JUnit,resultDir=[text],filePattern=[text],root=[text]"
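For instance, with a hypothetical results folder and the documented defaults for filePattern and root, the flag looks like this:

```shell
# Hypothetical result location (e.g. a Gradle/Maven output folder);
# filePattern and root keep the defaults documented above.
echo "-d \"type=JUnit,resultDir=build/test-results,filePattern=TEST-*.xml,root=tests[type=TEST_FOLDER]/junit[type=TEST_FOLDER]\""
```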
JaCoCo is a free code coverage library for Java. Its XML report file can be imported to generate code coverage metrics for your Java project.
For more details, refer to http://www.eclemma.org/jacoco/.
JaCoCo has the following options:
The full command line syntax for JaCoCo is:
-d "type=Jacoco,xml=[text]"
Klocwork is a static analysis tool. Its XML result file can be imported to generate findings.
For more details, refer to http://www.klocwork.com.
The Logiscope suite allows the evaluation of source code quality in order to reduce maintenance cost, error correction or test effort. It can be applied to verify C, C++, Java and Ada languages and produces a CSV results file that can be imported to generate findings.
For more details, refer to http://www.kalimetrix.com/en/logiscope.
MS-Test automates the process of testing Windows applications. It combines a Windows development language, Basic, with a testing-oriented API.
For more details, refer to https://en.wikipedia.org/wiki/Visual_Test.
MSTest has the following options:
The full command line syntax for MSTest is:
-d "type=MSTest,resultDir=[text],filePattern=[text]"
NCover is a code coverage program for C# applications. NCover generates an XML results file which can be imported to generate metrics.
For more details, refer to http://www.ncover.com/.
This data provider reads an Oracle compiler log file and imports the warnings as findings. Findings extracted from the log file are filtered using a prefix parameter.
For more details, refer to http://www.oracle.com/.
Oracle PLSQL compiler Warning checker has the following options:
Prefixes ( prefix
) Prefixes and their replacements are specified as pairs using the syntax [prefix1|node1;prefix2|node2]. Leave this field empty to disable filtering.
The parsing algorithm looks for lines fitting this pattern:
[PATH;SCHEMA;ARTE_ID;ARTE_TYPE;LINE;COL;SEVERITY_TYPE;WARNING_ID;SEVERITY_ID;DESCR] and keeps lines where [PATH] begins with one of the input prefixes. In each kept [PATH], [prefix] is replaced by [node]. If [node] is empty, [prefix] is removed from [PATH], but not replaced. Some valid syntaxes for prefix:
One prefix to remove: svn://aaaa:12345/valid/path/from/svn
One prefix to replace: svn://aaaa:12345/valid/path/from/svn|node1
Two prefixes to remove (explicit empty replacement): svn://aaaa:12345/valid/path/from/svn|;svn://bbbb:12345/valid/path/from/other_svn|
Two prefixes to remove (shorthand): svn://aaaa:12345/valid/path/from/svn;svn://bbbb:12345/valid/path/from/other_svn
Two prefixes to replace: svn://aaaa:12345/valid/path/from/svn|node1;svn://bbbb:12345/valid/path/from/other_svn|node2
The full command line syntax for Oracle PLSQL compiler Warning checker is:
-d "type=Oracle_PLSQLCompiler,log=[text],prefix=[text]"
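Reusing the prefix examples given above (the log file name is hypothetical), an invocation replacing two prefixes with nodes assembles as:

```shell
# Two prefix|node pairs, separated by ";", as described above.
PREFIX="svn://aaaa:12345/valid/path/from/svn|node1;svn://bbbb:12345/valid/path/from/other_svn|node2"
echo "-d \"type=Oracle_PLSQLCompiler,log=compiler.log,prefix=${PREFIX}\""
```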
PC-lint is a static code analyser. The PC-lint data provider reads a PC-lint log file and imports MISRA violations as findings.
For more details, refer to http://www.gimpel.com/html/pcl.htm.
MISRA Rule Checking using PC-lint has the following options:
The full command line syntax for MISRA Rule Checking using PC-lint is:
-d "type=PC_Lint_MISRA,logDir=[text],excludedExtensions=[text]"
PMD scans Java source code and looks for potential problems like possible bugs, dead code, sub-optimal code, overcomplicated expressions, duplicate code... The XML results file it generates is read to create findings.
For more details, refer to http://pmd.sourceforge.net.
PMD scans Java source code and looks for potential problems like possible bugs, dead code, sub-optimal code, overcomplicated expressions and duplicate code. The XML results file it generates is read to create findings.
For more details, refer to http://pmd.sourceforge.net.
This data provider requires an extra download to extract the PMD binary in <SQUORE_HOME>/addons/tools/PMD_auto/
. For more information, refer to the Installation and Administration Guide's Third-Party Plugins and Applications section. You may also deploy your own version of PMD and force the Data Provider to use it by editing <SQUORE_HOME>/configuration/tools/PMD_auto/config.tcl
.
PMD (plugin) has the following options:
The full command line syntax for PMD (plugin) is:
-d "type=PMD_auto,configFile=[text]"
Polyspace is a static analysis tool which includes a MISRA checker. It produces an XML output which can be imported to generate findings. Polyspace Verifier detects runtime errors (RTE) such as division by zero, illegal pointer dereference or out-of-bound array index. This information is turned into statistical measures at function level: number of Red (justified/non-justified), number of Grey (justified/non-justified), number of Orange (justified/non-justified) and number of Green.
For more details, refer to http://www.mathworks.com/products/polyspace/index.html.
Polyspace has the following options:
DocBook results file ( xml
) Specify the path to the DocBook file (the main XML file) generated by Polyspace.
Ignore source file path ( ignoreSourceFilePath
, default: false) Removes all path elements when mapping files in the Squore project to files in the Polyspace report. Be careful: this only works if file names in the Squore project are unique.
The full command line syntax for Polyspace is:
-d "type=Polyspace,xml=[text],ignoreSourceFilePath=[booleanChoice]"
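With hypothetical values, the flag assembles as follows; note that ignoreSourceFilePath=true is only safe when file names in the Squore project are unique, as explained above:

```shell
# Hypothetical report path; boolean option spelled out explicitly.
echo "-d \"type=Polyspace,xml=reports/polyspace_main.xml,ignoreSourceFilePath=true\""
```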
QAC identifies problems in C source code caused by language usage that is dangerous, overly complex, non-portable, difficult to maintain, or simply diverges from coding standards. Its CSV results file can be imported to generate findings.
For more details, refer to http://www.phaedsys.com/principals/programmingresearch/pr-qac.html.
MISRA Rule Checking with QAC has the following options:
Code Folder ( logDir
) Specify the path to the folder that contains the annotated files to process.
For the findings to be successfully linked to their corresponding artefact, several requirements have to be met:
- The annotated file name should be [Original source file name].txt
e.g. The annotation of file "controller.c" should be called "controller.c.txt"
- The annotated file location in the annotated directory should match the associated source file location in the source directory.
e.g. The annotation for source file "[SOURCE_DIR]/subDir1/subDir2/controller.c" should be located in "[ANNOTATIONS_DIR]/subDir1/subDir2/controller.c.txt"
The examples above assume that the source and annotated directories are different; they can of course be identical, in which case the locations of source and annotated files are the same.
Extension ( ext
, default: html) Specify the extension used by QAC to create annotated files.
The full command line syntax for MISRA Rule Checking with QAC is:
-d "type=QAC_MISRA,logDir=[text],ext=[text]"
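A hypothetical instance of the flag, using ext=txt to match the .txt annotation naming described above (the documented default is html):

```shell
# Hypothetical annotations folder; the annotated file for controller.c
# would then be annotations/.../controller.c.txt.
echo "-d \"type=QAC_MISRA,logDir=annotations,ext=txt\""
```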
Rational Test RealTime is a cross-platform solution for component testing and runtime analysis of embedded software. This Data Provider extracts coverage results, as well as tests and their status.
For more details, refer to http://www-01.ibm.com/software/awdtools/test/realtime/.
Unit Test Status from Rational Test RealTime has the following options:
The full command line syntax for Unit Test Status from Rational Test RealTime is:
-d "type=RTRT,logDir=[text],excludedExtensions=[text],generateTests=[booleanChoice]"
RIF/ReqIF (Requirements Interchange Format) is an XML file format that can be used to exchange requirements, along with their associated metadata, between software tools from different vendors.
For more details, refer to http://www.omg.org/spec/ReqIF/.
ReqIF has the following options:
The full command line syntax for ReqIF is:
-d "type=ReqIf,dir=[text],objType=[text]"
SQL Code Guard is a free solution for SQL Server that provides fast and comprehensive static analysis for T-SQL code, showing code complexity and object dependencies.
For more details, refer to http://www.sqlcodeguard.com.
Squan Sources provides basic-level analysis of your source code.
For more details, refer to https://support.squoring.com.
The analyser can output info and warning messages in the build logs. Recent additions to those logs include better handling of structures in C code, which will produce these messages:
[Analyzer] Unknown syntax declaration for function XXXXX at line yyy indicates that a function should have been found but, probably due to preprocessing directives, could not be parsed.
[Analyzer] Unbalanced () blocks found in the file indicates that, probably due to preprocessing directives, parentheses in the file are not well balanced.
[Analyzer] Unbalanced {} blocks found in the file indicates that, probably due to preprocessing directives, curly brackets in the file are not well balanced.
You can specify the languages for your source code by passing pairs of language and extensions to the languages parameter. Extensions are case-sensitive and cannot be used for two different languages. For example, a project mixing php and javascript files can be analysed with:
--dp "type=SQuORE,languages=php:.php;javascript:.js,.JS"
In order to launch an analysis using all the available languages by default, do not specify the languages parameter in your command line.
Squan Sources has the following options:
Languages ( languages
, default: ada;c;cpp;csharp;cobol;java;fortran77;fortran90;php;python;vbnet) Check the boxes for the languages used in the specified source repositories. Adjust the list of file extensions as necessary. Note that two languages cannot use the same file extension, and that the list of extensions is case-sensitive. Tip: Leave all the boxes unchecked and Squan Sources will auto-detect the language parser to use.
Force full analysis ( rebuild_all
, default: false) Analyses are incremental by default. Check this box if you want to force the source code parser to analyse all files instead of only the ones that have changed since the previous analysis. This is useful if you added new rule files or text parsing rules and you want to re-evaluate all files based on your modifications.
Generate control graphs ( genCG
, default: true) This option allows generating a control graph for every function in your code. The control graph is visible in the dashboard of the function when the analysis completes.
Use qualified names ( qualified
, default: false) Note: This option cannot be modified in subsequent runs after you create the first version of your project.
Limit analysis depth ( depth
, default: false) Use this option to limit the depth of the analysis to file-level only. This means that Squan Sources will not create any class or function artefacts for your project.
Add a 'Source Code' node ( scnode
, default: false) Using this option groups all source nodes under a common source code node instead of directly under the APPLICATION node. This is useful if other data providers group non-code artefacts like tests or requirements together under their own top-level node. This option can only be set when you create a new project and cannot be modified when creating a new version of your project.
'Source Code' node label ( scnode_name
, default: Source Code) Specify a custom label for your main source code node. Note: this option is not modifiable. It only applies to projects where you use the "Add a 'Source Code' node" option. When left blank, it defaults to "Source Code".
Compact folders ( compact_folder
, default: true) When using this option, folders with only one child are aggregated together. This avoids creating many unnecessary levels in the artefact tree before reaching the first level of files in your project. This option cannot be changed after you have created the first version of your project.
Content exclusion via regexp ( pattern
) Specify a PERL regular expression to automatically exclude files from the analysis if their contents match the regular expression. Leave this field empty to disable content-based file exclusion.
File Filtering ( files_choice
, default: Exclude) Specify a pattern and an action to take for matching file names. Leave the pattern empty to disable file filtering.
pattern ( pattern_files
) Use a shell-like wildcard e.g. '*-test.c'. * Matches any sequence of characters in string, including a null string.
? Matches any single character in string.
[chars] Matches any character in the set given by chars. If a sequence of the form x-y appears in chars, then any character between x and y, inclusive, will match. On Windows, this is used with the -nocase option, meaning that the end points of the range are converted to lower case first. Whereas {[A-z]} matches '_' when matching case-sensitively ('_' falls between the 'Z' and 'a'), with -nocase this is considered like {[A-Za-z]}.
\x Matches the single character x. This provides a way of avoiding the special interpretation of the characters *?[] in pattern. Tip: Use ; to separate multiple patterns.
Folder Filtering ( dir_choice
, default: Exclude) Specify a pattern and an action to take for matching folder names. Leave the pattern empty to disable folder filtering.
pattern ( pattern_dir
) Use a shell-like wildcard e.g. 'Test_*'. * Matches any sequence of characters in string, including a null string.
? Matches any single character in string.
[chars] Matches any character in the set given by chars. If a sequence of the form x-y appears in chars, then any character between x and y, inclusive, will match. On Windows, this is used with the -nocase option, meaning that the end points of the range are converted to lower case first. Whereas {[A-z]} matches '_' when matching case-sensitively ('_' falls between the 'Z' and 'a'), with -nocase this is considered like {[A-Za-z]}.
\x Matches the single character x. This provides a way of avoiding the special interpretation of the characters *?[] in pattern. Tip: Use ; to separate multiple patterns.
Exclude files whose size exceeds ( size_limit
, default: 500000) Provide the size in bytes above which files are excluded automatically from the Squore project (Big files are usually generated files or test files). Leave this field empty to deactivate this option.
Detect algorithmic cloning ( clAlg
, default: true) When checking this box, Squan Sources launches a cloning detection tool capable of finding algorithmic cloning in your code.
Detect text cloning ( clTxt
, default: true) When checking this box, Squan Sources launches a cloning detection tool capable of finding text duplication in your code.
Ignore blank lines ( clIgnBlk
, default: true) When checking this box, blank lines are ignored when searching for text duplication.
Ignore comment blocks ( clIgnCmt
, default: true) When checking this box, blocks of comments are ignored when searching for text duplication.
Minimum size of duplicated blocks ( clRSlen
, default: 10) This threshold defines the minimum size (number of lines) of blocks that can be reported as cloned.
Textual Cloning fault ratio ( clFR
, default: 0.1) This threshold defines how much cloning between two artefacts is necessary for them to be considered as clones by the text duplication tool. For example, a fault ratio of 0.1 means that two artefacts are considered clones if less than 10% of their contents differ.
Algorithmic cloning fault ratio ( clAlgFR
, default: 0.1) This threshold defines how much cloning between two artefacts is necessary for them to be considered as clones by the algorithmic cloning detection tool.
Compute Textual stability ( genTs
, default: true) This option allows keeping track of the stability of the code analysed for each version. The computed stability is available on the dashboard as a metric and can be interpreted as 0% meaning completely changed and 100% meaning not changed at all.
Compute Algorithmic stability ( genAs
, default: true) This option allows keeping track of the stability of the code analysed for each version. The computed stability is available on the dashboard as a metric called Stability Index (SI) and can be interpreted as 0% meaning completely changed and 100% meaning not changed at all.
Detect artefact renaming ( clRen
, default: true) This option allows Squan Sources to detect artefacts that have been moved since the previous version, ensuring that the stability metrics of the previous artefact are passed to the new one. This is typically useful if you have moved a file to a different folder in your source tree and do not want to lose the previous metrics generated for this file. If you do not use this option, moved artefacts will be considered as new artefacts.
Mark relaxed findings as suspicious ( susp
, default: MODIFIED_BEFORE) This option sets the suspicious flag on relaxed findings depending on the selected option. It applies to source code artefacts only.
Additional parameters ( additional_param
) These additional parameters can be used to pass instructions to external processes started by this data provider. This value is generally left empty in most cases.
The full command line syntax for Squan Sources is:
-d "type=SQuORE,languages=[multipleChoice],rebuild_all=[booleanChoice],genCG=[booleanChoice],qualified=[booleanChoice],depth=[booleanChoice],scnode=[booleanChoice],scnode_name=[text],compact_folder=[booleanChoice],pattern=[text],files_choice=[multipleChoice],pattern_files=[text],dir_choice=[multipleChoice],pattern_dir=[text],size_limit=[text],clAlg=[booleanChoice],clTxt=[booleanChoice],clIgnBlk=[booleanChoice],clIgnCmt=[booleanChoice],clRSlen=[text],clFR=[text],clAlgFR=[text],genTs=[booleanChoice],genAs=[booleanChoice],clRen=[booleanChoice],susp=[multipleChoice],additional_param=[text]"
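Building on the full syntax above, here is a sketch of an invocation that sets only a few options (the values are hypothetical; as in the languages example earlier, this assumes omitted options keep their defaults):

```shell
# Hypothetical project: C and Java sources, full re-analysis forced,
# folders matching Test_* excluded from the analysis.
echo "-d \"type=SQuORE,languages=c;java,rebuild_all=true,dir_choice=Exclude,pattern_dir=Test_*\""
```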
Squore Import is a data provider used to import the results of another data provider's analysis. It is generally only used for debugging purposes.
For more details, refer to https://support.squoring.com.
Squore Virtual Project is a data provider that can use the output of several projects to compile metrics in a meta-project composed of the imported sub-projects.
For more details, refer to https://support.squoring.com.
Squore Virtual Project has the following options:
The full command line syntax for Squore Virtual Project is:
-d "type=SQuOREVirtualProject,output=[text]"
StyleCop is a C# code analysis tool. Its XML output is imported to generate findings.
For more details, refer to https://stylecop.codeplex.com/.
StyleCop is a C# code analysis tool. Its XML output is imported to generate findings.
For more details, refer to https://stylecop.codeplex.com/.
Note that this data provider is not supported on Linux. On Windows, this data provider requires an extra download to extract the StyleCop binary in <SQUORE_HOME>/addons/tools/StyleCop_auto/
and .NET framework 3.5 to be installed on your machine (run Net.SF.StyleCopCmd.Console.exe
manually once to install .NET automatically). For more information, refer to the Installation and Administration Guide's Third-Party Plugins and Applications section.
Tessy is a tool automating module/unit testing of embedded software written in dialects of C/C++. Tessy generates an XML results file which can be imported to generate metrics. This data provider supports importing files that have a xml_version="1.0" attribute in their header.
For more details, refer to https://www.hitex.com/en/tools/tessy/.
Tessy has the following options:
The full command line syntax for Tessy is:
-d "type=Tessy,resultDir=[text]"
The VectorCAST Data Provider extracts coverage results, as well as tests and their status.
For more details, refer to https://www.vectorcast.com/.
VectorCAST has the following options:
The full command line syntax for VectorCAST is:
-d "type=VectorCAST,html_report=[text],generateTests=[booleanChoice]"
CodeSniffer is a rule checker for PHP and JavaScript.
For more details, refer to http://www.squizlabs.com/php-codesniffer.
Use this tool to check for duplicated files or XML Elements between a custom configuration and the standard configuration.
Csv Coverage Import provides a generic import mechanism for coverage results at function level.
Csv Coverage Import has the following options:
The full command line syntax for Csv Coverage Import is:
-d "type=csv_coverage,csv=[text]"
CSV Findings has the following options:
The full command line syntax for CSV Findings is:
-d "type=csv_findings,csv=[text]"
Imports artefacts, metrics, findings, textual information and links from one or more CSV files. The expected CSV format for each of the input files is described in the user manuals in the csv_import framework reference.
Consult csv_import Reference for more details about the expected CSV format.
CSV Import has the following options:
CSV Separator ( separator
, default: ;) Specify the CSV Separator used in the CSV file.
CSV Delimiter ( delimiter
, default: ") The CSV delimiter is used when the separator appears inside a cell value. If the delimiter itself appears as a character in a cell, it has to be doubled. The ' character is not allowed as a delimiter.
Artefact Path Separator ( pathSeparator
, default: /) Specify the character used as a separator in an artefact's path in the input CSV file.
Case-sensitive artefact lookup ( pathAreCaseSensitive
, default: true) When this option is turned on, artefacts in the CSV file are matched with existing source code artefacts in a case-sensitive manner.
Ignore source file path ( ignoreSourceFilePath
, default: false) When ignoring the source file path, it is your responsibility to ensure that file names are unique in the project.
Create missing files ( createMissingFile
, default: false) Automatically creates the artefacts declared in the CSV file if they do not exist.
Ignore finding if artefact not found ( ignoreIfArtefactNotFound
, default: true) If a finding cannot be attached to any artefact, it is either ignored (checked) or attached to the project node instead (unchecked).
Unknown rule ID ( unknownRuleId
) For findings of a type that is not in your ruleset, set a default rule ID. The value for this parameter must be a valid rule ID from your analysis model.
Measure ID for orphan artifacts count ( orphanArteCountId
) To save the total count of orphan findings as a metric at application level, specify the ID of the measure to use in your analysis model.
Measure ID for unknown rules count ( orphanRulesCountId
) To save the total count of unknown rules as a metric at application level, specify the ID of the measure to use in your analysis model.
Information ID receiving the list of unknown rules IDs ( orphanRulesListId
) To save the list of unknown rule IDs as textual information at application level, specify the ID of the textual information to use in your analysis model.
CSV File ( csv
) Specify the path to the input CSV file containing artefacts, metrics, findings, textual information, links and keys.
Metrics CSV File ( metrics
) Specify the path to the CSV file containing metrics.
Infos CSV File ( infos
) Specify the path to the CSV file containing textual information.
Findings CSV File ( findings
) Specify the path to the CSV file containing findings.
Keys CSV File ( keys
) Specify the path to the CSV file containing artefact keys.
Links CSV File ( links
) Specify the path to the CSV file containing links.
Reports artefacts mapping problem as ( level
, default: info) When an artefact referenced in the CSV file cannot be found in the project, report the problem as information or as a warning.
The full command line syntax for CSV Import is:
-d "type=csv_import,separator=[text],delimiter=[text],pathSeparator=[text],pathAreCaseSensitive=[booleanChoice],ignoreSourceFilePath=[booleanChoice],createMissingFile=[booleanChoice],ignoreIfArtefactNotFound=[booleanChoice],unknownRuleId=[text],orphanArteCountId=[text],orphanRulesCountId=[text],orphanRulesListId=[text],csv=[text],metrics=[text],infos=[text],findings=[text],keys=[text],links=[text],level=[multipleChoice]"
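A minimal hypothetical instance of the flag, importing a single CSV file and creating artefacts that do not exist yet (all other options keep their documented defaults):

```shell
# Hypothetical input file path; createMissingFile enables artefact creation.
echo "-d \"type=csv_import,csv=data/import.csv,createMissingFile=true\""
```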
CPU Data Import provides a generic import mechanism for CPU data from a CSV or Excel file.
CPU Data Import has the following options:
Data File ( xls_file
) Specify the path to the file containing CPU information.
Sheet Name ( xls_sheetname
) Specify the name of the Excel sheet that contains the CPU list.
CPU Column name ( xls_key
) Specify the header name of the column which contains the CPU key.
Grouping Structure ( xls_groups
) Specify the headers for Grouping Structure, separated by ";".
Filtering ( xls_filters
) Specify the list of headers to filter on, for example: "column_name_1=regex1;column_name_2=regex2".
CPU idle column name ( cpu_idle_column_name
, default: Average idle Time per loop [ms])
CPU worst column name ( cpu_worst_column_name
, default: Worse case idle Time per loop [ms])
The full command line syntax for CPU Data Import is:
-d "type=import_cpu,root_node=[text],xls_file=[text],xls_sheetname=[text],xls_key=[text],xls_groups=[text],xls_filters=[text],csv_separator=[text],cpu_loop_column_name=[text],cpu_idle_column_name=[text],cpu_worst_column_name=[text],createOutput=[booleanChoice]"
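With a hypothetical Excel workbook (sheet "CPUs", key column "CPU_ID", grouping on a "Board" column and rows filtered on a "Project" column), the flag assembles as:

```shell
# All file, sheet and column names here are hypothetical examples;
# xls_filters follows the "header=regex" form documented above.
echo "-d \"type=import_cpu,xls_file=cpu_report.xlsx,xls_sheetname=CPUs,xls_key=CPU_ID,xls_groups=Board,xls_filters=Project=ALPHA\""
```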
Memory Data Import provides a generic import mechanism for memory data from a CSV or Excel file.
Memory Data Import has the following options:
Data File ( xls_file
) Specify the path to the file containing Memory information.
Sheet Name ( xls_sheetname
) Specify the name of the Excel sheet that contains the Memory list.
Memory Column name ( xls_key
) Specify the header name of the column which contains the Memory key.
Grouping Structure ( xls_groups
) Specify the headers for Grouping Structure, separated by ";".
Filtering ( xls_filters
) Specify the list of headers to filter on, for example: "column_name_1=regex1;column_name_2=regex2".
The full command line syntax for Memory Data Import is:
-d "type=import_memory,root_node=[text],xls_file=[text],xls_sheetname=[text],xls_key=[text],xls_groups=[text],xls_filters=[text],csv_separator=[text],memory_size_column_name=[text],memory_used_column_name=[text],memory_type_column_name=[text],createOutput=[booleanChoice]"
Stack Data Import provides a generic import mechanism for stack data from a CSV or Excel file.
Stack Data Import has the following options:
Data File ( xls_file
) Specify the path to the file containing Stack information.
Sheet Name ( xls_sheetname
) Specify the name of the Excel sheet that contains the Stack list.
Stack Column name ( xls_key
) Specify the header name of the column which contains the Stack key.
Grouping Structure ( xls_groups
) Specify the headers for Grouping Structure, separated by ";".
Filtering ( xls_filters
) Specify the list of headers to filter on, for example: "column_name_1=regex1;column_name_2=regex2".
Average stack column name ( stack_average_column_name
, default: Average Stack Size used [Bytes])
Worst stack column name ( stack_worst_column_name
, default: Worse Case Stack Size used [Bytes])
The full command line syntax for Stack Data Import is:
-d "type=import_stack,root_node=[text],xls_file=[text],xls_sheetname=[text],xls_key=[text],xls_groups=[text],xls_filters=[text],csv_separator=[text],stack_size_column_name=[text],stack_average_column_name=[text],stack_worst_column_name=[text],createOutput=[booleanChoice]"
Ticket Data Import provides a generic import mechanism for tickets from a CSV, Excel or JSON file. Additionally, it generates findings when the imported tickets have an unknown status or type.
This Data Provider is new in Squore 18.0
This Data Provider provides fields so you can map all your tickets as enhancements and defects and spread them over the following statuses: Open, In Implementation, In Verification and Closed. Overlapping statuses and types will cause an error, but if a ticket's type or status is not declared in the definition, the ticket will still be imported and a finding will be created.
Ticket Data Import has the following options:
Root Node ( root_node
, default: Tickets) Specify the name of the node to attach tickets to.
Data File ( input_file
) Specify the path to the CSV, Excel or JSON file containing tickets.
Excel Sheet Name ( xls_sheetname
) Specify the sheet name that contains the ticket list if your import file is in Excel format.
Ticket ID ( artefact_id
) Specify the header name of the column which contains the ticket ID.
Ticket Name ( artefact_name
) Specify the pattern used to build the name of the ticket. The name can use any information collected from the CSV file as a parameter. Example: ${ID} : ${Summary}
Ticket UID ( artefact_uid
) Specify the pattern used to build the ticket Unique ID. The UID can use any information collected from the CSV file as a parameter. Example: TK#${ID}
Grouping Structure ( artefact_groups
) Specify the headers for Grouping Structure, separated by ";". For example: "column_name_1=regex1;column_name_2=regex2".
Filtering ( artefact_filters
) Specify the list of headers used for filtering. For example: "column_name_1=regex1;column_name_2=regex2"
Open Ticket Pattern ( definition_open
) Specify the pattern applied to define tickets as open. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Status=[Open|New]
In Development Ticket Pattern ( definition_rd_progress
) Specify the pattern applied to define tickets as in development. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Status=Implementing
Fixed Ticket Pattern ( definition_vv_progress
) Specify the pattern applied to define tickets as fixed. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Status=Verifying;Resolution=[fixed;removed]
Closed Ticket Pattern ( definition_close
) Specify the pattern applied to define tickets as closed. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Status=Closed
Defect Pattern ( definition_defect
) Specify the pattern applied to define tickets as defects. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Type=Bug
Enhancement Pattern ( definition_enhancement
) Specify the pattern applied to define tickets as enhancements. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Type=Enhancement
TODO Pattern ( in_todo_list
) Specify the pattern applied to include tickets in the TODO list. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Sprint=2018-23
Creation Date Column ( creation_date
) Enter the name of the column containing the creation date of the ticket. For example: column_name{format="dd/mm/yyyy"}. If the format is not specified, dd/mm/yyyy is used by default.
Due Date Column ( due_date
) Enter the name of the column containing the due date of the ticket. For example: column_name{format="dd/mm/yyyy"}. If the format is not specified, dd/mm/yyyy is used by default.
Last Updated Date Column ( last_updated_date
) Enter the name of the column containing the last updated date of the ticket. For example: column_name{format="dd/mm/yyyy"}. If the format is not specified, dd/mm/yyyy is used by default.
Closure Date Column ( closure_date
) Enter the name of the column containing the closure date of the ticket. For example: column_name{format="dd/mm/yyyy"}. If the format is not specified, dd/mm/yyyy is used by default.
URL ( url
) Specify the pattern used to build the ticket URL. The URL can use any information collected from the CSV file as a parameter. Example: https://example.com/bugs/${ID}
Description Column ( description
) Specify the header of the column containing the description of the ticket.
Reporter Column ( reporter
) Specify the header of the column containing the reporter of the ticket.
Handler Column ( handler
) Specify the header of the column containing the handler of the ticket.
Priority Column ( priority
) Specify the header of the column containing priority data.
Severity Column ( severity
) Specify the header of the column containing severity data.
CSV Separator ( csv_separator
) Specify the character used in the CSV file to separate columns.
Information Fields ( informations
) Specify the list of extra textual information to import from the CSV file. This parameter expects a list of headers separated by ";" characters. For example: Company;Country;Resolution
The full command line syntax for Ticket Data Import is:
-d "type=import_ticket,root_node=[text],input_file=[text],xls_sheetname=[text],artefact_id=[text],artefact_name=[text],artefact_uid=[text],artefact_groups=[text],artefact_filters=[text],definition_open=[text],definition_rd_progress=[text],definition_vv_progress=[text],definition_close=[text],definition_defect=[text],definition_enhancement=[text],in_todo_list=[text],creation_date=[text],due_date=[text],last_updated_date=[text],closure_date=[text],url=[text],description=[text],reporter=[text],handler=[text],priority=[text],severity=[text],csv_separator=[text],informations=[text],createOutput=[booleanChoice]"
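The status and type definitions above can be illustrated with a short sketch. The helper names (`matches`, `parse_date_spec`) are ours, not part of Squore; this is a minimal model of the documented behaviour, assuming a bracketed list such as `[Open|New]` behaves as a regular-expression alternation and only a single `Header=Values` pair is given:

```python
import re

def matches(row, definition):
    """Return True if the ticket row (a dict) satisfies a single
    'Header=Values' pattern. A bracketed value list like [Open|New] is
    read as an alternation of allowed values."""
    header, _, values = definition.partition("=")
    return re.fullmatch(values.strip("[]"), row.get(header, "")) is not None

def parse_date_spec(spec, default_fmt="dd/mm/yyyy"):
    """Split a date column spec like 'Created{format="yyyy-mm-dd"}' into
    (column, format), falling back to dd/mm/yyyy when no format is given."""
    m = re.fullmatch(r'([^{]+)(?:\{format="([^"]+)"\})?', spec)
    return m.group(1), m.group(2) or default_fmt

ticket = {"ID": "42", "Status": "New", "Type": "Bug"}
assert matches(ticket, "Status=[Open|New]")  # matches definition_open
assert matches(ticket, "Type=Bug")           # matches definition_defect
assert parse_date_spec("Created") == ("Created", "dd/mm/yyyy")
```

A ticket whose row matches none of the declared definitions is still imported, with a finding attached, as described above.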
This Data Provider extracts tickets and their attributes from a Jira instance to create ticket artefacts in your project.
For more details, refer to https://www.atlassian.com/software/jira.
This Data Provider is new in Squore 18.0
The extracted JSON from Jira is then passed to the Ticket Data Import Data Provider (described in the section called “Ticket Data Import”). Finer configuration of the data passed from this Data Provider to Ticket Data Import is available by editing (or overriding) <SQUORE_HOME>/addons/tools/jira/jira_config.xml
.
Jira has the following options:
Jira REST API URL ( url
, mandatory) The URL used to connect to your Jira instance's REST API (e.g. https://jira.domain.com/rest/api/2)
Jira User login ( login
, mandatory) Specify your Jira user login.
Jira User password ( pwd
, mandatory) Specify your Jira User password.
Number of queried tickets ( max_results
, mandatory, default: -1) Maximum number of queried tickets returned by the query (default is -1, meaning 'retrieve all tickets').
Grouping Structure ( artefact_groups
, default: fields/components[0]/name) Specify the headers for Grouping Structure, separated by ";". For example: "column_name_1=regex1;column_name_2=regex2"
Creation Date Field ( creation_date
, default: fields/created) Enter the name of the column containing the creation date of the ticket. For example: column_name{format="dd/mm/yyyy"}. If the format is not specified, dd/mm/yyyy is used by default.
Closure Date Field ( closure_date
, default: fields/resolutiondate) Enter the name of the column containing the closure date of the ticket. For example: column_name{format="dd/mm/yyyy"}. If the format is not specified, dd/mm/yyyy is used by default.
Due Date Field ( due_date
, default: fields/duedate) Enter the name of the column containing the due date of the ticket. For example: column_name{format="dd/mm/yyyy"}. If the format is not specified, dd/mm/yyyy is used by default.
Last Updated Date Field ( last_updated_date
, default: fields/updated) Enter the name of the column containing the last updated date of the ticket. For example: column_name{format="dd/mm/yyyy"}. If the format is not specified, dd/mm/yyyy is used by default.
JQL Request ( jql_request
) Specify a JQL request (see the JIRA documentation) to limit the number of elements returned by the JIRA server. For example: project=MyProject. This parameter is optional.
Filtering ( artefact_filters
, default: fields/issuetype/name=(Task|Bug|Improvement|New Feature)) Specify the list of headers used for filtering. For example: "column_name_1=regex1;column_name_2=regex2"
Open Ticket Pattern ( definition_open
, default: fields/status/name=[To Do|Open|Reopened]) Specify the pattern applied to define tickets as open. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Status=[Open|New]
In Development Ticket Pattern ( definition_rd_progress
, default: fields/status/name=[In Progress|In Review]) Specify the pattern applied to define tickets as in development. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Status=Implementing
Fixed Ticket Pattern ( definition_vv_progress
, default: fields/status/name=[Verified]) Specify the pattern applied to define tickets as fixed. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Status=Verifying;Resolution=[fixed;removed]
Closed Ticket Pattern ( definition_close
, default: fields/status/name=[Resolved|Closed|Done]) Specify the pattern applied to define tickets as closed. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Status=Closed
Defect Pattern ( definition_defect
, default: fields/issuetype/name=[Bug]) Specify the pattern applied to define tickets as defects. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Type=Bug
Enhancement Pattern ( definition_enhancement
, default: fields/issuetype/name=[Improvement|New Feature]) Specify the pattern applied to define tickets as enhancements. This field accepts a regular expression to match one or more column headers with a list of possible values. Example: Type=Enhancement
Information Fields ( informations
, default: fields/environment;fields/votes/votes) Specify the list of extra textual information to import from the CSV file. This parameter expects a list of headers separated by ";" characters. For example: Company;Country;Resolution
The full command line syntax for Jira is:
-d "type=jira,url=[text],login=[text],pwd=[password],max_results=[text],artefact_groups=[text],creation_date=[text],closure_date=[text],due_date=[text],last_updated_date=[text],jql_request=[text],artefact_filters=[text],definition_open=[text],definition_rd_progress=[text],definition_vv_progress=[text],definition_close=[text],definition_defect=[text],definition_enhancement=[text],in_todo_list=[text],informations=[text]"
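The default values above (fields/status/name, fields/components[0]/name) are slash-separated paths into the JSON that Jira's REST API returns for each issue. As a minimal sketch of how such a path can be resolved against an issue (the `resolve` helper is ours, for illustration; Squore's actual extraction is configured in jira_config.xml):

```python
import re

def resolve(issue, path):
    """Walk a slash-separated field path such as 'fields/status/name' or
    'fields/components[0]/name' through a nested dict/list structure."""
    node = issue
    for part in path.split("/"):
        m = re.fullmatch(r"(\w+)\[(\d+)\]", part)
        if m:
            # indexed segment, e.g. components[0]
            node = node[m.group(1)][int(m.group(2))]
        else:
            node = node[part]
    return node

issue = {"fields": {"status": {"name": "In Progress"},
                    "components": [{"name": "backend"}]}}
assert resolve(issue, "fields/status/name") == "In Progress"
assert resolve(issue, "fields/components[0]/name") == "backend"
```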
The Mantis Data Provider extracts tickets and their attributes from a Mantis installation and creates ticket artefacts. Prerequisites: This Data Provider queries Mantis tickets using the Mantis BT REST API. An API token is required to access this API. The Mantis server should be configured to avoid filtering 'Authorization' headers. See http://docs.php.net/manual/en/features.http-auth.php#114877 for further details.
For more details, refer to https://www.mantisbt.com.
This Data Provider is new in Squore 18.0
The extracted JSON from Mantis BT is then passed to the Ticket Data Import Data Provider (described in the section called “Ticket Data Import”). Finer configuration of the data passed from this Data Provider to Ticket Data Import is available by editing (or overriding) <SQUORE_HOME>/addons/tools/mantis/mantis_config.xml
.
Mantis has the following options:
Mantis URL ( url
, mandatory) Specify the URL of the Mantis instance (e.g. https://www.mantisbt.org/bugs/api/rest)
Mantis API Token ( api_token
, mandatory) Copy the Mantis API Token generated from your Account Settings in Mantis.
Number of queried tickets ( max_results
, mandatory, default: 50) Maximum number of tickets returned by the query (default: 50; a value of -1 means 'retrieve all tickets').
The full command line syntax for Mantis is:
-d "type=mantis,url=[text],api_token=[text],max_results=[text]"
OSLC-CM allows retrieving information from Change Management systems following the OSLC standard. Metrics and artefacts are created by connecting to the OSLC system and retrieving issues with the specified query.
For more details, refer to http://open-services.net/.
OSLC has the following options:
Change Server ( server
) Specify the URL of the project you want to query on the OSLC server. Typically the URL will look like this: http://myserver:8600/change/oslc/db/3454a67f-656ddd4348e5/role/User/
Query ( query
) Specify the query to send to the OSLC server (e.g.: release="9TDE/TDE_00_01_00_00"). It is passed to the request URL via the ?oslc_cm.query= parameter.
Query Properties ( properties
, default: request_type,problem_number,crstatus,severity,submission_area,functionality...) Specify the properties to add to the query. They are passed to the OSLC query URL using the ?oslc_cm.properties= parameter.
The full command line syntax for OSLC is:
-d "type=oslc_cm,server=[text],query=[text],properties=[text],login=[text],password=[password]"
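Given the description above, the request URL can be assembled by URL-encoding the query and properties as the ?oslc_cm.query= and ?oslc_cm.properties= parameters. A sketch under that assumption (`build_oslc_url` is a hypothetical helper, not part of Squore):

```python
from urllib.parse import urlencode

def build_oslc_url(server, query, properties):
    """Append the OSLC-CM query and properties to the project URL as
    standard URL query parameters."""
    params = {"oslc_cm.query": query, "oslc_cm.properties": properties}
    return server.rstrip("/") + "?" + urlencode(params)

url = build_oslc_url("http://myserver:8600/change/oslc/db/3454a67f/role/User/",
                     'release="9TDE/TDE_00_01_00_00"',
                     "request_type,severity")
assert "oslc_cm.query=" in url and "oslc_cm.properties=" in url
```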
pep8 is a tool to check your Python code against some of the style conventions in PEP 8. Its CSV report file is imported to generate findings.
For more details, refer to https://pypi.python.org/pypi/pep8.
Style Guide for Python Code. Pep8 results are imported to produce findings on Python code. This data provider requires having pycodestyle or pep8 installed on the machine running the analysis and the pycodestyle or pep8 command to be available in the path. It is compatible with pycodestyle 2.4 or pep8 1.7 and may also work with older versions.
For more details, refer to https://pypi.org/project/pycodestyle.
Library that provides collection, processing, and rendering functionality for PHP code coverage information.
For more details, refer to https://github.com/sebastianbergmann/php-code-coverage.
Pylint is a Python source code analyzer which looks for programming errors, helps enforce a coding standard and sniffs for some code smells (as defined in Martin Fowler's Refactoring book). Pylint results are imported to generate findings for Python code.
For more details, refer to http://www.pylint.org/.
Coding Guide for Python Code. Pylint results are imported to produce findings on Python code. This data provider requires having pylint installed on the machine running the analysis and the pylint command to be available in the path. It is known to work with pylint 1.7.0 and may also work with older versions.
QA-C is a static analysis tool for MISRA checking.
For more details, refer to http://www.programmingresearch.com/static-analysis-software/qac-qacpp-static-analyzers/.
QA-C is a static analysis tool for MISRA and CERT checking.
For more details, refer to http://www.programmingresearch.com/static-analysis-software/qac-qacpp-static-analyzers/.
This data provider imports findings from SonarQube. Note that versions prior to 6.2 may not be supported.
For more details, refer to https://www.sonarqube.org/.
SonarQube has the following options:
The full command line syntax for SonarQube is:
-d "type=sonarqube,sonar=[text],key=[text],version=[text],login=[text],password=[password]"
Squan Sources can handle files written in languages that are not officially supported with a bit of extra configuration (new in 18.0). In this mode, only a basic analysis of the file is carried out so that an artefact is created in the project and findings can be attached to it. A subset of the base metrics from Squan Sources is optionally recorded for the artefact so that line counting, stability and text duplication metrics are available at file level for the new language.
The example below shows how you can add TypeScript files to your analysis:
Copy <SQUORE_HOME>/configuration/tools/SQuORE/form.xml
and its .properties
files into your own configuration
Edit form.xml
to add a new language key and associated file extensions:
<?xml version="1.0" encoding="UTF-8"?> <tags baseName="SQuORE" ...> <tag type="multipleChoice" key="languages" ... defaultValue="...;typescript"> ... <value key="typescript" option=".ts,.TS" /> </tag> </tags>
Files with extensions matching the typescript language will be added to your project as TYPESCRIPT_FILE artefacts
Edit the defaultValue
of the additional_param
field to specify how Squan Sources
should count source code lines and comment lines in the new language, based on another language officially supported by Squore. This step is optional, and is only needed if you want to record basic line counting metrics for the artefacts.
<?xml version="1.0" encoding="UTF-8"?> <tags baseName="SQuORE" ...> ... <tag type="text" key="additional_param" defaultValue="typescript=javascript" /> ... </tags>
Lines in TypeScript files will be counted as they would for Javascript code.
Add translations for the new language key to show in the web UI in Squan Sources's form_en.properties
OPT.typescript.NAME=TypeScript
Add translations for the new artefact type in one of the properties files imported by your Description Bundle:
T.TYPESCRIPT_FILE.NAME=TypeScript File
The new artefact type should also be declared as a type in your model. The easiest way to do this is to add it to the GENERIC_FILE alias in your analysis model, which is pre-configured to record the line counting metrics for new artefacts. You should also define a root indicator for your new artefact type. The following snippet shows a minimal configuration using a dummy indicator:
<!-- <configuration>/MyModel/Analysis/Bundle.xml --> <?xml version="1.0" encoding="UTF-8"?> <Bundle> ... <ArtefactType id="GENERIC_FILE" heirs="TYPESCRIPT_FILE" /> <RootIndicator artefactTypes="TYPESCRIPT_FILE" indicatorId="DUMMY" /> <Indicator indicatorId="DUMMY" scaleId="SCALE_INFO" targetArtefactTypes="TYPESCRIPT_FILE" displayTypes="IMAGE" /> <Measure measureId="DUMMY"> <Computation targetArtefactTypes="TYPESCRIPT_FILE" result="0" /> </Measure> ... </Bundle>
Reload your configuration and analyse a project, checking the box for TypeScript in Squan Sources's options to get TypeScript artefacts in your project.
If you are launching an analysis from the command line, use the language key defined in step 2 to analyse TypeScript files:
-d "type=SQuORE,languages=typescript,additional_param=typescript=javascript"
After the analysis finishes and you can see your artefacts in the tree, use the Dashboard Editor to build a dashboard for your new artefact type.
Finally, create a handler in your configuration folder so that the source code viewer can display your new file type, by
copying <SQUORE_HOME>/configuration/sources/javascript_file.properties
into your own configuration as
<SQUORE_HOME>/configuration/sources/typescript_file.properties
.
By default, Squan Sources generates artefacts for all PROGRAMs in COBOL source files. It is possible to configure the parser to also generate artefacts for all SECTIONs and PARAGRAPHs in your source code. This feature can be enabled with the following steps:
Open <SQUORE_HOME>/configuration/tools/SQuORE/Analyzer/artifacts/cobol/ArtifactsList.txt
Edit the list of artefacts to generate and add the section and paragraph types:
program section paragraph
Save your changes
If you create a new project, you will see the new artefacts straight away. For already-existing projects, make sure to launch a new analysis and check Squan Sources's Force full analysis option to parse the entire code again and generate the new artefacts.
Input files for Squore's Data Providers, like source code, can be located in your version control system. When this is the case, you need to specify a variable in the input field for the Data Provider instead of an absolute path to the input file.
The variable to use varies depending on your scenario:
You have only one node of source code in your project
In this case, the variable to use is $src.
You have more than one node of source code in your project
In this case, you need to tell Squore in which node the input file is located. This is done using a variable that has the same name as the alias you defined for the source code node in the previous step of the wizard. For example, if your nodes are labelled Node1
and Node2
(the default names), then you can refer to them using the $Node1 and $Node2 variables.
When using these variables from the command line on a Linux system, the $ symbol must be escaped:
-d "type=PMD,configFile=\$src/pmd_data.xml"
When transforming an XML results file with an XSL stylesheet, the XML parser used by Squore will try to validate the XML file against the DTD declared in the XML header. In cases where the XSL transformation is running on a machine with no internet access, this can result in the execution of the Data Provider
failing with a No route to host
error message.
You can fix this issue by modifying the Data Provider to use a catalog file that provides an alternate location for the DTD used to validate the XML. This feature can be used by all Data Providers that include an XSL transformation [1].
The following example adds this functionality to the Cobertura Data Provider:
Add a catalog.xml file in the Data Provider's configuration folder:
<configuration>/tools/cobertura/catalog.xml: <?xml version="1.0"?> <catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog"> <rewriteSystem systemIdStartString="http://cobertura.sourceforge.net/xml" rewritePrefix="./DTD"/> </catalog>
Copy the DTD that the XML needs to validate against into a DTD
folder in <configuration>/tools/cobertura/
.
The catalog file will be used the next time the Data Provider is executed and the DTD declaration will dynamically be changed from:
<!DOCTYPE coverage SYSTEM "http://cobertura.sourceforge.net/xml/coverage-04.dtd">
to:
<!DOCTYPE coverage SYSTEM "<configuration>/tools/cobertura/DTD/coverage-04.dtd">
For more information about how to write your catalog file, refer to https://xerces.apache.org/xerces2-j/faq-xcatalogs.html.
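The rewriteSystem rule performs a simple prefix substitution on the DTD's system id. The following sketch models that resolver behaviour (`rewrite_system_id` is ours, for illustration; the real work is done by the XML parser's catalog resolver):

```python
def rewrite_system_id(system_id, start, prefix):
    """Mimic an XML catalog <rewriteSystem> entry: if the system id starts
    with 'systemIdStartString', replace that prefix with 'rewritePrefix'."""
    if system_id.startswith(start):
        return prefix + system_id[len(start):]
    return system_id

assert rewrite_system_id(
    "http://cobertura.sourceforge.net/xml/coverage-04.dtd",
    "http://cobertura.sourceforge.net/xml",
    "./DTD") == "./DTD/coverage-04.dtd"
```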
[1] The list includes:
Cantata
Cobertura
CodeSonar
Coverity
CPD
CPPCheck
CPPTest
FindBugs
JaCoCo
Klocwork
NCover
Polyspace
sqlcodeguard
All Data Providers are utilities that run during an analysis. They usually take an input file to parse or parameters specified by the user to generate output files containing violations or metrics to add to your project. Here is a non-exhaustive list of what some of them do:
Use XSLT files to transform XML files
Read information from Microsoft Excel files
Parse HTML test results
Query web services
Export data from OSLC systems
Launch external processes
Repository Connectors are based on the same model and are used to specifically retrieve source code and other data from source code management systems.
Read on to learn about how to configure your Data Provider and make it available in the web interface, and then understand how to implement the scripted part of a Data Provider that is executed during an analysis.
The last part of this section also introduces two frameworks that you can base your Data Providers on, depending on whether you prefer to produce CSV or XML files for Squore.
A Data Provider's parameters are defined in a file called form.xml
. The following is an example of form.xml
for a Data Provider extending the GenericPerl framework:
<?xml version="1.0" encoding="UTF-8"?> <tags baseName="GenericPerl" needSources="true" image="CustomDP.png" projectStatusOnFailure="ERROR"> <tag type="multipleChoice" displayType="checkbox" optionTitle=" " key="tests"> <value key="ux" option="usability" /> <value key="it" option="integration" /> <value key="ut" option="unit" /> </tag> <tag type="booleanChoice" key="ignore_missing_sources" defaultValue="false" /> <tag type="text" key="input_file" defaultValue="myFile.xml" changeable="false" /> <tag type="multipleChoice" key="old_results" style="margin-left:10px" displayType="radioButton" defaultValue="Exclude"> <value key="Exclude" /> <value key="Include" /> </tag> <tag type="text" key="java_path" defaultValue="/usr/bin/java" hide="true" /> <tag type="password" required="true" key="password" /> </tags>
The
tags
element accepts the following attributes:
baseName
(mandatory if you are not using an exec-phase
) indicates on which framework you are basing this Data Provider. The value of this attribute must match a folder from the addons
folder of your installation.
needSources
(optional, default: false) allows specifying whether the Data Provider requires sources or not. When set to true, an error will be displayed if you try to select this Data Provider without adding any Repository Connector location to your project.
image
(optional, default: none) allows displaying a logo in the web UI for the Data Provider
projectStatusOnFailure
(optional, default: ERROR) defines what status the project ends in when this Data Provider produces an error. The following values are allowed:
IGNORE
WARNING
ERROR
projectStatusOnWarning
(optional, default: WARNING) defines what status the project ends in when this Data Provider produces a warning. The following values are allowed:
IGNORE
WARNING
ERROR
Each
tag
element is a Data Provider option and allows the following attributes:
key
(mandatory) is the option's key that will be passed to the perl script, or can be used to specify the parameter's value from the command line
type
(mandatory) defines the type of the parameter.
The following values are accepted:
text for free text entry
password for password fields
booleanChoice for a boolean
multipleChoice for offering a selection of predefined values
displayType
(optional) allows specifying how
to display a multipleChoice
parameter by using one of:
comboBox
radioButton
checkbox
defaultValue
(optional, default: empty) is the value used for the parameter when not specified
hide
(optional, default: false) allows hiding a parameter from the web UI, which is useful when combining it with a default value
changeable
(optional, default: true) allows making a parameter configurable only when creating the project and read-only for following analyses when set to false
style
(optional, default: empty) allows setting basic css for the attribute in the web UI
required
(optional, default: false) allows showing a red asterisk next to the field in the web UI to make it visibly required.
In order to display your Data Provider parameters in different languages in the web UI, your Data Provider's form.xml
does not
contain any hard-coded strings. Instead, Squore uses each parameter's key
attribute to dynamically
retrieve a translation from a form_xx.properties
file located next to form.xml
.
When you create a Data Provider, it is mandatory to include at least an English version of the strings in a file called form_en.properties
. You are free to add other languages as needed. Here is a sample .properties
file for the CustomDP you created in the previous section:
FORM.GENERAL.NAME = CustomDP FORM.DASHBOARD.NAME = Test Status FORM.GENERAL.DESCR = CustomDP imports test results for my project FORM.GENERAL.URL = http://example.com/CustomDP TAG.tests.NAME = Test Types TAG.tests.DESCR = Check the boxes next to the types of test results contained in the results TAG.ignore_missing_sources.NAME = Ignore Missing Sources TAG.input_file.NAME = Test Results TAG.input_file.DESCR = Specify the absolute path to the file containing the test results TAG.old_results.NAME = Old Test Results TAG.old_results.DESCR = If the previous analysis contained results that are not in this results file, what do you want to do with the old results? OPT.Exclude.NAME = discard OPT.Include.NAME = keep TAG.password.NAME = File Password TAG.password.DESCR = Specify the password to decrypt the test results file
The syntax for the .properties
file is as follows:
FORM.GENERAL.NAME is the display name of the Data Provider in the project wizard
FORM.DASHBOARD.NAME is the display name of the Data Provider in the Explorer
FORM.GENERAL.DESCR is the description displayed in the Data Provider's tooltip in the web UI
FORM.GENERAL.URL is a reference URL for the Data Provider. Note that it is not displayed in the web UI yet.
TAG.tag_name.NAME allows setting the display name of a parameter
TAG.tag_name.DESCR is a help text displayed in a tooltip next to the Data Provider option in the web UI
OPT.option_name.NAME allows setting the display name of an option
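The lookup convention above can be modelled as a dictionary of the key/value pairs loaded from form_en.properties. A sketch (`display_name` is a hypothetical helper, not a Squore API):

```python
def display_name(strings, kind, key):
    """Resolve the display name of a tag or option from the loaded
    .properties pairs, following the TAG./OPT. naming convention.
    Falls back to the raw key when no translation exists."""
    return strings.get(f"{kind}.{key}.NAME", key)

strings = {
    "FORM.GENERAL.NAME": "CustomDP",
    "TAG.tests.NAME": "Test Types",
    "OPT.Exclude.NAME": "discard",
}
assert display_name(strings, "TAG", "tests") == "Test Types"
assert display_name(strings, "OPT", "Exclude") == "discard"
assert display_name(strings, "TAG", "untranslated") == "untranslated"
```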
Using the form_en.properties
above for CustomDP results in the following being displayed in the web UI when launching an analysis:
Not all wizards display all Data Providers by default. If your Data Provider does not appear after refreshing your configuration, make sure that your wizard bundle allows displaying all Data Providers
by reviewing the tools
element of Bundle.xml
:
<?xml version="1.0" encoding="UTF-8"?> <Bundle> <Wizard ... > ... <tools all="true"> ... </tools> ... </Wizard> </Bundle>
For more information about the wizard bundle, consult the chapter called "Project Wizards" in the Configuration Guide.
If you have made this change and your Data Provider still does not appear in your wizard, consult the Validator to find out if it was disabled because of an error in its configuration.
Now that you have a new Data Provider available in the web interface (and the command line), this section will show you how to use these parameters and pass them to one or more scripts or executables in order to eventually write data in the format that Squore expects to import during the analysis.
At the end of a Data Provider execution, Squore expects a file named input-data.xml
to be written in a specific location. The syntax of the XML file to generate is as follows:
<!-- input-data.xml syntax --> <bundle version="2.0"> <artifact [local-key=""] [local-parent=""|parent=""] > <artifact [id="<guid-stable-in-time-also-used-as-a-key>"] name="Component" type="REQ" [location=""] > <info name|n="DESCR" value="The description of the object"/> <key value="3452-e89b-ff82"/> <metric name="TEST_KO" value="2"/> <finding name="AR120" loc="xxx" p0="The message" /> <link name="TEST" local-src=""|src=""|local-dst=""|dst="" /> <artifact id="" name="SubComponent" type="REQ"> ... </artifact> </artifact> </artifact> <artifact id="" local-key="" name="" type="" local-parent=""|parent="" [location=""] /> ... <link name="" local-src=""|src="" local-dst=""|dst="" /> ... <info local-ref=""|ref="" name="" value=""/> ... <metric local-ref=""|ref="" name="" value=""/> ... <finding local-ref=""|ref="" [location=""] p0="" /> <finding local-ref=""|ref="" [location=""] p0=""> <location local-ref=""|ref="" [location=""] /> ... <relax status="RELAXED_DEROGATION|RELAXED_LEGACY|RELAXED_FALSE_POSITIVE"><![CDATA[My Comment]]></relax> </finding> ... </bundle>
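A minimal input-data.xml following this syntax can be produced with any XML library. The sketch below uses Python's ElementTree with invented artefact values, purely for illustration of the expected structure:

```python
import xml.etree.ElementTree as ET

# Build a minimal bundle with one artefact carrying a metric and a finding.
# The ids, names and values here are made up for the example.
bundle = ET.Element("bundle", version="2.0")
comp = ET.SubElement(bundle, "artifact", id="REQ-1", name="Component", type="REQ")
ET.SubElement(comp, "metric", name="TEST_KO", value="2")
ET.SubElement(comp, "finding", name="AR120", loc="file.c", p0="The message")
xml = ET.tostring(bundle, encoding="unicode")
assert '<bundle version="2.0">' in xml
assert 'name="TEST_KO"' in xml
```

The resulting file must be written to the location Squore expects (see ${outputDirectory} below) for the data to be imported during the analysis.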
Your Data Provider is configured by adding an exec-phase
element with a mandatory id="add-data"
attribute in form.xml
.
The basic syntax of an exec-phase
can be seen below:
<exec-phase id="add-data"> <exec name="tcl|perl|java|javascript or nashorn" | executable="/path/to/bin" | executable="executable_name" failOnError="true|false" failOnStdErr="true|false" warn="[WARN]" error="[ERROR|ERR]" fatal="[FATAL]"> <arg value="${<function>(<args>)}"/> <arg value="-freeText" /> <arg value="${<predefinedVars>}" /> <arg value="versions" /> <arg value="-myTag"/> <arg tag="myTag"/> <env key="MY_VAR" value="SOME_VALUE"/> </exec> <exec ... /> <exec-tool name="another_data_provider"> <param key="<tagName>" value="<value>" /> <param key="<tagName>" tag="<tag>" /> <param ... /> </exec-tool> <exec-tool ... > ... </exec-tool> </exec-phase>
The exec-phase
element accepts one or more launches of scripts or executables
specified in an exec
child element, that can receive arguments and environment
variables specified via arg
and env
elements.
There are four built-in languages for executables:
tcl
perl
java
javascript or nashorn
The scripts are launched using the tcl, perl, or java runtimes defined in your Squore installation. This is also the case for javascript, which is handled by Java's Nashorn engine.
The following attributes of the exec
element allow you to control error handling:
failOnError
(optional, default: true) marks the Data Provider execution as failed if the executable returns an error code
failOnStdErr
(optional, default: true) marks the Data Provider execution as failed if the executable prints something to stdErr during the execution
warn
,
error
and
fatal
(optional, default: see code block above) allow you to define patterns to look for in the executable's standard output to fine-tune the result of the execution.
Other executables can be called, as long as they are available on the system's PATH, or configured in
config.xml
Given the following config.xml
:
<!-- config.xml (server or cli) --> <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <squore type="server" version="1.3"> <paths> <path name="python" path="C:\Python\python.exe" /> <path name="git" path="C:\Git\bin\git.exe" /> </paths> ... </squore>
git and python can be called in your Data Provider as follows:
<exec-phase id="add-data"> <exec executable="git"> ... </exec> <exec executable="python"> ... </exec> </exec-phase>
Argument values can be:
Free text passed in a value
attribute, useful to specify a parameter for your script
<exec executable="perl">
    <arg value="-V" />
</exec>
A tag key declared in form.xml, passed as a tag attribute to retrieve the input specified by the user. If no input was specified, you can define a defaultValue:

<arg tag="maxValue" defaultValue="50" />
<arg tag="configFile" defaultValue="${getToolConfigDir(default.xml)}" />
One of the predefined functions:

${getOutputFile(<relative/path/to/file>,<abortIfMissing>)} returns the absolute path of an input-data.xml file output by an exec-phase. abortIfMissing is an optional boolean which, when set to true, aborts the execution if the file is missing.
${getTemporaryFile(<relative/path/to/file>)} returns the absolute path of a temporary file created by an exec (only for the add-data and repo-add-data phases)
${getToolAddonsDir(<relative/path/to/file>)} returns the absolute path of a file in the Data Provider's addons folder
${getToolConfigDir(<relative/path/to/file>)} returns the absolute path of a file in the Data Provider's configuration folder
${path(<executable_name>)} returns the absolute path of an executable configured in config.xml, or just the executable name if the executable is available from the system's PATH.
<exec executable="...">
    <arg value="-git_path" />
    <arg value="${path(git)}" />
</exec>
One of the predefined variables
${tmpDirectory} to get an absolute path to a temp folder to create files
${sourcesList} to get a list of the aliases and locations containing the data extracted by the repository connectors used in the analysis
${outputDirectory} to get the absolute path of the folder where the Data Provider needs to write the final input-data.xml
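As a sketch of how these variables are typically combined, the hypothetical exec below passes a temp folder, the sources list and the output folder to a script (generate-data.py is an assumed script name, not part of Squore):

```xml
<exec executable="python">
    <!-- generate-data.py is a hypothetical script name -->
    <arg value="generate-data.py" />
    <arg value="-tmp" />
    <arg value="${tmpDirectory}" />
    <arg value="-sources" />
    <arg value="${sourcesList}" />
    <arg value="-out" />
    <arg value="${outputDirectory}" />
</exec>
```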
You can call and pass parameters to other Data Providers after your exec-phase using an exec-tool element. The exec-tool element uses a mandatory name attribute, which is the name of the folder containing the other Data Provider to launch in your configuration folder, and supports passing the parameters expected by the other Data Provider via one or more param elements, where the key attribute is the name of a tag expected by the other Data Provider, and the input is supplied either literally via a value attribute or from one of your own form tags via a tag attribute (with an optional defaultValue).
As an example, the following Data Provider generates a CSV file that is then passed to the pep8 Data Provider:
<exec-phase id="add-data">
    <exec executable="python">
        <arg value="consolidate-reports-recursive.py" />
        <arg value="-folders" />
        <arg tag="root_folder" />
        <arg value="-outputFile" />
        <arg value="output.csv" />
    </exec>
    <exec-tool name="pep8">
        <param key="csv" value="${getOutputFile(output.csv)}" />
        <param key="separator" tag="separator" defaultValue=";" />
    </exec-tool>
</exec-phase>
In this other example, a perl script is launched to retrieve issues from a ticketing system, and the exported data is passed to the import_ticket Data Provider:
<exec-phase id="add-data">
    <exec name="perl">
        <arg value="${getToolConfigDir(export_ticket.pl)}" />
        <arg value="-url" />
        <arg tag="url" />
        <arg value="-login" />
        <arg tag="login" />
        <arg value="-pwd" />
        <arg tag="pwd" />
        <arg value="-outputFile" />
        <arg value="${getOutputFile(exportdata.csv,false)}" />
    </exec>
    <exec-tool name="import_ticket">
        <param key="input_file" value="${getOutputFile(exportdata.csv)}" />
        <param key="csv_separator" value=";" />
    </exec-tool>
</exec-phase>
If your Data Provider uses a perl script, Squore provides a small library called SQuORE::Args that makes it easy to retrieve script arguments. Using it as part of your script, you can retrieve arguments using the get_tag_value() function, as shown below:
# name: export_ticket.pl
# description: exports issues to a CSV file
use SQuORE::Args;
# ...
# ...
my $url = get_tag_value("url");
my $login = get_tag_value("login");
my $pwd = get_tag_value("pwd");
my $outputFile = get_tag_value("outputFile");
# ...
exit 0;
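SQuORE::Args is only available to perl scripts. If your executable is written in python instead, the same "-key value" argument convention can be handled with a few lines of standard library code. This is a minimal sketch, not part of any Squore API:

```python
import sys

def get_tag_value(name, argv=None):
    """Return the value following "-<name>" in the argument list, or None.

    Mimics SQuORE::Args' get_tag_value() for arguments passed
    as "-key value" pairs by the exec element.
    """
    args = sys.argv[1:] if argv is None else argv
    flag = "-" + name
    for i, item in enumerate(args[:-1]):
        if item == flag:
            return args[i + 1]
    return None

# Example: arguments as they would be passed by the export_ticket exec above
args = ["-url", "http://example.com", "-login", "admin"]
print(get_tag_value("url", args))    # → http://example.com
print(get_tag_value("login", args))  # → admin
```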
If you want to find more examples of working Data Providers that use this syntax, check the following Data Providers in Squore's default configuration folder:
conf-checker calls a jar file to write an XML file in Squore's exchange format
import_ticket parses a file to translate it into a format that can then be passed to csv_import to import the tickets into Squore
jira retrieves data from Jira and passes it to import_ticket
The same syntax used to create Data Providers can be used to create Repository Connectors, and therefore instruct Squore to get source code from SCMs. Instead of using an exec-phase with id="add-data", your Repository Connector should define the following phases:
id="import" defines how you extract source code and make it available to Squan Sources so it can be analysed. This phase is expected to return a path to a folder containing the sources to analyse, or a data.properties file listing the path to the folder containing sources and various other properties to be used in other executions:
directory=/path/to/sources-to-analyse
data.<key1>=<value1>
data.<key2>=<value2>
This phase is executed once per source code node in the project and allows you to use the following additional variables:
${outputSourceDirectory} is the folder containing the sources to analyse
${alias} is the alias used for the source code node (empty if there is only one source code node)
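The data.properties file can be produced by whatever language your import phase uses. As an illustration in python (the helper name is mine, not a Squore API), the expected format boils down to:

```python
import os

def write_import_properties(output_dir, sources_dir, extra=None):
    """Write the data.properties file returned by an "import" phase.

    extra holds optional data.<key>=<value> pairs, which can be retrieved
    later in the display phase with ${getImportData(<key>)}.
    """
    path = os.path.join(output_dir, "data.properties")
    with open(path, "w") as f:
        f.write("directory=%s\n" % sources_dir)
        for key, value in sorted((extra or {}).items()):
            f.write("data.%s=%s\n" % (key, value))
    return path

# Example usage
import tempfile
out = tempfile.mkdtemp()
p = write_import_properties(out, "/path/to/sources-to-analyse", {"revision": "1234"})
print(open(p).read())
```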
id="repo-add-data" is similar to the add-data phase described for Data Providers in the section called “Running your Data Provider” and is expected to produce an input-data.xml file. The only difference in the case of a Repository Connector is that this phase is executed once per source code node in the analysis.
id="display" is the phase that is called when users request to view the source code for an artefact from the web UI. This phase is expected to return a data.properties file with the following keys:
filePath=/path/to/source/file
displayPath=<Artefact Display Path (optional)>
The contents of filePath will be loaded in the source code viewer, while the value of displayPath will be used as the file path displayed in the header of the source code viewer.
This phase allows you to use the following additional variables:
${scaInfo} is text to display in the title bar of the source code viewer in the web interface
${artefactName} is the name of the file to display
${artefactPath} is the path (without the alias) of the file to display
During the display phase, you can retrieve any data set during the import phase for the repository using the ${getImportData(<key1>)} function
Consult SVN's form.xml in <SQUORE_HOME>/configuration/repositoryConnectors/SVN for a working example of a Repository Connector that uses all the phases described above.
If you want your Data Provider to use the Squore toolkit to retrieve references to artefacts, the following variables are available (in the add-data and repo-add-data phases only): ${tclToolkitFile} and ${squanOutputDirectory}, as used in the example below. In order to use the toolkit, your exec must use the tcl language. As an example, here is a sample exec-phase and associated tcl file to get you started:
<!-- form.xml -->
<exec-phase id="repo-add-data">
    <exec name="tcl">
        <arg value="${getToolAddonsDir(repo-add-data.tcl)}" />
        <arg value="${tclToolkitFile}" />
        <arg value="${squanOutputDirectory}" />
        <arg value="${outputDirectory}" />
        <arg tag="xxx" />
    </exec>
</exec-phase>
# repo-add-data.tcl:
set toolkitFile [lindex $argv 0]
set sqOutputDir [lindex $argv 1]
set outputDir [lindex $argv 2]
set xxx [lindex $argv 3]

# Initialise the toolkit
puts "Initializing toolkit"
source $toolkitFile
toolkit::initialize $sqOutputDir $outputDir

# Execute your code
puts "Main execution"
# your code here
# ...

# Generate xml files (artefacts)
puts "Generating xml files"
toolkit::generate $outputDir {artefacts}
In order to help you import data into Squore, the following Data Provider frameworks are provided and can write a valid input-data.xml file for you:
csv_import (new in 18.0)
The csv_import framework allows you to write Data Providers that produce CSV files and then pass them on to the framework to be converted to an XML format that Squore understands. This framework allows you to import metrics, findings, textual information and links, as well as generate your own artefacts. It is fully linked to the source code parser and therefore allows you to locate existing source code artefacts generated by the parser (new in 18.0). Refer to the full csv_import Reference for more information.
xml (new in 18.0)
The xml framework is a sample implementation of a Data Provider that allows you to directly import an XML file, or run it through an XSL transformation so that it matches the input format expected by Squore (input-data.xml). This framework therefore allows you to import metrics, findings, textual information and links, as well as generate your own artefacts. Refer to the full xml Reference for more information.
If you are looking for the legacy Data Provider frameworks from previous versions of Squore, consult the section called “Legacy Frameworks”.
The legacy Data Provider frameworks are still supported; however, using the new frameworks is recommended for developing new Data Providers, as they are more flexible and provide more functionality to interact with source code artefacts.
Table of Contents
install — Squore CLI install script
install [ -v ] [ -s server_url ] [ -u user ] [ -p password ] [ options ... ]
Installs and configures Squore CLI.
The most common options when installing Squore CLI are -s, -u and -p, which configure the server URL, user and password used to connect to the server. These details are stored on the machine so that the password does not have to be passed again on the command line for this user account. The -N option disables the automatic synchronisation of the configuration folders with the server at the end of the installation. This synchronisation can also be launched manually later on if needed.
-s server_url (default: http://localhost:8180/SQuORE_Server)
The URL of Squore Server that Squore CLI will connect to after installation.
-u user (default: demo)
The username to use to connect to Squore Server.
-p password (default: demo)
The password to use to connect to Squore Server.
-N
Do not synchronise the client with the server.
-v
Turn on verbose mode.
Table of Contents
The following Data Provider frameworks support importing all kinds of data into Squore. Whether you choose one or the other depends on the ability of your script or executable to produce CSV or XML data. Note that these frameworks are recommended over the legacy frameworks described in the section called “Legacy Frameworks”, which are deprecated as of Squore 18.0.18.
==============
= csv_import =
==============

The csv_import framework allows you to create Data Providers that produce CSV files that the framework will translate into XML files that can be imported in your analysis results. This framework is useful if writing XML files directly from your script is not practical.

Using csv_import, you can import metrics, findings (including relaxed findings), textual information, and links between artefacts (including to and from source code artefacts). This framework replaces all the legacy frameworks that wrote CSV files in previous versions.

Note that this framework can be called by your Data Provider simply by creating an exec-tool phase that calls the part of the framework located in the configuration folder:

<exec-tool name="csv_import">
    <param key="csv" value="${getOutputFile(output.csv)}" />
    <param key="separator" value=";" />
    <param key="delimiter" value="&quot;" />
</exec-tool>

For a full description of all the parameters that can be used, consult the section called "CSV Import" in the "Data Providers" chapter of this manual.
============================================
= CSV format expected by the data provider =
============================================

- Line to define an artefact (like a parent artefact for instance):
  Artefact
- Line to add n metrics to an artefact:
  Artefact;(MetricId;Value)*
- Line to add n infos to an artefact:
  Artefact;(InfoId;Value)*
- Line to add a key to an artefact:
  Artefact;Value
- Line to add a finding to an artefact:
  Artefact;RuleId;Message;Location
- Line to add a relaxed finding to an artefact:
  Artefact;RuleId;Message;Location;RelaxStatus;RelaxMessage
- Line to add a link between artefacts:
  Artefact;LinkId;Artefact

where:

- MetricId is the id of the metric as declared in the Analysis Model
- InfoId is the id of the information to import
- Value is the value of the metric or the information or the key to import (a key is a UUID used to reference an artefact)
- RuleId is the id of the rule violated as declared in the Analysis Model
- Message is the message of the finding, which is displayed after the rule description
- Location is the location of the finding (a line number for findings attached to source code artefacts, a url for findings attached to any other kind of artefact)
- RelaxStatus is one of DEROGATION, FALSE_POSITIVE or LEGACY and defines the relaxation state of the imported finding
- RelaxMessage is the justification message for the relaxation state of the finding
- LinkId is the id of the link to create between artefacts, as declared in the Analysis Model

==========================
= Manipulating Artefacts =
==========================

The following functions are available to locate and manipulate source code artefacts in the project:

- ${artefact(type,path)}     ==> Identify an artefact by its type and full path
- ${artefact(type,path,uid)} ==> Identify an artefact by its type and full path and assign it the unique identifier uid
- ${uid(value)}              ==> Identify an artefact by its unique identifier (value)
- ${file(path)}              ==> Tries to find a source code file matching the "path" in the project
- ${function(fpath,line)}    ==> Tries to find a source code function at line "line" in the file matching the "fpath" in the project
- ${function(fpath,name)}    ==> Tries to find a source code function whose name matches "name" in the file matching the "fpath" in the project
- ${class(fpath,line)}       ==> Tries to find a source code class at line "line" in the file matching the "fpath" in the project
- ${class(fpath,name)}       ==> Tries to find a source code class whose name matches "name" in the file matching the "fpath" in the project

===============
= Input Files =
===============

The data provider accepts the following files:

Metrics file accepts:
  Artefact definition line
  Metrics line
Findings file accepts:
  Artefact definition line
  Findings line
Keys file accepts:
  Artefact definition line
  Keys line
Information file accepts:
  Artefact definition line
  Information line
Links file accepts:
  Artefact definition line
  Links line

It is also possible to mix every kind of line in a single csv file, as long as each line is prefixed with the kind of data it contains. In this case, the first column must contain one of:

DEFINE  (or D): when the line is used to define an artefact
METRIC  (or M): to add a metric
INFO    (or I): to add an information
KEY     (or K): to add a key
FINDING (or F): to add a finding, relaxed or not
LINK    (or L): to add a link between artefacts

The following is an example of a csv file containing mixed lines:

D;${artefact(CR_FOLDER,/CRsCl)}
M;${artefact(CR,/CRsCl/cr2727,2727)};NB;2
M;${artefact(CR,/CRsCl/cr1010,1010)};NB;4
I;${uid(1010)};NBI;Bad weather
K;${artefact(CR,/CRsCl/cr2727,2727)};#CR2727
I;${artefact(CR,/CRsCl/cr2727,2727)};NBI;Nice Weather
F;${artefact(CR,/CRsCl/cr2727,2727)};BAD;Malformed
M;${uid(2727)};NB_EXT;3
I;${uid(2727)};NBI_EXT;Another Info
F;${uid(2727)};BAD_EXT;Badlyformed
F;${uid(2727)};BAD_EXT1;Badlyformed1;;FALSE_POSITIVE;Everything is in the title
F;${function(machine.c,41)};R_GOTO;"No goto; neither togo;";41
F;${function(machine.c,42)};R_GOTO;No Goto;42;LEGACY;Was done a long time ago
L;${uid(1010)};CR2CR;${uid(2727)}
L;${uid(2727)};CR2CR;${uid(1010)}
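As a sketch, a script could assemble such mixed lines before handing the file over to csv_import (the data and helper name below are illustrative, not part of any Squore API):

```python
def mixed_csv_lines(change_requests):
    """Build mixed-format lines (D/M/I prefixes) understood by csv_import.

    change_requests: list of (uid, nb_metric, info_text) tuples; illustrative only.
    """
    lines = ["D;${artefact(CR_FOLDER,/CRsCl)}"]
    for uid, nb, info in change_requests:
        artefact = "${artefact(CR,/CRsCl/cr%s,%s)}" % (uid, uid)
        lines.append("M;%s;NB;%d" % (artefact, nb))      # METRIC line
        lines.append("I;${uid(%s)};NBI;%s" % (uid, info))  # INFO line
    return lines

for line in mixed_csv_lines([(2727, 2, "Nice Weather"), (1010, 4, "Bad weather")]):
    print(line)
```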
=======
= xml =
=======

The xml framework is an implementation of a data provider that allows you to import an xml file, potentially after an xsl transformation. The transformed XML file is expected to follow the syntax expected by other data providers (see the input-data.xml specification).

This framework can be extended like the other frameworks, by creating a folder for your data provider in your configuration/tools folder and creating a form.xml. Following are three examples of the possible uses of this framework.

Example 1 - The user enters an xml path and an xsl path; the xml is transformed using the xsl and then imported
=========

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="xml">
    <tag type="text" key="xml" />
    <tag type="text" key="xslt" />
    <exec-phase id="add-data">
        <exec name="javascript" failOnError="true" failOnStdErr="true">
            <arg value="main.js" />
            <arg value="--" />
            <arg value="${outputDirectory}" />
            <arg tag="xml" />
            <arg tag="xslt" />
        </exec>
    </exec-phase>
</tags>

Example 2 - The user enters an xml path; the xsl file is predefined (input-data.xsl) and present in the same directory as form.xml
=========

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="xml">
    <tag type="text" key="xml" />
    <exec-phase id="add-data">
        <exec name="javascript" failOnError="true" failOnStdErr="true">
            <arg value="main.js" />
            <arg value="--" />
            <arg value="${outputDirectory}" />
            <arg tag="xml" />
            <arg value="${getToolConfigDir(input-data.xsl)}" />
        </exec>
    </exec-phase>
</tags>

Example 3 - The user enters an xml path of a file already in the expected format
=========

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="xml">
    <tag type="text" key="xml" />
    <exec-phase id="add-data">
        <exec name="javascript" failOnError="true" failOnStdErr="true">
            <arg value="main.js" />
            <arg value="--" />
            <arg value="${outputDirectory}" />
            <arg tag="xml" />
        </exec>
    </exec-phase>
</tags>
Csv
The Csv framework is used to import metrics or textual information and attach them to artefacts of type Application or File. While parsing one or more input CSV files, if it finds the same metric for the same artefact several times, it will only use the last occurrence of the metric and ignore the previous ones. Note that the type of artefacts you can attach metrics to is limited to Application and File artefacts. If you are working with File artefacts, you can let the Data Provider create the artefacts by itself if they do not exist already. Refer to the full Csv Reference for more information.
csv_findings
The csv_findings framework is used to import findings in a project and attach them to artefacts of type Application, File or Function. It takes a single CSV file as input and is the only framework that allows you to import relaxed findings directly. Refer to the full csv_findings Reference for more information.
CsvPerl
The CsvPerl framework offers the same functionality as Csv, but instead of dealing with the raw input files directly, it allows you to run a perl script to modify them and produce a CSV file with the expected input format for the Csv framework. Refer to the full CsvPerl Reference for more information.
FindingsPerl
The FindingsPerl framework is used to import findings and attach them to existing artefacts. Optionally, if an artefact cannot be found in your project, the finding can be attached to the root node of the project instead. When launching a Data Provider based on the FindingsPerl framework, a perl script is run first. This perl script is used to generate a CSV file with the expected format which will then be parsed by the framework. Refer to the full FindingsPerl Reference for more information.
Generic
The Generic framework is the most flexible Data Provider framework, since it allows attaching metrics, findings, textual information and links to artefacts. If the artefacts do not exist in your project, they will be created automatically. It takes one or more CSV files as input (one per type of information you want to import) and works with any type of artefact. Refer to the full Generic Reference for more information.
GenericPerl
The GenericPerl framework is an extension of the Generic framework that starts by running a perl script in order to generate the metrics, findings, information and links files. It is useful if you have an input file whose format needs to be converted to match the one expected by the Generic framework, or if you need to retrieve and modify information exported from a web service on your network. Refer to the full GenericPerl Reference for more information.
ExcelMetrics
The ExcelMetrics framework is used to extract information from one or more Microsoft Excel files (.xls or .xlsx). A detailed configuration file allows defining how the Excel document should be read and what information should be extracted. This framework allows importing metrics, findings and textual information to existing artefacts or artefacts that will be created by the Data Provider. Refer to the full ExcelMetrics Reference for more information.
After you choose the framework to extend, you should follow these steps to make your custom Data Provider known to Squore:
Create a new tools folder to save your work in your custom configuration folder: MyConfiguration/configuration/tools.
Create a new folder for your data provider inside the new tools folder: CustomDP. This folder needs to contain the following files:
form.xml defines the input parameters for the Data Provider, and the base framework to use, as described in the section called “Data Provider Parameters”
form_en.properties contains the strings displayed in the web interface for this Data Provider, as described in the section called “Localising your Data Provider”
config.tcl contains the parameters for your custom Data Provider that are specific to the selected framework
CustomDP.pl is the perl script that is executed automatically if your custom Data Provider uses one of the *Perl frameworks.
Edit Squore Server's configuration file to register your new configuration path, as described in the Installation and Administration Guide.
Log into the web interface as a Squore administrator and reload the configuration.
Your new Data Provider is now known to Squore and can be triggered in analyses. Note that you may have to modify your Squore configuration to make your wizard aware of the new Data Provider and your model aware of the new metrics it provides. Refer to the relevant sections of the Configuration Guide for more information.
=======
= Csv =
=======

The Csv framework is used to import metrics or textual information and attach them to artefacts of type Application, File or Function. While parsing one or more input CSV files, if it finds the same metric for the same artefact several times, it will only use the last occurrence of the metric and ignore the previous ones.

Note that the type of artefacts you can attach metrics to is limited to Application, File and Function artefacts. If you are working with File artefacts, you can let the Data Provider create the artefacts by itself if they do not exist already.

============
= form.xml =
============

You can customise form.xml to either:
- specify the path to a single CSV file to import
- specify a pattern to import all csv files matching this pattern in a directory

In order to import a single CSV file:
=====================================

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="Csv" needSources="true">
    <tag type="text" key="csv" defaultValue="/path/to/mydata.csv" />
</tags>

Notes:
- The csv key is mandatory.
- Since Csv-based data providers commonly rely on artefacts created by Squan Sources, you can set the needSources attribute to force users to specify at least one repository connector when creating a project.
In order to import all files matching a pattern in a folder:
============================================================

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="Csv" needSources="true">
    <!-- Root directory containing Csv files to import -->
    <tag type="text" key="dir" defaultValue="/path/to/mydata" />
    <!-- Pattern that needs to be matched by a file name in order to import it -->
    <tag type="text" key="ext" defaultValue="*.csv" />
    <!-- search for files in sub-folders -->
    <tag type="booleanChoice" defaultValue="true" key="sub" />
</tags>

Notes:
- The dir and ext keys are mandatory
- The sub key is optional (and its value set to false if not specified)

==============
= config.tcl =
==============

Sample config.tcl file:
=======================

# The separator used in the input CSV file
# Usually \t or ;
set Separator "\t"

# The delimiter used in the input CSV file
# This is normally left empty, except when you know that some of the values in the CSV file
# contain the separator itself, for example:
# "A text containing ; the separator";no problem;end
# In this case, you need to set the delimiter to \" in order for the data provider to find 3 values instead of 4.
# To include the delimiter itself in a value, you need to escape it by duplicating it, for example:
# "A text containing "" the delimiter";no problemo;end
# Default: none
set Delimiter \"

# ArtefactLevel is one of:
# Application: to import data at application level
# File: to import data at file level. In this case ArtefactKey has to be set
#       to the value of the header (key) of the column containing the file path
#       in the input CSV file.
# Function: to import data at function level, in this case:
#       ArtefactKey has to be set to the value of the header (key) of the column containing the path of the file
#       FunctionKey has to be set to the value of the header (key) of the column containing the name and signature of the function
# Note that the values are case-sensitive.
set ArtefactLevel File
set ArtefactKey File

# Should the File paths be case-insensitive?
# true or false (default)
# This is used when searching for a matching artefact in already-existing artefacts.
set PathsAreCaseInsensitive "false"

# Should file artefacts declared in the input CSV file be created automatically?
# true (default) or false
set CreateMissingFile "true"

# FileOrganisation defines the layout of the input CSV file and is one of:
# header::column: values are referenced from the column header
# header::line: NOT AVAILABLE
# alternate::line: lines are a sequence of {Key Value}
# alternate::column: columns are a sequence of {Key Value}
# There are more examples of possible CSV layouts later in this document
set FileOrganisation header::column

# Metric2Key contains a case-sensitive list of paired metric IDs:
# {MeasureID KeyName [Format]}
# where:
# - MeasureID is the id of the measure as defined in your analysis model
# - KeyName, depending on the FileOrganisation, is either the name of the column or the name
#   in the cell preceding the value to import as found in the input CSV file
# - Format is the optional format of the data, the only accepted format
#   is "text" to attach textual information to an artefact, for normal metrics omit this field
set Metric2Key {
    {BRANCHES Branchs}
    {VERSIONS Versions}
    {CREATED Created}
    {IDENTICAL Identical}
    {ADDED Added}
    {REMOV Removed}
    {MODIF Modified}
    {COMMENT Comment text}
}

==========================
= Sample CSV Input Files =
==========================

Example 1:
==========
FileOrganisation : header::column
ArtefactLevel    : File
ArtefactKey      : Path

Path     Branchs  Versions
./foo.c  15       105
./bar.c  12       58

Example 2:
==========
FileOrganisation : alternate::line
ArtefactLevel    : File

Path  ./foo.c  Branchs  15  Versions  105
Path  ./bar.c  Branchs  12  Versions  58

Example 3:
==========
FileOrganisation : header::column
ArtefactLevel    : Application

ChangeRequest  Corrected  Open
27             15         11

Example 4:
==========
FileOrganisation : alternate::column
ArtefactLevel    : Application

ChangeRequest  15
Corrected      11

Example 5:
==========
FileOrganisation : alternate::column
ArtefactLevel    : File

Path      ./foo.c
Branchs   15
Versions  105
Path      ./bar.c
Branchs   12
Versions  58

Example 6:
==========
FileOrganisation : header::column
ArtefactLevel    : Function
ArtefactKey      : Path
FunctionKey      : Name

Path     Name                  Decisions  Tested
./foo.c  end_game(int*,int*)   15         3
./bar.c  bar(char)             12         6

Working With Paths:
===================
- Path separators are unified: you do not need to worry about handling differences between Windows and Linux
- With the option PathsAreCaseInsensitive, case is ignored when searching for files in the Squore internal data
- Paths known by Squore are relative paths starting at the root of what was specified in the repository connector during the analysis. This relative path is the one used to match with a path in a csv file.

Here is a valid example of file matching:
1. You provide C:\A\B\C\D as the root folder in a repository connector
2. C:\A\B\C\D contains E\e.c, so Squore will know E/e.c as a file
3. You provide a csv file produced on linux containing /tmp/X/Y/E/e.c as path; Squore will be able to match it with the known file.

Squore uses the longest possible match. In case of conflict, no file is found and a message is sent to the log.
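The longest-match rule described above can be sketched as follows. This is a simplified, case-sensitive model of the behaviour written for illustration, not Squore's actual implementation:

```python
def match_known_file(csv_path, known_files):
    """Return the known relative path whose trailing components best match
    csv_path; None when the best match is ambiguous (conflict) or absent."""
    parts = csv_path.replace("\\", "/").split("/")
    best, best_len, conflict = None, 0, False
    for known in known_files:
        kparts = known.replace("\\", "/").split("/")
        # Count how many trailing path components match.
        n = 0
        while n < min(len(parts), len(kparts)) and parts[-1 - n] == kparts[-1 - n]:
            n += 1
        if n > best_len:
            best, best_len, conflict = known, n, False
        elif n == best_len and n > 0:
            conflict = True  # two files match equally well: report none
    return None if conflict else best

# A csv produced on linux still matches the file known from a Windows checkout
print(match_known_file("/tmp/X/Y/E/e.c", ["E/e.c", "F/f.c"]))  # → E/e.c
```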
================
= csv_findings =
================

The csv_findings data provider is used to import findings (rule violations) and attach them to artefacts of type Application, File or Function.

The format of the csv file given as parameter has to be:

FILE;FUNCTION;RULE_ID;MESSAGE;LINE;COL;STATUS;STATUS_MESSAGE;TOOL

where:
======
FILE: is the full path of the file where the finding is located
FUNCTION: is the name of the function where the finding is located
RULE_ID: is the Squore ID of the rule which is violated
MESSAGE: is the specific message of the violation
LINE: is the line number where the violation occurs
COL: (optional, leave empty if not provided) is the column number where the violation occurs
STATUS: (optional, leave empty if not provided) is the status of the relaxation if the violation has to be relaxed (DEROGATION, FALSE_POSITIVE, LEGACY)
STATUS_MSG: (optional, leave empty if not provided) is the message for the relaxation when relaxed
TOOL: is the tool providing the violation

The header line is read and ignored (it has to be there).
The separator (semicolon by default) can be changed in the config.tcl file (see below).
The delimiter (no delimiter by default) can be changed in the config.tcl file (see below).

==============
= config.tcl =
==============

Sample config.tcl file:
=======================

# The separator used in the input CSV file
# Usually ; or \t
set Separator \;

# The delimiter used in the CSV input file
# This is normally left empty, except when you know that some of the values in the CSV file
# contain the separator itself, for example:
# "A text containing ; the separator";no problem;end
# In this case, you need to set the delimiter to \" in order for the data provider to find 3 values instead of 4.
# To include the delimiter itself in a value, you need to escape it by duplicating it, for example:
# "A text containing "" the delimiter";no problemo;end
# Default: none
set Delimiter \"
===========
= CsvPerl =
===========

The CsvPerl framework offers the same functionality as Csv, but instead of dealing with the raw input files directly, it allows you to run a perl script to modify them and produce a CSV file with the expected input format for the Csv framework.

============
= form.xml =
============

In your form.xml, specify the input parameters you need for your Data Provider. Our example will use two parameters: a path to a CSV file and another text parameter:

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="CsvPerl" needSources="true">
    <tag type="text" key="csv" defaultValue="/path/to/csv" />
    <tag type="text" key="param" defaultValue="MyValue" />
</tags>

- Since Csv-based data providers commonly rely on artefacts created by Squan Sources, you can set the needSources attribute to force users to specify at least one repository connector when creating a project.

==============
= config.tcl =
==============

Refer to the description of config.tcl for the Csv framework. For CsvPerl one more option is possible:

# The variable NeedSources is used to request the perl script to be executed once for each
# repository node of the project. In that case an additional parameter is sent to the
# perl script (see below for its position)
#set ::NeedSources 1

==========================
= Sample CSV Input Files =
==========================

Refer to the examples for the Csv framework.

===============
= Perl Script =
===============

The perl script will receive as arguments:
- all parameters defined in form.xml (as -${key} $value)
- the input directory to process (only if ::NeedSources is set to 1 in the config.tcl file)
- the location of the output directory where temporary files can be generated
- the full path of the csv file to be generated

For the form.xml we created earlier in this document, the command line will be:

perl <configuration_folder>/tools/CustomDP/CustomDP.pl -csv /path/to/csv -param MyValue <output_folder> <output_folder>/CustomDP.csv

Example of perl script:
=======================

#!/usr/bin/perl
use strict;
use warnings;
$| = 1;

my ($csvKey, $csvValue, $paramKey, $paramValue, $output_folder, $output_csv) = @ARGV;

# Parse input CSV file
# ...

# Write results to CSV
open(CSVFILE, ">" . $output_csv) || die "perl: can not write: $!\n";
binmode(CSVFILE, ":utf8");
print CSVFILE "ChangeRequest;15";
close CSVFILE;

exit 0;
=========== = Generic = =========== The Generic framework is the most flexible Data Provider framework, since it allows attaching metrics, findings, textual information and links to artefacts. If the artefacts do not exist in your project, they will be created automatically. It takes one or more CSV files as input (one per type of information you want to import) and works with any type of artefact. ============ = form.xml = ============ In form.xml, allow users to specify the path to a CSV file for each type of data you want to import. You can set needSources to true or false, depending on whether or not you want to require the use of a repository connector when your custom Data Provider is used. Example of form.xml file: ========================= <?xml version="1.0" encoding="UTF-8"?> <tags baseName="Generic" needSources="false"> <!-- Path to CSV file containing Metrics data --> <tag type="text" key="csv" defaultValue="mydata.csv" /> <!-- Path to CSV file containing Findings data: --> <tag type="text" key="fdg" defaultValue="mydata_fdg.csv" /> <!-- Path to CSV file containing Information data: --> <tag type="text" key="inf" defaultValue="mydata_inf.csv" /> <!-- Path to CSV file containing Links data: --> <tag type="text" key="lnk" defaultValue="mydata_lnk.csv" /> </tags> Note: All tags are optional. You only need to specify the tag element for the type of data you want to import with your custom Data Provider. ============== = config.tcl = ============== Sample config.tcl file: ======================= # The separator used in the input csv files # Usually \t or ; or , # In our example below, a space is used. 
set Separator " "

# The delimiter used in the input CSV file
# This is normally left empty, except when you know that some of the values in the CSV file
# contain the separator itself, for example:
# "A text containing ; the separator";no problem;end
# In this case, you need to set the delimiter to \" in order for the data provider to find 3 values instead of 4.
# To include the delimiter itself in a value, you need to escape it by duplicating it, for example:
# "A text containing "" the delimiter";no problemo;end
# Default: none
set Delimiter \"

# The path separator in an artefact's path
# in the input CSV file.
# Note that Artifact is spelled with an "i"
# and not an "e" in this option.
set ArtifactPathSeparator "/"

# If the data provider needs to specify a different toolName (optional)
set SpecifyToolName 1

# Metric2Key contains a case-sensitive list of paired metric IDs:
# {MeasureID KeyName [Format]}
# where:
# - MeasureID is the ID of the measure as defined in your analysis model
# - KeyName is the name in the cell preceding the value to import, as found in the input CSV file
# - Format is the optional format of the data; the only accepted format
#   is "text", to attach textual information to an artefact. Note that the same result can also
#   be achieved with Info2Key (see below). For normal metrics, omit this field.
set Metric2Key { {CHANGES Changed} }

# Finding2Key contains a case-sensitive list of paired rule IDs:
# {FindingID KeyName}
# where:
# - FindingID is the ID of the rule as defined in your analysis model
# - KeyName is the name of the finding in the input CSV file
set Finding2Key { {R_NOTLINKED NotLinked} }

# Info2Key contains a case-sensitive list of paired info IDs:
# {InfoID KeyName}
# where:
# - InfoID is the ID of the textual information as defined in your analysis model
# - KeyName is the name of the information in the input CSV file
set Info2Key { {SPECIAL_LABEL Label} }

# Ignore findings for artefacts that are not part of the project (orphan findings)
# When set to 1, the findings are ignored
# When set to 0, the findings are imported and attached to the APPLICATION node
# (default: 1)
set IgnoreIfArtefactNotFound 1

# If the data in the csv concerns source code artefacts (File, Class or Function), the matching
# of file paths can be made case-insensitive
# true or false (default)
# This is used when searching for a matching artefact in already-existing artefacts.
set PathsAreCaseInsensitive "false"

# For findings of a type that is not in your ruleset, set a default rule ID.
# The value for this parameter must be a valid rule ID from your analysis model.
# (default: empty)
set UnknownRuleId UNKNOWN_RULE

# Save the total count of orphan findings as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanArteCountId NB_ORPHANS

# Save the total count of unknown rules as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanRulesCountId NB_UNKNOWN_RULES

# Save the list of unknown rule IDs as textual information at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanRulesListId UNKNOWN_RULES_INFO

====================
= CSV File Format =
====================

All the examples listed below assume the use of the following config.tcl:

set Separator ","
set ArtifactPathSeparator "/"
set Metric2Key { {CHANGES Changed} }
set Finding2Key { {R_NOTLINKED NotLinked} }
set Info2Key { {SPECIAL_LABEL Label} }

How to reference an artefact:
============================

==> artefact_type artefact_path

Example:

REQ_MODULES,Requirements
REQ_MODULE,Requirements/Module
REQUIREMENT,Requirements/Module/My_Req

references the following artefact tree:

Application
    Requirements (type: REQ_MODULES)
        Module (type: REQ_MODULE)
            My_Req (type: REQUIREMENT)

Note: For source code artefacts there are 3 special artefact kinds:

==> FILE file_path
==> CLASS file_path (Name|Line)
==> FUNCTION file_path (Name|Line)

Examples:

FUNCTION src/file.c 23 references the function which contains line 23 in the source file src/file.c. If no function is found, the whole line of the csv file is ignored.

FUNCTION src/file.c foo() references a function named foo in the source file src/file.c. If more than one function foo is defined in this file, then the signature of the function (which is optional) is used to find the best match.
Layout for Metrics File:
========================

==> artefact_type artefact_path (Key Value)*

When the parent artefact type is not given it defaults to <artefact_type>_FOLDER.

Example:

REQ_MODULE,Requirements/Module
REQUIREMENT,Requirements/Module/My_Req,Changed,1

will produce the following artefact tree:

Application
    Requirements (type: REQ_MODULE_FOLDER)
        Module (type: REQ_MODULE)
            My_Req (type: REQUIREMENT) with 1 metric CHANGES = 1

Note: the key "Changed" is mapped to the metric "CHANGES", as specified by the Metric2Key parameter, so that it matches what is expected by the model.

Layout for Findings File:
=========================

==> artefact_type artefact_path key message

When the parent artefact type is not given it defaults to <artefact_type>_FOLDER.

Example:

REQ_MODULE,Requirements/Module
REQUIREMENT,Requirements/Module/My_Req,NotLinked,A Requirement should always be linked

will produce the following artefact tree:

Application
    Requirements (type: REQ_MODULE_FOLDER)
        Module (type: REQ_MODULE)
            My_Req (type: REQUIREMENT) with 1 finding R_NOTLINKED whose description is "A Requirement should always be linked"

Note: the key "NotLinked" is mapped to the finding "R_NOTLINKED", as specified by the Finding2Key parameter, so that it matches what is expected by the model.

Layout for Textual Information File:
====================================

==> artefact_type artefact_path label value

When the parent artefact type is not given it defaults to <artefact_type>_FOLDER.
Example:

REQ_MODULE,Requirements/Module
REQUIREMENT,Requirements/Module/My_Req,Label,This is the label of the req

will produce the following artefact tree:

Application
    Requirements (type: REQ_MODULE_FOLDER)
        Module (type: REQ_MODULE)
            My_Req (type: REQUIREMENT) with 1 information of type SPECIAL_LABEL whose content is "This is the label of the req"

Note: the label "Label" is mapped to the information "SPECIAL_LABEL", as specified by the Info2Key parameter, so that it matches what is expected by the model.

Layout for Links File:
======================

==> artefact_type artefact_path dest_artefact_type dest_artefact_path link_type

When the parent artefact type is not given it defaults to <artefact_type>_FOLDER.

Example:

REQ_MODULE Requirements/Module TEST_MODULE Tests/Module
REQUIREMENT Requirements/Module/My_Req TEST Tests/Module/My_Test TESTED_BY

will produce the following artefact tree:

Application
    Requirements (type: REQ_MODULE_FOLDER)
        Module (type: REQ_MODULE)
            My_Req (type: REQUIREMENT) ------>
    Tests (type: TEST_MODULE_FOLDER)         |
        Module (type: TEST_MODULE)           |
            My_Test (type: TEST) <-----------+ link (type: TESTED_BY)

The TESTED_BY relationship is created with My_Req as the source of the link and My_Test as the destination.

CSV file organisation when SpecifyToolName is set to 1
======================================================

When the variable SpecifyToolName is set to 1 (or true), a column has to be added at the beginning of each line in each csv file. This column can be empty or filled with a different toolName.

Example:

,REQ_MODULE,Requirements/Module
MyReqChecker,REQUIREMENT,Requirements/Module/My_Req,Label,This is the label of the req

The information of type Label will be set as reported by the tool "MyReqChecker".
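To make the layouts above concrete, here is an illustrative sketch (not part of Squore) that writes a metrics file and a findings file in the expected format, reusing the separator (",") and the keys (Changed, NotLinked) and the default file names (mydata.csv, mydata_fdg.csv) from the examples in this section:

```python
# Illustration only: emit the metrics and findings CSV files in the
# layout the Generic framework expects.

def write_csv(path, rows, sep=","):
    """Write one CSV row per list of fields, joined with the separator."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(sep.join(row) + "\n")

# Metrics file layout: artefact_type, artefact_path, then (Key, Value) pairs
write_csv("mydata.csv", [
    ["REQ_MODULE", "Requirements/Module"],
    ["REQUIREMENT", "Requirements/Module/My_Req", "Changed", "1"],
])

# Findings file layout: artefact_type, artefact_path, key, message
write_csv("mydata_fdg.csv", [
    ["REQUIREMENT", "Requirements/Module/My_Req",
     "NotLinked", "A Requirement should always be linked"],
])
```

The keys written here ("Changed", "NotLinked") are the CSV-side names, which the Metric2Key and Finding2Key tables in config.tcl translate to the model IDs (CHANGES, R_NOTLINKED).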
===============
= GenericPerl =
===============

The GenericPerl framework is an extension of the Generic framework that starts by running a perl script in order to generate the metrics, findings, information and links files. It is useful if you have an input file whose format needs to be converted to match the one expected by the Generic framework, or if you need to retrieve and modify information exported from a web service on your network.

============
= form.xml =
============

In your form.xml, specify the input parameters you need for your Data Provider. Our example will use two parameters: a path to a CSV file and another text parameter:

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="GenericPerl" needSources="false">
	<tag type="text" key="csv" defaultValue="/path/to/csv" />
	<tag type="text" key="param" defaultValue="MyValue" />
</tags>

==============
= config.tcl =
==============

Refer to the description of config.tcl for the Generic framework for the basic options. Additionally, the following options are available for the GenericPerl framework, in order to specify which types of information your custom Data Provider should try to import.

# If the data provider needs to specify a different toolName (optional)
#set SpecifyToolName 1

# ImportMetrics
# Set to 1 to import a metrics csv file, 0 otherwise.
# When set to 1, your custom Data Provider (CustomDP) will try to import
# metrics from a file called CustomDP.mtr.csv that your perl script
# should generate according to the expected format described in the
# documentation of the Generic framework.
set ImportMetrics 1

# ImportInfos
# When set to 1, your custom Data Provider (CustomDP) will try to import
# textual information from a file called CustomDP.inf.csv that your perl script
# should generate according to the expected format described in the
# documentation of the Generic framework.
set ImportInfos 0

# ImportFindings
# When set to 1, your custom Data Provider (CustomDP) will try to import
# findings from a file called CustomDP.fdg.csv that your perl script
# should generate according to the expected format described in the
# documentation of the Generic framework.
set ImportFindings 1

# ImportLinks
# When set to 1, your custom Data Provider (CustomDP) will try to import
# artefact links from a file called CustomDP.lnk.csv that your perl script
# should generate according to the expected format described in the
# documentation of the Generic framework.
set ImportLinks 0

# Ignore findings for artefacts that are not part of the project (orphan findings)
# When set to 1, the findings are ignored
# When set to 0, the findings are imported and attached to the APPLICATION node
# (default: 1)
set IgnoreIfArtefactNotFound 1

# For findings of a type that is not in your ruleset, set a default rule ID.
# The value for this parameter must be a valid rule ID from your analysis model.
# (default: empty)
set UnknownRuleId UNKNOWN_RULE

# Save the total count of orphan findings as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanArteCountId NB_ORPHANS

# Save the total count of unknown rules as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanRulesCountId NB_UNKNOWN_RULES

# Save the list of unknown rule IDs as textual information at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanRulesListId UNKNOWN_RULES_INFO

====================
= CSV File Format =
====================

Refer to the examples in the Generic framework.
===============
= Perl Script =
===============

The perl script will receive as arguments:
- all parameters defined in form.xml (as -${key} $value)
- the location of the output directory where temporary files can be generated
- the full path of the metrics csv file to be generated (if ImportMetrics is set to 1 in config.tcl)
- the full path of the findings csv file to be generated (if ImportFindings is set to 1 in config.tcl)
- the full path of the textual information csv file to be generated (if ImportInfos is set to 1 in config.tcl)
- the full path of the links csv file to be generated (if ImportLinks is set to 1 in config.tcl)
- the full path to the output directory used by this data provider in the previous analysis

For the form.xml and config.tcl we created earlier in this document, the command line will be:

perl <configuration_folder>/tools/CustomDP/CustomDP.pl -csv /path/to/csv -param MyValue <output_folder> <output_folder>/CustomDP.mtr.csv <output_folder>/CustomDP.fdg.csv <previous_output_folder>

The following perl functions are made available in the perl environment so you can use them in your script:
- get_tag_value(key) (returns the value of the $key parameter from your form.xml)
- get_output_metric()
- get_output_finding()
- get_output_info()
- get_output_link()
- get_output_dir()
- get_input_dir() (returns the folder containing sources if needSources is set to 1)
- get_previous_dir()

Example of perl script:
======================
#!/usr/bin/perl
use strict;
use warnings;
$|=1 ;

# Parse input CSV file
my $csvFile = get_tag_value("csv");
my $param = get_tag_value("param");
# ...

# Write metrics to CSV
open(METRICS_FILE, ">" . get_output_metric()) || die "perl: cannot write: $!\n";
binmode(METRICS_FILE, ":utf8");
print METRICS_FILE "REQUIREMENTS;Requirements/All_Requirements;NB_REQ;15\n";
close METRICS_FILE;

# Write findings to CSV
open(FINDINGS_FILE, ">" .
get_output_finding()) || die "perl: cannot write: $!\n";
binmode(FINDINGS_FILE, ":utf8");
print FINDINGS_FILE "REQUIREMENTS;Requirements/All_Requirements;R_LOW_REQS;\"The minimum number of requirements should be at least 25.\"\n";
close FINDINGS_FILE;

exit 0;
================
= FindingsPerl =
================

The FindingsPerl framework is used to import findings and attach them to existing artefacts. Optionally, if an artefact cannot be found in your project, the finding can be attached to the root node of the project instead. When launching a Data Provider based on the FindingsPerl framework, a perl script is run first. This perl script is used to generate a CSV file with the expected format, which is then parsed by the framework.

============
= form.xml =
============

In your form.xml, specify the input parameters you need for your Data Provider. Our example will use two parameters: a path to a CSV file and another text parameter:

<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="FindingsPerl" needSources="true">
	<tag type="text" key="csv" defaultValue="/path/to/csv" />
	<tag type="text" key="param" defaultValue="MyValue" />
</tags>

- Since FindingsPerl-based data providers commonly rely on artefacts created by Squan Sources, you can set the needSources attribute to force users to specify at least one repository connector when creating a project.

==============
= config.tcl =
==============

Sample config.tcl file:
=======================
# The separator to be used in the generated CSV file
# Usually \t or ;
set Separator ";"

# The delimiter used in the input CSV file
# This is normally left empty, except when you know that some of the values in the CSV file
# contain the separator itself, for example:
# "A text containing ; the separator";no problem;end
# In this case, you need to set the delimiter to \" in order for the data provider to find 3 values instead of 4.
# To include the delimiter itself in a value, you need to escape it by duplicating it, for example:
# "A text containing "" the delimiter";no problemo;end
# Default: none
set Delimiter \"

# Should the perl script be executed once for each repository node of the project?
# 1 or 0 (default) # If true an additional parameter is sent to the # perl script (see below for its position) set ::NeedSources 0 # Should the violated rules definitions be generated? # true or false (default) # This creates a ruleset file with rules that are not already # part of your analysis model so you can review it and add # the rules manually if needed. set generateRulesDefinitions false # Should the File paths be case-insensitive? # true or false (default) # This is used when searching for a matching artefact in already-existing artefacts. set PathsAreCaseInsensitive false # Should file artefacts declared in the input CSV file be created automatically? # true (default) or false set CreateMissingFile true # Ignore findings for artefacts that are not part of the project (orphan findings) # When set to 0, the findings are imported and attached to the APPLICATION node instead of the real artefact # When set to 1, the findings are not imported at all # (default: 0) set IgnoreIfArtefactNotFound 0 # For findings of a type that is not in your ruleset, set a default rule ID. # The value for this parameter must be a valid rule ID from your analysis model. 
# (default: empty)
set UnknownRuleId UNKNOWN_RULE

# Save the total count of orphan findings as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanArteCountId NB_ORPHANS

# Save the total count of unknown rules as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanRulesCountId NB_UNKNOWN_RULES

# Save the list of unknown rule IDs as textual information at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanRulesListId UNKNOWN_RULES_INFO

# The tool version to specify in the generated rules definitions
# The default value is ""
# Note that the toolName is the name of the folder you created
# for your custom Data Provider
set ToolVersion ""

# FileOrganisation defines the layout of the CSV file that is produced by your perl script:
#   header::column: values are referenced from the column header
#   header::line: NOT AVAILABLE
#   alternate::line: NOT AVAILABLE
#   alternate::column: NOT AVAILABLE
set FileOrganisation header::column

# In order to attach a finding to an artefact of type FILE:
# - Tool (optional): if present, it overrides the name of the tool providing the finding
# - Path has to be the path of the file
# - Type has to be set to FILE
# - Line can be either empty or the line in the file where the finding is located
# - Rule is the rule identifier; it can be used as is or translated using Rule2Key
# - Descr is the description message, which can be empty
#
# In order to attach a finding to an artefact of type FUNCTION:
# - Tool (optional): if present, it overrides the name of the tool providing the finding
# - Path has to be the path of the file containing the function
# - Type has to be FUNCTION
# - If Line is an integer, the system will try to find an artefact function
#   at the given line of the file
# - If
no Line is given, or Line is not an integer, Name is used to find an artefact in
#   the given file having the name and signature found in this column.
#   (Line and Name are optional columns)

# Rule2Key contains a case-sensitive list of paired rule IDs:
# {RuleID KeyName}
# where:
# - RuleID is the ID of the rule as defined in your analysis model
# - KeyName is the rule ID as written by your perl script in the produced CSV file
# Note: Rules that are not mapped keep their original name. The list of unmapped rules is in the log file generated by your Data Provider.
set Rule2Key {
	{ ExtractedRuleID_1 MappedRuleId_1 }
	{ ExtractedRuleID_2 MappedRuleId_2 }
}

====================
= CSV File Format =
====================

According to the options defined earlier in config.tcl, a valid csv file would be:

Path;Type;Line;Name;Rule;Descr
/src/project/module1/f1.c;FILE;12;;R1;Rule R1 is violated because variable v1
/src/project/module1/f1.c;FUNCTION;202;;R4;Rule R4 is violated because function f1
/src/project/module2/f2.c;FUNCTION;42;;R1;Rule R1 is violated because variable v2
/src/project/module2/f2.c;FUNCTION;;skip_line(int);R1;Rule R1 is violated because variable v2

Working With Paths:
===================

- Path separators are unified: you do not need to worry about handling differences between Windows and Linux
- With the option PathsAreCaseInsensitive, case is ignored when searching for files in the Squore internal data
- Paths known by Squore are relative paths starting at the root of what was specified in the repository connector during the analysis. This relative path is the one used to match with a path in a csv file.

Here is a valid example of file matching:

1. You provide C:\A\B\C\D as the root folder in a repository connector
2. C:\A\B\C\D contains E\e.c, so Squore knows E/e.c as a file
3. You provide a csv file produced on Linux containing /tmp/X/Y/E/e.c as a path; Squore is then able to match it with the known file

Squore uses the longest possible match.
In case of conflict, no file is found and a message is sent to the log.

===============
= Perl Script =
===============

The perl script will receive as arguments:
- all parameters defined in form.xml (as -${key} $value)
- the input directory to process (only if ::NeedSources is set to 1)
- the location of the output directory where temporary files can be generated
- the full path of the findings csv file to be generated

For the form.xml and config.tcl we created earlier in this document, the command line will be:

perl <configuration_folder>/tools/CustomDP/CustomDP.pl -csv /path/to/csv -param MyValue <output_folder> <output_folder>/CustomDP.fdg.csv

Example of perl script:
======================
#!/usr/bin/perl
use strict;
use warnings;
$|=1 ;

my ($csvKey, $csvValue, $paramKey, $paramValue, $output_folder, $output_csv) = @ARGV;

# Parse input CSV file
# ...

# Write results to CSV
open(CSVFILE, ">" . ${output_csv}) || die "perl: cannot write: $!\n";
binmode(CSVFILE, ":utf8");
print CSVFILE "Path;Type;Line;Name;Rule;Descr\n";
print CSVFILE "/src/project/module1/f1.c;FILE;12;;R1;Rule R1 is violated because variable v1\n";
close CSVFILE;

exit 0;
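The "longest possible match" rule described under Working With Paths can be sketched as follows. This is an illustration of the principle only, not Squore's actual implementation; in particular, the requirement that the whole known relative path be covered by the suffix match is an assumption made for this sketch:

```python
# Illustration only: match a path from a CSV file against the relative
# paths known to the project, by comparing path components from the end
# and keeping the candidate with the longest match.

def path_parts(path):
    # Unify Windows and Linux separators, as the framework does
    return [p for p in path.replace("\\", "/").split("/") if p]

def suffix_match_len(known, candidate):
    known, candidate = path_parts(known), path_parts(candidate)
    n = 0
    while n < min(len(known), len(candidate)) and known[-1 - n] == candidate[-1 - n]:
        n += 1
    # Assumption: the whole known relative path must be covered
    return n if n == len(known) else 0

def find_artefact(known_files, csv_path):
    scored = [(suffix_match_len(k, csv_path), k) for k in known_files]
    score, path = max(scored)
    if score == 0:
        return None
    # In case of conflict (two equally long matches), no file is found
    if sum(1 for s, _ in scored if s == score) > 1:
        return None
    return path

# The example from the text: E/e.c is known, the CSV was produced on Linux
matched = find_artefact(["E/e.c", "F/e.c"], "/tmp/X/Y/E/e.c")
```

With the inputs above, `matched` is "E/e.c": both candidates share the file name, but only E/e.c also matches its parent directory.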
================
= ExcelMetrics =
================

The ExcelMetrics framework is used to extract information from one or more Microsoft Excel files (.xls or .xlsx). A detailed configuration file allows defining how the Excel document should be read and what information should be extracted. This framework allows importing metrics, findings and textual information to existing artefacts or artefacts that will be created by the Data Provider.

============
= form.xml =
============

You can customise form.xml to either:
- specify the path to a single Excel file to import
- specify a pattern to import all Excel files matching this pattern in a directory

In order to import a single Excel file:
=====================================
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="ExcelMetrics" needSources="false">
	<tag type="text" key="excel" defaultValue="/path/to/mydata.xlsx" />
</tags>

Notes:
- The excel key is mandatory.

In order to import all files matching a pattern in a folder:
===========================================================
<?xml version="1.0" encoding="UTF-8"?>
<tags baseName="ExcelMetrics" needSources="false">
	<!-- Root directory containing Excel files to import-->
	<tag type="text" key="dir" defaultValue="/path/to/mydata" />
	<!-- Pattern that needs to be matched by a file name in order to import it-->
	<tag type="text" key="ext" defaultValue="*.xlsx" />
	<!-- search for files in sub-folders -->
	<tag type="booleanChoice" defaultValue="true" key="sub" />
</tags>

Notes:
- The dir and ext keys are mandatory
- The sub key is optional (and its value is set to false if not specified)

==============
= config.tcl =
==============

Sample config.tcl file:
=======================
# The separator to be used in the generated csv file
# Usually \t or ; or ,
set Separator ";"

# The delimiter used in the input CSV file
# This is normally left empty, except when you know that some of the values in the CSV file
# contain the separator itself, for example:
# "A text
containing ; the separator";no problem;end
# In this case, you need to set the delimiter to \" in order for the data provider to find 3 values instead of 4.
# To include the delimiter itself in a value, you need to escape it by duplicating it, for example:
# "A text containing "" the delimiter";no problemo;end
# Default: none
set Delimiter \"

# The path separator in an artefact's path
# in the generated CSV file.
set ArtefactPathSeparator "/"

# Ignore findings for artefacts that are not part of the project (orphan findings)
# When set to 1, the findings are ignored
# When set to 0, the findings are imported and attached to the APPLICATION node
# (default: 1)
set IgnoreIfArtefactNotFound 1

# For findings of a type that is not in your ruleset, set a default rule ID.
# The value for this parameter must be a valid rule ID from your analysis model.
# (default: empty)
set UnknownRuleId UNKNOWN_RULE

# Save the total count of orphan findings as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanArteCountId NB_ORPHANS

# Save the total count of unknown rules as a metric at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanRulesCountId NB_UNKNOWN_RULES

# Save the list of unknown rule IDs as textual information at application level
# Specify the ID of the metric to use in your analysis model
# to store the information
# (default: empty)
set OrphanRulesListId UNKNOWN_RULES_INFO

# The list of the Excel sheets to read; each sheet has the number of the first line to read
# A Perl regexp pattern can be used instead of the name of the sheet (the first sheet matching
# the pattern will be considered)
set Sheets {{Baselines 5} {ChangeNotes 5}}

# ######################
# # COMMON DEFINITIONS #
# ######################
#
# - <value> is a list of column specifications whose values will be concatenated.
When no column name is present, the
#   text is taken as it appears. An optional sheet name can be added (with the ! char separating it from the column name)
#   Examples:
#   - {C:} the value will be the value in column C on the current row
#   - {C: B:} the value will be the concatenation of the values found in columns C and B of the current row
#   - {Deliveries} the value will be Deliveries
#   - {BJ: " - " BL:} the value will be the concatenation of the value found in column BJ,
#     the string " - " and the value found in column BL of the current row
#   - {OtherSheet!C:} the value will be the value in column C from the sheet OtherSheet on the current row
#
# - <condition> is a list of conditions. An empty condition is always true. A condition is a column name followed by a colon,
#   optionally followed by a perl regexp. An optional sheet name can be added (with the ! char separating it from the column name)
#   Examples:
#   - {B:} the value in column B must be empty on the current row
#   - {B:.+} the value in column B cannot be empty on the current row
#   - {B:R_.+} the value in column B is a word starting with R_ on the current row
#   - {A: B:.+ C:R_.+} the value in column A must be empty, the value in column B must contain something and
#     column C contains a word starting with R_ on the current row
#   - {OtherSheet!B:.+} the value in column B from sheet OtherSheet on the current row cannot be empty.

# #############
# # ARTEFACTS #
# #############
# The variable is a list of artefact hierarchy specifications:
# {ArtefactHierarchySpec1 ArtefactHierarchySpec2 ... ArtefactHierarchySpecN}
# where each ArtefactHierarchySpecx is a list of ArtefactSpec
#
# An ArtefactSpec is a list of items, each item being:
# {<(sheetName!)?artefactType> <conditions> <name> <parentType>? <parentName>?}
# where:
# - <(sheetName!)?artefactType>: allows specifying the type. An optional sheetName can be added (with the ! char separating it from the type) to limit
#   the artefact search to one specific sheet.
When Sheets are given with a regexp, the same regexp has to be used
#   for the sheetName.
#   If the type is followed by a question mark (?), this level of artefact is optional.
#   If the type is followed by a plus char (+), this level is repeatable on the next row
# - <condition>: see COMMON DEFINITIONS
# - <value>: the name of the artefact to build, see COMMON DEFINITIONS
#
# - <parentType>: This element is optional. When present, it means that the current element will be attached to a parent having this type
# - <parentValue>: This is a list like <value> used to build the name of the artefact of type <parentType>. If such an artefact is not found,
#   the current artefact does not match
#
# Note: to add metrics at application level, specify an APPLICATION artefact which will match only one line:
# e.g. {APPLICATION {A:.+} {}} will recognize as the application the line having column A not empty.
set ArtefactsSpecs {
	{
		{DELIVERY {} {Deliveries}}
		{RELEASE {E:.+} {E:}}
		{SPRINT {O:SW_Software} {Q:}}
	}
	{
		{DELIVERY {} {Deliveries}}
		{RELEASE {O:SY_System} {Q:}}
	}
	{
		{WP {BL:.+ AF:.+} {BJ: " - " BL:} SPRINT {AF:}}
		{ChangeNotes!TASK {D:(added|changed|unchanged) T:imes} {W: AD:}}
	}
	{
		{WP {} {{Unplanned imes}} SPRINT {AF:}}
		{TASK {BL: D:(added|changed|unchanged) T:imes W:.+} {W: AD:}}
	}
}

# ###########
# # METRICS #
# ###########
# Specification of the metrics to be retrieved
# This is a list where each element is:
# {<artefactTypeList> <metricId> <condition> <value> <format>}
# Where:
# - <artefactTypeList>: the list of artefact types for which the metric has to be used
#   each element of the list is (sheetName!)?artefactType where sheetName is used
#   to restrict the search to only one sheet. sheetName is optional.
# - <metricId>: the name of the MeasureId to be injected into Squore, as defined in your analysis model
# - <condition>: see COMMON DEFINITIONS above. This is the condition for the metric to be generated.
# - <value>: see COMMON DEFINITIONS above.
This is the value for the metric (it can be built from multiple columns)
# - <format>: optional, defaults to NUMBER
#   Possible formats are:
#   * DATE_FR, DATE_EN for dates stored as strings
#   * DATE for cells formatted as dates
#   * NUMBER_FR, NUMBER_EN for numbers stored as strings
#   * NUMBER for cells formatted as numbers
#   * LINES for counting the number of text lines in a cell
# - <formatPattern>: optional
#   Only used by the LINES format.
#   This is a pattern (it can contain a perl regexp) used to filter the lines to count
set MetricsSpecs {
	{{RELEASE SPRINT} TIMESTAMP {} {A:} DATE_EN}
	{{RELEASE SPRINT} DATE_ACTUAL_RELEASE {} {S:} DATE_EN}
	{{RELEASE SPRINT} DATE_FINISH {} {T:} DATE_EN}
	{{RELEASE SPRINT} DELIVERY_STATUS {} {U:}}
	{{WP} WP_STATUS {} {BO:}}
	{{ChangeNotes!TASK} IS_UNPLAN {} {BL:}}
	{{TASK WP} DATE_LABEL {} {BP:} DATE_EN}
	{{TASK WP} DATE_INTEG_PLAN {} {BD:} DATE_EN}
	{{TASK} TASK_STATUS {} {AE:}}
	{{TASK} TASK_TYPE {} {AB:}}
}

# ############
# # FINDINGS #
# ############
# This is a list where each element is:
# {<artefactTypeList> <findingId> <condition> <value> <localisation>}
# Where:
# - <artefactTypeList>: the list of artefact types for which the finding has to be used
#   each element of the list is (sheetName!)?artefactType where sheetName is used
#   to restrict the search to only one sheet. sheetName is optional.
# - <findingId>: the name of the FindingId to be injected into Squore, as defined in your analysis model
# - <condition>: see COMMON DEFINITIONS above. This is the condition for the finding to be triggered.
# - <value>: see COMMON DEFINITIONS above.
This is the value for the message of the finding (can be built from multi column) # - <localisation>: this a <value> representing the localisation of the finding (free text) set FindingsSpecs { {{WP} {BAD_WP} {BL:.+ AF:.+} {{This WP is not in a correct state } AF:.+} {A:}} } # ####################### # # TEXTUAL INFORMATION # # ####################### # This is a list where each element is: # {<artefactTypeList> <infoId> <condition> <value>} # Where: # - <artefactTypeList> the list of artefact types for which the info has to be used # each element of the list is (sheetName!)?artefactType where sheetName is used # to restrict search to only one sheet. sheetName is optional. # - <infoId> : is the name of the Information to be attached to the artefact, as defined in your analysis model # - <confition> : see COMMON DEFINITIONS above. This is the condition for the info to be generated. # - <value> : see COMMON DEFINITIONS above. This is the value for the info (can be built from multi column) set InfosSpecs { {{TASK} ASSIGN_TO {} {XB:}} } # ######################## # # LABEL TRANSFORMATION # # ######################## # This is a list value specification for MeasureId or InfoId: # <MeasureId|InfoId> { {<LABEL1> <value1>} ... {<LABELn> <valuen>}} # Where: # - <MeasureId|InfoId> : is either a MeasureId, an InfoId, or * if it is available for every measureid/infoid # - <LABELx> : is the label to macth (can contain perl regexp) # - <valuex> : is the value to replace the label by, it has to match the correct format for the metrics (no format for infoid) # # Note: only metrics which are labels in the excel file or information which need to be rewriten, need to be described here. 
set Label2ValueSpec { { STATUS { {OPENED 0} {ANALYZED 1} {CLOSED 2} {.* -1} } } { * { {FATAL 0} {ERROR 1} {WARNING 2} {{LEVEL:\s*0} 1} {{LEVEL:\s*1} 2} {{LEVEL:\s*[2-9]+} 3} } } } Note that a sample Excel file with its associated config.tcl is available in $SQUORE_HOME/addons/tools/ExcelMetrics in order to further explain available configuration options.
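To make the label transformation semantics concrete, here is a minimal sketch in Python of how a Label2ValueSpec-style table could be applied. It assumes the pairs are tried in order with the first full regexp match winning (consistent with the catch-all {.* -1} entry appearing last in the sample above); this illustrates the mapping logic only and is not Squore's actual implementation.

```python
import re

# Mirror of the STATUS mapping from the sample Label2ValueSpec above.
# Each entry is (label pattern, numeric value); patterns may be regexps.
STATUS_MAP = [
    (r"OPENED", 0),
    (r"ANALYZED", 1),
    (r"CLOSED", 2),
    (r".*", -1),   # catch-all for any other label, as in the sample spec
]

def label_to_value(label, mapping):
    """Return the value of the first pattern that fully matches the label."""
    for pattern, value in mapping:
        if re.fullmatch(pattern, label):
            return value
    return None  # no entry matched (cannot happen with a catch-all)

print(label_to_value("CLOSED", STATUS_MAP))    # -> 2
print(label_to_value("UNKNOWN", STATUS_MAP))   # -> -1
```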
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">

  <xs:simpleType name="id">
    <xs:restriction base="xs:string">
      <xs:pattern value="[A-Z_][A-Z0-9_]+" />
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="relax-status">
    <xs:restriction base="id">
      <xs:enumeration value="RELAXED_DEROGATION"/>
      <xs:enumeration value="RELAXED_LEGACY"/>
      <xs:enumeration value="RELAXED_FALSE_POSITIVE"/>
    </xs:restriction>
  </xs:simpleType>

  <xs:element name="bundle">
    <xs:complexType>
      <xs:choice maxOccurs="unbounded">
        <xs:element ref="artifact"/>
        <xs:element ref="finding"/>
        <xs:element ref="info"/>
        <xs:element ref="link"/>
        <xs:element ref="metric"/>
      </xs:choice>
      <xs:attribute name="version" use="required" type="xs:integer" fixed="2"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="artifact">
    <xs:complexType>
      <xs:sequence>
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element ref="artifact"/>
          <xs:element ref="finding"/>
          <xs:element ref="metric"/>
          <xs:element ref="key"/>
          <xs:element ref="info"/>
          <xs:element ref="link"/>
          <xs:element ref="milestone"/>
        </xs:choice>
      </xs:sequence>
      <xs:attribute name="alias"/>
      <xs:attribute name="art-location"/>
      <xs:attribute name="id"/>
      <xs:attribute name="local-art-location"/>
      <xs:attribute name="local-key"/>
      <xs:attribute name="local-parent"/>
      <xs:attribute name="location"/>
      <xs:attribute name="name"/>
      <xs:attribute name="parent"/>
      <xs:attribute name="path"/>
      <xs:attribute name="type" use="required" type="id"/>
      <xs:attribute name="view-path"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="info">
    <xs:complexType>
      <xs:attribute name="local-ref"/>
      <xs:attribute name="name" use="required" type="id"/>
      <xs:attribute name="ref"/>
      <xs:attribute name="tool"/>
      <xs:attribute name="value" use="required"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="key">
    <xs:complexType>
      <xs:attribute name="value" use="required"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="metric">
    <xs:complexType>
      <xs:attribute name="local-ref"/>
      <xs:attribute name="name" use="required" type="id"/>
      <xs:attribute name="ref"/>
      <xs:attribute name="tool"/>
      <xs:attribute name="value" type="xs:decimal" use="required"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="link">
    <xs:complexType>
      <xs:attribute name="dst"/>
      <xs:attribute name="local-dst" type="xs:integer"/>
      <xs:attribute name="local-src" type="xs:integer"/>
      <xs:attribute name="name" use="required" type="id"/>
      <xs:attribute name="src"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="finding">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="location"/>
        <xs:element minOccurs="0" maxOccurs="1" ref="relax"/>
      </xs:sequence>
      <xs:attribute name="descr"/>
      <xs:attribute name="local-ref"/>
      <xs:attribute name="location" use="required"/>
      <xs:attribute name="name" use="required" type="id"/>
      <xs:attribute name="p0"/>
      <xs:attribute name="p1"/>
      <xs:attribute name="p2"/>
      <xs:attribute name="p3"/>
      <xs:attribute name="p4"/>
      <xs:attribute name="p5"/>
      <xs:attribute name="p6"/>
      <xs:attribute name="p7"/>
      <xs:attribute name="p8"/>
      <xs:attribute name="p9"/>
      <xs:attribute name="ref"/>
      <xs:attribute name="tool"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="location">
    <xs:complexType>
      <xs:attribute name="local-ref"/>
      <xs:attribute name="location" use="required"/>
      <xs:attribute name="ref"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="relax">
    <xs:complexType>
      <xs:simpleContent>
        <xs:extension base="xs:string">
          <xs:attribute name="status" type="relax-status"/>
        </xs:extension>
      </xs:simpleContent>
    </xs:complexType>
  </xs:element>

  <xs:element name="milestone">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="goal"/>
      </xs:sequence>
      <xs:attribute name="date" type="xs:integer"/>
      <xs:attribute name="name" use="required" type="id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="goal">
    <xs:complexType>
      <xs:attribute name="name" use="required" type="id"/>
      <xs:attribute name="value" use="required" type="xs:decimal"/>
    </xs:complexType>
  </xs:element>

</xs:schema>
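As a quick sanity check of the schema above, the following Python sketch builds a minimal data bundle accepted by it: a root bundle (version fixed to "2") containing one artefact with a metric and a finding. The element and attribute names come from the schema; the artefact type APPLICATION and the metric/finding names are illustrative placeholders, not Squore built-ins.

```python
import xml.etree.ElementTree as ET

# Root element: the schema fixes the required version attribute to "2".
bundle = ET.Element("bundle", version="2")

# One artefact; only "type" is required and must match [A-Z_][A-Z0-9_]+.
app = ET.SubElement(bundle, "artifact", type="APPLICATION", name="MyApp")

# A metric needs a "name" (id pattern) and a decimal "value".
ET.SubElement(app, "metric", name="NB_REQ", value="42")

# A finding needs a "name" and a "location"; "descr" is optional.
ET.SubElement(app, "finding", name="BAD_REQ", location="12",
              descr="Requirement is empty")

xml_text = ET.tostring(bundle, encoding="unicode")
print(xml_text)
```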
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">

  <xs:simpleType name="id">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+' />
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="project-status">
    <xs:restriction base="id">
      <xs:enumeration value="IGNORE"/>
      <xs:enumeration value="WARNING"/>
      <xs:enumeration value="ERROR"/>
    </xs:restriction>
  </xs:simpleType>

  <xs:element name="tags">
    <xs:complexType>
      <xs:sequence>
        <xs:element maxOccurs="0" ref="tag"/>
        <xs:element maxOccurs="0" ref="exec-phase"/>
      </xs:sequence>
      <xs:attribute name="baseName"/>
      <xs:attribute name="deleteTmpSrc" type="xs:boolean"/>
      <xs:attribute name="image"/>
      <xs:attribute name="needSources" type="xs:boolean"/>
      <xs:attribute name="projectStatusOnFailure" type="project-status"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="tag">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="value"/>
      </xs:sequence>
      <xs:attribute name="changeable" type="xs:boolean"/>
      <xs:attribute name="credentialType"/>
      <xs:attribute name="defaultValue"/>
      <xs:attribute name="displayType"/>
      <xs:attribute name="key" use="required"/>
      <xs:attribute name="optionTitle"/>
      <xs:attribute name="required" type="xs:boolean"/>
      <xs:attribute name="style"/>
      <xs:attribute name="type" use="required"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="value">
    <xs:complexType>
      <xs:attribute name="key" use="required"/>
      <xs:attribute name="option"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="exec-phase">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" ref="exec"/>
        <xs:element minOccurs="0" ref="exec-tool"/>
      </xs:sequence>
      <xs:attribute name="id" use="required"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="exec">
    <xs:complexType>
      <xs:sequence>
        <xs:element maxOccurs="0" ref="arg"/>
      </xs:sequence>
      <xs:attribute name="name" use="required"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="arg">
    <xs:complexType>
      <xs:attribute name="tag"/>
      <xs:attribute name="value"/>
      <xs:attribute name="defaultValue"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="exec-tool">
    <xs:complexType>
      <xs:sequence>
        <xs:element maxOccurs="0" ref="param"/>
      </xs:sequence>
      <xs:attribute name="name" use="required"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="param">
    <xs:complexType>
      <xs:attribute name="key" use="required"/>
      <xs:attribute name="tag"/>
      <xs:attribute name="value"/>
      <xs:attribute name="defaultValue"/>
    </xs:complexType>
  </xs:element>

</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" version="1.0">

  <xs:element name="Bundle" type="bundleType"/>

  <xs:complexType name="bundleType">
    <xs:sequence>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element name="help" type="helpType" minOccurs="0" maxOccurs="unbounded"/>
        <xs:element name="hideObsoleteModels" type="obsoleteType" minOccurs="0" maxOccurs="1"/>
        <xs:element name="hideModel" type="hiddenType" minOccurs="0" maxOccurs="unbounded"/>
        <xs:element name="explorerTabs" type="tabsType" minOccurs="0" maxOccurs="1"/>
        <xs:element name="explorerTrees" type="treesType" minOccurs="0" maxOccurs="1"/>
        <xs:element name="option" type="optionType" minOccurs="0" maxOccurs="unbounded"/>
      </xs:choice>
    </xs:sequence>
    <xs:attribute name="version" use="required" type="xs:string"/>
  </xs:complexType>

  <xs:complexType name="helpType">
    <xs:attribute name="label" use="required" type="xs:string"/>
    <xs:attribute name="url" use="required" type="xs:anyURI"/>
    <xs:attribute name="profiles" use="optional" type="xs:string"/>
  </xs:complexType>

  <xs:complexType name="optionType">
    <xs:attribute name="name" use="required" type="xs:string"/>
    <xs:attribute name="value" use="required" type="xs:string"/>
  </xs:complexType>

  <xs:complexType name="obsoleteType">
    <xs:attribute name="value" use="optional" default="false" type="xs:boolean"/>
  </xs:complexType>

  <xs:complexType name="hiddenType">
    <xs:attribute name="name" use="required" type="xs:string"/>
  </xs:complexType>

  <xs:complexType name="tabsType">
    <xs:sequence maxOccurs="unbounded">
      <xs:element name="tab" type="tabType" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="hideSettings" use="optional" default="false" type="xs:boolean"/>
  </xs:complexType>

  <xs:complexType name="tabType">
    <xs:attribute name="name" use="required" type="xs:string"/>
    <xs:attribute name="default" use="optional" default="false" type="xs:boolean"/>
    <xs:attribute name="mandatory" use="optional" default="false" type="xs:boolean"/>
    <xs:attribute name="rendered" use="optional" default="true" type="xs:boolean"/>
  </xs:complexType>

  <xs:complexType name="treesType">
    <xs:sequence maxOccurs="unbounded">
      <xs:element name="tree" type="treeType" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>

  <xs:complexType name="treeType">
    <xs:attribute name="name" use="required" type="xs:string"/>
    <xs:attribute name="rendered" use="optional" default="true" type="xs:boolean"/>
  </xs:complexType>

</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" version="1.0">

  <xs:element name="squore" type="squoreType"/>

  <xs:complexType name="squoreType">
    <xs:sequence>
      <xs:element name="paths" type="pathsType"/>
      <xs:element name="database" type="databaseType" minOccurs="0"/>
      <xs:element name="phantomjs" type="phantomjsType" minOccurs="0"/>
      <xs:element name="configuration" type="directoriesType"/>
      <xs:element name="addons" type="directoriesType"/>
      <xs:element name="client" type="dataDirectoriesType" minOccurs="0"/>
      <xs:element name="tmp" type="directoryType" minOccurs="0"/>
      <xs:element name="projects" type="projectType" minOccurs="0"/>
      <xs:element name="sources" type="directoryType" minOccurs="0"/>
      <xs:element name="workspace" type="directoryType" minOccurs="0"/>
    </xs:sequence>
    <xs:attribute name="type" use="required" type="xs:string"/>
    <xs:attribute name="version" use="required" type="xs:string"/>
  </xs:complexType>

  <xs:complexType name="pathsType">
    <xs:sequence maxOccurs="unbounded">
      <xs:element name="path" type="pathType"/>
    </xs:sequence>
  </xs:complexType>

  <xs:complexType name="pathType">
    <xs:attribute name="name" use="required" type="xs:string"/>
    <xs:attribute name="path" use="required" type="xs:string"/>
  </xs:complexType>

  <xs:complexType name="directoriesType">
    <xs:sequence maxOccurs="unbounded">
      <xs:element name="path" type="directoryType"/>
    </xs:sequence>
  </xs:complexType>

  <xs:complexType name="directoryType">
    <xs:attribute name="directory" use="required" type="xs:string"/>
  </xs:complexType>

  <xs:complexType name="databaseType">
    <xs:sequence>
      <xs:element name="postgresql" type="directoryType" minOccurs="0"/>
      <xs:element name="cluster" type="directoryType" minOccurs="0"/>
      <xs:element name="backup" type="directoryType"/>
      <xs:element name="security" type="securityType" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>

  <xs:complexType name="phantomjsType">
    <xs:sequence>
      <xs:element name="socket-binding" type="socketBindingType" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>

  <xs:complexType name="socketBindingType">
    <xs:attribute name="address" type="xs:string" default="127.0.0.1"/>
    <xs:attribute name="port" type="xs:short" default="3003"/>
    <xs:attribute name="squore-url" type="xs:string" default=""/>
    <xs:attribute name="distant-url" type="xs:string" default=""/>
  </xs:complexType>

  <xs:complexType name="securityType">
    <xs:sequence>
      <xs:element name="user-name" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>

  <xs:complexType name="dataDirectoriesType">
    <xs:sequence>
      <xs:element name="tmp" type="directoryType" minOccurs="0"/>
      <xs:element name="projects" type="projectType" minOccurs="0"/>
      <xs:element name="sources" type="directoryType" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>

  <xs:complexType name="projectType">
    <xs:sequence>
      <xs:element name="data-providers" type="dpType" minOccurs="0"/>
    </xs:sequence>
    <xs:attribute name="directory" use="required" type="xs:string"/>
  </xs:complexType>

  <xs:complexType name="dpType">
    <xs:attribute name="keep-data-files" use="required" type="xs:boolean"/>
  </xs:complexType>

</xs:schema>
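For illustration, a minimal document conforming to this schema could look as follows. The element order follows the squoreType sequence (paths, then configuration, then addons); the type and version attribute values and the directory paths are placeholder examples, not prescribed values.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<squore type="server" version="1.3">
  <paths>
    <!-- name/path pairs; both attributes are required by pathType -->
    <path name="perl.dir" path="/usr/bin"/>
  </paths>
  <configuration>
    <path directory="/opt/squore/configuration"/>
  </configuration>
  <addons>
    <path directory="/opt/squore/addons"/>
  </addons>
</squore>
```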
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">

  <xs:simpleType name="id">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+' />
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="list-id">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+(;[A-Z_][A-Z0-9_]+)*' />
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="families">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z0-9_]+(;[A-Z_][A-Z0-9_]+)*' />
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="categories">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+\.[A-Z_][A-Z0-9_]+(;[A-Z_][A-Z0-9_]+\.[A-Z_][A-Z0-9_]+)*' />
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="measure-type">
    <xs:restriction base="id">
      <xs:enumeration value="METRIC"/>
      <xs:enumeration value="RULE"/>
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="format">
    <xs:restriction base="id">
      <xs:enumeration value="NUMBER"/>
      <xs:enumeration value="PERCENT"/>
      <xs:enumeration value="INTEGER"/>
      <xs:enumeration value="DATE"/>
      <xs:enumeration value="DATETIME"/>
      <xs:enumeration value="TIME"/>
      <xs:enumeration value="DAYS"/>
      <xs:enumeration value="HOURS"/>
      <xs:enumeration value="MINUTES"/>
      <xs:enumeration value="SECONDS"/>
      <xs:enumeration value="MILLISECONDS"/>
      <xs:enumeration value="MAN_DAYS"/>
      <xs:enumeration value="MAN_HOURS"/>
      <xs:enumeration value="MAN_MINUTES"/>
      <xs:enumeration value="MAN_SECONDS"/>
      <xs:enumeration value="MAN_MILLISECONDS"/>
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="datetime-style">
    <xs:restriction base="id">
      <xs:enumeration value="DEFAULT"/>
      <xs:enumeration value="SHORT"/>
      <xs:enumeration value="MEDIUM"/>
      <xs:enumeration value="LONG"/>
      <xs:enumeration value="FULL"/>
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="rounding-mode">
    <xs:restriction base="id">
      <xs:enumeration value="UP"/>
      <xs:enumeration value="DOWN"/>
      <xs:enumeration value="CEILING"/>
      <xs:enumeration value="FLOOR"/>
      <xs:enumeration value="HALF_UP"/>
      <xs:enumeration value="HALF_DOWN"/>
      <xs:enumeration value="HALF_EVEN"/>
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="bounds-type">
    <xs:restriction base="xs:string">
      <xs:pattern value='[\[\]]((-)*[0-9](\.[0-9]+)?)*;((-)*[0-9](.[0-9]+)?)*[\[\]]' />
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="path-scope">
    <xs:restriction base="id">
      <xs:enumeration value="CHILDREN"/>
      <xs:enumeration value="DESCENDANTS"/>
    </xs:restriction>
  </xs:simpleType>

  <xs:complexType name="elements">
    <xs:sequence>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element ref="ArtefactType"/>
        <xs:element ref="Indicator"/>
        <xs:element ref="Measure"/>
        <xs:element ref="Package"/>
        <xs:element ref="package"/>
        <xs:element ref="Scale"/>
        <xs:element ref="ScaleMacro"/>
        <xs:element ref="Constant"/>
        <xs:element ref="RootIndicator"/>
        <xs:element ref="UpdateRules"/>
        <xs:element ref="UpdateRule"/>
        <xs:element ref="Link"/>
        <xs:element ref="ComputedLink"/>
      </xs:choice>
    </xs:sequence>
    <xs:attribute name="providedBy"/>
    <xs:attribute name="name"/>
    <xs:attribute name="storedOnlyIfDisplayed" type="xs:boolean"/>
  </xs:complexType>

  <xs:element name="Bundle" type="elements" />
  <xs:element name="Package" type="elements"/>
  <xs:element name="package" type="elements"/>

  <xs:element name="Constant">
    <xs:complexType>
      <xs:attribute name="id" use="required" type="id"/>
      <xs:attribute name="value" use="required"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="RootIndicator">
    <xs:complexType>
      <xs:attribute name="artefactTypes" use="required" type="list-id"/>
      <xs:attribute name="indicatorId" use="required" type="id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="UpdateRules">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="UpdateRule"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:element name="UpdateRule">
    <xs:complexType>
      <xs:attribute name="categories" type="categories"/>
      <xs:attribute name="disabled" type="xs:boolean"/>
      <xs:attribute name="measureId" use="required" type="id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="Measure">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="Computation"/>
      </xs:sequence>
      <xs:attribute name="acceptMissingValue" type="xs:boolean"/>
      <xs:attribute name="categories" type="categories"/>
      <xs:attribute name="dataBounds" type="bounds-type"/>
      <xs:attribute name="dateStyle" type="datetime-style"/>
      <xs:attribute name="decimals" type="xs:integer"/>
      <xs:attribute name="defaultValue" type="xs:decimal"/>
      <xs:attribute name="excludingTypes" type="list-id"/>
      <xs:attribute name="invalidValue"/>
      <xs:attribute name="families" type="families"/>
      <xs:attribute name="format" type="format"/>
      <xs:attribute name="manual" type="xs:boolean"/>
      <xs:attribute name="measureId" use="required" type="id"/>
      <xs:attribute name="noValue"/>
      <xs:attribute name="pattern"/>
      <xs:attribute name="roundingMode" type="rounding-mode"/>
      <xs:attribute name="suffix"/>
      <xs:attribute name="targetArtefactTypes"/>
      <xs:attribute name="timeStyle" type="datetime-style"/>
      <xs:attribute name="toolName"/>
      <xs:attribute name="toolVersion"/>
      <xs:attribute name="type" type="measure-type"/>
      <xs:attribute name="usedForRelaxation" type="xs:boolean"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="Computation">
    <xs:complexType>
      <xs:attribute name="continueOnRelaxed" type="xs:boolean"/>
      <xs:attribute name="excludingTypes" type="list-id"/>
      <xs:attribute name="result" use="required"/>
      <xs:attribute name="stored" type="xs:boolean"/>
      <xs:attribute name="targetArtefactTypes" use="required" type="list-id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="Indicator">
    <xs:complexType>
      <xs:attribute name="displayedScale" type="id"/>
      <xs:attribute name="displayedValue" type="id"/>
      <xs:attribute name="displayTypes" type="list-id"/>
      <xs:attribute name="excludingTypes" type="list-id"/>
      <xs:attribute name="families" type="families"/>
      <xs:attribute name="indicatorId" use="required" type="id"/>
      <xs:attribute name="measureId" type="id"/>
      <xs:attribute name="scaleId" type="id"/>
      <xs:attribute name="targetArtefactTypes" type="list-id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="Scale">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="ScaleLevel"/>
      </xs:sequence>
      <xs:attribute name="isDynamic" type="xs:boolean"/>
      <xs:attribute name="macro" type="id"/>
      <xs:attribute name="scaleId" use="required" type="id"/>
      <xs:attribute name="targetArtefactTypes" type="list-id"/>
      <xs:attribute name="vars"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="ScaleMacro">
    <xs:complexType>
      <xs:sequence>
        <xs:element maxOccurs="unbounded" ref="ScaleLevel"/>
      </xs:sequence>
      <xs:attribute name="id" use="required" type="id"/>
      <xs:attribute name="isDynamic" type="xs:boolean"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="ArtefactType">
    <xs:complexType>
      <xs:attribute name="heirs" type="list-id"/>
      <xs:attribute name="id" use="required" type="id"/>
      <xs:attribute name="manual" type="xs:boolean"/>
      <xs:attribute name="parents" type="list-id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="ScaleLevel">
    <xs:complexType>
      <xs:attribute name="bounds" use="required"/>
      <xs:attribute name="levelId" use="required" type="id"/>
      <xs:attribute name="rank" use="required"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="Link">
    <xs:complexType>
      <xs:attribute name="id" use="required" type="id"/>
      <xs:attribute name="inArtefactTypes" type="list-id"/>
      <xs:attribute name="outArtefactTypes" type="list-id"/>
      <xs:attribute name="srcArtefactTypes" type="list-id"/>
      <xs:attribute name="dstArtefactTypes" type="list-id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="ComputedLink">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="1" maxOccurs="1" ref="StartPath"/>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="NextPath"/>
      </xs:sequence>
      <xs:attribute name="id" use="required" type="id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="StartPath">
    <xs:complexType>
      <xs:attribute name="link" type="id"/>
      <xs:attribute name="scope" type="path-scope"/>
      <xs:attribute name="srcArtefactTypes" type="list-id"/>
      <xs:attribute name="dstArtefactTypes" type="list-id"/>
      <xs:attribute name="srcCondition"/>
      <xs:attribute name="dstCondition"/>
      <xs:attribute name="recurse" type="xs:boolean"/>
      <xs:attribute name="keepIntermediateLinks" type="xs:boolean"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="NextPath">
    <xs:complexType>
      <xs:attribute name="link" type="id"/>
      <xs:attribute name="scope" type="path-scope"/>
      <xs:attribute name="dstArtefactTypes" type="list-id"/>
      <xs:attribute name="dstCondition"/>
      <xs:attribute name="recurse" type="xs:boolean"/>
      <xs:attribute name="keepIntermediateLinks" type="xs:boolean"/>
    </xs:complexType>
  </xs:element>

</xs:schema>
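For illustration, a small analysis model fragment conforming to this schema could look as follows. All ids (NB_REQ, SC_NB_REQ, the APPLICATION artefact type) and the result expression are hypothetical examples; only the element and attribute names are taken from the schema.

```xml
<Bundle>
  <ArtefactType id="APPLICATION"/>
  <!-- A metric with one computation targeting APPLICATION artefacts -->
  <Measure measureId="NB_REQ" type="METRIC" format="INTEGER">
    <Computation targetArtefactTypes="APPLICATION" result="SUM(NB_REQ)"/>
  </Measure>
  <!-- An indicator mapping the measure onto a scale -->
  <Indicator indicatorId="NB_REQ" measureId="NB_REQ" scaleId="SC_NB_REQ"
             targetArtefactTypes="APPLICATION"/>
  <Scale scaleId="SC_NB_REQ">
    <ScaleLevel levelId="LOW" bounds="[0;10[" rank="0"/>
    <ScaleLevel levelId="HIGH" bounds="[10;[" rank="1"/>
  </Scale>
</Bundle>
```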
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">

  <xs:simpleType name="id">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+' />
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="list-id">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+(;[A-Z_][A-Z0-9_]+)*' />
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="categories">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+\.[A-Z_][A-Z0-9_]+(;[A-Z_][A-Z0-9_]+\.[A-Z_][A-Z0-9_]+)*' />
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="criterion-type">
    <xs:restriction base="id">
      <xs:enumeration value="BENEFIT"/>
      <xs:enumeration value="COST"/>
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name="preference-level">
    <xs:restriction base="id">
      <xs:enumeration value="VERY_LOW"/>
      <xs:enumeration value="LOW"/>
      <xs:enumeration value="MEDIUM"/>
      <xs:enumeration value="HIGH"/>
      <xs:enumeration value="VERY_HIGH"/>
    </xs:restriction>
  </xs:simpleType>

  <xs:complexType name="elements">
    <xs:sequence>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element ref="Package"/>
        <xs:element ref="package"/>
        <xs:element ref="DecisionCriteria"/>
        <xs:element ref="DecisionCriterion"/>
        <xs:element ref="FindingsActionPlan"/>
      </xs:choice>
    </xs:sequence>
  </xs:complexType>

  <xs:element name="Bundle" type="elements"/>
  <xs:element name="Package" type="elements"/>
  <xs:element name="package" type="elements"/>
  <xs:element name="DecisionCriteria" type="elements"/>

  <xs:element name="DecisionCriterion">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="Triggers"/>
      </xs:sequence>
      <xs:attribute name="categories" type="categories"/>
      <xs:attribute name="dcId" use="required" type="id"/>
      <xs:attribute name="excludingTypes" type="list-id"/>
      <xs:attribute name="families" type="list-id"/>
      <xs:attribute name="roles" type="list-id"/>
      <xs:attribute name="targetArtefactTypes" use="required" type="list-id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="Triggers">
    <xs:complexType>
      <xs:sequence>
        <xs:element maxOccurs="unbounded" ref="Trigger"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:element name="Trigger">
    <xs:complexType>
      <xs:sequence>
        <xs:element maxOccurs="unbounded" ref="Test"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:element name="Test">
    <xs:complexType>
      <xs:attribute name="bounds"/>
      <xs:attribute name="descrId" type="id"/>
      <xs:attribute name="expr" use="required"/>
      <xs:attribute name="p0"/>
      <xs:attribute name="p1"/>
      <xs:attribute name="p2"/>
      <xs:attribute name="p3"/>
      <xs:attribute name="p4"/>
      <xs:attribute name="p5"/>
      <xs:attribute name="p6"/>
      <xs:attribute name="p7"/>
      <xs:attribute name="p8"/>
      <xs:attribute name="p9"/>
      <xs:attribute name="suspect"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="FindingsActionPlan">
    <xs:complexType>
      <xs:sequence>
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element ref="CategoryCriterion"/>
          <xs:element ref="OccurrencesCriterion"/>
          <xs:element ref="VariableCriterion"/>
        </xs:choice>
      </xs:sequence>
      <xs:attribute name="limit" type="xs:integer"/>
      <xs:attribute name="priorityScaleId" type="id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="CategoryCriterion">
    <xs:complexType>
      <xs:attribute name="type" type="criterion-type"/>
      <xs:attribute name="preferenceLevel" type="preference-level"/>
      <xs:attribute name="scaleId" use="required" type="id"/>
      <xs:attribute name="excludeLevels" type="list-id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="OccurrencesCriterion">
    <xs:complexType>
      <xs:attribute name="type" type="criterion-type"/>
      <xs:attribute name="preferenceLevel" type="preference-level"/>
      <xs:attribute name="scaleId" type="id"/>
      <xs:attribute name="excludeLevels" type="list-id"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="VariableCriterion">
    <xs:complexType>
      <xs:attribute name="type" type="criterion-type"/>
      <xs:attribute name="preferenceLevel" type="preference-level"/>
      <xs:attribute name="indicatorId" use="required" type="id"/>
      <xs:attribute name="excludeLevels" type="list-id"/>
    </xs:complexType>
  </xs:element>

</xs:schema>
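For illustration, a minimal decision criterion conforming to this schema could look as follows. The dcId, artefact type, and expression are hypothetical examples; the structure (DecisionCriteria containing a DecisionCriterion with Triggers, Trigger, and at least one Test) follows the schema.

```xml
<Bundle>
  <DecisionCriteria>
    <DecisionCriterion dcId="TOO_MANY_FINDINGS" targetArtefactTypes="APPLICATION">
      <Triggers>
        <Trigger>
          <!-- expr is required; bounds and suspect are optional -->
          <Test expr="NB_FINDINGS" bounds="[100;[" suspect="[50;["/>
        </Trigger>
      </Triggers>
    </DecisionCriterion>
  </DecisionCriteria>
</Bundle>
```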
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">

  <xs:complexType name="elements">
    <xs:sequence>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element ref="Package"/>
        <xs:element ref="package"/>
        <xs:element ref="Properties"/>
      </xs:choice>
    </xs:sequence>
  </xs:complexType>

  <xs:element name="Bundle">
    <xs:complexType>
      <xs:sequence>
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element ref="Package"/>
          <xs:element ref="package"/>
          <xs:element ref="Properties"/>
        </xs:choice>
      </xs:sequence>
      <xs:attribute name="available"/>
      <xs:attribute name="default"/>
    </xs:complexType>
  </xs:element>

  <xs:element name="Package" type="elements"/>
  <xs:element name="package" type="elements"/>

  <xs:element name="Properties">
    <xs:complexType>
      <xs:attribute name="src" use="required"/>
    </xs:complexType>
  </xs:element>

</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">

  <xs:simpleType name="type-id">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z0-9_]*' />
    </xs:restriction>
  </xs:simpleType>

  <xs:element name="Bundle">
    <xs:complexType>
      <xs:sequence minOccurs="0" maxOccurs="unbounded">
        <xs:element ref="Role" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:element name="Role">
    <xs:complexType>
      <xs:sequence minOccurs="0" maxOccurs="unbounded">
        <xs:element ref="Export" />
      </xs:sequence>
      <xs:attribute name="name" use="required" type="xs:string" />
    </xs:complexType>
  </xs:element>

  <xs:element name="Export">
    <xs:complexType>
      <xs:sequence maxOccurs="unbounded">
        <xs:element ref="ExportScript" />
      </xs:sequence>
      <xs:attribute name="type" use="required" type="type-id" />
    </xs:complexType>
  </xs:element>

  <xs:element name="ExportScript">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="arg" />
      </xs:sequence>
      <xs:attribute name="name" use="required" type="xs:string" />
      <xs:attribute name="script" use="required" type="xs:string" />
    </xs:complexType>
  </xs:element>

  <xs:element name="arg">
    <xs:complexType>
      <xs:attribute name="value" use="required" type="xs:string" />
      <xs:attribute name="optional" use="optional" type="xs:boolean" default="false" />
    </xs:complexType>
  </xs:element>

</xs:schema>
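For illustration, an export bundle conforming to this schema could look as follows. The role name, export type, script path, and argument value are hypothetical placeholders; only the element and attribute names come from the schema.

```xml
<Bundle>
  <Role name="DEFAULT">
    <Export type="REQUIREMENTS">
      <!-- name and script are required; args are passed to the script -->
      <ExportScript name="Export to CSV"
                    script="${SQUORE_HOME}/addons/scripts/export_csv.tcl">
        <arg value="output.csv"/>
      </ExportScript>
    </Export>
  </Role>
</Bundle>
```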
<?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:simpleType name="id"> <xs:restriction base="xs:string"> <xs:pattern value='[A-Z0-9_]*' /> </xs:restriction> </xs:simpleType> <xs:simpleType name="measure-id"> <xs:restriction base="xs:string"> <xs:pattern value='([BD].)?[A-Z0-9_]*' /> </xs:restriction> </xs:simpleType> <xs:simpleType name="info-id"> <xs:restriction base="xs:string"> <xs:pattern value='[A-Z0-9_]*' /> </xs:restriction> </xs:simpleType> <xs:simpleType name="indicator-id"> <xs:restriction base="xs:string"> <xs:pattern value='([I].)?[A-Z0-9_]*' /> </xs:restriction> </xs:simpleType> <xs:simpleType name="bounds-type"> <xs:restriction base="xs:string"> <xs:pattern value='[\[\]]((-)*[0-9](\.[0-9]+)?)*;((-)*[0-9](.[0-9]+)?)*[\[\]]' /> </xs:restriction> </xs:simpleType> <xs:simpleType name="top-order"> <xs:restriction base="xs:string"> <xs:enumeration value="ASC" /> <xs:enumeration value="DESC" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="result-size"> <xs:union> <xs:simpleType> <xs:restriction base="xs:positiveInteger" /> </xs:simpleType> <xs:simpleType> <xs:restriction base="xs:string"> <xs:enumeration value="*" /> </xs:restriction> </xs:simpleType> </xs:union> </xs:simpleType> <xs:simpleType name="header-display-type"> <xs:restriction base="xs:string"> <xs:enumeration value="MNEMONIC" /> <xs:enumeration value="NAME" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="display-type"> <xs:restriction base="xs:string"> <xs:enumeration value="VALUE" /> <xs:enumeration value="RANK" /> <xs:enumeration value="ICON" /> <xs:enumeration value="DATE" /> <xs:enumeration value="DATETIME" /> <xs:enumeration value="TIME" /> <xs:enumeration value="NAME" /> <xs:enumeration value="MNEMONIC" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="date-style"> <xs:restriction base="xs:string"> <xs:enumeration value="SHORT" /> <xs:enumeration value="MEDIUM" /> <xs:enumeration value="DEFAULT" /> 
      <xs:enumeration value="LONG" />
      <xs:enumeration value="FULL" />
    </xs:restriction>
  </xs:simpleType>
  <xs:element name="Bundle">
    <xs:complexType>
      <xs:sequence maxOccurs="unbounded">
        <xs:element ref="Role" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="Role">
    <xs:complexType>
      <xs:sequence maxOccurs="unbounded">
        <xs:element ref="Filters" />
      </xs:sequence>
      <xs:attribute name="name" use="required" type="xs:string" />
      <xs:attribute name="preSelectedType" use="optional" type="xs:string" />
    </xs:complexType>
  </xs:element>
  <xs:element name="Filters">
    <xs:complexType>
      <xs:sequence maxOccurs="unbounded">
        <xs:choice>
          <xs:element ref="TopArtefacts" />
          <xs:element name="TopDeltaArtefacts" type="top-artefacts" />
          <xs:element name="TopNewArtefacts" type="top-artefacts" />
        </xs:choice>
      </xs:sequence>
      <xs:attribute name="type" use="required" type="xs:string" />
    </xs:complexType>
  </xs:element>
  <xs:element name="TopArtefacts">
    <xs:complexType>
      <xs:sequence maxOccurs="unbounded">
        <xs:choice>
          <xs:element minOccurs="0" maxOccurs="unbounded" ref="Column" />
          <xs:element minOccurs="0" maxOccurs="unbounded" ref="Where" />
          <xs:element minOccurs="0" maxOccurs="unbounded" ref="OrderBy" />
        </xs:choice>
      </xs:sequence>
      <xs:attribute name="id" use="required" type="id" />
      <xs:attribute name="name" use="optional" type="xs:string" />
      <xs:attribute name="artefactTypes" use="optional" type="xs:string" />
      <xs:attribute name="excludingTypes" use="optional" type="xs:string" />
      <xs:attribute name="measureId" use="optional" default="LEVEL" type="measure-id" />
      <xs:attribute name="order" use="optional" default="ASC" type="top-order" />
      <xs:attribute name="altMeasureId" use="optional" type="measure-id" />
      <xs:attribute name="altOrder" use="optional" type="top-order" />
      <xs:attribute name="resultSize" use="required" type="result-size" />
    </xs:complexType>
  </xs:element>
  <xs:element name="Column">
    <xs:complexType>
      <xs:attribute name="measureId" use="optional" type="measure-id" />
      <xs:attribute name="infoId" use="optional" type="info-id" />
      <xs:attribute name="indicatorId" use="optional" type="indicator-id" />
      <xs:attribute name="artefactTypes" use="optional" type="xs:string" />
      <xs:attribute name="excludingTypes" use="optional" type="xs:string" />
      <xs:attribute name="headerDisplayType" use="optional" default="NAME" type="header-display-type" />
      <xs:attribute name="displayType" use="optional" default="VALUE" type="display-type" />
      <xs:attribute name="decimals" use="optional" default="2" type="xs:integer" />
      <xs:attribute name="dateStyle" use="optional" default="DEFAULT" type="date-style" />
      <xs:attribute name="timeStyle" use="optional" default="DEFAULT" type="date-style" />
      <xs:attribute name="datePattern" use="optional" type="xs:string" />
      <xs:attribute name="suffix" use="optional" type="xs:string" />
      <xs:attribute name="useBackgroundColor" use="optional" type="xs:boolean" />
    </xs:complexType>
  </xs:element>
  <xs:element name="Where">
    <xs:complexType>
      <xs:attribute name="measureId" use="optional" type="measure-id" />
      <xs:attribute name="infoId" use="optional" type="info-id" />
      <xs:attribute name="value" use="optional" type="xs:string" />
      <xs:attribute name="bounds" use="optional" type="bounds-type" />
    </xs:complexType>
  </xs:element>
  <xs:element name="OrderBy">
    <xs:complexType>
      <xs:attribute name="measureId" use="required" type="measure-id" />
      <xs:attribute name="order" use="optional" default="ASC" type="top-order" />
    </xs:complexType>
  </xs:element>
  <xs:complexType name="top-artefacts">
    <xs:sequence maxOccurs="unbounded">
      <xs:choice>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="Column" />
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="Where" />
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="OrderBy" />
      </xs:choice>
    </xs:sequence>
    <xs:attribute name="id" use="required" type="id" />
    <xs:attribute name="name" use="optional" type="xs:string" />
    <xs:attribute name="artefactTypes" use="optional" type="xs:string" />
    <xs:attribute name="excludingTypes" use="optional" type="xs:string" />
    <xs:attribute name="measureId" use="optional" default="LEVEL" type="measure-id" />
    <xs:attribute name="order" use="optional" default="ASC" type="top-order" />
    <xs:attribute name="resultSize" use="required" type="result-size" />
  </xs:complexType>
</xs:schema>
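For illustration, a minimal highlights bundle conforming to the schema above could look like this. The role name, filter type, measure IDs and result size shown here are examples only; they are not values mandated by the schema:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Bundle>
  <Role name="DEFAULT" preSelectedType="APPLICATION">
    <Filters type="APPLICATION">
      <!-- TOP_10_WORST, LEVEL and SLOC are hypothetical IDs from a model -->
      <TopArtefacts id="TOP_10_WORST" name="Top 10 worst artefacts"
                    measureId="LEVEL" order="ASC" resultSize="10">
        <Column measureId="LEVEL" headerDisplayType="NAME" displayType="VALUE" decimals="2" />
        <Where measureId="SLOC" value="100" />
        <OrderBy measureId="LEVEL" order="ASC" />
      </TopArtefacts>
    </Filters>
  </Role>
</Bundle>
```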
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
  <xs:simpleType name="id">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+' />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="list-id">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+(;[A-Z_][A-Z0-9_]+)*' />
    </xs:restriction>
  </xs:simpleType>
  <xs:complexType name="elements">
    <xs:sequence>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element ref="package"/>
        <xs:element ref="hideMeasure"/>
        <xs:element ref="findingsTab"/>
        <xs:element ref="actionItemsTab"/>
        <xs:element ref="rulesEdition"/>
      </xs:choice>
    </xs:sequence>
  </xs:complexType>
  <xs:element name="bundle" type="elements"/>
  <xs:element name="package" type="elements"/>
  <xs:element name="hideMeasure">
    <xs:complexType>
      <xs:attribute name="path" use="required"/>
      <xs:attribute name="targetArtefactTypes" type="list-id"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="findingsTab">
    <xs:complexType>
      <xs:attribute name="orderBy" type="list-id"/>
      <xs:attribute name="hideColumns" type="list-id"/>
      <xs:attribute name="hideCharacteristicsFilter" type="xs:boolean"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="actionItemsTab">
    <xs:complexType>
      <xs:attribute name="orderBy" type="list-id"/>
      <xs:attribute name="hideColumns" type="list-id"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="rulesEdition">
    <xs:complexType>
      <xs:attribute name="scales" use="required" type="list-id"/>
    </xs:complexType>
  </xs:element>
</xs:schema>
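For example, a bundle that hides a measure and customises the Findings and Action Items tabs might be written as follows. The measure path, column IDs and scale names used here are illustrative, not defined by the schema:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<bundle>
  <!-- SLOC, SEVERITY, etc. are hypothetical IDs from an analysis model -->
  <hideMeasure path="SLOC" targetArtefactTypes="FILE;FUNCTION"/>
  <findingsTab orderBy="SEVERITY;NAME" hideColumns="COUNT" hideCharacteristicsFilter="false"/>
  <actionItemsTab orderBy="PRIORITY" hideColumns="STATUS"/>
  <rulesEdition scales="SEVERITY;REMEDIATION"/>
</bundle>
```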
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:simpleType name="external-id">
    <xs:restriction base="xs:string">
      <xs:pattern value="[A-Z]{1}[A-Z0-9_]*" />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="positive-integer">
    <xs:restriction base="xs:string">
      <xs:pattern value="[0-9]+" />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="opacity">
    <xs:restriction base="xs:string">
      <xs:pattern value='(0|1){1}\.?[0-9]{0,2}' />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="actions">
    <xs:restriction base="xs:string">
      <xs:enumeration value="EXPAND_PORTFOLIO_TREE" />
      <xs:enumeration value="EXPAND_ARTEFACT_TREE" />
      <xs:enumeration value="EXPAND_MEASURE_TREE" />
      <xs:enumeration value="COLLAPSE_PORTFOLIO_TREE" />
      <xs:enumeration value="COLLAPSE_ARTEFACT_TREE" />
      <xs:enumeration value="COLLAPSE_MEASURE_TREE" />
      <xs:enumeration value="SELECT_MODEL" />
      <xs:enumeration value="SELECT_PROJECT" />
      <xs:enumeration value="SELECT_ARTEFACT" />
      <xs:enumeration value="SELECT_ARTEFACT_LEAF" />
      <xs:enumeration value="CLOSE_MEASURE_POPUP" />
      <xs:enumeration value="SELECT_MEASURE" />
      <xs:enumeration value="SHOW_REVIEW_SET" />
      <xs:enumeration value="SHOW_PORTFOLIO_TREE" />
      <xs:enumeration value="SHOW_DASHBOARD_TAB" />
      <xs:enumeration value="SHOW_ACTION_ITEMS_TAB" />
      <xs:enumeration value="SHOW_HIGHLIGHTS_TAB" />
      <xs:enumeration value="SHOW_FINDINGS_TAB" />
      <xs:enumeration value="SHOW_REPORTS_TAB" />
      <xs:enumeration value="SHOW_FORMS_TAB" />
      <xs:enumeration value="SHOW_INDICATORS_TAB" />
      <xs:enumeration value="SHOW_MEASURES_TAB" />
      <xs:enumeration value="SHOW_COMMENTS_TAB" />
      <xs:enumeration value="SHOW_ACTION_ITEMS_ADVANCED_SEARCH" />
      <xs:enumeration value="EXPAND_ACTION_ITEM" />
      <xs:enumeration value="SHOW_FINDINGS_ADVANCED_SEARCH" />
      <xs:enumeration value="SELECT_FINDING" />
      <xs:enumeration value="SELECT_FINDING_ARTEFACT" />
      <xs:enumeration value="EXPAND_FINDING" />
      <xs:enumeration value="EXPAND_ATTRIBUTE" />
      <xs:enumeration value="SWITCH_INDICATORS_PAGE" />
      <xs:enumeration value="SWITCH_MEASURES_PAGE" />
      <xs:enumeration value="SWITCH_COMMENTS_PAGE" />
      <xs:enumeration value="CLOSE_CHART_POPUP" />
      <xs:enumeration value="OPEN_CHART_POPUP" />
      <xs:enumeration value="OPEN_MODEL_CHART_POPUP" />
      <xs:enumeration value="SELECT_DESCR_TAB" />
      <xs:enumeration value="SELECT_COMMENTS_TAB" />
      <xs:enumeration value="SELECT_FAVORITES_TAB" />
      <xs:enumeration value="COMPARE_CHART" />
      <xs:enumeration value="QUIT_COMPARATIVE_MODE" />
      <xs:enumeration value="QUIT_FULLDISPLAY_MODE" />
      <xs:enumeration value="CLOSE_ARTEFACT_TREE_FILTER" />
      <xs:enumeration value="SHOW_ARTEFACT_TREE_FILTER" />
      <xs:enumeration value="OPEN_TABLE" />
      <xs:enumeration value="CHANGE_PAGE" />
      <xs:enumeration value="CREATE_NEW_PROJECT" />
      <xs:enumeration value="SELECT_WIZARD" />
      <xs:enumeration value="VALIDATE_WIZARD" />
      <xs:enumeration value="VALIDATE_INFORMATION" />
      <xs:enumeration value="VALIDATE_DP_OPTIONS" />
      <xs:enumeration value="RUN_PROJECT_CREATION" />
      <xs:enumeration value="OPEN_SUB_MENU_HELP" />
      <xs:enumeration value="CLOSE_TUTORIAL_POPUP" />
      <xs:enumeration value="OPEN_TUTORIAL_POPUP" />
      <xs:enumeration value="NONE" />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="alias">
    <xs:restriction base="xs:string">
      <xs:enumeration value="CUSTOM" />
      <xs:enumeration value="BODY" />
      <xs:enumeration value="BREADCRUMBS" />
      <xs:enumeration value="MENU_HELP" />
      <xs:enumeration value="SUB_MENU_HELP" />
      <xs:enumeration value="SUB_MENU_HELP_ROW" />
      <xs:enumeration value="SUB_MENU_HELP_ROW_FIRST" />
      <xs:enumeration value="TUTORIAL_POPUP" />
      <xs:enumeration value="TUTORIAL_POPUP_MODEL" />
      <xs:enumeration value="TUTORIAL_POPUP_MODEL_FIRST" />
      <xs:enumeration value="TUTORIAL_POPUP_TUTORIAL_NAME" />
      <xs:enumeration value="TUTORIAL_POPUP_TUTORIAL_NAME_FIRST" />
      <xs:enumeration value="TUTORIAL_POPUP_TUTORIAL_DESCR" />
      <xs:enumeration value="TUTORIAL_POPUP_TUTORIAL_DESCR_FIRST" />
      <xs:enumeration value="EXPLORER" />
      <xs:enumeration value="DRILLDOWN" />
      <xs:enumeration value="EXPLORER_TAB" />
      <xs:enumeration value="ARTEFACT_TREE" />
      <xs:enumeration value="MEASURE_TREE" />
      <xs:enumeration value="EXPLORER_HEADER" />
      <xs:enumeration value="PORTFOLIO_HEADER" />
      <xs:enumeration value="ARTEFACT_TREE_SEARCH" />
      <xs:enumeration value="ARTEFACT_TREE_FILTER" />
      <xs:enumeration value="REVIEW_SET" />
      <xs:enumeration value="PORTFOLIO_TREE" />
      <xs:enumeration value="PORTFOLIO_TREE_PROJECT" />
      <xs:enumeration value="PORTFOLIO_TREE_PROJECT_FIRST" />
      <xs:enumeration value="MODEL_DASHBOARD" />
      <xs:enumeration value="MODEL_CHARTS" />
      <xs:enumeration value="MODEL_CHART_FIRST" />
      <xs:enumeration value="MODEL_TABLE" />
      <xs:enumeration value="MODEL_TABLE_ROW_FIRST" />
      <xs:enumeration value="MODEL_CHART" />
      <xs:enumeration value="MODEL_TABLE_ROW" />
      <xs:enumeration value="MODEL_CHART_POPUP" />
      <xs:enumeration value="MODEL_CHART_POPUP_GRAPH" />
      <xs:enumeration value="MODEL_CHART_POPUP_PREVIOUS_ARROW" />
      <xs:enumeration value="MODEL_CHART_POPUP_NEXT_ARROW" />
      <xs:enumeration value="MODEL_CHART_POPUP_NAV_BAR" />
      <xs:enumeration value="MODEL_CHART_POPUP_ASIDE" />
      <xs:enumeration value="MODEL_CHART_POPUP_ASIDE_HEAD" />
      <xs:enumeration value="MODEL_CHART_POPUP_DESCR" />
      <xs:enumeration value="FILTER_POPUP" />
      <xs:enumeration value="FILTER_LEVEL" />
      <xs:enumeration value="FILTER_TYPE" />
      <xs:enumeration value="FILTER_EVOLUTION" />
      <xs:enumeration value="FILTER_STATUS" />
      <xs:enumeration value="ARTEFACT_TREE_LEAF" />
      <xs:enumeration value="MEASURE_TREE_LEAF" />
      <xs:enumeration value="MENU_INDICATOR_ARTEFACT" />
      <xs:enumeration value="DASHBOARD" />
      <xs:enumeration value="SCORECARD" />
      <xs:enumeration value="KPI" />
      <xs:enumeration value="CHARTS" />
      <xs:enumeration value="TABLES" />
      <xs:enumeration value="CHART_FIRST" />
      <xs:enumeration value="LINE" />
      <xs:enumeration value="CHART" />
      <xs:enumeration value="TABLE" />
      <xs:enumeration value="TABLE_FIRST" />
      <xs:enumeration value="MEASURE_POPUP" />
      <xs:enumeration value="MEASURE_POPUP_CONTENT" />
      <xs:enumeration value="MEASURE_POPUP_LEVELS" />
      <xs:enumeration value="MEASURE_POPUP_ROW_FIRST" />
      <xs:enumeration value="MEASURE_POPUP_ROW" />
      <xs:enumeration value="CHART_POPUP" />
      <xs:enumeration value="CHART_POPUP_GRAPH" />
      <xs:enumeration value="CHART_POPUP_COMPARE_OPTION" />
      <xs:enumeration value="CHART_POPUP_PREVIOUS_ARROW" />
      <xs:enumeration value="CHART_POPUP_NEXT_ARROW" />
      <xs:enumeration value="CHART_POPUP_NAV_BAR" />
      <xs:enumeration value="CHART_POPUP_ASIDE" />
      <xs:enumeration value="CHART_POPUP_ASIDE_HEAD" />
      <xs:enumeration value="CHART_POPUP_DESCR" />
      <xs:enumeration value="CHART_POPUP_COMMENTS" />
      <xs:enumeration value="CHART_POPUP_FAVORITES" />
      <xs:enumeration value="CHART_POPUP_COMPARATIVE_CHART" />
      <xs:enumeration value="ACTION_ITEMS" />
      <xs:enumeration value="ACTION_ITEMS_TABLE" />
      <xs:enumeration value="ACTION_ITEMS_TABLE_HEAD" />
      <xs:enumeration value="ACTION_ITEMS_TABLE_HEAD_CHECK" />
      <xs:enumeration value="ACTION_ITEMS_ADD_REVIEW_SET" />
      <xs:enumeration value="ACTION_ITEMS_EXPORT_LIST" />
      <xs:enumeration value="ACTION_ITEMS_EXPORT_BUTTON" />
      <xs:enumeration value="ACTION_ITEMS_SEARCH" />
      <xs:enumeration value="ACTION_ITEMS_ROW" />
      <xs:enumeration value="ACTION_ITEMS_REASON" />
      <xs:enumeration value="ACTION_ITEMS_ADVANCED_SEARCH" />
      <xs:enumeration value="ACTION_ITEMS_ADVANCED_SEARCH_SELECT_FIRST" />
      <xs:enumeration value="ACTION_ITEMS_ADVANCED_SEARCH_SELECT" />
      <xs:enumeration value="HIGHLIGHTS" />
      <xs:enumeration value="HIGHLIGHTS_TABLE" />
      <xs:enumeration value="HIGHLIGHTS_TABLE_HEAD" />
      <xs:enumeration value="HIGHLIGHTS_TABLE_HEAD_CHECK" />
      <xs:enumeration value="HIGHLIGHTS_SEARCH" />
      <xs:enumeration value="HIGHLIGHTS_SEARCH_FILTER" />
      <xs:enumeration value="HIGHLIGHTS_SEARCH_TYPE" />
      <xs:enumeration value="HIGHLIGHTS_EXPORT_BUTTON" />
      <xs:enumeration value="HIGHLIGHTS_ADD_REVIEW_SET" />
      <xs:enumeration value="HIGHLIGHTS_ROW_FIRST" />
      <xs:enumeration value="FINDINGS" />
      <xs:enumeration value="FINDINGS_TABLE" />
      <xs:enumeration value="FINDINGS_TABLE_HEAD" />
      <xs:enumeration value="FINDINGS_SEARCH" />
      <xs:enumeration value="FINDINGS_INFO" />
      <xs:enumeration value="FINDINGS_RULE" />
      <xs:enumeration value="FINDINGS_ARTEFACT" />
      <xs:enumeration value="FINDINGS_ROW_FIRST" />
      <xs:enumeration value="FINDINGS_ADVANCED_SEARCH" />
      <xs:enumeration value="FINDINGS_ADVANCED_SEARCH_SELECT_FIRST" />
      <xs:enumeration value="FINDINGS_ADVANCED_SEARCH_SELECT" />
      <xs:enumeration value="REPORTS" />
      <xs:enumeration value="REPORTS_REGION" />
      <xs:enumeration value="REPORTS_OPTIONS" />
      <xs:enumeration value="REPORTS_OPTION_TEMPLATE" />
      <xs:enumeration value="REPORTS_OPTION_FORMAT" />
      <xs:enumeration value="REPORTS_OPTION_SYNTHETIC_VIEW" />
      <xs:enumeration value="REPORTS_CREATE" />
      <xs:enumeration value="EXPORT_REGION" />
      <xs:enumeration value="EXPORT_OPTIONS" />
      <xs:enumeration value="EXPORT_CREATE" />
      <xs:enumeration value="FORMS" />
      <xs:enumeration value="FORMS_ATTRIBUTE" />
      <xs:enumeration value="FORMS_ATTRIBUTE_FIELD" />
      <xs:enumeration value="FORMS_ATTRIBUTE_COMMENT" />
      <xs:enumeration value="FORMS_HISTORY" />
      <xs:enumeration value="FORMS_BLOCK" />
      <xs:enumeration value="INDICATORS" />
      <xs:enumeration value="INDICATORS_TABLE" />
      <xs:enumeration value="INDICATORS_TABLE_HEAD" />
      <xs:enumeration value="INDICATORS_ROW" />
      <xs:enumeration value="MEASURES" />
      <xs:enumeration value="MEASURES_TABLE" />
      <xs:enumeration value="MEASURES_TABLE_HEAD" />
      <xs:enumeration value="MEASURES_ROW" />
      <xs:enumeration value="COMMENTS" />
      <xs:enumeration value="COMMENTS_TABLE" />
      <xs:enumeration value="COMMENTS_TABLE_HEAD" />
      <xs:enumeration value="COMMENTS_ROW" />
      <xs:enumeration value="CREATE_PROJECT_BUTTON" />
      <xs:enumeration value="WIZARD_PANEL" />
      <xs:enumeration value="WIZARD_ROW" />
      <xs:enumeration value="WIZARD_ROW_FIRST" />
      <xs:enumeration value="WIZARD_NEXT_BUTTON" />
      <xs:enumeration value="GENERAL_INFORMATION" />
      <xs:enumeration value="PROJECT_IDENTIFICATION_BLOCK" />
      <xs:enumeration value="GENERAL_INFO_BLOCK" />
      <xs:enumeration value="GENERAL_INFO_ROW" />
      <xs:enumeration value="PROJECT_NEXT_BUTTON" />
      <xs:enumeration value="DP_PANEL" />
      <xs:enumeration value="DP_PANEL_BLOCK" />
      <xs:enumeration value="DP_PANEL_ROW" />
      <xs:enumeration value="DP_PANEL_NEXT_BUTTON" />
      <xs:enumeration value="CONFIRMATION_PANEL" />
      <xs:enumeration value="SUMMARY" />
      <xs:enumeration value="CONFIRMATION_PANEL_PARAMETERS" />
      <xs:enumeration value="RUN_NEW_PROJECT_BUTTON" />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="colors">
    <xs:union>
      <xs:simpleType>
        <xs:restriction base="xs:string">
          <xs:pattern value="#[A-Fa-f0-9]{6}" />
        </xs:restriction>
      </xs:simpleType>
      <xs:simpleType>
        <xs:restriction base="xs:string">
          <xs:pattern value="(rgb|RGB)\([0-9]{3},[0-9]{3},[0-9]{3}\)" />
        </xs:restriction>
      </xs:simpleType>
      <xs:simpleType>
        <xs:restriction base="xs:string">
          <xs:enumeration value="aqua" />
          <xs:enumeration value="black" />
          <xs:enumeration value="blue" />
          <xs:enumeration value="gray" />
          <xs:enumeration value="lime" />
          <xs:enumeration value="green" />
          <xs:enumeration value="maroon" />
          <xs:enumeration value="navy" />
          <xs:enumeration value="olive" />
          <xs:enumeration value="orange" />
          <xs:enumeration value="purple" />
          <xs:enumeration value="red" />
          <xs:enumeration value="silver" />
          <xs:enumeration value="teal" />
          <xs:enumeration value="white" />
          <xs:enumeration value="yellow" />
          <xs:enumeration value="transparent" />
        </xs:restriction>
      </xs:simpleType>
    </xs:union>
  </xs:simpleType>
  <xs:simpleType name="text-positions">
    <xs:restriction base="xs:string">
      <xs:enumeration value="INTERNAL" />
      <xs:enumeration value="EXTERNAL" />
      <xs:enumeration value="LEFT" />
      <xs:enumeration value="RIGHT" />
      <xs:enumeration value="TOP" />
      <xs:enumeration value="BOTTOM" />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="phase-type">
    <xs:restriction base="xs:string">
      <xs:enumeration value="PARALLEL" />
      <xs:enumeration value="PROGRESSIVE" />
      <xs:enumeration value="SEQUENTIAL" />
      <xs:enumeration value="FREE" />
    </xs:restriction>
  </xs:simpleType>
  <xs:complexType name="elements">
    <xs:sequence minOccurs="0" maxOccurs="unbounded">
      <xs:element ref="help"/>
    </xs:sequence>
  </xs:complexType>
  <xs:element name="Bundle" type="elements" />
  <xs:element name="Package" type="elements"/>
  <xs:element name="item">
    <xs:complexType>
      <xs:attribute name="element" use="required" type="external-id" />
      <xs:attribute name="param" use="optional" type="xs:string" />
      <xs:attribute name="descrId" use="required" type="xs:string" />
      <xs:attribute name="textPosition" use="optional" default="EXTERNAL" type="text-positions" />
      <xs:attribute name="maskColor" use="optional" default="#2aa0d5" type="colors" />
      <xs:attribute name="maskOpacity" use="optional" default="0.8" type="opacity" />
      <xs:attribute name="textSize" use="optional" default="25" type="positive-integer" />
      <xs:attribute name="textColor" use="optional" default="white" type="colors" />
    </xs:complexType>
  </xs:element>
  <xs:element name="preAction">
    <xs:complexType>
      <xs:attribute name="action" use="required" type="actions" />
      <xs:attribute name="param" use="optional" default="" type="xs:string" />
      <xs:attribute name="clickIndicator" use="optional" default="false" type="xs:boolean" />
    </xs:complexType>
  </xs:element>
  <xs:element name="phase">
    <xs:complexType>
      <xs:sequence maxOccurs="unbounded">
        <xs:choice>
          <xs:element minOccurs="0" maxOccurs="unbounded" ref="item" />
          <xs:element minOccurs="0" maxOccurs="unbounded" ref="preAction" />
        </xs:choice>
      </xs:sequence>
      <xs:attribute name="element" use="required" type="external-id" />
      <xs:attribute name="param" use="optional" type="xs:string" />
      <xs:attribute name="type" use="optional" default="PARALLEL" type="phase-type" />
      <xs:attribute name="textPosition" use="optional" default="EXTERNAL" type="text-positions" />
      <xs:attribute name="textSize" use="optional" default="25" type="positive-integer" />
      <xs:attribute name="textColor" use="optional" default="white" type="colors" />
      <xs:attribute name="maskColor" use="optional" default="#2aa0d5" type="colors" />
      <xs:attribute name="maskOpacity" use="optional" default="0.6" type="opacity" />
    </xs:complexType>
  </xs:element>
  <xs:element name="help">
    <xs:complexType>
      <xs:sequence minOccurs="1" maxOccurs="unbounded">
        <xs:choice>
          <xs:element ref="phase" />
          <xs:element ref="item" />
        </xs:choice>
      </xs:sequence>
      <xs:attribute name="id" use="required" type="external-id" />
      <xs:attribute name="opacity" use="optional" default="0.4" type="opacity" />
      <xs:attribute name="textPosition" use="optional" default="EXTERNAL" type="text-positions" />
      <xs:attribute name="textSize" use="optional" default="25" type="positive-integer" />
      <xs:attribute name="textColor" use="optional" default="white" type="colors" />
      <xs:attribute name="maskColor" use="optional" default="#2aa0d5" type="colors" />
      <xs:attribute name="maskOpacity" use="optional" default="0.6" type="opacity" />
      <xs:attribute name="firstConnexionGroup" use="optional" type="xs:string" />
      <xs:attribute name="icon" use="optional" type="xs:string" />
    </xs:complexType>
  </xs:element>
</xs:schema>
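As an illustration, a tutorial definition conforming to this schema could combine items and phases as follows. The help id, descrId values and icon path are examples only; the element and action values are taken from the alias and actions enumerations above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Bundle>
  <!-- DASHBOARD_TOUR and the TUTO_* description keys are hypothetical -->
  <help id="DASHBOARD_TOUR" icon="tutorials/dashboard.png" opacity="0.4" textPosition="EXTERNAL">
    <item element="EXPLORER" descrId="TUTO_EXPLORER" />
    <phase element="ARTEFACT_TREE" type="SEQUENTIAL" maskColor="#2aa0d5" maskOpacity="0.6">
      <preAction action="EXPAND_ARTEFACT_TREE" />
      <item element="ARTEFACT_TREE_LEAF" descrId="TUTO_ARTEFACT_LEAF" textColor="white" />
    </phase>
  </help>
</Bundle>
```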
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
  <xs:simpleType name="id">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+' />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="list-id">
    <xs:restriction base="xs:string">
      <xs:pattern value='[A-Z_][A-Z0-9_]+(;[A-Z_][A-Z0-9_]+)*' />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="alignment">
    <xs:restriction base="id">
      <xs:enumeration value="LEFT"/>
      <xs:enumeration value="RIGHT"/>
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="project-status">
    <xs:restriction base="id">
      <xs:enumeration value="IGNORE"/>
      <xs:enumeration value="WARNING"/>
      <xs:enumeration value="ERROR"/>
    </xs:restriction>
  </xs:simpleType>
  <xs:element name="Bundle">
    <xs:complexType>
      <xs:sequence>
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element ref="tags"/>
          <xs:element ref="wizard"/>
        </xs:choice>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="tags">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="tag"/>
      </xs:sequence>
      <xs:attribute name="textAlign" type="alignment"/>
      <xs:attribute name="valueAlign" type="alignment"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="tag">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="value"/>
      </xs:sequence>
      <xs:attribute name="defaultValue"/>
      <xs:attribute name="displayType"/> <!-- Not display-type because it is case insensitive -->
      <xs:attribute name="group"/>
      <xs:attribute name="groupId" type="id"/>
      <xs:attribute name="measureId" use="required" type="id"/>
      <xs:attribute name="name"/>
      <xs:attribute name="placeholder"/>
      <xs:attribute name="required" type="xs:boolean"/>
      <xs:attribute name="review" type="xs:boolean"/>
      <xs:attribute name="suffix"/>
      <xs:attribute name="targetArtefactTypes" type="list-id"/>
      <xs:attribute name="textAlign" type="alignment"/>
      <xs:attribute name="type" use="required"/> <!-- Not tag-type because it is case insensitive -->
      <xs:attribute name="valueAlign" type="alignment"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="value">
    <xs:complexType>
      <xs:attribute name="key" use="required"/>
      <xs:attribute name="value" use="required" type="xs:decimal"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="wizard">
    <xs:complexType>
      <xs:sequence>
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element ref="tags"/>
          <xs:element ref="milestones"/>
          <xs:element ref="repositories"/>
          <xs:element ref="tools"/>
        </xs:choice>
      </xs:sequence>
      <xs:attribute name="autoBaseline" type="xs:boolean"/>
      <xs:attribute name="group"/>
      <xs:attribute name="groups"/>
      <xs:attribute name="hideRulesEdition" type="xs:boolean"/>
      <xs:attribute name="img"/>
      <xs:attribute name="users"/>
      <xs:attribute name="versionPattern"/>
      <xs:attribute name="wizardId" use="required" type="id"/>
      <xs:attribute name="projectsSelection" type="xs:boolean"/>
      <xs:attribute name="name"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="milestones">
    <xs:complexType>
      <xs:sequence minOccurs="0">
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="goals"/>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="milestone"/>
      </xs:sequence>
      <xs:attribute name="canCreateMilestone" type="xs:boolean"/>
      <xs:attribute name="canCreateGoal" type="xs:boolean"/>
      <xs:attribute name="hide" type="xs:boolean"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="goals">
    <xs:complexType>
      <xs:sequence minOccurs="0">
        <xs:element ref="goal"/>
      </xs:sequence>
      <xs:attribute name="displayableFamilies" use="required" type="list-id"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="goal">
    <xs:complexType>
      <xs:attribute name="mandatory" use="required" type="xs:boolean"/>
      <xs:attribute name="measureId" use="required" type="id"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="milestone">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="defaultGoal"/>
      </xs:sequence>
      <xs:attribute name="id" use="required" type="id"/>
      <xs:attribute name="mandatory" type="xs:boolean"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="defaultGoal">
    <xs:complexType>
      <xs:attribute name="measureId" use="required" type="id"/>
      <xs:attribute name="value" use="required" type="xs:integer"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="repositories">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="repository"/>
      </xs:sequence>
      <xs:attribute name="all" use="required" type="xs:boolean"/>
      <xs:attribute name="hide" use="required" type="xs:boolean"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="repository">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" ref="param"/>
      </xs:sequence>
      <xs:attribute name="checkedInUI" type="xs:boolean"/>
      <xs:attribute name="name" use="required"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="tools">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="tool"/>
      </xs:sequence>
      <xs:attribute name="all" type="xs:boolean"/>
      <xs:attribute name="expandedInUI" type="xs:boolean"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="tool">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="unbounded" ref="param"/>
      </xs:sequence>
      <xs:attribute name="checkedInUI" type="xs:boolean"/>
      <xs:attribute name="expandedInUI" type="xs:boolean"/>
      <xs:attribute name="name" use="required"/>
      <xs:attribute name="optional" type="xs:boolean"/>
      <xs:attribute name="projectStatusOnFailure" type="project-status"/>
      <xs:attribute name="projectStatusOnWarning" type="project-status"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="param">
    <xs:complexType>
      <xs:attribute name="availableChoices"/>
      <xs:attribute name="name" use="required"/>
      <xs:attribute name="value"/>
      <xs:attribute name="hide" type="xs:boolean"/>
    </xs:complexType>
  </xs:element>
</xs:schema>
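For example, a wizard bundle conforming to this schema might declare a wizard with tags, milestones, repositories and tools as follows. All names, measure IDs, tag types and parameter values below are illustrative, not values required by the schema:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Bundle>
  <!-- ANALYTICS, RELEASE_TYPE, NB_VIOLATIONS, etc. are hypothetical IDs -->
  <wizard wizardId="ANALYTICS" name="Software Analytics" img="wizard.png" autoBaseline="true">
    <tags>
      <tag measureId="RELEASE_TYPE" name="Release type" type="text" required="true" defaultValue="MINOR" />
    </tags>
    <milestones canCreateMilestone="true" canCreateGoal="true" hide="false">
      <goals displayableFamilies="GOALS">
        <goal measureId="NB_VIOLATIONS" mandatory="true" />
      </goals>
      <milestone id="SPRINT_END" mandatory="false">
        <defaultGoal measureId="NB_VIOLATIONS" value="0" />
      </milestone>
    </milestones>
    <repositories all="false" hide="false">
      <repository name="GIT" checkedInUI="true">
        <param name="url" value="https://example.com/repo.git" />
      </repository>
    </repositories>
    <tools all="false" expandedInUI="true">
      <tool name="SQuORE" checkedInUI="true" optional="false" projectStatusOnFailure="ERROR">
        <param name="languages" value="c;java" />
      </tool>
    </tools>
  </wizard>
</Bundle>
```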