Configuration File Structure
This page describes the structure of the configuration file.
The configuration file has two major parts: environment (env:) and jobs (jobs:).
Configuration file example
```yaml
---
env:
  contexts:
    - name: "Default Context"
      urls:
        - "http://testhtml5.vulnweb.com/"
      includePaths:
        - "http://testhtml5.vulnweb.com/.*"
      excludePaths: []
      authentication:
        method: "form"
        parameters:
          loginPageUrl: "http://testhtml5.vulnweb.com/#/popular"
          loginRequestUrl: "http://testhtml5.vulnweb.com/login"
          loginRequestBody: "username={%username%}&password={%password%}"
        verification:
          method: "response"
          loggedInRegex: "Logout"
          loggedOutRegex: "Login"
          pollFrequency: 60
          pollUnits: "requests"
          pollUrl: ""
          pollPostData: ""
      sessionManagement:
        method: "cookie"
        parameters: {}
      technology:
        exclude: []
      users:
        - name: "test_user"
          credentials:
            password: "admin"
            username: "admin"
  parameters:
    failOnError: true
    failOnWarning: false
    progressToStdout: true
  vars: {}
jobs:
  - parameters:
      scanOnlyInScope: true
      enableTags: false
      rules: []
    name: "passiveScan-config"
    type: "passiveScan-config"
  - parameters:
      context: "Default Context"
      user: "test_user"
      url: ""
      maxDuration: 0
      maxDepth: 0
      maxChildren: 0
    name: "spider"
    type: "spider"
  - parameters: {}
    name: "passiveScan-wait"
    type: "passiveScan-wait"
  - parameters:
      template: "risk-confidence-html"
      reportDir: ""
      reportTitle: "ZAP Scanning Report"
      reportDescription: ""
    name: "report"
    type: "report"
```
Environment configuration
This section of the YAML configuration file defines the application contexts that the rest of the jobs operate on.
At this stage, form-based and HTTP/NTLM authentication mechanisms are supported.
Note: When testing targets that operate on default ports (80 for HTTP, 443 for HTTPS), do not include the port in the URL. Including it (for example, http://example.com:80) may prevent the engine from crawling or testing the target: the engine normalizes the URL by stripping the default port, so the URL no longer matches the expectation within the Context, and there is nothing to interact with as part of the Context.
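For example, the first URL below is matched correctly, while the second may prevent the Context from matching at all:

```yaml
env:
  contexts:
    - name: "Default Context"
      urls:
        - "http://example.com/"      # correct - default port omitted
        # - "http://example.com:80/" # avoid - the engine strips the default
        #                            # port, so the URL no longer matches
```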
Environment Structure
```yaml
---
env:
  contexts:
    - name: "Default Context"
      urls:
        - "http://testhtml5.vulnweb.com/"
      includePaths:
        - "http://testhtml5.vulnweb.com/.*"
      excludePaths: []
      authentication:
        method: "form"
        parameters:
          loginPageUrl: "http://testhtml5.vulnweb.com/#/popular"
          loginRequestUrl: "http://testhtml5.vulnweb.com/login"
          loginRequestBody: "username={%username%}&password={%password%}"
        verification:
          method: "response"
          loggedInRegex: "Logout"
          loggedOutRegex: "Login"
          pollFrequency: 60
          pollUnits: "requests"
          pollUrl: ""
          pollPostData: ""
      sessionManagement:
        method: "cookie"
        parameters: {}
      technology:
        exclude: []
      users:
        - name: "test_user"
          credentials:
            password: "admin"
            username: "admin"
  parameters:
    failOnError: true
    failOnWarning: false
    progressToStdout: true
  vars: {}
```
Name | Description | Type / Default |
---|---|---|
contexts: | ||
name: context 1 | Name to be used to refer to this context in other jobs, mandatory | |
urls: | A mandatory list of top-level URLs; everything under each URL will be included. | |
includePaths: | List of regex patterns for URLs to be included in the scan (optional) | |
excludePaths: | List of regex patterns for URLs to be excluded from the scan (optional) | |
authentication: | ||
method: | One of 'HTTP', 'form', 'JSON', or 'script' | String |
parameters: | List of 0 or more parameters - may include any required for scripts. All of the parameters support vars except for the port. | |
hostname: | Only for 'HTTP' authentication | String |
port: | Only for 'HTTP' authentication | Int |
realm: | Only for 'HTTP' authentication | String |
loginPageUrl: | The login page URL to read before making the request, only for 'form' or 'JSON' authentication | String |
loginRequestUrl: | The login URL to request, only for 'form' or 'JSON' authentication | String |
loginRequestBody: | The login request body - if not supplied, a GET request will be used; only for 'form' or 'JSON' authentication | String |
script: | Path to a script, only for 'script' authentication | String |
scriptEngine: | The name of the script engine to use, only for 'script' authentication | String |
verification: | ||
method: | One of 'response', 'request', 'both', or 'poll' | String |
loggedInRegex: | Regex pattern for determining if logged-in | String |
loggedOutRegex: | Regex pattern for determining if logged out | String |
pollFrequency: | The poll frequency, only for 'poll' verification | |
pollUnits: | The poll units, one of 'requests,' 'seconds,' only for 'poll' verification | String |
pollUrl: | The URL to poll, only for 'poll' verification | String |
pollPostData: | Post data to include in the poll, only for 'poll' verification | String |
pollAdditionalHeaders: | List of additional headers for poll request, only for 'poll' verification | |
name: | The header name | |
value: | The header value | |
sessionManagement: | ||
method: | One of 'cookie', 'http', or 'script' | String |
parameters: | List of 0 or more parameters - may include any required for scripts. | |
script: | Path to a script, only for 'script' session management | String |
scriptEngine: | The name of the script engine to use, only for 'script' session management | String |
technology: | ||
exclude: | List of tech to exclude, as per https://www.zaproxy.org/techtags/ (just use last names) | |
users: | List of one or more users available to use for authentication | |
name: | The name of the user to be used by the jobs | String |
credentials: | List of user credentials - may include any required for scripts. | |
username: | The username to use when authenticating, vars supported | String |
password: | The password to use when authenticating, vars supported | String |
vars: | List of 0 or more custom variables to be used throughout the config file | |
myVarOne: CustomConfigVarOne | Can be used as ${myVarOne} anywhere throughout the config | |
myVarTwo: ${myVarOne}.VarTwo | Can refer other vars | |
parameters: | ||
failOnError: true | If set, exit on an error | |
failOnWarning: false | If set, exit on a warning | |
progressToStdout: true | If set, will write job progress to stdout. | |
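The 'response' verification method above can be illustrated with a short sketch. This is a hypothetical re-implementation for illustration only, not the engine's actual code; the regex values are taken from the example configuration:

```python
import re

def is_logged_in(response_body: str,
                 logged_in_regex: str = "Logout",
                 logged_out_regex: str = "Login") -> bool:
    """Mimics 'response' verification: a match on the logged-in pattern
    wins; a match on the logged-out pattern means the session expired."""
    if re.search(logged_in_regex, response_body):
        return True
    if re.search(logged_out_regex, response_body):
        return False
    # Neither pattern matched - assume the session is still valid.
    return True

print(is_logged_in('<a href="/logout">Logout</a>'))       # True
print(is_logged_in('<form action="/login">Login</form>')) # False
```

This is why the patterns should be chosen so that exactly one of them appears on any page: pages matching neither give no signal either way.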
Jobs Supported
This section will detail the jobs supported by our solution.
passiveScan-config
The passive scanner runs against all requests and responses that are generated by the engine or proxied through it. If you want to change the passive scan configuration, you should typically do so before running any other jobs. However, you can run this job later, or multiple times, if you want different jobs to use different passive scan configurations.
The job saves the current passive scan configuration when a plan starts and resets it when the plan ends. This is primarily to ensure the scanOnlyInScope setting is not changed - the default is 'true' for the job but 'false' in the GUI.
Note that if you set disableAllRules to true, the rules will stay disabled when the plan has finished. Automatically re-enabling them when the plan finishes could result in the rules becoming enabled while the passive scan queue is still being processed - for example, if the passiveScan-wait job is not used, or if it is used but with the maxDuration option set.
In versions up to and including 0.16.0, running this job with the default settings would change scanOnlyInScope to 'true' in the GUI. This proved confusing, as many users use the GUI without setting a scope - when scanOnlyInScope is set to 'true' and no scope is defined, no passive scan alerts are raised.
Job structure
```yaml
- type: passiveScan-config
  name: "passiveScan-config"
  parameters:
    scanOnlyInScope: true
    enableTags: false
  rules: []
```
Possible parameters
- disableAllRules: <Bool> (Default - false)
If true, all rules will be disabled before the settings in the rules section are applied.
- enableTags: <Bool> (Default - false)
Enable passive scan tags - enabling them can impact performance.
- id: <int>
The rule id.
- maxAlertsPerRule: <int> (Default - 10)
Maximum number of alerts to raise per rule.
- maxBodySizeInBytesToScan: <int> (Default - 0 - will scan all messages)
Maximum body size to scan.
- name: <string>
The name of the rule for documentation purposes - this is not required or actually used.
- rules:
A list of one or more passive scan rules and associated settings which override the defaults.
- scanOnlyInScope: <Bool> (Default - true)
Only scan URLs in scope.
- threshold: <string> (Default - Medium)
The Alert Threshold for this rule (Off, Low, Medium, High).
Name | Description | Type / Default |
---|---|---|
maxAlertsPerRule: | Maximum number of alerts to raise per rule | Int, default: 10 |
scanOnlyInScope: | Only scan URLs in scope | Bool, default: true |
maxBodySizeInBytesToScan: | Maximum body size to scan | Int, default: 0 - will scan all messages |
enableTags: | Enable passive scan tags - enabling them can impact performance | Bool, default: false |
disableAllRules: | If true, all rules will be disabled before the settings in the rules section are applied | Bool, default: false |
rules: | A list of one or more passive scan rules and associated settings which override the defaults | |
id: | The rule id | Int |
name: | The name of the rule for documentation purposes - this is not required or actually used | String |
threshold: | The Alert Threshold for this rule, one of Off, Low, Medium, High | String, default: Medium |
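Putting these parameters together, a sketch of a passiveScan-config job that disables all rules and then re-enables a single one at a higher threshold might look like this (rule id 10020 and its name are purely illustrative):

```yaml
- type: passiveScan-config
  name: "passiveScan-config"
  parameters:
    maxAlertsPerRule: 10
    scanOnlyInScope: true
    disableAllRules: true
  rules:
    - id: 10020                          # illustrative rule id
      name: "Anti-clickjacking Header"   # documentation only, not used
      threshold: "High"
```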
Spider
Description
The Spider is a tool that automatically discovers new resources (URLs) on a particular site. It begins with a list of URLs to visit, called seeds, which depend on how the Spider is run. The Spider then visits these URLs, identifies all the hyperlinks on the page and adds them to the list of URLs to visit and the process continues recursively as long as new resources are found.
Jobs structure
```yaml
- type: spider
  name: "spider"
  parameters:
    context: "Default Context"
    user: "test_user"
    url: ""
    maxDuration: 0
    maxDepth: 0
    maxChildren: 0
```
Possible parameters
- browserId: <string> (Default - firefox-headless)
Browser ID to use.
- clickDefaultElems: <boolean> (Default - true)
When enabled, only the default elements are clicked: 'a', 'button', and 'input'; to be modified only for specific scenarios of spidering applications with more complex Ajax interactions.
- clickElemsOnce: <boolean> (Default - true)
When enabled, each element is clicked only once; to be modified only for specific scenarios of spidering applications with more complex Ajax interactions.
- context: <string>
The name of the context, as defined in the env section.
- elements:
A list of HTML elements to click - will be ignored unless clickDefaultElems is false.
"a" - represents the HTML element LINK.
"button" - represents the HTML element BUTTON.
"input" - represents the HTML element INPUT.
- eventWait: <integer> (Default - 1000)
The time in milliseconds to wait after a client-side event is fired.
- inScopeOnly: <boolean> (Default - true)
If true, any URLs requested which are out of scope will be ignored; for microservices / multi-endpoint applications the setting should be set to false.
- maxCrawlDepth: <integer> (Default - 10, 0 is unlimited)
The maximum depth of analysis up to which the spider will continue following links when crawling the application; it will impact the duration of the scan and should reflect the goal of the DAST scan.
- maxCrawlStates: <integer> (Default - 0 is unlimited)
The maximum number of crawl states the crawler should crawl.
- maxDuration: <integer> (Default - 0 is unlimited)
Maximum duration time for spider analysis; it will impact the duration of the scan and should reflect the goal of the DAST scan.
- numberOfBrowsers: <integer> (Default - 1)
The number of browsers the spider will use, more will be faster but will use up more memory.
- randomInputs: <boolean> (Default - true)
When enabled random values will be entered into the input element.
- reloadWait: <integer> (Default - 1000)
The time in milliseconds to wait after the URL is loaded.
- runOnlyIfModern: <boolean> (Default - false)
If true then the spider will only run if a "modern app" alert is raised; it is recommended to force the spider by setting it to false.
- url: <string> (Default - inherited from context)
URL to start spidering.
- user: <string> (Default - inherited from context)
An optional user to use for authentication; must be defined in the env.
Name | Description | Type / Default |
---|---|---|
context: | The name of the context, as defined in the env section | String |
user: | An optional user to use for authentication; must be defined in the env | String, inherited from Context |
url: | URL to start spidering | String, inherited from Context |
maxDuration: | Maximum duration time for spider analysis; it will impact the duration of the scan and should reflect the goal of the DAST scan | Integer, default: 0 unlimited |
maxCrawlDepth: | The maximum depth of analysis up to which the spider will continue following links when crawling the application; it will impact the duration of the scan and should reflect the goal of the DAST scan | Integer, default: 10, 0 is unlimited |
numberOfBrowsers: | The number of browsers the spider will use, more will be faster but will use up more memory | Integer, default: 1 |
runOnlyIfModern: | If true then the spider will only run if a "modern app" alert is raised; it is recommended to force the spider by setting it to false | Boolean, default: false |
inScopeOnly: | If true, any URLs requested which are out of scope will be ignored; for microservices / multi-endpoint applications the setting should be set to false | Boolean, default: true |
browserId: | Browser ID to use | String, default: firefox-headless |
clickDefaultElems: | When enabled, only the default elements are clicked: 'a', 'button', and 'input'; to be modified only for specific scenarios of spidering applications with more complex Ajax interactions | Boolean, default: true |
clickElemsOnce: | When enabled, each element is clicked only once; to be modified only for specific scenarios of spidering applications with more complex Ajax interactions | Boolean, default: true |
eventWait: | The time in milliseconds to wait after a client-side event is fired | Integer, default: 1000 |
maxCrawlStates: | The maximum number of crawl states the crawler should crawl | Integer, default: 0 unlimited |
randomInputs: | When enabled, random values will be entered into the input element | Boolean, default: true |
reloadWait: | The time in milliseconds to wait after the URL is loaded | Integer, default: 1000 |
elements: | A list of HTML elements to click - will be ignored unless clickDefaultElems is false | |
"a" | Represents the HTML element LINK | |
"button" | Represents the HTML element BUTTON | |
"input" | Represents the HTML element INPUT |
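As an example of the elements parameter, the sketch below restricts clicking to links only; as noted above, this requires clickDefaultElems to be false:

```yaml
- type: spider
  name: "spider"
  parameters:
    context: "Default Context"
    user: "test_user"
    clickDefaultElems: false
    elements:
      - "a"    # only links are clicked; "button" and "input" are skipped
```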
Spider Ajax
The AJAX Spider is a crawler for AJAX-rich sites, based on Crawljax. You can use it to identify the pages of the targeted site, and you can combine it with the (normal) spider for better results.
The spiderAjax job allows you to run the Ajax Spider - it is slower than the traditional spider but handles modern web applications well.
If runOnlyIfModern is set to true, then the passiveScan-wait job MUST be run before this one (as well as after it), and the Modern Web Application rule must be installed and enabled. If either of these conditions is not met, the Ajax spider will always run and a warning will be output. If both are met and no Modern Web Application alert is raised, the assumption is made that this is a traditional app and the Ajax spider is therefore not needed.
Jobs structure
```yaml
# The Ajax spider is slower than the normal spider but handles modern apps well.
- type: spiderAjax
  name: "spiderAjax"
  parameters:
    context: "Default Context"
    user: "test_user"
    url: ""
    maxDuration: 60
    maxCrawlDepth: 10
    numberOfBrowsers: 1
    runOnlyIfModern: false
```
Possible parameters
Name | Description | Type / Default |
---|---|---|
context: | The name of the context, as defined in the env section | String |
user: | User to be used for authentication (optional), generally inherited from the env context | String, inherited from Context |
url: | URL to start spidering | String, inherited from Context |
maxDuration: | Maximum duration time for spider analysis; it will impact the duration of the scan and should reflect the goal of the DAST scan | Integer, default: 0 unlimited |
maxCrawlDepth: | Maximum depth of analysis up to which the spider will continue following links when crawling the application; it will impact the duration of the scan and should reflect the goal of the DAST scan | Integer, default: 10, 0 is unlimited |
numberOfBrowsers: | The number of browsers the spider will use, more will be faster but will use up more memory | Integer, default: 1 |
runOnlyIfModern: | If true then the spider will only run if a "modern app" alert is raised; it is recommended to force the spider by setting it to false | Boolean, default: false |
inScopeOnly: | If true, any URLs requested which are out of scope will be ignored; for microservices / multi-endpoint applications the setting should be set to false | Boolean, default: true |
browserId: | Browser ID to use | String, default: firefox-headless |
clickDefaultElems: | When enabled, only the default elements are clicked: 'a', 'button', and 'input'; to be modified only for specific scenarios of spidering applications with more complex Ajax interactions | Boolean, default: true |
clickElemsOnce: | When enabled, each element is clicked only once; to be modified only for specific scenarios of spidering applications with more complex Ajax interactions | Boolean, default: true |
eventWait: | The time in milliseconds to wait after a client-side event is fired | Integer, default: 1000 |
maxCrawlStates: | The maximum number of crawl states the crawler should crawl | Integer, default: 0 unlimited |
randomInputs: | When enabled, random values will be entered into the input element | Boolean, default: true |
reloadWait: | The time in milliseconds to wait after the URL is loaded | Integer, default: 1000 |
elements: | A list of HTML elements to click - will be ignored unless clickDefaultElems is false | |
"a" | Represents the HTML element LINK | |
"button" | Represents the HTML element BUTTON | |
"input" | Represents the HTML element INPUT |
passiveScan-wait
Description
This job waits for the passive scanner to finish scanning the requests and responses in the current queue. You should typically run this job after the jobs that explore your application, such as the spider jobs or those that import API definitions. If any more requests are sent by the engine or proxied after this job has run then they will be processed by the passive scanner. You can run this job as many times as you need to.
Jobs structure
```yaml
- type: passiveScan-wait
  parameters:
    maxDuration: 5
```
Possible parameters
- maxDuration: <int> (Default - 0 is unlimited)
Max time to wait for the passive scanner.
Name | Description | Type / Default |
---|---|---|
maxDuration: | Max time to wait for the passive scanner | Int, default: 0 unlimited |
openapi
Description
This job allows you to import all of the endpoints defined in an OpenAPI (Swagger) definition. Versions 1.2, 2.0, and 3.0 are supported.
Jobs structure
```yaml
- type: openapi
  parameters:
    apiFile: "C:\\Users\\OpenAPI.yaml"
    apiUrl: ""
    context: "Default Context"
    targetUrl: "http://TestURL.com/*"
```
Possible parameters
- apiFile: <string> (Default - null, no definition will be imported)
The local file containing the OpenAPI definition.
- apiUrl: <string> (Default - null, no definition will be imported)
The URL containing the OpenAPI definition, as shown in the structure example above.
- context: <string> (Default - null, no context will be used)
Context to use when importing the OpenAPI definition.
- targetUrl: <string> (Default - null, the target will not be overridden)
URL which overrides the target defined in the definition.
Name | Description | Type / Default |
---|---|---|
apiFile: | The local file containing the OpenAPI definition | String, default: null, no definition will be imported |
apiUrl: | The URL containing the OpenAPI definition | String, default: null, no definition will be imported |
context: | Context to use when importing the OpenAPI definition | String, default: null, no context will be used |
targetUrl: | URL which overrides the target defined in the definition | String, default: null, the target will not be overridden |
activescan
Description
This job runs the active scanner. This actively attacks your applications and should therefore only be used on applications that you have permission to test.
By default, this job will actively scan the first context defined in the environment and so none of the parameters are mandatory.
Job Structure
```yaml
- parameters: {}
  policyDefinition:
    rules: []
  name: "activeScan"
  type: "activeScan"
```
Possible parameters
- addQueryParam: <bool> (Default - false)
If set will add an extra query parameter to requests that do not have one.
- context: <string> (Default - first context)
Name of the context to attack.
- defaultPolicy: <string> (Default - default policy)
The name of the default scan policy to use.
- defaultStrength: <string>
The default Attack Strength for all rules, one of Low, Medium, High, or Insane (not recommended).
- defaultThreshold: <string> (Default - Medium)
The default Alert Threshold for all rules, one of Off, Low, Medium, or High.
- delayInMs: <int> (Default - 0)
The delay in milliseconds between each request, used to reduce the strain on the target.
- handleAntiCSRFTokens: <bool> (Default - false)
If set, automatically handles anti-CSRF tokens.
- id: <int>
The rule id as per https://www.zaproxy.org/docs/alerts/.
- injectPluginIdInHeader: <bool>
If set, the relevant rule ID will be injected into the X-ZAP-Scan-ID header of each request.
- maxRuleDurationInMins: <int> (Default - 0 unlimited)
The max time in minutes any individual rule will be allowed to run for.
- maxScanDurationInMins: <int> (Default - 0 unlimited)
The max time in minutes the active scanner will be allowed to run for.
- name: <string>
The name of the rule for documentation purposes - this is not required nor actually used.
- policyDefinition:
The policy definition - only used if 'policy' is not set.
- policy: <string> (Default - default policy)
Name of the scan policy to be used.
- rules:
A list of one or more active scan rules and associated settings which override the defaults.
- scanHeadersAllRequests: <bool> (Default - false)
If set then the headers of requests that do not include any parameters will be scanned.
- strength: <string> (Default - Medium)
The Attack Strength for this rule is either Low, Medium, High, or Insane.
- threadPerHost: <int> (Default - 2)
The max number of threads per host.
- threshold: <string> (Default - Medium)
The Alert Threshold for this rule, is either Off, Low, Medium, or High.
- user: <string>
An optional user to use for authentication, must be defined in the environment.
Name | Description | Type / Default |
---|---|---|
context: | Name of the context to attack | String, default: first context |
user: | An optional user to use for authentication, must be defined in the environment | String |
policy: | Name of the scan policy to be used | String, default: Default Policy |
maxRuleDurationInMins: | The max time in minutes any individual rule will be allowed to run for | Int, default: 0 unlimited |
maxScanDurationInMins: | The max time in minutes the active scanner will be allowed to run for | Int, default: 0 unlimited |
addQueryParam: | If set will add an extra query parameter to requests that do not have one | Bool, default: false |
defaultPolicy: | The name of the default scan policy to use | String, default: Default Policy |
delayInMs: | The delay in milliseconds between each request, used to reduce the strain on the target | Int, default: 0 |
handleAntiCSRFTokens: | If set, automatically handles anti-CSRF tokens | Bool, default: false |
injectPluginIdInHeader: | If set, the relevant rule ID will be injected into the X-ZAP-Scan-ID header of each request | Bool |
scanHeadersAllRequests: | If set then the headers of requests that do not include any parameters will be scanned | Bool, default: false |
threadPerHost: | The max number of threads per host | Int, default: 2 |
policyDefinition: | The policy definition - only used if 'policy' is not set | |
defaultStrength: | The default Attack Strength for all rules, one of Low, Medium, High, or Insane (not recommended) | String |
defaultThreshold: | The default Alert Threshold for all rules, one of Off, Low, Medium, or High | String, default: Medium |
rules: | A list of one or more active scan rules and associated settings which override the defaults | |
id: | The rule id as per https://www.zaproxy.org/docs/alerts/ | Int |
name: | The name of the rule for documentation purposes - this is not required nor actually used | String |
strength: | The Attack Strength for this rule, one of Low, Medium, High, or Insane | String, default: Medium |
threshold: | The Alert Threshold for this rule, one of Off, Low, Medium, or High | String, default: Medium |
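Combining these parameters, an activeScan job that reduces the strain on the target and overrides a single rule could be sketched as follows (rule id 40012 is illustrative):

```yaml
- type: "activeScan"
  name: "activeScan"
  parameters:
    context: "Default Context"
    user: "test_user"
    delayInMs: 50            # throttle requests to reduce target load
    threadPerHost: 2
    maxScanDurationInMins: 60
  policyDefinition:
    defaultStrength: "Medium"
    defaultThreshold: "Medium"
    rules:
      - id: 40012            # illustrative rule id
        strength: "Low"
        threshold: "High"
```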
report
Description
The report job allows you to generate reports using any of the installed report templates.
Notice
This job is only supported in the pipeline approach (using the Docker image to run the DAST scan) and only for the traditional-pdf format.
Job Structure example
```yaml
- parameters:
    template: "traditional-pdf"
    reportDir: ""
    reportTitle: "ZAP Scanning Report"
    reportDescription: ""
  name: "report-pdf"
  type: "report"
```
Possible parameters
- confidences: <list> (Default - all)
The confidences to include in this report. High, Medium, Low, or falsepositive.
- displayReport: <boolean> (Default - false)
Display the report when generated.
- reportDescription: <string>
The report description.
- reportDir: <string>
The directory into which the report will be written.
- reportFile: <string> (Default - {{yyyy-MM-dd}}-ZAP-Report-[[site]])
The report file pattern.
- reportTitle: <string>
The report title.
- risks: <list> (Default - all)
The risks to include in this report. High, Medium, Low, or Info.
- sections: <list> (Default - all)
The template sections to include in this report - see the relevant template.
- template: <string> (Default - traditional-html)
The template ID.
Name | Description | Type / Default |
---|---|---|
template: | The template id | String, default: traditional-html |
reportDir: | The directory into which the report will be written | String |
reportFile: | The report file name pattern | String, default: {{yyyy-MM-dd}}-ZAP-Report-[[site]] |
reportTitle: | The report title | String |
reportDescription: | The report description | String |
displayReport: | Display the report when generated | Boolean, default: false |
risks: | The risks to include in this report | List, default: all |
confidences: | The confidences to include in this report | List, default: all |
sections: | The template sections to include in this report - see the relevant template | List, default all |
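Building on the structure example above, a sketch of a report job that keeps only higher-severity, higher-confidence findings might look like this (the placement of the risks and confidences lists alongside parameters is an assumption based on the job-level lists described above):

```yaml
- type: "report"
  name: "report-pdf"
  parameters:
    template: "traditional-pdf"
    reportDir: ""
    reportTitle: "ZAP Scanning Report"
  risks:
    - high
    - medium
  confidences:
    - high
    - medium
```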
postman
This job lets you import Postman collections via a URL or a local file.
The variables parameter works as follows:
Any variables defined in the collection will be replaced with their values. Additionally, you can provide a comma-separated list of variables as key-value pairs in the format key1=value1,key2=value2,... These variables take precedence over the ones defined in the collection.
Jobs Structure
```yaml
jobs:
  - parameters:
      collectionFile: "/Users/user/postman.yaml"
      collectionUrl: "http://TestURL.com/*"
      variables: "key1=value1,key2=value2"
    name: "postman"
    type: "postman"
```
- collectionFile: <string> (Default - null, no collection will be imported)
The local file containing the Postman collection.
- collectionUrl: <string> (Default - null, no collection will be imported)
The URL containing the Postman collection.
- variables: <string> (Default - null, no additional variables will be imported)
Comma-separated list of variables as key-value pairs in the format key1=value1,key2=value2,... These variables take precedence over the ones defined in the collection.
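The precedence rule - variables supplied via the variables parameter override those defined in the collection - can be illustrated with a small sketch (a hypothetical helper for illustration, not the importer's actual code):

```python
def merge_variables(collection_vars: dict, variables_param: str) -> dict:
    """Parse 'key1=value1,key2=value2,...' and overlay it on the
    variables defined in the Postman collection."""
    overrides = {}
    if variables_param:
        for pair in variables_param.split(","):
            key, _, value = pair.partition("=")
            overrides[key.strip()] = value.strip()
    # dict merge: the right-hand side wins, so overrides take precedence
    return {**collection_vars, **overrides}

merged = merge_variables({"host": "http://dev.local", "token": "abc"},
                         "host=http://staging.local")
print(merged)  # {'host': 'http://staging.local', 'token': 'abc'}
```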