Conversation

@vbakke vbakke commented Sep 22, 2025

Added missing descriptions and assessments on level 1 activities.

Feel free to comment and adjust. I reckon we should aim to avoid repeating the same message in multiple attributes of the same activity. But I might still be doing that. It is nice to have more sets of eyes to improve it...

@vbakke vbakke requested a review from wurstbrot September 22, 2025 15:14

@wurstbrot wurstbrot left a comment

Looks good in general.

For the assessment attribute: it was written so that users understand what they have to provide in order to get it marked as implemented. I am not insisting that you re-write it, but just so you know.

E.g.

The organization has a process for triaging and documenting false positives and accepted risks

could be

Provide a sample, at most one year old, of a false positive or accepted finding, including the date, description, and expiry date

- Deployment preparation
This can be done with a Jenkinsfile, Maven, or similar tools.
A *defined build process* automates these steps to ensure consistency, reproducibility, and security. Automation reduces human error and enforces security controls. Use tools such as Jenkins, GitHub Actions, GitLab CI, or Maven to codify the process.
Contributor:

reproducibility is an extra activity (e.g. level 3) from my point of view.

Collaborator Author:

I agree. And it doesn't actually ensure security either. (It helps avoid getting accidental security misconfiguration into the build, but ensuring security is a bit of a stretch.)

What about:

Basing the build process on human memory may lead to inconsistencies and security misconfigurations.

A defined build process can automate these steps to ensure consistency, avoiding accidental omissions or misconfigurations. Use tools such as Jenkins, GitHub Actions, GitLab CI, or Maven to codify the process.

A simplified, but still defined, build process may be a checklist of the steps to be performed.

This takes into account your comment for the assessment below, regarding simple README instructions.
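As an illustration of the codified-process idea above, here is a minimal workflow sketch (GitHub Actions syntax; the Maven commands and the dependency-check step are placeholder assumptions, not part of the proposed activity text):

```yaml
# Illustrative only: a codified build process in GitHub Actions syntax.
# The Maven commands and the dependency-check step are placeholder
# assumptions; substitute whatever fits your environment.
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: mvn --batch-mode package
      - name: Run tests
        run: mvn --batch-mode verify
      - name: Scan dependencies
        run: mvn org.owasp:dependency-check-maven:check
```

The point is only that each manual step becomes an explicit, versioned step in the pipeline; the simplified checklist variant would list the same steps in a README instead.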

Contributor:

ok

risk:
Performing builds without a defined process is error prone; for example,
as a result of incorrect security related configuration.
Performing builds without a defined and automated process is error-prone and increases the risk of security misconfigurations, unauthorized changes, and supply chain attacks.
Contributor:

How is a defined build helping with supply chain attacks?

Collaborator Author:

Deduced from 'managing dependencies'. But I totally agree.

How about:

risk:
Without a defined and automated build process, the risk increases of accidental mistakes, forgotten test activities, and insecure misconfigurations.

measure:
A well defined build process lowers the possibility of errors during
the build process.
A well-defined, automated, and auditable build process lowers the possibility of errors and unauthorized changes during the build process. It also enables traceability and rapid response to incidents.
Contributor:

How is it helping with "It also enables traceability and rapid response to incidents."?

Collaborator Author:

And: The sentence is not really a measure.

How about:

measure:
Find a tool that suits your environment. Add your manual build steps, including steps for running tests, scanning, and preparing for deployment.

measure: |
Develop, document, and communicate a BCDR plan for all critical components. The plan must define roles and responsibilities, Service Level Agreements (SLAs), Recovery Point Objectives (RPOs), Recovery Time Objectives (RTOs), and failover procedures. Ensure all relevant personnel are trained and the plan is reviewed and updated regularly.
assessment: |
- The organization has a documented BCDR plan covering all critical components.
Contributor:

Why do you recommend to have it for the organization and not the application?

Collaborator Author:

Hm. Because to me it was natural to think that it is the organisation that has a BCDR plan. An application does not have a plan. But the plan could be for the applications in question.

How about:

There is a documented BCDR plan covering all critical components of the application(s).

- [Kubescape with VEX](https://kubescape.io/blog/2023/12/07/kubescape-support-for-vex-generation/)
- [OWASP DefectDojo Risk Acceptance](https://docs.defectdojo.com/en/working_with_findings/findings_workflows/risk_acceptances/) and [False Positive Handling](https://docs.defectdojo.com/en/working_with_findings/intro_to_findings/#triage-vulnerabilities-using-finding-status)
assessment: |
The organization has a process for triaging and documenting false positives and accepted risks
Contributor:

The organization might have different tools for FP handling for different departments.

Collaborator Author:

True.

How about:

A process is defined for triaging and documenting false positives and accepted risks

measure: |
- Make it a rule that all high or critical security findings must be fixed before the software is approved for release or use.
- Track these issues and make sure they are resolved quickly.
- Pay extra attention to Known Exploited Vulnerabilities (KEV) from CISA and EPSS scores when prioritizing fixes.
Collaborator Author:

To be honest, I struggle to understand the content in Exploit likelihood estimation.
I agree with both sentences in the activity. But I struggle to understand what those mean in practical life.

KEV
I tried scanning our production war/jar/ear files using grype, and the output contained KEV labels as well as severity and EPSS.

Do you mean that prioritizing based on KEV is a different activity from prioritizing based on Severity?

Anyway, using grype was pretty straightforward. So I don't think that needs to be a level 3 activity anymore.
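The gating idea discussed here could be sketched as a pipeline step (illustrative only; the anchore/scan-action input names shown are assumptions to verify against the action's own documentation):

```yaml
# Illustrative only: fail the pipeline on high/critical findings using
# grype via anchore/scan-action. Input names (path, fail-build,
# severity-cutoff) are assumptions; check the action's documentation.
- name: Scan built artifact for known vulnerabilities
  uses: anchore/scan-action@v3
  with:
    path: "./target"
    fail-build: true
    severity-cutoff: high
```

This only covers the pipeline view; scanning the deployed production artifacts, as discussed below, needs a separate scheduled scan.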

Contributor:

Yes, adding EPSS / KEV is an extra step. A lot of (old) tools do not have that.

Vulnerabilities with severity high or higher are added to the quality
gate.
description: |
All security problems that are rated as "high" or "critical" must be fixed before the software can be released or used in production. This means that if a serious vulnerability is found, it cannot be ignored or postponed.
Contributor:

The old text is not clear, and it leaves room for interpretation of what a "quality gate" is.

For known vulnerabilities, which one is better? Scanning for known vuln in prod or in the pipeline?
From my point of view scanning for known vuln. in prod is much more important than somewhere in a pipeline.

The problem arises because we do not have an extra SCA sub-dimension. It might be time to add it (not in this PR)?

Collaborator Author:

I think scanning the production environment is better than just scanning the pipeline.

Applications / services that are updated infrequently will "never" be scanned if we only do it in the pipeline.

And a dependency vulnerability may "occur" after the application is deployed, without anyone changing a single line in the application.

SCA
Regarding an SCA sub-dimension: in a separate branch I have been adding more tags on activities (not yet pushed to your repo), such as 'scan', 'sca', 'sast', and 'dast'. Those can be used in the meantime, without creating a totally new sub-dimension.
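A sketch of that tagging idea (the activity keys below are placeholders, and it assumes the activity schema accepts a `tags` list):

```yaml
# Hypothetical sketch of tagging activities instead of adding a new
# sub-dimension. Activity keys are placeholders; the tag values are
# the ones named in the comment above.
some-sca-activity:    # placeholder activity key
  tags: [scan, sca]
some-dast-activity:   # placeholder activity key
  tags: [scan, dast]
```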

@vbakke vbakke left a comment

Thank you for a thorough review.

Looks like I skipped Implementation and Test and Verification. I can try to add them soon.

However, could you please elaborate some on Default settings for intensity? The current text is very short, and I struggle to understand the boundaries for the activity.

A defined deployment process is a documented and automated set of steps for releasing software into production. It ensures that deployments are consistent, secure, and auditable, reducing the risk of errors and unauthorized changes. This process should include validation, approval, and rollback mechanisms.
risk: >-
Deployment of insecure or malfunctioning artifacts.
Deployment based on human routines is error-prone, and may result in deployment of insecure or malfunctioning artifacts.
Contributor:

Yes, a README is well defined.
Building and testing of artifacts in virtual environments is there so that build jobs are not all run on the same server without isolation.

assessment: |
- Deployment process is documented and available to relevant staff
- All deployment steps are automated
- Rollback procedures are defined and tested [Keep??? Delete???]
Contributor:

That is not part of this activity, I think. That is *Rolling update on deployment*.

measure: |
Make security consulting available to teams on request, ensuring that expert advice is accessible when needed to address security concerns during development.
assessment: |
Records show that teams have access to security consulting services and have used them when needed. Documentation of consultations and resulting actions is available for review.
Contributor:

For the assessment maybe:

  • Show evidence that an IT security expert is available for questions at least quarterly.

Your suggestion is a bit like "buy a consultant", which might not be needed (for sure it comes from my wording beforehand).

risk: Attackers gaining access to internal systems and application interfaces
measure: All internal systems are using simple authentication
assessment: |
- Demonstrate that every team member has appropriate access (least privilege).
Contributor:

From my point of view, it is the opposite:
- Demonstrate that every team member has the least access possible.

Locally stored system logs can be manipulated by unauthorized attackers, or might be corrupted or lost after an incident. In addition, it is hard to aggregate logs.
measure: |
- Implement a centralized logging solution for all critical systems.
- System logs must be securely transmitted and stored in a central repository, protected from unauthorized access and modification.
Contributor:

"secure" transmission is not part of this activity

Collect and monitor key system metrics, including CPU, memory, and disk usage. Set up alerts for abnormal resource consumption or patterns that may indicate incidents or attacks.
assessment: |
- Basic system metrics are monitored and reviewed regularly
- Alerting for values outside given thresholds is implemented
Contributor:

see above

description: |
Security tests may produce false positives—findings that are incorrectly identified as vulnerabilities.
It is important to distinguish these from true vulnerabilities to avoid wasting time and resources on non-issues.
Contributor:

true positive vulnerabilities

- Track these issues and make sure they are resolved quickly.
- Pay extra attention to Known Exploited Vulnerabilities (KEV) from CISA and EPSS scores when prioritizing fixes.
assessment: |
There is clear evidence that all high or critical security issues are tracked and fixed before release. No high or critical issues remain open in production systems.
Contributor:

As you agreed beforehand that "production view" is better than "pipeline view", this sentence needs to be aligned, e.g.:

  • Provide evidence that vulnerabilities are treated within the defined time frame in production, for example via the DSOMM activity *Number of vulnerabilities/severity*, or *Patching mean time to resolution* via a PR with extra deployment statistics.

- 5.25
implementation: []
implementation:
- $ref: src/assets/YAML/default/implementations.yaml#/implementations/cisa-kev
Contributor:

No KEV here.
