
SAMM and BSIMM

  • By Brian Glas
  • 11 Oct, 2020

Which one should I choose?

Building Security In Maturity Model (BSIMM)

and

Software Assurance Maturity Model (SAMM)


BSIMM – https://www.bsimm.com/
OWASP SAMM – https://owaspsamm.org


BSIMM (Building Security In Maturity Model) and SAMM (Software Assurance Maturity Model) share a common origin dating back to 2008-2009. I’m frequently asked what is similar and what is different between the two models, so I wrote up this comparison to help organizations understand which of the two may be a better fit for their needs.

Type of Model

BSIMM is a descriptive model.

The BSIMM is primarily a measuring stick for software security. The best way to use it is to compare and contrast your own initiative with the data about what other organizations are doing. The BSIMM also functions as a roadmap for an SSI. You can identify your own goals and objectives, then refer to the BSIMM to determine which additional activities make sense for you.

The purpose of the BSIMM is to quantify the activities carried out by various kinds of SSIs across many organizations. Because these initiatives use different methodologies and different terminology, the BSIMM requires a framework that allows us to describe any initiative in a uniform way. Our software security framework (SSF) and activity descriptions provide a common vocabulary for explaining the salient elements of an SSI, thereby allowing us to compare initiatives that use different terms, operate at different scales, exist in different parts of the organizational chart, operate in different vertical markets, or create different work products. [1]

As a descriptive model, the only goal of the BSIMM is to observe and report. We like to say we visited a neighborhood to see what was happening and observed that “there are robot vacuum cleaners in X of the Y houses we visited.” Note that the BSIMM does not say, “all houses must have robot vacuum cleaners,” “robots are the only acceptable kind of vacuum cleaners,” “vacuum cleaners must be used every day,” or any other value judgements. We offer simple observations simply reported. [2]

SAMM is a prescriptive model.

SAMM is a prescriptive model, an open framework which is simple to use, fully defined, and measurable. The solution details are easy enough to follow even for non-security personnel. It helps organizations analyze their current software security practices, build a security program in defined iterations, show progressive improvements in secure practices, and define and measure security-related activities.

SAMM was defined with flexibility in mind so that small, medium, and large organizations using any style of development can customize and adopt it. It provides a means of knowing where your organization is on its journey towards software assurance and understanding what is recommended to move to the next level of maturity.

SAMM does not insist that all organizations achieve the maximum maturity level in every category. Each organization can determine the target maturity level for each Security Practice that is the best fit and adapt the available templates for their specific needs. [3]

The value of OWASP SAMM lies in providing a means to know where your organization is on its journey towards software assurance and understand what is recommended to move to the next level of maturity. OWASP SAMM does not insist that all organizations achieve maturity level 3 in every category. Indeed, you determine the target maturity level for each Security Practice that is the best fit for your organization and its needs. OWASP SAMM provides a number of templates for typical organizations to this end, but we recommend that you adapt these to the needs of your organization. [4]
Maturity Levels

“The BSIMM is not a traditional maturity model where a set of activities are repeated at multiple levels of depth and breadth—do something at level 1, do it more at level 2, do it better at level 3, and so on. Instead, the BSIMM comprises a set of unique activities, with activity levels used only to distinguish the relative frequency with which the activities are observed in organizations. Frequently observed activities are designated as level 1, less frequently observed activities are designated level 2, and infrequently observed activities are designated level 3.” [5]

In SAMM, each of the security practices has three defined maturity levels and an implicit starting point at zero. The details for each level differ between the practices, but they generally represent:
 0 – Implicit starting point representing the activities in the practice being unfulfilled
 1 – Initial understanding and ad-hoc provision of security practice
 2 – Increase efficiency and/or effectiveness of the security practice
 3 – Comprehensive mastery of the security practice at scale
Primary Categories

BSIMM has four high-level domains:
 - Governance
 - Intelligence
 - SSDL Touchpoints
 - Deployment
SAMM has five business functions:
 - Governance
 - Design
 - Implementation
 - Verification
 - Operations
Secondary Categories

BSIMM has three practices per domain:
Governance
 - Strategy and Metrics
 - Compliance and Policy
 - Training

Intelligence
 - Attack Models
 - Security Features and Design
 - Standards and Requirements

SSDL Touchpoints
 - Architecture Analysis
 - Code Review
 - Security Testing

Deployment
 - Penetration Testing
 - Software Environment
 - Configuration Management / Vulnerability Management
SAMM has three security practices per business function:
Governance
 - Strategy and Metrics
 - Policy and Compliance
 - Education and Guidance

Design
 - Threat Assessment
 - Security Requirements
 - Secure Architecture

Implementation
 - Secure Build
 - Secure Deployment
 - Defect Management

Verification
 - Architecture Analysis
 - Requirements-driven Testing
 - Security Testing

Operations
 - Incident Management
 - Environment Management
 - Operational Management
Activities

BSIMM groups between 7 and 12 observed and related activities per practice area.

SAMM defines two streams of activities per security practice that complement and build on each other.
Results Comparison

BSIMM 11 has 130 contributing organizations that were interviewed for its comparison dataset.

SAMM is working on the SAMM Benchmark project to collect data from implementing and assessing organizations for comparison.
Frequency of Updates

BSIMM is updated annually.

SAMM is updated every 2-3 years.
Assessment

BSIMM is proprietary to Synopsys; for an assessment, reach out to them for cost and logistics.

SAMM is an open model and can be self-assessed or assessed by any of a number of consulting organizations and individuals.

[1] BUILDING SECURITY IN MATURITY MODEL (BSIMM) – VERSION 11 (Pg 3)

[2] BUILDING SECURITY IN MATURITY MODEL (BSIMM) – VERSION 11 (Pg 9)

[3] https://owaspsamm.org/about/

[4] https://owaspsamm.org/guidance/quick-start-guide/

[5] BUILDING SECURITY IN MATURITY MODEL (BSIMM) – VERSION 11 (Pg 3)


The comparison table above is factual; it’s pulled from the published information for each model. If you ask me which model you should use, I would give you the following guidance based on my knowledge and experience:

 - If you absolutely must have a comparison to other organizations, you have two options: conduct a BSIMM engagement to receive a report showing how your organization compares with the other participating organizations in the BSIMM framework, or conduct a SAMM assessment and perform a high-level comparison of the scores. The results are relatively comparable at a high level because the security practices are still fairly similar. Until the SAMM Benchmark project is further along, these are the only real options.

 - If you would like to conduct a self-assessment, SAMM is the obvious choice, as the model is open and available for self-assessment. SAMM also provides a number of resources, including quick start guides and the toolbox, to aid in capturing and measuring assessments; a rough sketch of what capturing scores could look like follows this list.

 - If you would like a third party to conduct an assessment, you could engage Synopsys for a BSIMM assessment or one of a number of consulting companies for a SAMM assessment.
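
Following up on the self-assessment option above, here is a minimal Python sketch of what capturing scores per security practice could look like. The business functions and security practices come from the SAMM structure listed in the comparison, but the flat dictionaries, the example scores, and the simple per-function averaging are my own simplification for illustration; the official SAMM Toolbox is the authoritative way to capture and score an assessment.

```python
# A minimal, illustrative sketch (not the official SAMM Toolbox) of recording
# per-practice maturity levels and comparing them against chosen target levels.

# SAMM business functions and their three security practices each, as listed above.
SAMM_MODEL = {
    "Governance": ["Strategy and Metrics", "Policy and Compliance", "Education and Guidance"],
    "Design": ["Threat Assessment", "Security Requirements", "Secure Architecture"],
    "Implementation": ["Secure Build", "Secure Deployment", "Defect Management"],
    "Verification": ["Architecture Analysis", "Requirements-driven Testing", "Security Testing"],
    "Operations": ["Incident Management", "Environment Management", "Operational Management"],
}

# Hypothetical current maturity levels (0-3) per practice; these numbers are made up.
current = {
    "Strategy and Metrics": 1, "Policy and Compliance": 2, "Education and Guidance": 1,
    "Threat Assessment": 0, "Security Requirements": 1, "Secure Architecture": 1,
    "Secure Build": 2, "Secure Deployment": 2, "Defect Management": 1,
    "Architecture Analysis": 0, "Requirements-driven Testing": 1, "Security Testing": 2,
    "Incident Management": 1, "Environment Management": 1, "Operational Management": 0,
}

# SAMM does not require level 3 everywhere; pick targets that fit your organization.
target = {practice: 2 for practices in SAMM_MODEL.values() for practice in practices}

for function, practices in SAMM_MODEL.items():
    avg = sum(current[p] for p in practices) / len(practices)
    gap = sum(max(target[p] - current[p], 0) for p in practices)
    print(f"{function:15} average maturity {avg:.2f}, total gap to target {gap}")
```

Even this coarse view shows where the biggest gaps to your chosen target levels are, which is the kind of iteration planning the model is designed to support.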


If you have any questions, please feel free to reach out.

By Brian Glas 12 Jun, 2019
Something that's been bouncing around in my head is the topic of data collection and analysis, specifically in the security realm. I've been part of the OWASP Top 10 2017 data collection and analysis process (and might be for future ones), and I'm also starting to work on the OWASP SAMM Benchmarking project, which will likely encounter similar challenges.

Data collection and management is a large challenge for projects in open-source organizations like OWASP. For a group of people who come together around a common goal and want to remain largely neutral, things that are often taken for granted at a large organization can present hard challenges to a loose band of volunteers trying to help in an area they are passionate about.

For the OWASP Top 10, the data collection was a fairly quiet affair for many years. Then a few people started making noise that it should be a fully public dataset so anyone could work on it. While this is a noble goal, there are real-world trade-offs to doing that. For the Top Ten 2017, we had to explain to organizations that were considering contributing data that "yes, anyone in the world will be able to see the raw data." Several companies didn't contribute due to that stipulation. I was working with an organization that had over half a million records that could have been contributed, but legal couldn't approve the release, as they feared (and rightly so) that someone could use it against them. Not to breach anything, but to attempt to damage their brand.

So you have to ask yourself: would you rather have a larger dataset that isn't public or a smaller dataset that is? Almost all the contributors to the Top 10 2017 were service providers. The numbers they reported were already an anonymized aggregation across a number of their clients. That makes sense: there is a tangible risk in disclosing your internal vulnerability metrics to the world, but if the data is already aggregated, it generally shouldn't be traceable back to individual organizations.

We've had similar discussions for OWASP SAMM. We are working on a project to build a benchmark of software security maturity scores and are working through the details of how to collect, manage, and distribute the data. It's a challenging task: how to collect data and allow updates, what metadata could or should be collected, how anonymous the data should be, how to handle versions, and how to provide meaningful metrics that help organizations learn from others while protecting the submitting organizations and earning their trust.
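
As one illustration of those trade-offs, here is a hypothetical sketch of what an anonymized benchmark submission record might look like. This is not the actual SAMM Benchmark schema (which is still being worked out); the field names and buckets are assumptions made up for this example. The idea is to keep only coarse, non-identifying metadata plus the scores needed for comparison, and to use a random submission identifier rather than anything derived from the organization.

```python
# Hypothetical example of an anonymized benchmark submission record. This is NOT
# the real SAMM Benchmark schema; it only illustrates collecting coarse,
# non-identifying metadata alongside the scores needed for comparison.
import uuid
from dataclasses import dataclass, field

@dataclass
class BenchmarkSubmission:
    samm_version: str       # model version the assessment was scored against, e.g. "2.0"
    industry: str           # coarse vertical, e.g. "Financial", "Healthcare"
    org_size: str           # bucketed size, e.g. "<100", "100-1000", ">1000" employees
    region: str             # broad geography, not a country or city
    practice_scores: dict   # security practice name -> maturity score (0-3)
    # Random identifier so updates are possible without tying the record to an org name.
    submission_id: str = field(default_factory=lambda: uuid.uuid4().hex)

example = BenchmarkSubmission(
    samm_version="2.0",
    industry="Financial",
    org_size="100-1000",
    region="North America",
    practice_scores={"Strategy and Metrics": 1.5, "Secure Build": 2.0},
)
print(example.submission_id, example.practice_scores)
```

The comparison value comes from the aggregate distribution of scores, not from knowing who submitted them, which is why publishing aggregates can still be useful even if the raw submissions stay private.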

Should the raw contributed SAMM scores be public data?

Honestly, in my opinion, the answer is no.

Also, I think the raw contributed data for the Top 10 2020 shouldn't be either.

Look at all the industry analysis produced by numerous organizations. I don't believe any of that raw data has been provided to the public, and I'm OK with that. I have my own opinions (as I'm sure you do as well) about some of those reports, but at the end of the day, we have to choose whether or not to trust their analysis. Sometimes we can; sometimes, not so much. If we can get more data, with a broader reach and possibly a little more detail, I think it's totally worth it. You may not agree with me, and that's fine. I'm just stating my opinion based on my experiences.

I would love to start building a knowledge base of which vulnerabilities are more prevalent in specific languages and frameworks, among other correlations. We have all this data in tiny silos; we really need to put it to good use. I've previously talked about the many questions we haven't been able to answer because of the lack of a good, clean, solid, large dataset. With that kind of data, we could start trying to solve what we can in the languages and frameworks themselves and teach people how to code securely for the remaining things. So much could be done in this space if we sit down and make it a bit more of a priority.
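
To show the kind of correlation I mean, here is a tiny hypothetical sketch: given a shared set of findings tagged with language, framework, and vulnerability category, you could tally which categories dominate for each stack. The findings below are invented purely for illustration; the whole point is that a good, large, clean dataset like this doesn't exist yet.

```python
# Hypothetical sketch of correlating vulnerability categories with language/framework.
# The findings below are invented for illustration; a real analysis would need a
# large, clean, contributed dataset.
from collections import Counter, defaultdict

findings = [
    {"language": "Java", "framework": "Spring", "category": "Injection"},
    {"language": "Java", "framework": "Spring", "category": "Broken Access Control"},
    {"language": "Java", "framework": "Spring", "category": "Injection"},
    {"language": "Python", "framework": "Django", "category": "Security Misconfiguration"},
    {"language": "Python", "framework": "Django", "category": "Injection"},
]

# Tally vulnerability categories per (language, framework) stack.
prevalence = defaultdict(Counter)
for f in findings:
    prevalence[(f["language"], f["framework"])][f["category"]] += 1

for stack, counts in prevalence.items():
    print(stack, counts.most_common())
```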