What are Security Invariants?
Security invariants are a pattern for solving problems. We know what we want to be true, with minimal subjectivity. We know how to verify its truth, with minimal interpretation. It may not be true everywhere, but we know everywhere it isn’t true.
An example of an invariant is:
- All employee laptop disks are encrypted
For that invariant to be valid, we need to answer two questions:
- What are all the disks?
- Are they all encrypted?
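As a minimal sketch of what answering those two questions could look like, assuming a hypothetical device inventory export with illustrative field names:

```python
# A minimal sketch: validate "all employee laptop disks are encrypted"
# against a hypothetical device inventory export. The field names
# (type, disk_encrypted) are illustrative, not from any specific MDM product.

def unencrypted_laptops(devices: list[dict]) -> list[dict]:
    """Return every laptop whose disk is not known to be encrypted."""
    violations = []
    for device in devices:
        if device.get("type") != "laptop":
            continue  # the invariant only covers employee laptops
        if device.get("disk_encrypted") is not True:
            violations.append(device)  # unknown counts as not encrypted
    return violations

inventory = [
    {"serial": "C02ABC", "type": "laptop", "disk_encrypted": True},
    {"serial": "C02DEF", "type": "laptop", "disk_encrypted": False},
    {"serial": "RACK-01", "type": "server"},
]

print(unencrypted_laptops(inventory))  # -> [{'serial': 'C02DEF', ...}]
```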
If we can’t answer “yes” to both of those questions, the invariant is invalid. Why are invariants useful?
Invariants are decidable
Many goals of software owners are misunderstood as binaries:
- Performant, not slow
- Well-designed, not shoddy
- Legal, not unlawful
- Secure, not exploitable
Teams that work on these goals understand there is no finish line. There is only postponement of the next inevitable failure. There is a rough consensus of past good decisions, based on observation. We cannot solve for complete security.
While security is not a binary, invariants are. We can know that we’re progressing if we can track our pace towards a fixed goal. We can pass go, collect $200, and move to the next prioritized problem.
Invariants are observable
System security is fractal. As a security engineer, your challenge is to build a lens on this complexity. This lens should be useful to both security and non-security teams. What problem can we bring into focus?
If we focus on one problem at a time, we can take high-quality measurements. We can chart our progress. We can observe its resolution. We can avoid regression when our focus must shift.
Invariants include context
Security “best practices” need to be re-implemented for thousands of software projects. Each software project is a unique mix of people, process, and technology. Different companies adhere to best practices in unrelated ways:
- Use 2FA
- Encrypt all your data
- Patch your vulnerable infrastructure
We work backwards from the security compromises of legacy infrastructure. Each team is a single vantage point in the universe of collective knowledge. The way that “back up your data” works might vary from team to team within a company. Without familiarity, it’s impossible to know whether a best practice is working.
Furthermore, because you can’t refute their necessity, security best practices pile up. Security teams are thus incentivized to provide charitable interpretations as binary observations:
- We run static analysis (but we ignore the results)
- We have a firewall (with dangerous ports open)
- We use machine learning for threat detection (with high false positives and negatives)
We see this incentive-induced behavior (i.e. bullshit) in the exchange of security questionnaires. The value of this information can be less than the effort required to produce and collect it.
Checklists of security “best practices” suffer a form of context collapse. Insecure is difficult to infer. Secure is impossible to infer. Invariants provide the complete context needed to talk about security as a yes or no. If an invariant is valid, then you have truth. If it isn’t, you have nothing.
What is an invariant?
After working on security projects for years, I noticed a pattern. You start with a security best practice, either based on risk or compliance. You interpret this practice based on your organization’s technical and social culture. You develop consensus for a concrete outcome. These outcomes can become invariants. Some examples of good candidate invariants:
- All software services use single sign-on
- All users use WebAuthn tokens
- All servers have a reverse uptime of less than 90 days
- All results from a scanner are triaged and resolved
- All devices that access critical data are enrolled in MDM
- All alerts go through an enrichment pipeline
The process for turning this into an invariant is four steps:
- Identify all the resources
- Measure adherence
- Document exceptions
- Prevent regressions
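Before walking through each step, here is one rough sketch of the shape they can take in code. Every name below is hypothetical; the details depend on your sources of truth:

```python
# A sketch of the four steps as a data structure. The names are hypothetical;
# the point is the shape: a resource listing, an adherence check, documented
# exceptions, and a validation run that can be repeated to catch regressions.
from dataclasses import dataclass, field
from typing import Callable, Iterable

@dataclass
class Invariant:
    name: str
    list_resources: Callable[[], Iterable[dict]]  # step 1: identify all the resources
    adheres: Callable[[dict], bool]               # step 2: measure adherence
    exceptions: dict[str, str] = field(default_factory=dict)  # step 3: resource id -> reason

    def validate(self) -> list[dict]:
        """Step 4: run on a schedule; anything returned is a regression to investigate."""
        return [
            resource for resource in self.list_resources()
            if not self.adheres(resource) and resource["id"] not in self.exceptions
        ]
```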
Identify all the resources
The first piece of context for our invariant is determining where it applies. This is where invariants can go off the rails. If we don’t know about resources, we will never know if the invariant is valid. The quality of this data is foundational and requires effort to collect:
- A register of software services
- A user directory
- A cloud server inventory
- A vulnerability database
- An employee device inventory
- A list of security alerts from different systems
As much as possible, these systems should be a single source of truth. The security team should have appropriate programmatic access to these systems. This supports automating invariant validation.
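As one sketch of what programmatic access to a source of truth can look like, a cloud server inventory could be enumerated from the EC2 API; treating this listing as the authoritative server inventory is an assumption of the example:

```python
# A sketch of pulling one resource inventory programmatically. The EC2
# describe_instances call is real; using it as the single source of truth
# for servers is the assumption being illustrated.
import boto3

def cloud_server_inventory() -> list[dict]:
    """Enumerate every EC2 instance visible to the security team's credentials."""
    ec2 = boto3.client("ec2")
    servers = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                servers.append({
                    "id": instance["InstanceId"],
                    "launch_time": instance["LaunchTime"],
                })
    return servers
```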
Measure adherence
Next, we need to establish if resources adhere to the invariant. In the best case there is an existing boolean value that we can pass through from the source of truth. Often, the signal requires more context. For example, how can we determine whether an alert has been “triaged”? It may mean building supporting alert management processes.
We’re in trouble if we can’t figure out whether an invariant holds for a particular resource. We should reformulate until the answer is determinable. Exclude systems that make the invariant hard to capture (e.g. “our scanner can’t run on this resource”).
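Continuing with the reverse-uptime example from the candidate list above, and assuming the launch_time field from the inventory sketch, the adherence check can be a small pure function:

```python
# A sketch of an adherence check for "all servers have a reverse uptime of
# less than 90 days", applied to the hypothetical inventory records above.
from datetime import datetime, timedelta, timezone

MAX_UPTIME = timedelta(days=90)

def adheres(server: dict) -> bool:
    """True if the server was launched (or relaunched) within the last 90 days."""
    age = datetime.now(timezone.utc) - server["launch_time"]
    return age < MAX_UPTIME
```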
Document exceptions
When we establish an invariant, some resources will not adhere to the invariant. These resources fall into two categories:
- We should resolve this issue
- We should not resolve this issue because the invariant doesn’t apply to this resource
It’s critical that we categorize each invalid resource. If in the first category, we may want to fix it immediately:
- This high-risk server has critical vulnerabilities
- This terminated employee has access to a production system
We may put things in a backlog of “nice to haves”:
- This low-risk server has an unverified local privilege escalation vulnerability
- This legacy shared account has an API key older than 90 days
We can always decide where the invariant doesn’t apply. Record the exception with the human decision-making context:
- This dependency vulnerability is not exploitable. We don’t use the vulnerable function.
- This security header is no longer considered a security best practice
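One hypothetical shape for such an exception record, keeping both the machine-readable exclusion and the human decision-making context:

```python
# A sketch of documented exceptions: keyed by resource id so validation can
# skip them, with enough context that a human can revisit the decision.
# All field names and values are illustrative.
exceptions = {
    "i-0abc1234example": {
        "reason": "Dependency vulnerability is not exploitable; we don't use the vulnerable function",
        "decided_by": "appsec team",
        "decided_on": "2023-04-12",
        "revisit_by": "2024-04-12",  # exceptions should expire, not accumulate
    },
}
```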
For any resource where the invariant does not hold, we must be able to identify the exception that covers it. Once we have a complete mapping, we have established the invariant. We have created a small area of clarity on which to rest our further assumptions about risk.
Prevent regressions
To preserve our progress, we need continuous validation of our invariants. We should plan how to do this at the beginning of the process. How do we know about all new resources? How do we measure adherence? How do we document exceptions?
This could look like a manual process performed daily, weekly, or quarterly. If we automate, we can inspect every new resource at creation time and track it. If we collaborate with product development, we can integrate the invariant into the developer experience.
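A minimal sketch of that automation, reusing the hypothetical pieces from the earlier sketches (cloud_server_inventory, adheres, and exceptions):

```python
# A sketch of continuous validation: run on a schedule (or on resource
# creation events) and surface anything that is neither adherent nor covered
# by a documented exception. notify() is a stand-in for paging, ticketing,
# or chat alerts.

def notify(message: str) -> None:
    print(message)  # replace with your alerting integration

def check_reverse_uptime_invariant() -> None:
    violations = [
        server for server in cloud_server_inventory()
        if not adheres(server) and server["id"] not in exceptions
    ]
    for server in violations:
        notify(f"Invariant regression: {server['id']} exceeds 90 days of uptime")
```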
Invariants are a powerful tool for framing successful security projects. Like OKRs, they reduce subjectivity and vagueness. But they are not estimates of progress tied to broader objectives. They are foundational rules of operation. Invariants are grown and nurtured over time. Once established, they decelerate the forces of entropy. This creates space for more ambitious future security improvements.