## axioms
- security efficacy yields diminishing returns, at some point, as rule quantity grows
- rule count is not an absolute measure of successful coverage
- coverage is not an absolute measure of security
- alert volume has an inverse relationship with alert manageability
- threats are not static
- security posture is a snapshot; it represents only a single moment in time
---
## the zen approach to detection
- detect behaviors over IOCs, except when IOCs are necessary
- resist the temptation to over- or under-scope; balance precision against breadth
- a rule's upkeep should not outweigh the value of its protection
- correlation has power, but it can be overdone, leading to trivial bypasses
- generic patterns can minimize brittleness, but at the cost of performance
- have an inclination towards performance; expensive rules must justify the cost
- large environments are unpredictable; there will always be surprises from false positives and false negatives
- clear beats vague, simple tops complex, readable rules win
- just like code, readability is important for avoiding errors
- in the face of ambiguity, refuse the temptation to guess
- if you can detect it, you can test it
- if the implementation is hard to explain, it's a bad idea
- if the implementation is easy to explain, it may be a good idea
- if a rule is too complex to understand, the alert is even worse
- consistent rule formats create predictability and fewer mistakes
- detection logic formatting should emphasize logical precedence and grouping
- detection logic should be resilient; when a single rule can't be, use multiple rules
- cut false positives to fight alert fatigue, even if it means missing some behavior; there will be another angle
- less is more; don’t write rules for the sake of writing rules
- there is no “correct” quantity of total rules, but there can be too many and too few
- production environments vary wildly, so minimize assumptions
- adopt a detections-as-code or similar software-driven process for managing rules early
- always version your rules
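Principles like "if you can detect it, you can test it" and "always version your rules" lend themselves to a detections-as-code workflow. A minimal sketch in Python follows; the rule structure, field names, and the example events are illustrative assumptions, not any specific product's rule format:

```python
from dataclasses import dataclass
from typing import Callable

# Detections-as-code sketch: each rule is versioned data plus a
# testable predicate over a log event (a plain dict here).
@dataclass(frozen=True)
class Rule:
    name: str
    version: str  # "always version your rules"
    description: str
    predicate: Callable[[dict], bool]

# Illustrative behavior-based rule: flag encoded PowerShell execution,
# a generic pattern rather than a single IOC such as a file hash.
encoded_powershell = Rule(
    name="encoded_powershell_execution",
    version="1.0.0",
    description="PowerShell launched with an encoded command",
    predicate=lambda e: e.get("process") == "powershell.exe"
    and "-enc" in e.get("args", "").lower(),
)

# "If you can detect it, you can test it": pair every rule with
# known-bad and known-good events before shipping it.
malicious = {"process": "powershell.exe", "args": "-enc SQBFAFgA..."}
benign = {"process": "powershell.exe", "args": "-File report.ps1"}

assert encoded_powershell.predicate(malicious)
assert not encoded_powershell.predicate(benign)
```

Keeping rules as versioned, testable objects makes review, rollback, and regression testing the same routine activities they are for any other code.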
---
## detection opportunities - evaluation questions
- Is the detection **brittle**?
- Does it have **narrow coverage**?
- Is it focused on **elements lower on the Pyramid of Pain**?
- Is it **resilient over time** and **resistant to changes in attack behavior**?
- Do we have reliable **visibility** into the required log source?
- Does it result in **too much noise** in my environment?
- Does it **incur excessive labor cost**?
- Is it **threat-oriented**?
- Does it come from a **threat model** or **threat intelligence**?
- What is the **prevalence or chokepoint** of the threat?
- Is it **actionable**?
- Does it **reflect insightful risk and impact** for my organization?
- Is it tailored to my **unique environment** and **crown jewels**?
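The questions above can double as a lightweight triage checklist for proposed detections. A minimal sketch, where the criteria names, the example answers, and the idea of a numeric score are all illustrative assumptions rather than a formal methodology:

```python
# Score a proposed detection opportunity against a checklist derived
# from the evaluation questions. Criteria names are illustrative.
EVALUATION_CRITERIA = [
    "resilient_over_time",
    "reliable_log_visibility",
    "acceptable_noise_level",
    "sustainable_labor_cost",
    "threat_oriented",
    "actionable",
    "tailored_to_environment",
]

def evaluate_opportunity(answers: dict) -> tuple[int, list]:
    """Return (criteria met, unmet criteria) for a proposed detection."""
    unmet = [c for c in EVALUATION_CRITERIA if not answers.get(c, False)]
    return len(EVALUATION_CRITERIA) - len(unmet), unmet

# Hypothetical proposal: strong on most axes, but too noisy as-is.
proposal = {
    "resilient_over_time": True,
    "reliable_log_visibility": True,
    "acceptable_noise_level": False,
    "sustainable_labor_cost": True,
    "threat_oriented": True,
    "actionable": True,
    "tailored_to_environment": True,
}

score, gaps = evaluate_opportunity(proposal)
assert score == 6
assert gaps == ["acceptable_noise_level"]
```

The unmet criteria, not the score itself, are the useful output: they tell you what to fix (or why to decline the opportunity) before the rule ships.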
---
## end goal - confidently answer "yes" to all of these questions
- Can you name the assets you are defending?
- Do you have visibility across your assets?
- Can you detect unauthorized activity?
- Can you accurately classify detection results?
- Who are your adversaries? What are their capabilities?
- Can you detect adversary activity within your environment?
- Can you detect an adversary that is already embedded?
- During an intrusion, can you observe adversary activity in real time?
- Can you deploy proven countermeasures to evict and recover?
- Can you collaborate with trusted partners to disrupt adversary campaigns?
- And ultimately, can you operate at the tempo of the adversary?
---
## references & inspiration
- [The Zen of Python](https://peps.python.org/pep-0020/)
- [The Zen of Security Rules](https://br0k3nlab.com/resources/zen-of-security-rules/)
- [Elastic Security's Philosophy](https://github.com/elastic/detection-rules/blob/main/PHILOSOPHY.md)
- [DFIR Hierarchy of Needs](https://holisticinfosec.blogspot.com/2016/12/the-dfir-hierarchy-of-needs-critical.html)