A quick post on evaluating security products sold to enterprises.
Security products are somewhat unique in that the key metric, whether the product performs its intended function, can be quite divorced from other valuation criteria. Take endpoint security software: the ability to accurately identify attacks (true positives, true negatives) can be disconnected from other desired qualities such as user friendliness, ease of rollout, and pricing. In the enterprise market, many vendor write-ups lean on listed functionality rather than technical documentation, and often latch onto the latest trends regardless of suitability. The result is a hazy market where purchasing decisions can be made on claimed functionality rather than actual capability.
Because of these market conditions, and because we want to ensure that any security product we implement actually has a net benefit, it’s useful to start broadly, touching base with many vendors, and narrow quickly.
Narrowing is done through technical discussions: do you feel that the vendor and their product can actually deliver on their promises? Is it technical puffery or a smart solution to a difficult problem?
We don’t do requirements meetings with vendors and we don’t do “decision maker” meetings. If you’re at the stage of meeting with vendors you should already know your requirements, which can come across as unusual to many enterprise vendors. The rationale for starting with technical discussions instead is two-fold. First, they quickly reveal whether a vendor can actually deliver on their promises. Second, while we know what our requirements are, there can be many solutions to a technical problem, and we rely on the vendor to know their own field better than we would. In short: technical conversations first, not requirements meetings. Doing so lets you start broadly, touch base with many vendors, and quickly create a short-list of which to take to a Proof-of-Concept (PoC).
Now that you have a short-list, it’s time to run the PoCs. Since time can be short, a PoC often ends up being a prioritization exercise, with the core functionality being the most important to validate.
Take the endpoint security software example: a core criterion might be “does it accurately identify attacks?” Accuracy would then be measured in two ways: 1) True Positives: when an attack is flagged, is it actually an attack and not a false positive; and 2) True Negatives: when no attacks are flagged, are there really no attacks underway. To test this you might install a number of applications with known vulnerabilities, drop exploits against them, and use those exploits to mount increasingly complicated attacks, all while keeping up a baseline of regular, benign activity (to test for false positives).
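To make the accuracy criterion concrete, here’s a minimal sketch of turning such a test run into the two numbers; the recording format and the test cases are hypothetical, not tied to any particular product:

```python
# Minimal sketch of scoring a PoC accuracy test. Assumes each test case is
# recorded as a pair: did we actually mount an attack, and did the product
# raise an alert? (All data here is hypothetical.)

test_cases = [
    # (attack_mounted, alert_raised)
    (True, True),    # exploit detected          -> true positive
    (True, False),   # exploit missed            -> false negative
    (False, False),  # normal behaviour, quiet   -> true negative
    (False, True),   # normal behaviour flagged  -> false positive
]

tp = sum(1 for attack, alert in test_cases if attack and alert)
fn = sum(1 for attack, alert in test_cases if attack and not alert)
tn = sum(1 for attack, alert in test_cases if not attack and not alert)
fp = sum(1 for attack, alert in test_cases if not attack and alert)

print(f"True positive rate:  {tp / (tp + fn):.0%}")  # attacks we caught
print(f"False positive rate: {fp / (fp + tn):.0%}")  # noise on normal use
```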
We’re knowingly taking a risk here: assuming security problems are hard, if a vendor can deliver a product that addresses the hard problems, we assume (and hence take the risk) that they can also deliver on the easier ones. This gives us a PoC process where we verify the harder problems first, then move on to easier problems as time permits.
Criteria | Vendor 1 | Vendor 2 | Vendor 3 | … | Weight | Totals |
---|---|---|---|---|---|---|
Group 1, Criteria 1 | Score (1 to 5) | Score (1 to 5) | Score (1 to 5) | … | 3 | Score*3 |
Group 1, Criteria 2 | … | … | … | … | 3 | … |
… | … | … | … | … | 3 | … |
Group 2, Criteria 1 | … | … | … | … | 2 | Score*2 |
… | … | … | … | … | 2 | … |
… | … | … | … | … | 2 | … |
Group 3, Criteria 1 | … | … | … | … | 1 | Score*1 |
… | … | … | … | … | 1 | … |
… | … | … | … | … | 1 | … |
Overall Score | Sum of Score*Weight | Sum of Score*Weight | Sum of Score*Weight | … | | |
The above is an example table for scoring vendor products. In it, there are 3 groups of criteria, each criterion scored from 1 (lowest) to 5 (highest). The 3 groups correspond to 3 different weights, from 1 to 3, reflecting how important each group is.
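As a minimal sketch of how the table computes its totals (all criteria names, weights, and scores below are hypothetical placeholders):

```python
# Weighted scoring per vendor, as in the table above: each criterion's
# score (1 to 5) is multiplied by its group weight (1 to 3) and summed.

criteria = [
    # (criterion, weight, {vendor: score from 1 to 5})
    ("Accurately identifies attacks", 3, {"Vendor 1": 4, "Vendor 2": 2, "Vendor 3": 5}),
    ("Low false positive rate",       3, {"Vendor 1": 3, "Vendor 2": 4, "Vendor 3": 4}),
    ("Ease of rollout",               2, {"Vendor 1": 5, "Vendor 2": 3, "Vendor 3": 2}),
    ("Reporting",                     1, {"Vendor 1": 2, "Vendor 2": 5, "Vendor 3": 3}),
]

for vendor in ("Vendor 1", "Vendor 2", "Vendor 3"):
    total = sum(weight * scores[vendor] for _, weight, scores in criteria)
    print(f"{vendor}: {total}")
# Vendor 1: 3*4 + 3*3 + 2*5 + 1*2 = 33, and so on.
```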
Something like accuracy might be split into two criteria: for example, we sometimes care more about a low false positive rate (so the product doesn’t disrupt users and the noise stays manageable) than about a low false negative rate (even though that means accepting more misses).
A note on this table: some criteria can fail a vendor outright if the criterion is core to the functionality of the product. Also, not every vendor evaluated needs a full set of scores if they’ve already been deemed unsuitable.
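A minimal sketch of that hard-fail gate, kept separate from the weighted total; the criterion names and the cut-off score of 3 are assumptions for illustration only:

```python
# Hypothetical hard-fail check: if any criterion marked as core scores
# below an assumed cut-off, the vendor is out regardless of their total.

def passes_core_criteria(scores: dict[str, int], core: set[str],
                         cutoff: int = 3) -> bool:
    """scores maps criterion name -> score (1 to 5) for a single vendor."""
    return all(scores[name] >= cutoff for name in core)

# Example: a vendor with a weak score on the core criterion.
vendor_scores = {"Accurately identifies attacks": 2, "Ease of rollout": 4}
print(passes_core_criteria(vendor_scores,
                           core={"Accurately identifies attacks"}))
# -> False: disqualified outright, whatever the weighted total says.
```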
One criterion I think I often miss is how easy the vendor is to work with (and perhaps the vendor’s pricing reflects how easy, or not, they find it to work with us). Simply a note worth pointing out.
It’s also worth noting that the way enterprise software vendors structure their sales processes simply reflects how buying cycles and decisions are often made in the enterprise. In a typical model, individuals higher up the ladder follow trends and make purchasing decisions; the vendors’ focus on requirements meetings reflects this. In other models, everyone is aware of the core broad problems (for example, “endpoints need to be secured”) and it’s up to the day-to-day staff to come up with innovative solutions to them.