How we assess software. Published, versioned, and applied consistently to every vendor. Currently v0.1.0.
Every assessment is structured around Jobs to be Done — the specific capabilities a buyer needs from a product area. JTBDs are the atomic unit of assessment: each one is independently scored across multiple domains, evidence-graded, and versioned.
Product areas currently defined: Third-Party Risk Management, AI Governance, Audit Management, Business Continuity, Compliance Management, Cyber Risk, ESG, Enterprise Risk Management, Governance, Incident Management, Policy Management, Privacy, Regulatory Change.
Each JTBD is scored across 8 domains. This prevents a single score from masking strengths or weaknesses — a vendor might score highly on depth but poorly on usability for non-experts.
A 1–5 ordinal scale where each level represents a qualitatively different degree of capability. These are not percentages — they describe architectural depth, not a ranking.
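The structure described above (independently scored, versioned JTBDs, each carrying per-domain 1–5 ordinal scores) could be modeled roughly as follows. This is a minimal sketch; every name (DomainScore, JTBDAssessment, the example domains and JTBD text) is an illustrative assumption, not the published rubric's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class DomainScore:
    """One of the 8 scoring domains for a JTBD (domain names are hypothetical)."""
    domain: str          # e.g. "depth" or "usability" (illustrative)
    score: int           # 1-5 ordinal level, not a percentage
    evidence_level: str  # how the capability was observed

@dataclass
class JTBDAssessment:
    """A Job to be Done: the atomic, independently scored unit of assessment."""
    jtbd: str
    rubric_version: str                                  # e.g. "v0.1.0"
    domain_scores: list[DomainScore] = field(default_factory=list)

# Example: a vendor scoring highly on depth but poorly on usability
# for non-experts, which a single blended score would mask.
a = JTBDAssessment(
    jtbd="Continuously monitor third-party risk posture",  # hypothetical JTBD
    rubric_version="v0.1.0",
    domain_scores=[
        DomainScore("depth", 5, "hands_on"),
        DomainScore("usability", 2, "hands_on"),
    ],
)
```

Keeping each domain score as its own record, rather than averaging, is what lets strengths and weaknesses stay visible side by side.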
Every score is paired with an evidence level that communicates how the capability was observed. Higher evidence levels unlock higher maximum scores — you cannot score 5 on something that was only described in a slide deck.
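The evidence-gating rule above can be sketched as a simple clamp. The specific evidence level names and the cap values here are assumptions for illustration; only the principle (lower evidence unlocks a lower maximum score) comes from the methodology:

```python
# Assumed evidence levels, lowest to highest confidence, and the
# maximum score each one unlocks. These caps are illustrative, not
# the published rubric's actual values.
MAX_SCORE_BY_EVIDENCE = {
    "claimed": 3,       # e.g. only described in a slide deck
    "demonstrated": 4,  # shown in a guided demo
    "hands_on": 5,      # independently exercised by the assessor
}

def capped_score(raw_score: int, evidence_level: str) -> int:
    """Clamp a 1-5 domain score to the ceiling its evidence level allows."""
    if not 1 <= raw_score <= 5:
        raise ValueError("scores are on a 1-5 ordinal scale")
    return min(raw_score, MAX_SCORE_BY_EVIDENCE[evidence_level])

# A capability only described in a slide deck cannot reach 5:
capped_score(5, "claimed")   # → 3 under these assumed caps
capped_score(5, "hands_on")  # → 5
```

The cap only ever lowers a score; strong evidence does not inflate a weak capability.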
Two tiers of assessment, both using the same published rubric. The difference is the depth and method of evidence collection.
These principles govern every assessment we conduct. They are non-negotiable.
Every score is derived from observable evidence, not vendor reputation or market position.
We assess outcomes, not features. The same rubric is applied to every vendor regardless of size or relationship.
Vendors receive our opinions before they are published and have the opportunity to respond. Responses are visible alongside AV conclusions.
Our JTBD rubrics and scoring criteria are published and versioned in a public GitHub repository.
Vendors have a 14-day review window to formally dispute scores with supporting evidence.
All commercial relationships are declared. Assessment fees are fixed and identical for all vendors.
Full JTBD rubrics and scoring criteria are available on GitHub. For specific questions about methodology, get in touch.