Last updated: April 2026
Every agency in our directory is scored on a single 100-point framework. The same rubric is applied to every agency, by the same evaluation system, using the same data sources. This page explains exactly how that score is built, what goes into it, and how an agency can improve it.
We publish our methodology in full because a ranking is only as trustworthy as the rubric behind it.
Every agency receives a score from 0 to 100, calculated as the sum of four equally weighted dimensions:
| Dimension | Range | What it measures |
|---|---|---|
| Creative Quality | 0–25 | Campaign originality, content innovation, creative execution |
| Client Impact | 0–25 | Measurable results delivered to clients |
| Industry Recognition | 0–25 | External validation from awards, platforms, and press |
| Revenue & Growth | 0–25 | Business scale, stability, and growth trajectory |
| Total | 0–100 | Sum of the four dimension scores |
Each dimension is scored independently against fixed criteria, not relative to other agencies. An agency's score reflects what it has actually done, not where it sits in a ranking.
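To make the arithmetic concrete, here is a minimal sketch of the calculation. The class and field names are illustrative, not our internal code; the only load-bearing facts are the 0–25 caps and the equally weighted sum.

```python
from dataclasses import dataclass

@dataclass
class AgencyScore:
    """Illustrative container for the four 0-25 dimension scores."""
    creative_quality: int
    client_impact: int
    industry_recognition: int
    revenue_growth: int

    def __post_init__(self) -> None:
        # Each dimension is scored against fixed criteria on a 0-25 scale.
        for name, value in vars(self).items():
            if not 0 <= value <= 25:
                raise ValueError(f"{name} must be 0-25, got {value}")

    @property
    def total(self) -> int:
        # Equally weighted sum, giving a 0-100 total.
        return (self.creative_quality + self.client_impact
                + self.industry_recognition + self.revenue_growth)
```

So an agency scoring 22, 18, 15, and 12 across the four dimensions would total 67: `AgencyScore(22, 18, 15, 12).total`.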
Creative Quality (0–25)
This dimension evaluates whether an agency produces work that stands out.
Scoring bands
| Score | Description |
|---|---|
| 20–25 | Award-winning creative work, innovative campaign formats, content that defines what others copy |
| 15–19 | Consistently strong creative output with several standout campaigns |
| 10–14 | Solid, competent creative execution across a portfolio |
| 5–9 | Basic campaign execution with limited creative differentiation |
| 0–4 | No evidence of creative work in available data |
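Mechanically, a fixed band is just a threshold lookup. The sketch below uses the Creative Quality floors from the table above; the function name and abbreviated labels are ours, for illustration only.

```python
# Band floors and abbreviated labels from the Creative Quality table above.
CREATIVE_QUALITY_BANDS = [
    (20, "Award-winning, innovative, defines what others copy"),
    (15, "Consistently strong with standout campaigns"),
    (10, "Solid, competent execution"),
    (5,  "Basic execution, limited differentiation"),
    (0,  "No evidence of creative work in available data"),
]

def band_for(score: int) -> str:
    """Return the band description for a 0-25 dimension score."""
    if not 0 <= score <= 25:
        raise ValueError("dimension scores run from 0 to 25")
    for floor, label in CREATIVE_QUALITY_BANDS:
        if score >= floor:
            return label
    raise AssertionError("unreachable: the 0 floor always matches")
```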
What we look for
We look at campaign originality, the range of content formats an agency has executed, the quality of brand storytelling in case studies, and whether the agency has earned creative awards.
Client Impact (0–25)
This dimension evaluates the measurable outcomes an agency has delivered for the brands it works with.
Scoring bands
| Score | Description |
|---|---|
| 20–25 | Exceptional documented results across a blue-chip client roster |
| 15–19 | Strong results with recognisable brands and substantial case study depth |
| 10–14 | Some documented results across a mid-tier client base |
| 5–9 | Limited documented results, smaller client base |
| 0–4 | No evidence of client impact |
What we look for
We look at documented results, the calibre of clients in the portfolio, and the depth of evidence behind each case study. We weight verifiable, named results more heavily than self-reported aggregate claims. An agency that can point to a specific campaign for a named client and show what it delivered will outscore an agency that claims large numbers without context.
Industry Recognition (0–25)
This dimension evaluates external validation. An agency’s own marketing is not evidence; recognition from independent third parties is.
Scoring bands
| Score | Description |
|---|---|
| 20–25 | Major industry awards (Cannes, Shorty, Webby), official platform partnerships (Meta, TikTok, YouTube partner status), frequent press coverage |
| 15–19 | Notable awards, some platform partnerships, speaking slots at major industry events |
| 10–14 | Regional awards, trade press mentions |
| 5–9 | Recognition limited to general directory listings |
| 0–4 | No external recognition found |
What we look for
We look for awards from recognised industry bodies, official platform partnerships, press coverage in trade and mainstream outlets, and speaking slots at major industry events. Recognition an agency gives itself carries no weight.
Revenue & Growth (0–25)
This dimension evaluates business scale and stability. A good agency is not always a big agency, but scale is a meaningful proxy for the ability to deliver consistently across multiple clients and markets.
Scoring bands
| Score | Description |
|---|---|
| 20–25 | Large team (200+), 10+ years established, global office footprint, clear growth signals |
| 15–19 | Mid-size team (50–200), 5–10 years established, multi-market presence |
| 10–14 | Small to mid-size team (10–50), growing, single or few markets |
| 5–9 | Small team (under 10), early stage, single market |
| 0–4 | No business scale data available |
What we look for
We look at team size, years in operation, office footprint across markets, and clear signals of growth trajectory.
Every score is built from data we collect, structure, and verify ourselves. We do not rely on agency self-submission for the inputs that drive scoring.
We do not use agency self-rated scorecards, paid placements, or sponsored entries. No agency can pay to be added to the directory or to influence its score.
Every agency in the directory passes through the same multi-stage process before being scored.
We scrape the agency’s website to extract structured data: services, specialisations, brands worked with, locations, team size, and year founded. Each field is captured from the source page and is auditable.
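The extracted fields map onto a flat, auditable record. A sketch of the shape, with illustrative names rather than our internal schema:

```python
from dataclasses import dataclass

@dataclass
class AgencyProfile:
    """Illustrative shape of the structured data extracted from an agency site."""
    name: str
    services: list[str]
    specialisations: list[str]
    brands_worked_with: list[str]
    locations: list[str]
    team_size: int | None      # None when the site does not state it
    year_founded: int | None
    source_url: str            # every field traces back to the page it came from
```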
We deep-scrape every case study page on the agency’s website and extract structured records: client name, industry, campaign type, platforms used, work title, and quantitative results. This is the single largest input into Client Impact scoring.
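Each case study page becomes one record along these lines (field names again illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class CaseStudyRecord:
    """One structured record per case study page."""
    client_name: str
    industry: str
    campaign_type: str
    platforms: list[str]
    work_title: str
    # Results are stored as named metrics (e.g. {"video_views": 12_000_000})
    # so each quantitative claim can be checked individually.
    results: dict[str, float] = field(default_factory=dict)
```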
We search award databases for mentions of the agency, and extract structured records: category, placement, awarding body, and year. Awards are stored individually, not as a count.
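Storing awards individually rather than as a count means each record keeps the full context of the win. A sketch:

```python
from dataclasses import dataclass

@dataclass
class AwardRecord:
    """One record per award mention; never aggregated into a bare count."""
    awarding_body: str   # e.g. "Cannes", "Shorty", "Webby"
    category: str
    placement: str       # e.g. "Gold", "Winner", "Shortlist"
    year: int
```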
The structured data is passed through our four-dimension rubric. Each dimension is scored against fixed criteria, not against other agencies in the directory.
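Conceptually, this stage is a pure function from structured records to a dimension score. The toy example below, restating the AwardRecord sketch from the previous stage, gestures at the idea for Industry Recognition; the real criteria are richer than a record count, so treat the numbers as placeholders.

```python
from dataclasses import dataclass

@dataclass
class AwardRecord:   # as sketched in the awards stage
    awarding_body: str
    category: str
    placement: str
    year: int

# Short labels as named in the Industry Recognition bands above.
MAJOR_BODIES = {"Cannes", "Shorty", "Webby"}

def score_industry_recognition(awards: list[AwardRecord]) -> int:
    """Deliberately simplified stand-in for the fixed-criteria rubric:
    breadth of independent recognition moves the score up the bands,
    and a major award anchors the top band."""
    if not awards:
        return 0                              # 0-4 band: no external recognition
    score = min(3 * len(awards), 15)          # breadth of recognition
    if any(a.awarding_body in MAJOR_BODIES for a in awards):
        score = max(score, 20)                # major awards reach the 20-25 band
    return min(score, 25)
```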
Every score and the data behind it is recorded and timestamped. Scores are recalculated when new data becomes available, for example when an agency wins a new award or publishes a new case study.
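A minimal sketch of what "recorded and timestamped" implies in practice: recalculations append immutable snapshots instead of overwriting, so score history stays auditable. Names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ScoreSnapshot:
    """Immutable, timestamped record of a score and the data behind it."""
    agency_id: str
    total: int
    data_version: str                  # identifies the exact inputs used
    computed_at: datetime

def record_score(history: list[ScoreSnapshot], agency_id: str,
                 total: int, data_version: str) -> ScoreSnapshot:
    # Append a new snapshot rather than mutating the old one.
    snap = ScoreSnapshot(agency_id, total, data_version,
                         datetime.now(timezone.utc))
    history.append(snap)
    return snap
```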
The fastest way to move up is to improve the underlying data. Below is a concrete checklist for each dimension:
- Creative Quality: publish case studies that show the originality and range of your work, and enter it for creative awards.
- Client Impact: document specific, quantified results for named clients; verifiable results outscore unattributed aggregate claims.
- Industry Recognition: make award wins, platform partnerships, and press coverage visible and current.
- Revenue & Growth: state team size, locations, and year founded clearly on your website so they can be captured.
If an agency believes its score does not reflect its current state, the right path is to update the underlying data on its website and award profiles. Scores will reflect those updates on the next refresh.
It is as important to be clear about what is excluded from the rubric as about what is included.