Scorecard

by Rebecca Bellairs

Choosing a marketing agency is a high-impact decision.

It affects:

  • brand and performance outcomes
  • internal workload
  • budgets
  • stakeholder trust
  • and the quality of delivery for months (or years)

A scorecard helps you compare agencies in a way that’s consistent, explainable, and fair.

This page includes:

  • criteria you can use across marketing agency appointments
  • guidance on setting weightings
  • and a method for stakeholder scoring and governance

What is an agency evaluation scorecard?

An agency evaluation scorecard is a structured way to evaluate agencies against agreed criteria.

It helps you:

  • compare agencies consistently
  • avoid decisions being driven by the loudest voice
  • align stakeholders before the final discussion
  • capture evidence alongside scores
  • and keep a decision trail you can stand behind

Scorecards work best when criteria, weightings, and evaluator roles are agreed before pitch sessions begin.


Why scorecards matter in marketing agency selection

Marketing agency decisions can drift toward:

  • familiarity
  • confidence
  • rapport
  • the last presentation
  • or the most memorable idea

A scorecard improves and standardises judgement.

It gives you:

  • a shared decision framework
  • a way to compare agencies on the same basis
  • evidence to support your scores
  • and a decision record that holds up later

How to use a scorecard

A scorecard works best as part of a structured process:

  1. Agree evaluators, criteria, and weightings before pitch sessions
  2. Score independently after each pitch stage
  3. Capture evidence alongside every score
  4. Review divergence (where people scored differently)
  5. Record the decision and rationale
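Steps 2–4 above can be sketched in code. The example below is a minimal, illustrative way to surface divergence after independent scoring: for each criterion, compute the gap between the highest and lowest score and flag anything at or above an agreed threshold for discussion. The evaluator names, scores, and threshold are assumptions for illustration, not prescribed values.

```python
# Illustrative sketch: flagging divergence between independent 1-5 scores
# so the review discussion focuses where evaluators actually disagree.
# The threshold of 2 points is an assumption; agree your own upfront.

independent_scores = {
    "Strategic understanding": {"Evaluator A": 4, "Evaluator B": 2, "Evaluator C": 4},
    "Approach and methodology": {"Evaluator A": 3, "Evaluator B": 3, "Evaluator C": 4},
}

DIVERGENCE_THRESHOLD = 2  # flag when scores differ by 2 or more points

flagged = [
    criterion
    for criterion, by_evaluator in independent_scores.items()
    if max(by_evaluator.values()) - min(by_evaluator.values()) >= DIVERGENCE_THRESHOLD
]
print(flagged)  # → ['Strategic understanding']
```

The flagged criteria become the agenda for the structured review, with each evaluator walking through the evidence behind their score.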

Scorecard criteria (adaptable across marketing agency types)

There isn’t a ‘one size fits all’ scorecard.

Criteria and weightings should change based on:

  • marketing agency type (creative, media, performance, influencer, social, automation)
  • sector risk and compliance requirements
  • desired outcomes (brand shift, acquisition growth, capability building, efficiency)
  • and your internal team maturity

That said, many marketing agency selection scorecards include a mix of the criteria groups below. You can adjust, combine, or expand them as needed.


1) Strategic understanding

Do they understand the problem, audience, and business context?
Do they frame the work in a way that makes sense?

Evidence to look for:

  • insight quality
  • prioritisation and trade-offs
  • ability to connect marketing activity to outcomes

2) Approach and methodology

How would they run the work?
Is the approach practical and repeatable?

Evidence to look for:

  • process and planning discipline
  • workflow and feedback loops
  • how they handle iteration
  • how they manage complexity

3) Capability and delivery strength

Can they deliver what you need with the team proposed?

Evidence to look for:

  • relevant case studies
  • delivery maturity
  • quality control
  • ability to scale or flex

4) Team quality and resourcing

Who is on the day-to-day team, and how stable is it?

Evidence to look for:

  • team structure and seniority
  • clarity on roles and ownership
  • turnover risk
  • whether the proposed team feels real

5) Measurement and effectiveness

Do they have a credible approach to measurement, learning, and improvement?

Evidence to look for:

  • measurement approach
  • reporting discipline
  • test-and-learn habits
  • evidence of improvement over time

6) Collaboration and working fit

How will they work with your stakeholders, constraints, and approvals?

Evidence to look for:

  • communication habits
  • stakeholder management
  • approach to feedback and iteration
  • ability to operate inside your environment

7) Commercials and value

Is pricing transparent and aligned with scope and value?

Evidence to look for:

  • pricing structure
  • scope boundaries
  • incentives
  • change control approach

8) Risk and governance

Do they have mature ways of managing risk?

Evidence to look for:

  • compliance readiness
  • data handling approach
  • brand safety
  • escalation and decision processes
  • documentation and governance discipline

Weighting (there isn’t a default)

There isn’t a universal weighting model.

Weightings should shift based on:

  • what you’re hiring for
  • what success means for this appointment
  • what risks matter most
  • and what the business needs right now

The purpose of weighting isn’t precision.
It’s alignment.

A simple method that works well:

  1. Each evaluator allocates 100 points across the criteria
  2. Compare allocations
  3. Discuss the top 2 differences
  4. Agree final weightings you can defend

This surfaces priorities quickly and avoids circular debates.
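The four-step method above can be sketched as a small script: each evaluator's 100-point allocation is checked, then the two criteria with the widest spread between evaluators are surfaced for discussion. The criterion names and point values below are illustrative assumptions, not a recommended weighting.

```python
# A minimal sketch of the weighting-alignment method described above.
# All criterion names and allocations are illustrative.

CRITERIA = [
    "Strategic understanding",
    "Approach and methodology",
    "Capability and delivery",
    "Commercials and value",
]

# Step 1: each evaluator allocates 100 points across the criteria.
allocations = {
    "Evaluator A": {"Strategic understanding": 40, "Approach and methodology": 25,
                    "Capability and delivery": 20, "Commercials and value": 15},
    "Evaluator B": {"Strategic understanding": 20, "Approach and methodology": 30,
                    "Capability and delivery": 30, "Commercials and value": 20},
}

# Sanity check: every allocation must sum to exactly 100.
for name, points in allocations.items():
    assert sum(points.values()) == 100, f"{name} did not allocate exactly 100 points"

# Steps 2-3: compare allocations and surface the two criteria with the
# largest disagreement, which become the discussion agenda.
def spread(criterion):
    values = [points[criterion] for points in allocations.values()]
    return max(values) - min(values)

to_discuss = sorted(CRITERIA, key=spread, reverse=True)[:2]
print(to_discuss)  # → ['Strategic understanding', 'Capability and delivery']
```

Step 4 stays a human conversation: the group agrees final weightings it can defend, informed by where the allocations diverged most.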


How priorities shift by marketing agency type (examples)

These examples aren’t rules.
They’re prompts to help you tune weightings to the appointment.

Creative / brand agencies

You may prioritise:

  • strategic understanding
  • creative judgement and craft
  • team seniority
  • brand protection and risk

Performance media agencies

You may prioritise:

  • measurement and learning
  • optimisation capability
  • reporting discipline
  • delivery rigour

Influencer agencies

You may prioritise:

  • brand safety and governance
  • workflow and approvals
  • rights and usage
  • measurement and partner transparency

Martech agencies

You may prioritise:

  • delivery methodology
  • security and compliance
  • documentation and handover
  • capability building vs dependency

Content agencies

You may prioritise:

  • workflow reliability
  • quality control
  • planning and production rhythm
  • adaptability at scale

The criteria set can stay stable.
The weightings and evidence should reflect what you’re actually trying to achieve.


Scoring scale

Use a 1–5 scoring scale:

  • 1 = Not demonstrated / high risk
  • 2 = Weak
  • 3 = Meets expectations
  • 4 = Strong
  • 5 = Exceptional

Capture evidence for every score.
That’s what turns scoring into governance.
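Once weightings and 1–5 scores exist, the arithmetic is a simple weighted sum per agency. The sketch below assumes illustrative criterion names, weightings, and scores; in practice each score would carry its evidence note alongside.

```python
# Illustrative sketch: combining agreed weightings with 1-5 scores into a
# weighted total per agency. All names and numbers are assumptions.

weightings = {"Strategy": 0.30, "Delivery": 0.25, "Team": 0.20, "Commercials": 0.25}

# Independent scores on the 1-5 scale (evidence captured separately per score).
scores = {
    "Agency X": {"Strategy": 4, "Delivery": 3, "Team": 5, "Commercials": 3},
    "Agency Y": {"Strategy": 3, "Delivery": 4, "Team": 3, "Commercials": 4},
}

# Weightings must total 100%.
assert abs(sum(weightings.values()) - 1.0) < 1e-9

def weighted_total(agency_scores):
    """Sum each criterion score multiplied by its agreed weighting."""
    return sum(weightings[c] * s for c, s in agency_scores.items())

for agency, agency_scores in scores.items():
    print(agency, round(weighted_total(agency_scores), 2))
```

The weighted totals support the decision discussion; they shouldn't replace it. Divergence review and evidence still come first.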


Evaluators, scoring rights, and decision authority

Scorecards fail when the right people aren’t involved early.

Nobody wants a senior sponsor turning up on pitch day without context and overriding weeks of work, usually because the earlier evidence wasn’t captured effectively or seen by the right people.

To avoid that, define who is evaluating, who is scoring, and who is accountable before you begin.

1) Align evaluators on criteria early

Everyone who will score should be consulted on the criteria and weighting upfront.
If evaluators don’t agree what “good” looks like, scoring becomes a proxy for personal preference.

2) Evaluators should be involved throughout

People can’t score fairly if they only join at the final stage.
If someone needs a vote, they need context, and that usually means involvement across the key stages of the process.

3) Being on the pitch team doesn’t automatically mean you score

Some roles are consultative.
You might want input without a formal vote - for example:

  • subject matter experts
  • legal / compliance
  • brand governance
  • channel specialists

Their role can be to advise, challenge, and add evidence without assigning a final score.

4) Not all scores need equal weight

Some organisations treat all evaluators equally.
Many don’t, and there are good reasons for that.

For example:

  • a senior sponsor may carry greater decision authority
  • an operational lead may carry greater delivery accountability
  • other stakeholders may contribute input without being the final decider

A useful approach is to define scoring influence upfront, rather than debating it after the pitch.

5) Procurement’s role can vary

Procurement often supports governance and fairness, but doesn’t always score.

Common approaches include:

  • Procurement abstains from scoring to manage the process neutrally
  • Procurement scores only commercials and risk
  • Procurement contributes input without a score
  • Procurement participates fully, depending on how your organisation operates

What matters is that the role is explicit.

6) Document scoring rules before pitch sessions

Before pitches begin, agree:

  • who scores and who consults
  • whether any roles have weighted influence
  • how commercial and risk scoring is handled
  • how divergence will be resolved
  • who is accountable for the final decision

This prevents late-stage surprises and protects the integrity of the process.


Governance and auditability

Marketing agency appointments often need to hold up to scrutiny.

A defensible decision comes from the right evaluators, involved early, applying agreed criteria consistently.

A scorecard supports this by creating:

  • consistent evaluation criteria
  • evidence linked to scores
  • stakeholder scoring records
  • and a decision trail that can be explained

Procurement can support governance without dictating the outcome, by helping teams agree criteria, manage documentation, and ensure the process stays consistent.


FAQs - Agency evaluation scorecard

What is an agency evaluation scorecard?

A structured way to evaluate agencies against agreed criteria, with scoring and evidence captured alongside each score.

What criteria should we include?

Use criteria that match your agency type, sector, and outcomes. Many marketing appointments cover strategy, delivery approach, team, measurement, working fit, commercials, and governance - but weightings should shift.

How many criteria should we use?

Most scorecards work well with 5–10 criteria. Too many make scoring harder and less meaningful.

Should we share the scorecard with agencies?

Share the criteria and weighting early. You don’t need to share scores, but transparency about what matters improves responses and fairness.

Who should be allowed to score agencies?

Only evaluators who have been involved throughout and are qualified to judge the work should score. Some roles may be consultative only, and some scores may carry more decision weight depending on how your organisation operates.

How do we reduce bias when scoring?

Score independently, capture evidence, and review divergence in a structured discussion. Avoid scoring based on confidence or rapport alone.

How should procurement use a scorecard?

A scorecard supports consistent evaluation and produces a decision trail. Procurement can help structure criteria and documentation without deciding the outcome.