The 5-point ROI calculator



If you’re anything like me, you’ll have been paying lip service to the concept of ROI for years. I’m generally sceptical of the validity of most ROI calculations, but don’t have a credible argument for why we shouldn’t attempt them. I generally mumble something about ‘qualitative’ data and look a bit sheepish.

So in yesterday’s post on ten things to look for in a digital channel as part of a 21st century ‘beating the bounds’ visit, point 10 was about attempting a simple ROI assessment for the channel, to see whether it is worthy of further scrutiny with a view to making it more productive for the organisation. Here’s a quick suggestion for how you might attempt that.

Take three dimensions:

  1. Significance: how important is this channel to the organisation? This might relate to whether people would reasonably question its absence (e.g. a corporate website), its role in delivering important goals for the organisation, or its importance to senior management (but don’t over-egg the last one if it’s just a vanity channel).
  2. Resource: how much time and financial resource do we put – or should we be putting – into maintaining this? Just a sense of effort/cost, no hard numbers needed.
  3. Value: what does it provide us with in terms of helping to meet the goals we’ve set for it? This might be a cash saving, it could be a sustained increase in useful feedback received to a consultation, or might be the enthusiasm from colleagues for the insights they get from it – be open-minded.

For each of those dimensions, give the channel a High, Medium or Low score – it’s more important to complete the exercise than to generate numbers. Be honest, be decisive.

Then apply this matrix:

| Measure | High | Medium | Low |
| --- | --- | --- | --- |
| Significance | 1 point | 2 points | 3 points |
| Resource | 3 points | 2 points | 1 point |
| Value | 1 point | 2 points | 3 points |

If anything scores 5 points or more, put it on a watch list of channels to be made more useful or less resource-intensive to maintain. 7 points or more? Put it top of the list. Anything scoring 3 points? Make a note of it for the business case next time you’re asked to demonstrate your team’s efficiency.
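For anyone who prefers a script to a spreadsheet, here’s a minimal sketch of the scoring logic in Python. The function and dictionary names are my own invention, not from the original matrix:

```python
# Points per High/Medium/Low rating for each dimension.
# Significance and Value score inversely (High = 1 point, Low = 3),
# while Resource scores directly (High = 3 points, Low = 1).
SCORES = {
    "significance": {"high": 1, "medium": 2, "low": 3},
    "resource":     {"high": 3, "medium": 2, "low": 1},
    "value":        {"high": 1, "medium": 2, "low": 3},
}

def roi_score(significance: str, resource: str, value: str) -> int:
    """Total points for a channel from its three H/M/L ratings."""
    return (SCORES["significance"][significance.lower()]
            + SCORES["resource"][resource.lower()]
            + SCORES["value"][value.lower()])

def verdict(points: int) -> str:
    """Map a total score onto the suggested follow-up action."""
    if points >= 7:
        return "top of the watch list"
    if points >= 5:
        return "watch list"
    if points == 3:
        return "note for the business case"
    return "no action"

# A high-significance, high-resource, medium-value channel:
# 1 + 3 + 2 = 6 points, so it goes on the watch list.
print(verdict(roi_score("high", "high", "medium")))
```

The thresholds mirror the rules above: the worst possible channel (low significance, high resource, low value) scores 9 and tops the list, while the ideal channel scores the minimum of 3.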

Finally, here’s a Google Doc version of the matrix, in case that helps (download the sheet to put in your own numbers):


Coming up with a maturity model for digital in the public sector


Hands up who says they work in ‘new media’? Me neither. While we’re not quite in a digital by default world, this stuff has been around for a decade and a half. Even in the public sector.

One topic I’d like to think about at this week’s UKGovcamp on Saturday (the ‘doing’ day) is whether we can come up with a way of thinking about public sector digital activity in terms of a maturity or capability model that could be applied to help teams and individuals set goals and maybe even benchmark their effectiveness. For instance, it might:

  • Help teams to think about how sophisticated the organisation is at adopting and managing social media as part of official communications and day-to-day communication
  • Provide some material for people thinking about their CMS features and procurement, to factor in the kinds of activities and processes those tools should be supporting in 2012
  • Offer insights into team size and structure, what the roles are in managing digital projects effectively (I’m deliberately not saying ‘digital communication’, for now)
  • Give everyone some ready-made benchmarks to help evaluate impact, and if not hard numbers, then at least an open-source process for getting to an assessment of digital effectiveness

I’ve got a small commission – a day’s paid time – from the digital team at the National Audit Office (update: in the end, this was never used) to contribute towards managing the process of collating this, writing it up and sharing it for the benefit of their own team and others. We’d really appreciate input from a wide group on what a maturity model might look like – and indeed, whether it’s the best approach to take.

The idea would be to brainstorm at UKGovcamp, take the ideas away and write them up into a draft structure, get more feedback on them here, and then publish a methodology or framework of some kind under a Creative Commons licence for anyone to use and take forward. Hopefully we’d make it flexible enough to work for anything from a Whitehall department to a district council, and something that anyone who’s reasonably switched-on digitally can deploy without needing to bring in an expensive consultant (or even a reasonably-priced one).

Who’s up for helping with that?

UPDATE: Here are the notes from the UKGovcamp discussion

How should you measure the success of a digital team?

Some things are a numbers game: retail sales, top-flight athletics, fund management. There are standard yardsticks, you can compare the players, and there’s intense competition.

Some things clearly aren’t like that: teaching, poetry, social care, research science, political lobbying maybe. That’s not to say that people don’t try, or that there aren’t measurable factors or reasonable proxies for some of those factors.

So where does digital communication in government sit? Traditionally a marketing communications discipline, it’s been an awkward fit between those from commercial marketing backgrounds, who expect a quantifiable return on investment, and those from information, behavioural psychology or news backgrounds, who don’t, really. Throw in the fact that, done well, it’s a highly innovative field of work with relatively few industry conventions, and you’ve got a real challenge for evaluating success.


Stephen Hale is in characteristically thoughtful and practical mode over on his new work blog, on the subject of how to evaluate how successful his team has been in its stated aim of becoming the most effective digital communication operation in government. Knowing that no measures are perfect but you have to gather some data in order to have an objective evaluation, his team have identified 13 measures to help them monitor their progress against that goal, and he’s blogged about them with impressive openness.

I’ve always struggled to find ways to articulate the goals for the teams I’ve been part of, and universally failed to define suitable KPIs. But Stephen’s post motivated me to try, and I think that’s partly because I’m not entirely comfortable with the conclusions he reached. What follows from me here, therefore, is a mixture of half-formed ideas and rank hypocrisy, as all good blog posts are.

Stephen’s indicators are as follows:

1. Comparison to peers. KPI: Mentions in government blogs

2. Digital hero. KPI: Sentiment of Twitter references for our digital engagement lead

3. Efficiency. KPI: Percentage reduction of cost-per-visit in the 2011 report on cost, quality and usage

4. Types of digital content. KPI: Number of relevant results for “Department of Health” and “blogs” in first page of search

5. Audience engagement. KPI: Volume of referrals to

6. Platform. KPI: Invitations to talk at conferences about our web platform

7. Social media engagement. KPI: Volume of retweets/mentions for our main Twitter channel

8. Personal development. KPI: Number of people in the digital communication team with “a broad range of digital communication skills” on their CV.

9. Internal campaign. KPI: Positive answer to the question: “Do you understand the role of the Digital communications team?”

10. Staff engagement. KPI: Referrals to homepage features (corporate messages) on the staff engagement channel

11. Solving policy problems. KPI: Number of completed case studies showing how digital communication has solved policy problems

12. News and press. KPI: Number of examples of press officers including digital communication in media handling notes

13. Strategic campaigns. KPI: Sentiment of comments about our priority campaign on target websites

Clearly, there’s more thinking behind this than just these KPIs, and I don’t want to unfairly characterise this pretty decent list as a straw man or pick on the individual items. So I tried to frame this instead by asking a more basic question: what actually makes a government digital team effective? For me, I think there are three key aspects:


  • how wide is the reach of the team’s work?
  • does it accurately engage the intended audiences?
  • what kind of change or action results from its work?


  • how efficiently are goals achieved, in terms of staff time and budget?
  • how skilled and motivated is the team?
  • how successful is the team at maintaining quality and its compliance obligations?


  • how satisfied are the target audiences with the usefulness of the team’s work?
  • how satisfied are internal clients with the contribution the team’s work makes to their own?
  • what reputation does the team have with external stakeholders and peers?

In terms of coverage, I don’t think there’s a great deal of difference between my list and Stephen’s. But the key challenge with my list is that I’d struggle to define meaningful KPIs for many of those criteria. To me, that’s a reason to find more valid ways of evaluating performance than KPIs, rather than to use the KPIs that are readily measurable.

In fact, I think I’d go as far as to argue that given the kind of innovative knowledge work a government digital team does, probably the majority of its approach to evaluation should be qualitative, getting the participants and customers to reflect on how things went and how they might be improved – and resist the pressure to generate numbers. Partly, I think that’s because those numbers often lack relevance, and are weak proxies at best for the often complex goals and audiences involved. Often, they are beyond the control of the team to influence. But more importantly, even as part of a really well balanced scorecard approach, they distort effort and incentives by providing intermediary goals which aren’t directly aligned with the real purpose of the team. There’s a great discussion of this in DeMarco and Lister’s Peopleware – a real classic in how to manage technology teams.

One such qualitative review process I tried (somewhat half-heartedly) to introduce in one team I worked in was the idea of post-project reviews centred around a meeting to discuss three questions, looking not only at the outcomes of the project, but also at how we felt about the process and what we could learn from it:

Reviewing performance

How do you gather this feedback? Well, collecting emails is one way. A simple, short internal client feedback form sent after big projects and to regular contacts is another. Review meetings focussed on projects are pretty important. And asking for freeform feedback from customers whether via comment forms, ratings, emails or Twitter replies is pretty vital too. By all means monitor the analytics and other quantitative indicators, but use them primarily as the basis for reflection and ideas for improvement. Then make the case aggressively to managers that it’s on the basis of improvement or value added that the team should really be judged.

Oddly, I think there is an exception to this qualitative approach, and that’s in quite a disciplined approach to measuring productivity. The civil service isn’t generally great at performance management (and many corporate organisations aren’t, to be fair). But in the current climate, being able to measure and demonstrate improvement in the efficiency of your team is really important. I’m in the odd minority who believe in the virtue of timesheets, as a way of tracking how clients and bureaucracy use up time, rather than as a way of incentivising excessive hours. If adding a consultation to your CMS takes a day, or publishing a new corporate tweet involves three people and an hour per tweet to draft and upload, it’s important to know that and tackle the underlying technological and process causes.

But productivity as I think about it is also about a happy, motivated team working at the edge of their capabilities, as part of a positive and supportive network. Keeping an eye on that is really about good day-to-day management rather than numbers.

Three cheers for Stephen and his team for demonstrating the scope of their work and identifying measurable aspects of their performance against it. I’m on uncertain ground here: I’m not so naïve as to think that team performance in some organisations (not necessarily Stephen’s) isn’t often assessed on numbers, and without demonstrable KPIs, teams can be vulnerable.

But in designing yardsticks, let’s not underestimate the value of qualitative data and reflection in making valid assessments of success and actually improving the way government does digital.

The year of living (slightly) dangerously

Kingsgate House

(Image: Google Street View, DIUS Kingsgate House, London)

This week marked a year since I joined DIUS as the first permanent member of staff working exclusively on social media, and roughly a year or so since Justin’s pioneering social media strategy started to take shape.

It’s been a fantastic year. From being a ‘Team Leader’ of a one-person team, having merged with another team and picked up some great talents along the way, we’re now briefly a team of eight (give or take). Growth isn’t everything, but it means we can do more interesting things, more quickly, across a wider swathe of policy areas, and is hopefully a good sign.

Some of the highlights of this last year for me:

Exploring what we can do with consultation: The work Michelle did on the Innovation Nation white paper supported by a Commentpress site taught us a lot about the potential for niche engagement, and we’re taking the learnings from the ups and downs of Science and Society and the HE Debate into future projects which challenge the old slap-a-PDF-on-the-website, 12-week approach.

Maintaining a JFDI attitude: I’m proud that we’ve overcome the treacle of well-meaning bureaucracy and delivered quite so many projects – some relatively successful, others undoubtedly flops – whilst remaining on good terms with IT, finance, comms and policy. We’ve taken measured risks… and the sky didn’t fall on our heads. Yet. Best of all, we’ve given courage to a few others to do some of the same, only better.

Taking the broad view of engagement: Though based in Comms as most digital teams are, we’ve consistently argued that digital engagement has a wider role, from customer insight and consultation through to marketing and press – the Mature Students project in partnership with The Student Room is perhaps the most lovely illustration.

Open sourcing our stuff: A key plank of Justin’s strategy was open innovation through sharing of our tools and experiences – I’m pleased with just what shameless cross-government networkers we’ve become, and the open sourcing of Commentariat and Bookmarklist which seem to be helping others already.

The picture isn’t entirely rosy, of course. Some days, I feel like we’ve done little more than waste time, money or – even worse – opportunity. We certainly haven’t embedded digital engagement in everyday thinking yet. When push comes to shove, many apparent enthusiasts are still sceptics at heart. We still haven’t nailed some of the basics like evaluation, the business case or routinely procuring the right kind of suppliers (with some honourable exceptions, of course). And we’re still very much feeling our way as a combined online/offline engagement team. Three months into 2009, we still need to work harder to support pioneers within the organisation to stand any chance of scaling up the impact of our work.

As I’ve posted over on Emma’s blog, the lessons of the last year have taught us:

  1. Interactive websites need interactive organisations. Don’t embark on digital engagement projects without recognition from all involved that they need to actively engage with feedback – and then do something with the outputs.
  2. Focus on the content, not the platform. Don’t get too hung up on the tool, or even online as a whole. People engage with issues, so try and bring those to life and don’t let the medium become the message.
  3. Find and support the pioneers and champions. There is enormous latent enthusiasm and goodwill towards digital engagement within big organisations – find these people, get them the permission they need, and support them to do digital engagement for themselves. (Though self-evident, I’ve found this one tough to put into practice.)
  4. Be honest about scope and boundaries. Find out up front what is up for discussion, and what’s been decided. You’ll defuse arguments and minimise hostility if you’re open about identity, remit and agendas.
  5. Protect information that needs to be protected. Manage the risks of digital engagement – not just in terms of reputation, but in terms of how the tools are used, data storage and archiving.
  6. Integrate with other partners and channels. Combine things: be nervous if a project is based on a single platform or organisation. Build it and they won’t necessarily come. Be smart about your online PR.
  7. Make it enjoyable and interesting for your different audiences. Policy discussions work at different levels: facilitate a credible, interesting discussion for the experts, but also something more accessible and – dammit – fun for public/younger groups. And we’re generally not the best people to decide what constitutes ‘fun’.
  8. Enable remixing & co-design: ask who can help us do this. Providing open data lets other people do what we can’t yet imagine, or say things with a frankness we simply can’t ourselves.
  9. Enhance progressively: build from inclusive and accessible base of information. ‘Accessibility’ isn’t a tickbox, and it isn’t pass/fail either. Choose social media platforms wisely but pragmatically, on the basis of publishing core information which is multimodal, customisable and platform-neutral.
  10. Evaluate intelligently and share openly. Write down what you’re trying to achieve, work out if you achieved it, and tell people what you learned.

Thanks to everyone who has helped us on our way so far: you know who you are. Here’s to Year Two.