
DUMB performance measures are better than SMART ones

December 17, 2013

The SMART acronym is one of the most popular in the field of organisational objective setting and performance measures. My preferred list of the SMART criteria is:
• Specific: measures one thing at a time – focused.
• Measurable: can be interpreted as either a specific number that can be measured, or at least a verbal description that is testable.
• Achievable/attainable: they should be achievable, but with the implication that they are not too easily achieved. Setting the level of an indicator is a difficult issue. For example, just because an indicator target has been achieved easily in the past, should the target be raised? If so, could that act as a disincentive to exceeding targets?
• Relevant: it should be significant and contribute materially to the overall objectives of the organisation.
• Time-bound: the period over which the performance will be achieved needs to be specified.

There are, however, a number of problems with SMART. The first is ambiguity: the Wikipedia entry lists between three and thirteen meanings for each of the letters in the SMART acronym. For example, the ‘S’ could stand for significant, stretching, simple or sustainable instead of specific.

I also think there are more problems and limitations, as per the list below:

  • Specific: Specific to what? If it means “let’s only measure one thing at a time”, that is not very helpful. If it means that the goal should be specified, that is a sensible requirement, but it is not quite the same as defining a performance measure.
  • Measurable: A good criterion.
  • Achievable: This only refers to targets, not to the measures themselves. It is also a highly judgemental characteristic.
  • Relevant: Yes, but there is a potential overlap with Specific.
  • Time-bound: Yes, but specifying the period over which something should be done is only one element of being well-defined.

One reason for the problems in using SMART criteria for performance measures is that they were not designed for that purpose. They make more sense for personnel evaluation, where, for example, targets for individual performance do need to be specific to that person and to be achieved in a set time frame. They might also be useful for setting objectives for organisations or organisational groups. But organisational performance measures are better if they are DUMB:


  • Defined: The measures should be unambiguously defined so that it is apparent what data is required, and how that data will be used in calculating the measure.
  • Useful and unbiased: Measures are more useful if they are closely correlated with the organisational objective being measured. Being so correlated helps to guard against bias, distortion and gaming of performance measures.
    There is also a time and cost aspect to usefulness. It should be practicable to produce the measure frequently enough to track progress, and quickly enough for it to still be useful. Ideally, collecting and reporting the measure should require little effort beyond what management would have needed to do anyway, so that the performance measure is economical to collect.
  • Measurable: The measure should be independently verifiable and able to be reproduced accurately by a second assessor.
    Normally this means a numerical measure, and one that is based on a solid chain of evidence. However, a sound rubric for non-numerical assessment is often sufficient to establish measurability.
  • Balanced and benchmarked: Taken as a whole, the set of measures should cover all key elements of performance over the organisation or activity being assessed. One way of assessing completeness is to consider whether the set of performance measures can reasonably assess how the program logic – the connection from resources to activities to outputs to outcomes – is working. A complete or balanced set of performance measures also helps to prevent gaming of individual measures.
    The ability to benchmark performance using performance measures is also desirable. For this reason, if there are industry standard measures, it makes sense to use them. When a measure is revised, it becomes harder to compare it with previous periods.

Can DUMB be improved on? The acronym needs a bit of work – I have doubled up the load on ‘U’ and ‘B’ – and any suggestions for improvement are welcome. And there is some overlap in that a performance measure can’t be measurable unless it is defined. But I still think that, for organisational performance measures, it is better to be DUMB than SMART.

Graham.smith@numericaladvantage.com.au
© Numerical Advantage 2013
http://www.numericaladvantage.com.au
