
GPS vs PMS

Tony Fujs wrote a great blog post last year on how performance measurement systems should aspire to be more like a GPS navigation device. He compared the real-time and painless data collection and presentation inherent in a GPS with the expensive, intrusive and slow processes typical of most evaluations. Moreover, a GPS’s user interface has been well designed, so the advice is clear and (when voice-enabled) hard to ignore, whereas an evaluation report may be a hard-to-read and easily discarded volume of paper, produced long after the intervention has had its impact.


Fujs drew the lessons that performance measurement systems should strive for automatic data collection and processing, and that results should be useful, understandable and hard to ignore. While recognising the importance of a culture of data, he pointed out that ‘drivers adopted the GPS because it is useful and easy to use, not because they developed a culture of data. Building useful, simple, and intuitive performance measurement systems can also be a powerful and sustainable strategy to generate buy-in.’

However, we need to recognise several differences between navigating the terrain of organisational performance and navigating roads. These can be separated into the issues of problem definition and of problem solution.

Problem definition

It is easy to define the problem for a GPS – simply state the selected destination, and then allow the machine to calculate the fastest route. For organisational problem solving, the aim is rarely so clear. Objectives are often multiple, contested and ambiguous. In addition, the purpose of the evaluation may not even be to recommend a specific solution, but just to map out the terrain – to provide information on the merit, worth and significance of the program, to use Scriven’s definition of evaluation. This is analogous to the ‘map’ mode of a GPS (as opposed to the ‘route’ mode): the device simply presents the map, and the driver uses it to decide where to go. In this mode, it is worth noting, the GPS gives no instructions on which way to go.

Problem solution

I never cease to be amazed by modern technology, including that of a GPS: from the detailed data on all the roads in the nation, entered into a database somewhere, to the remarkable technology of orbiting satellites that can provide a precise location from a distance of many thousands of kilometres. On top of that, the computational power to assess all feasible routes and present a recommendation, residing in a small box in the car, is probably greater than that of a mainframe computer of a few decades ago. This billion-dollar investment in supporting technology – satellites, computational hardware and software – is only economically feasible because there are many millions of GPS devices all performing comparable tasks to share the costs. Most evaluations, unfortunately, are bespoke tasks that cannot share the computational load with others. The resources are simply not available to provide such a comprehensive database and powerful analytical engine for such evaluations.
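As an aside on that analytical engine: route planning is, at heart, a shortest-path problem over a weighted graph of roads. Here is a minimal sketch in Python of Dijkstra’s algorithm on a toy road network – the place names and travel times are invented for illustration, and a real device runs far more elaborate variants over millions of road segments:

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's shortest-path search.
    graph maps each node to a list of (neighbour, minutes) pairs.
    Returns (total_minutes, route), or (inf, []) if no route exists."""
    queue = [(0, start, [start])]  # (elapsed minutes, node, route so far)
    visited = set()
    while queue:
        minutes, node, route = heapq.heappop(queue)
        if node == goal:
            return minutes, route
        if node in visited:
            continue
        visited.add(node)
        for neighbour, cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (minutes + cost, neighbour, route + [neighbour]))
    return float('inf'), []

# Toy road network; travel times in minutes (invented figures).
roads = {
    'A': [('B', 5), ('C', 10)],
    'B': [('C', 3), ('D', 9)],
    'C': [('D', 4)],
    'D': [],
}
print(fastest_route(roads, 'A', 'D'))  # (12, ['A', 'B', 'C', 'D'])
```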

The nature of organisational problems is often much harder than finding the fastest route from A to B. For a GPS, there may be traffic and roadworks, but for organisations there are people, politics, and forces that are sometimes actively trying to stop you getting where you want to go. In these circumstances, mapping a route can be a complex problem that defies a simple solution.

Getting to the destination

But the lessons Fujs teaches us are nevertheless important. We may not have access to the equivalent of a detailed electronic map, but there are many sources of free or easily obtained data that may be relevant. We may not have access to a suite of satellites in the sky, but perhaps we can talk to citizens in the street. And we certainly need to simplify evaluation reports and make them more timely. Perhaps, in some ideal future, decision-makers will turn to evaluation as frequently as drivers turn to their GPS; but let’s hope they also continue to keep their eyes on the road!


Measuring efficiency is sometimes ineffective

Striving towards greater efficiency (and hence needing to measure it) sounds like a desirable and attainable objective. Everyone has an innate understanding of what efficiency looks like; we are satisfied when we achieve the efficient completion of a task, and appreciate it when we observe it in others. However, it is difficult to measure efficiency precisely, especially in the public sector, and even more difficult to know that optimal efficiency has been achieved.

So what is efficiency? It is normally defined as the ratio of output to input, seen most easily in physical systems such as engines. An engine uses a measurable amount of chemical energy in the form of fuel and produces a measurable amount of kinetic energy. The efficiency of the engine is simply the usable output energy (disregarding waste energy, i.e. heat) divided by the input energy. An engine that converts 100 megajoules of fuel energy into 35 megajoules of kinetic energy, for example, has an efficiency of 35%.

Many organisations use the term ‘efficiency’ more loosely. A worker or a team might be called efficient if they produce few errors (so there is little need for re-work) and complete tasks in a time that seems reasonable.

Efficiency is a key concept in economics – or rather a set of concepts, encompassing allocative efficiency, technical efficiency and production efficiency. Production efficiency occurs when an economy cannot produce additional amounts of a good or service without reducing the amount of another good or service. This does not specify the amounts of each individual product. To do this, one needs to consider allocative efficiency, which occurs when there is an optimal distribution of goods and services, taking into account consumers’ preferences. The closest economic term to the everyday concept is technical efficiency, which occurs when a firm is producing the maximum output from a given set of inputs.

In the private sector, there is a strong incentive for a firm to be as efficient as possible (in the technical sense) to avoid being outcompeted by one that is more efficient. But in the public sector, what does efficiency mean? The nature of the outputs is contested, there is usually no competitive pressure and, perhaps most important of all, there is little incentive to achieve high efficiency, partly because it is so difficult to measure.

Often the nearest we get is the related concept of cost-effectiveness. In this analysis, two or more alternative courses of action are compared on their effectiveness (the extent to which an objective is realised) relative to the overall resources used, i.e. their cost.
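As a toy illustration of the arithmetic – the programs, costs and client numbers below are entirely invented – cost-effectiveness analysis amounts to dividing cost by units of effectiveness and comparing across the alternatives:

```python
# Invented figures: two hypothetical programs pursuing the same objective.
programs = {
    'Program A': {'cost': 500_000, 'clients_assisted': 1_250},
    'Program B': {'cost': 300_000, 'clients_assisted': 600},
}

# Cost per unit of effectiveness: lower is better.
for name, p in programs.items():
    ratio = p['cost'] / p['clients_assisted']
    print(f"{name}: ${ratio:,.0f} per client assisted")

# Program A: $400 per client assisted
# Program B: $500 per client assisted
```

Note that this sidesteps measuring efficiency directly: all we need is a defensible effectiveness measure and the cost of each alternative.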

Yet again, we see that what appears to be a sensible approach founders on the rock of measurement difficulty. And when efficiency cannot be reliably measured, it is hard to provide incentives for its achievement, and so difficult for the public service to reach the level of efficiency achieved in the private sector.

So what do we do? First, have a realistic view of what can be measured, not forgetting that sometimes it is possible to measure efficiency. Where it is not, recognise that effectiveness is often equally important, and is usually easier to measure.

© Numerical Advantage 2016
http://www.numericaladvantage.com.au

In Praise of Process

Administrators, especially those in the public sector, are often criticised for an undue focus on process. They are accused of mindlessly following some arcane book of pettifogging rules, rather than focusing on achieving results, helping people and improving the bottom line. But this is often because those doing the criticising are (justifiably) held up from what they want by these rules. The most extreme example I saw of this was a grant applicant who stated that ‘the Minister should show some leadership by funding our application.’

There are several things wrong with the privileging of outcomes over process. First, it is usually (not always) the case that the processes have been carefully considered over the years and adjusted when they are seen not to be working. Sometimes that is not the case, but at least in administrations I have been exposed to, there has been an acceptance that poor process needs to be fixed – even if it sometimes takes a while to do so!

Another problem with an outcome focus is that outcomes are measured so rarely, while process is all about us. Outcomes sometimes take years to eventuate, and are influenced by a host of environmental factors that are outside our control – or, to put it another way, are partly the result of plain luck. A prime example of the latter is the economy: an individual organisation has very little influence on economic conditions, yet these will be a significant driver of sales, profits, income and many other key performance indicators. And sometimes the desired outcome, such as an educated adult or successful environmental remediation, will occur a decade or more after the work has been done by a primary school teacher or an environmental lobby group. In these circumstances, we need a guide, and that guide often takes the form of good process.

Considering process as well as the bottom line is, of course, one of the features of the balanced scorecard approach. And quality assurance reviews are almost entirely assessments of adherence to process.

Even in the shorter-term and simpler environment of a sporting contest, process can be prized over results. How often do you hear coaches making comments such as ‘We only need to keep doing the right things and results will come’? Accordingly, they measure large volumes of data about what the players have done: in football, for example, tackles made and missed, yards gained and accuracy of kicks, among many others. In other words, process measures rather than the actual results of points scored and matches won.

This brings us back to my main interest, that of performance measurement. Because outcomes are often distant, contested and hard to attribute, they are often not good bases on which to measure performance. (I accept that if outcomes can be reliably measured, this is ideal). Normally, process measurement is easier. If processes are logically linked, by a combination of experience and theory, to the likelihood of achieving an outcome, then focusing on doing things right and measuring the extent to which they are done right will help to achieve the desired outcome.
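To make that concrete, here is a minimal, hypothetical sketch – the step names and cases are invented – of a process-adherence indicator: score each case against the steps it was supposed to follow, rather than waiting years for the outcome:

```python
# Invented process steps for an imaginary case-handling workflow.
REQUIRED_STEPS = {'intake_logged', 'eligibility_checked', 'decision_recorded'}

def process_adherence(cases):
    """Average proportion of required steps completed across cases."""
    scores = [len(REQUIRED_STEPS & case['steps_done']) / len(REQUIRED_STEPS)
              for case in cases]
    return sum(scores) / len(scores)

cases = [
    {'id': 1, 'steps_done': {'intake_logged', 'eligibility_checked',
                             'decision_recorded'}},
    {'id': 2, 'steps_done': {'intake_logged', 'decision_recorded'}},
]
print(f"Process adherence: {process_adherence(cases):.0%}")  # 83%
```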

© Numerical Advantage 2016
http://www.numericaladvantage.com.au

Risk and performance – two sides of the one coin

Managing risk and performance have always been key elements of governance. In Australia, both are now being given enhanced importance through the Public Governance, Performance and Accountability Act. Although risk management and performance management are normally considered quite distinct, there are some key similarities, and comparison can throw light on both disciplines.

The conventional meaning of risk indicates issues or factors to avoid. Performance management (which in this context means identifying performance indicators and then managing to achieve success as defined by those indicators) is based on setting targets to achieve. These may be considered as two faces of the same coin. They are both concerned with management principles designed to assess and then improve performance.

| Risk management | Performance management |
| --- | --- |
| A structured approach is undertaken to identify adverse events, and means of dealing with these so as to achieve business success. This includes: | A structured approach is undertaken to identify ways of measuring and achieving business success. This includes: |
| Setting the context | Defining the overall business goals and vision |
| Identifying risks | Identifying specific short-term targets |
| Assessing the likelihood and criticality of each risk | Selecting a feasible number of targets that, together, will span the business activity and therefore represent overall performance |
| Overall assessment and ranking of risks | Determining, where possible, the level of achievement against those targets to aim for |
| Selection of treatments to apply to risks | Allocate responsibility for achieving the targets, or at least monitoring performance |
| Allocation of management controls to ensure treatments are applied | Assess performance against targets regularly, and adjust management policies where appropriate |
| Continuous review of risks to delete irrelevant ones and consider new risks | Reconsider regularly the relevance and appropriateness of the performance indicators |

While there is not a strict one-to-one correspondence between the two activities, there are nevertheless considerable similarities. Indeed, one may consider risk management and performance management to be negative and positive aspects, the valleys and the peaks, of the same performance measurement issue.

Some other attributes that risk management and performance management share are listed below:

  • The desire for quantification, allied to the difficulty of doing so
  • The need to integrate with general management approaches
  • The problem of identifying intermediate and final outcomes
  • The problem of setting levels to achieve or avoid

Risk management is better structured to assess the likelihoods and consequences of outcomes, while performance measurement is better at setting target levels and considering the chain of causality that leads to the achievement of performance. Both facets of understanding are relevant to both forms of management information and control, and there are therefore opportunities for the disciplines of risk management and performance measurement to learn from each other.

Whether risk management and performance measurement can be fully integrated is more problematic. One approach is to take seriously the definition of risk in ISO 31000, which has risk management covering not only adverse events but also opportunities. The outcomes and goals to which performance management is directed can then be treated as ‘opportunities’ in a risk management context. For these events, we can use a risk management structure to define the probabilities and consequences of not achieving them.

Alternatively, the risk levels can all be set out in terms of performance targets. For example, there may be a lateness risk, defined as a program being delivered more than seven days late. Instead, we could have a performance indicator of timeliness, with the measure being the proportion of programs delivered within seven days of the due date.
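A minimal sketch of that timeliness indicator, with invented dates, might look like this:

```python
from datetime import date

# Invented delivery records: due date and actual delivery date.
deliveries = [
    {'due': date(2015, 3, 1), 'delivered': date(2015, 3, 5)},   # 4 days late
    {'due': date(2015, 3, 1), 'delivered': date(2015, 3, 12)},  # 11 days late
    {'due': date(2015, 4, 1), 'delivered': date(2015, 4, 2)},   # 1 day late
]

# Proportion of programs delivered within seven days of the due date.
on_time = sum((d['delivered'] - d['due']).days <= 7 for d in deliveries)
print(f"Timeliness: {on_time / len(deliveries):.0%}")  # Timeliness: 67%
```

The same data would support the risk framing: the second record is an instance of the lateness risk eventuating.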

Neither of the above is totally satisfactory. Part of the reason is the different management emphasis that is understandably placed on avoiding disasters as opposed to achieving performance. Perhaps the best solution is an organisational one: ensure that the parts of the organisation charged with coordinating risk management and with collecting and managing performance information communicate closely, or perhaps are the same group.

© Numerical Advantage 2015
http://www.numericaladvantage.com.au

Understanding 1+1=2

All a senior manager really needs, I sometimes say to grab attention, is to understand the following:

1+1 = 2

The simplest arithmetical equation. Surely this is totally obvious. But there is, I am told, a Sufi saying along the lines of:

Because you understand ‘1’ you think you understand ‘2’ because 1+1=2. But first you have to understand ‘+’.

In other words, knowing about the components of a system is not enough – you need to know about the nature of the interactions between them. Within a systems thinking context, with its focus on components, interactions and boundaries, this is obvious. But in general discourse, it is too easy to accumulate facts without understanding the linkages that lead to wisdom. This can be the cause, for instance, of a lot of mid-thesis angst, where a doctoral student has completed the collection of evidence, thinking the job is mostly done, but then needs to put it all together and work out what it all means. The evidence may have disproved the initial hypothesis, but does not speak for itself in suggesting a new one. The instances do not necessarily tell us about the patterns or connections.

Similarly, in investigations or evaluations of what is going on in organisations, or in society, learning about several individual instances of success or failure is not enough. Case study analysis is in itself a significant discipline, devoted to extracting commonalities, differences and themes from individual cases. But even this type of analysis does not, by itself, synthesise the meaning.

There is a further dimension to this: we also need to understand ‘=’. There are at least two possibilities for its meaning. One is definition, the other calculation. We could define 2 as the sum of 1 and 1. Or, more interestingly, we can take 2 as an independent construct, and try to show that 1+1 is formally identical to 2. In mathematical terms, this is not straightforward. Tachyos.org gives a 53-line proof, and refers to Russell and Whitehead taking about 360 pages in Principia Mathematica to reach this conclusion. Part of the reason for the length of the proof is that they do need to consider the formal meaning of ‘=’.
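For the curious, a modern proof assistant makes short work of the calculation form. A minimal sketch in Lean 4 (the foundational machinery that Russell and Whitehead had to build by hand is, of course, baked into the system):

```lean
-- For natural numbers in Lean 4, 1 + 1 unfolds by the definition of
-- addition to the numeral 2, so reflexivity (`rfl`) proves the equality.
theorem one_plus_one_eq_two : 1 + 1 = 2 := rfl
```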

In the organisational analogue, we need to understand how, and how reliably, the components and the interactions (the various ‘1’s and the ‘+’) lead to the final result. Does ‘=’ mean always leads to? Usually leads to? Leads to under certain defined conditions? For example, let’s say that we understand what is going on in the various divisions of a company. We also understand the interactions between them – the informal and formal communications, the hierarchies, the controls and whether these are working or not. That is, in itself, a tall order for senior management – to get across the ‘1’s and the ‘+’. But the manner in which this assemblage leads to overall company success or failure is often not clear, and the point of intervention to fix any problems is likewise often hidden.

So if you really understand 1+1=2, you’ve got it made.

© Numerical Advantage 2015
http://www.numericaladvantage.com.au

Distortion of performance measures: From ‘making the numbers’ to ‘making up the numbers’

One of the significant criticisms of performance measures is their propensity to distortion. A later post will consider how to address this, but for now, let’s focus on the types of distortion and how they arise.


The first category is outright lying. It is always possible to simply invent numbers and assume that no one will check. This is, of course, a high-risk strategy for the person making up the numbers: if someone does check, there is very little defence that would not lead to the loss of employment. However, it does happen.

Sometimes the lying is a bit more subtle. Workers may shift achievements from one period to another to bring the period’s performance measures closer to the target. Or, if the target cannot be met, they may shift results to the next period, so there will be one very bad result followed by an improvement next time.

The second form of distortion is working to the target, not to the underlying intention. For example, many years ago, a large Australian government instrumentality had a target that 90% of all requests for formal advice would receive an answer within 30 days. The response was to review requests that were around 25 days old and, if they could be answered in time, focus on them. If they could not be answered in time, they were allowed to become late, as part of the allowable failure ratio of 10%. But then there was no incentive to ever answer these questions – they sat there, getting to 60 days old, 90 days old or even more.
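A tiny numerical sketch – the figures are invented – shows why the incentive disappears: the 90%-within-30-days indicator is blind to how late the failures eventually become.

```python
# Invented response times, in days, for ten requests.
response_days = [12, 20, 25, 28, 29, 30, 18, 22, 27, 95]

within_30 = sum(d <= 30 for d in response_days) / len(response_days)
print(f"Within 30 days: {within_30:.0%}")        # 90% -- target met
print(f"Worst case: {max(response_days)} days")  # 95 days, invisible to the target
```

Whether that tenth request takes 31 days or a year, the reported figure is the same 90%.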

A related distortion is effort shifting. If a target has been met, then even if it is quite possible to do better, workers may choose not to exceed the target. This frees them to focus on other things, and also addresses workers’ fears that if targets are exceeded, they will be raised next time.

A third form of distortion can occur in the initial setting of targets. It is often the operators who are responsible for at least advising on the performance measures and their targets. In order to maximise their chance of success, the operators recommend measures and targets that are easy to achieve, or generate reasons for changing measures that are harder to achieve.

What are the controls on this? If different units are producing comparable outputs, then a degree of competition can be encouraged. If one is offering to produce a higher output per cost (more cost-effective production), then it can receive more funding. In most cases in the public sector, however, an alternative approach is necessary. This will be covered in a later post, but in brief, it depends on setting the tone in the organisation, transmitting it to all workers, having good management at the unit level, systems for monitoring and auditing, and good executive oversight.

© Numerical Advantage 2014
http://www.numericaladvantage.com.au

Running by numbers

Most of the entries on this blog have been directed to the use of performance measures for organisations, especially government ones. But today I would like to look at a more personal use of performance measures: assessing the progress of a fitness regime.

Last year, I decided to increase my fitness level in order to run a 10 km fun run. I had been told about the challenge of ‘run your age’, and being 61, my target was to complete the run in 61 minutes.

So the fitness regime began. I decided to slowly increase the time of my longer (weekly) runs until they approached an hour – trying to increase endurance – and then increase the speed and hence distance that I could complete within an hour until it approached 10 km. At the same time, I understood that running sprints on other days of the week would also help to increase speed. The performance measurement regime was limited to measuring the longer runs – I assumed that my work on the sprints would be reflected in the performance there.

Things were going pretty well for a month or two, but then a combination of being very busy (well, that was my excuse) and minor injuries left me with decreased fitness only a month or so from the run. So when I recovered, I increased the intensity, and measured out a 10 km route that I could benchmark myself against (thank you, Google Maps – beta pedestrian version).

But the numbers were not encouraging. I could only get my time for the 10 km run down to 65 minutes on my last long run, a week before the event. The race effect – being encouraged by all those other runners – and the tapering of training would, I thought, gain me two or three minutes, but 65 minus three is still more than 61, so my best estimate was that I would just miss my target.


But on the day, the weather was perfect – fine and cool, as I like it – and with my slightly fitter daughter effectively pushing me along for the first 4 km or so, I easily achieved the target, finishing in just over 59 minutes.

Lessons:

1. Focus objectives on the measurable – a general ‘improve fitness’ objective would not have been as useful as setting the objective of running 10km in a set time, even if it was theoretically more fundamental.

2. Targets are often arbitrary, but the setting of one that is achievable but difficult (run your age) is a key part of improving performance.

3. In measuring progress towards the target, try to stay as close as possible to the conditions that will apply at the time of the real assessment. Recognise the variations from this real assessment, and try to allow for them. (measuring times over 10km; allowing for the race day effect)

4. You don’t need to measure everything. (Not timing the sprint runs.)

5. Use interim performance measurement to adjust activities undertaken. (increasing training when times appeared too slow)

6. Accept that achieving the target includes an element of luck. (good conditions on the day)

7. Enjoy success!

© Numerical Advantage 2014
http://www.numericaladvantage.com.au