
Performance Management Conference, Queenstown, New Zealand

November 14, 2013

The Performance Management Association (Australasia) held its two-yearly conference in Queenstown, New Zealand, from 30 October to 1 November. This was a small conference of some 50 people, which gave it an intimate feel and allowed good interchange among all participants. The majority of attendees were from New Zealand, but there was also a strong international presence, with participants from Australia, the UK, Denmark, Norway, Hungary, Indonesia, and the UAE.


In the following very brief summary of highlights from my point of view, I emphasise the ‘pure’ performance measurement and management papers; there were also a number of interesting insights more generally in the fields of management and accounting, including personnel performance measurement, collaboration and networks, integrity and organisational behaviour. The summaries come from notes from the sessions I attended, and are necessarily an incomplete record.

Jacob Eskildsen from Aarhus University in Denmark presented on satisfaction surveys. He focused on statistical analysis to form segmented populations, but a fascinating sideline was the discussion of how overall life satisfaction feeds into satisfaction surveys. Women are more positive than men. Satisfaction starts high at age 18, declines until about age 40, then increases. Satisfaction declines with higher education – people become better at criticism. Urbanisation leads to a decline in satisfaction, and those with more influence have higher satisfaction. There are also national differences – Denmark scoring high, Japan and Portugal low.

Prof. Mike Bourne of Cranfield in the UK presented a keynote address on the design and conduct of performance measurement review meetings (of organisational units). The meetings need to include both feedback on performance and feed-forward (management direction). Prof. Bourne produced a hierarchy of the levels of measurement, starting with no measurement, then reporting of actuals, then various types of ratios for assessment at deeper levels, and finally reviews of the metrics and targets. The unanswered question, though, is whether the process makes a difference.

Derek Gill from the New Zealand Institute of Economic Research described some research on the use of performance information in NZ government agencies. Despite some robust quotes to the effect that ‘performance information is crap’, Dr. Gill’s research found that managers do use organisational performance information more than informal feedback, but the dominant purpose was control. The higher in the hierarchy, the less formal performance information was used, and regional and local offices were heavier users than head office. Organisations with direct services use it more, and if a ‘power user’ used it for one purpose, they were more likely to use it for other purposes, such as publicity. He found a high correlation between external and internal reporting. He also noted that, contrary to the governance assumption that ministers purchase outputs from departments to achieve government outcomes, bureaucrats are more interested in outcomes than ministers are.

Bernie Frey from Praxxis and Mark Le Comte from Auckland Council reported on the exercise to build a performance management system for the merged Auckland Council. Challenges included complex governance based on a multi-level structure, the existence of many plans (150 and growing), and the requirement for rapid reporting. A summary of their lessons learned: don’t wait for all the ducks to be in a row; don’t assume big is best; don’t search for big data; don’t leave responsibilities to each silo; and don’t rely on a sponsor. Do pay attention to strategy (and change it if necessary); do involve the organisation on a top-down basis; do create an Enterprise Metrics Framework; do populate the framework using both top-down and bottom-up approaches; do build the system to support professional judgement; and do get a leader.

Richard Greatbanks from Otago University reported on work relating to grants to charities in NZ. He found most performance information was collected to report to trustees, with only 1/3 used to provide feedback to grantees.

Ishani Soyser of Massey University, in some work on Australasian non-profits, commented that a performance measurement system is easier to maintain than a balanced scorecard.

Adam Feeley, currently CE of Queenstown Lakes District Council and previously CEO of the NZ Serious Fraud Office, gave some practical advice based on his experience. Accountability documents are useful both to define purpose – enabling push-back against unreasonable public expectations – and to align political and public expectations – explaining to political masters how the objectives will be achieved and measured. Internally, they can give staff a common purpose. They should start with goals; the Serious Fraud Office goals had been process-based, but were changed to reflect overall objectives. Previously, quality standards had been vague, hard to measure, and often not reported, but this has improved. The Queenstown Lakes District Council had 150 measures, some meaningless, and 1/3 not achieved. Some questions: does it make sense to ask customer opinions on everything (e.g. stormwater – a technical issue)? Are levels of use a proxy for quality? Is timeliness always a good measure? Do all businesses, including the minor ones, need to be measured? And there should be sanctions for failure to achieve targets.

Norbert Kiss of Corvinus University, Budapest talked on improving the performance of e-health networks. He commented that we look at the public policy cycle – the macro level – and the managerial cycle – the micro level – but miss out on the meso level: networks. Bottom-up development of performance measures often relates to voluntary, informal and market mechanisms; top-down development to mandated, formal and hierarchical mechanisms. The network manager can be shared, be the lead organisation, or be a separate administrator. There can also be multi-level networks, e.g. at central, regional and local levels. Things work better if there is domain consensus, positive evaluation and work coordination at the local level, and requirements management and defence of the paradigm at the policy level.

This summary can only touch on selected highlights of a useful conference. The next conference of the Performance Management Association is in Aarhus, Denmark, in June 2014, with abstracts due shortly. See http://www.performanceportal.org/
