3. PERFORMANCE INDICATORS
Performance indicators are quantitative or qualitative measures of the direction, and in some instances the speed, at which the measured variable is moving with respect to the target. Their use typically requires a descriptive analysis explaining how the indicators are applied. This is needed because the indicators are seldom self-explanatory, and a number of considerations must be understood before they can be interpreted correctly.
The reliability of an indicator also increases with the number of different information sources and analysis methodologies behind it; for this reason the use of multiple indicators is common.
The indicators can be classified into four classes: input, process, output and impact indicators. Their use for evaluation and monitoring purposes is presented in the following table.
Indicator type | Ex-ante             | Monitoring | Ex-post
---------------|---------------------|------------|---------------------
Input          | Realised (Expected) | (changes)  | Realised
Process        | -                   | Realised   | Realised
Output         | Expected            | (changes)  | Realised
Impact         | Expected            | (changes)  | Realised (Expected)
The input indicators describe the resources put into the R&D activity. Input in the wider sense refers to all resources included in a project, regardless of source. Typical measures are R&D costs, public funding, costs by item, the use of various subsidy types, etc. These indicators are used mainly in the ex-ante and ex-post phases, especially as explanatory variables in evaluation and portfolio analyses.
The process indicators describe the operational features of the activities. These typically include measures of management efficiency, project status, etc. They are used mainly during programme execution for monitoring, and can also serve benchmarking purposes.
The output indicators describe the output that is realised or expected from a project. These typically include measures of new products, processes, service products or methods, new companies, applied new technologies, increased turnover, exports or jobs, etc. The main use of these indicators is in ex-ante (expected) and ex-post (realised) evaluations.
The impact indicators describe in a more general sense what has been achieved through a project or a programme in the target population, in industry and in society. These include descriptions of the competitiveness of companies, the strength of a sector of industry or a region, jobs created and sustained, environmental effects, social well-being, public services, etc. The main use of impact indicators is in ex-ante (expected) and ex-post (realised) evaluations.
Input and output indicators are sometimes used together in what is called cost-benefit analysis. Such an analysis is relatively easy to perform once realistic input and output indicators are available. The problem is that for R&D activities, and especially when public intervention is analysed, the wider impact issues should also be included; this is not straightforward and leads to the typical problems of causality and attribution (see Chapter 10).
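As a simple illustration of the arithmetic involved, the sketch below computes a benefit-cost ratio per project from one input indicator (R&D cost) and one output indicator (increased turnover). The project names and figures are hypothetical, and the ratio is a deliberately simplified measure chosen for the example.

```python
# Illustrative sketch only: project names and indicator values are hypothetical.
projects = [
    {"name": "Project A", "rd_cost": 1.2, "increased_turnover": 3.0},  # values in MEUR
    {"name": "Project B", "rd_cost": 0.8, "increased_turnover": 1.0},
]

for p in projects:
    # Output indicator divided by input indicator gives a simple benefit-cost ratio
    ratio = p["increased_turnover"] / p["rd_cost"]
    print(f"{p['name']}: benefit-cost ratio = {ratio:.2f}")
```

Such a ratio says nothing about causality or attribution; the wider impact issues discussed above remain outside it.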
Another term used in the context of indicators is portfolio analysis. The concept originates from the financial world, where investors use portfolio techniques to assess risk versus return on their investments. An investor spreads assets over cash, bonds and shares; for each investment the risk and return are estimated from quantitative financial indicators, and the investment portfolio consists of the aggregated data and variables.
In industry, portfolio management has also become an important technique for following up R&D projects and for monitoring the effectiveness of R&D organisations (Roussel, 1991; Wheelwright, 1992). For each research project, quantitative variables are recorded; these indicators characterise the project in terms of risk and potential return. The R&D portfolio is then visualised with mapping techniques, and comparing these maps with the company strategy allows the portfolio to be tuned.
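A minimal sketch of how such project records could be kept and aggregated is given below. The variable names (risk, expected_return, subsidy) and values are assumptions made for illustration, and the subsidy-weighted averages are one possible aggregation, not a method prescribed by Roussel (1991) or Wheelwright (1992).

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    risk: float             # estimated technical/market risk, 0 (low) to 1 (high)
    expected_return: float  # expected return indicator, e.g. projected turnover / subsidy
    subsidy: float          # public funding granted (MEUR)

# Hypothetical portfolio records
portfolio = [
    Project("Alpha", risk=0.2, expected_return=1.5, subsidy=0.3),
    Project("Beta",  risk=0.7, expected_return=4.0, subsidy=0.8),
    Project("Gamma", risk=0.5, expected_return=2.5, subsidy=0.5),
]

# Aggregate portfolio-level indicators, weighted by the subsidy granted
total_subsidy = sum(p.subsidy for p in portfolio)
avg_risk = sum(p.risk * p.subsidy for p in portfolio) / total_subsidy
avg_return = sum(p.expected_return * p.subsidy for p in portfolio) / total_subsidy
print(f"Subsidy-weighted portfolio risk: {avg_risk:.2f}, expected return: {avg_return:.2f}")
```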
For our purpose, these portfolio techniques can be adapted for monitoring and evaluation of programmes. In a similar way, comparing these maps with the programme strategy allows for tuning the portfolio.
As an example, the product change versus process change map adapted from Wheelwright (1992) is shown. Each bubble represents an industrial project of a specific programme. The co-ordinates are determined by the expected product and process change. The area of each bubble corresponds to the subsidy provided for the project. According to Wheelwright (1992), four different project categories can be distinguished: (a) incremental or derivative projects, (b) platform or next generation projects, (c) breakthrough or radical projects and (d) R&D/advanced development projects.
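A sketch of how such a map could be drawn is given below, assuming matplotlib is available. The project coordinates and subsidies are invented, and the scaling of bubble area to subsidy is an illustrative presentation choice rather than part of Wheelwright's (1992) method.

```python
import matplotlib.pyplot as plt

# Hypothetical projects: (name, expected product change, expected process change, subsidy in MEUR)
# Axis values run from 0 (incremental change) to 1 (radical change).
projects = [
    ("P1", 0.2, 0.1, 0.3),
    ("P2", 0.5, 0.6, 0.8),
    ("P3", 0.9, 0.8, 1.2),
    ("P4", 0.3, 0.7, 0.5),
]

fig, ax = plt.subplots()
for name, product_change, process_change, subsidy in projects:
    # Bubble area proportional to the subsidy provided for the project
    ax.scatter(product_change, process_change, s=subsidy * 2000, alpha=0.5)
    ax.annotate(name, (product_change, process_change), ha="center", va="center")

ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_xlabel("Expected product change")
ax.set_ylabel("Expected process change")
ax.set_title("R&D project portfolio map")
plt.show()
```

In such a map, projects near the origin correspond to incremental or derivative projects, while those in the upper right correspond to breakthrough or radical projects.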
For each category one can expect a different risk-return relation and potential impact. Correlating this information with the actual data on financial rate of return is an interesting challenge. Although this particular example does not give direct indicators for performance measurement, similar methods can be used for measuring performance and competitive position as well. The key is to find the right indicative variables and follow their change in the portfolio over time. As mentioned earlier, these types of methods are good at visualising the analysis.