Taftie guidelines on performance indicators for evaluation and monitoring


The most important questions common to all measurement are attribution, networks, timing and sources of information. Other issues, such as data structures, are discussed within these topics.

10.1 Attribution

One of the major issues concerning measurement is the attribution problem. There are three basic concepts that must be related in some way to each other. These are Innovation, Project and Company.

Most general statistics have up to now been based on company data: the company has been the basic unit used to analyse any phenomena related to R&D and innovation activities. From the Agency's point of view the basic element is usually the project, carried out by a company or by a number of companies in a consortium. One project can be attributed to a single innovation or to a number of innovations; in other cases one innovation is the result of a number of consecutive or parallel projects.

Attempts have recently been made in the field of innovation statistics to use innovations themselves as the basic elements of analysis. The new version of the OECD Oslo Manual includes this kind of approach in parallel with the traditional company approach.

The Agencies' databases are usually based on projects. More advanced databases also include separate company tables, which are in some way connected to projects. The simplest way is to connect one project to one company, but this creates a problem when projects are executed in networks. Most Agency databases are not equipped with innovation tables connected to the project and company tables. Innovations are typically related in some way to projects, but the actual connection to companies is usually missing.

Thorough analyses will require all of these - project, company and innovation - to be included in the data structure. This requires knowledge of the various roles of companies in developing and exploiting innovations, as well as of their roles in the particular projects in which these innovations are developed. There also has to be some way to relate projects, innovations and their roles to each other to create a good basis for analysis. For practical purposes a time scale will have to be attached to these as well.
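The three-table structure with role-carrying links described above can be sketched in code. This is a minimal illustration, not an actual Agency schema; all class and field names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Company:
    company_id: str
    name: str

@dataclass
class Project:
    project_id: str
    title: str
    start_year: int   # the time scale attached for practical purposes
    end_year: int

@dataclass
class Innovation:
    innovation_id: str
    description: str

@dataclass
class InnovationLink:
    """Relates an innovation to a project and to a company,
    recording the company's role (e.g. developer, exploiter)."""
    innovation_id: str
    project_id: str
    company_id: str
    role: str

# Example: one innovation resulting from two consecutive projects,
# developed and later exploited by the same company.
links = [
    InnovationLink("I1", "P1", "C1", "developer"),
    InnovationLink("I1", "P2", "C1", "exploiter"),
]

def projects_behind(innovation_id, links):
    """All projects attributed to a given innovation."""
    return sorted({l.project_id for l in links if l.innovation_id == innovation_id})

print(projects_behind("I1", links))  # ['P1', 'P2']
```

With explicit link records like these, both directions of attribution — project-to-innovation and innovation-to-projects — can be queried, which the simple one-project-one-company model cannot support.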

The definition of a company is usually quite simple, but as industrial structures keep changing, the structures of companies change as well, and the pace of this change appears to be increasing. One possibility in future would be to use individual businesses or business units in analyses instead of companies. This could be beneficial especially in constructing longer-term panel data for various analyses. The related question of consortia, clusters and various networks is discussed further on in this text.

The definition of a project is usually more difficult. Some Agencies try to finance the complete R&D activities related to an innovation, whereas others finance the same activities for shorter periods. The concept of a project is thus different from the company's point of view than from the Agency's. Another problem arises from the Agency's policy of financing only specific activities. The project, as the company sees it, includes a number of different activities of which only some can be partly financed by the Agency. In some cases the problem is further complicated by another national/regional body financing some of the activities that the Agency does not finance. The results of the company's project can then be reported twice to the policy makers.

The definition of an innovation is even more difficult and depends a lot on the technology. Some innovations originating in basic research can eventually lead to many products and process improvements in many sectors of industry. Other innovations may be quite specific to certain products or processes, with very little spin-off potential. How should an innovation be defined in these various cases? The definition problem will probably lead to some hierarchical network of innovations, in which specific product innovations can be traced back to other, more basic innovations. The problem from the Agency's point of view will obviously be the identification of these relations and, in the beginning, the mapping of existing innovations (naturally only to the degree that is practical in setting up analyses).

10.2 Networks

Networks are obviously a very interesting concept, and one that seems to be growing in importance in any analysis of R&D and innovation activities. This is especially true when dealing with SMEs, and with SMEs in connection with larger companies. It is also vitally important when analysing the birth and growth of small technology-based companies.

Since networks are very important, and increasingly so, the Agencies must take them into consideration in monitoring and evaluation. The first question is being able to identify and monitor the birth, development and break-up of networks. Although companies build different networks for different purposes - R&D, production, marketing, etc. - the Agency can in practice identify only the major contacts in a company's R&D network. Some information might also be available on other networks, but the data is probably not sufficient to identify these completely, or even to the degree where they could be used for analytical purposes. The situation varies with company size; the identification of networks is probably easiest for smaller companies.

The identification of R&D (and other) networks requires data on joint research activities and on the role of companies in these activities. This data is relevant also for attribution purposes as was discussed earlier. Company roles in the exploitation of project results and innovations reveal some of the commercialisation or even marketing networks.

For thorough analysis, the Agency data structures should facilitate the identification of various types of network connections and network structures. In other words: a network consists of partners (nodes) and relations (links). A particular network may be characterised by size (number of partners) or intensity (number and/or duration of contacts exercised). Other features of interest may include complementarity of partners (whether they are similar or different), location (close or distant), nationality, etc. Partners in a network may also have different roles relative to the tasks faced within the project; they may supply various skills or inputs and/or extract various utility from participating in the project.
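The network features named above (size, intensity, complementarity, location) could be registered along the following lines. This is an illustrative sketch only; the field names and the use of sector and country as proxies for complementarity and location are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Partner:
    partner_id: str
    sector: str   # used here as a proxy for complementarity
    country: str  # used here as a proxy for location/nationality

@dataclass
class Link:
    a: str                # partner ids at each end of the relation
    b: str
    contacts: int         # number of contacts exercised
    duration_months: int  # duration of the relation

partners = [
    Partner("C1", "electronics", "FI"),
    Partner("C2", "software", "FI"),
    Partner("R1", "research", "NO"),
]
links = [
    Link("C1", "C2", contacts=12, duration_months=24),
    Link("C1", "R1", contacts=4, duration_months=12),
]

size = len(partners)                        # characterisation by size
intensity = sum(l.contacts for l in links)  # characterisation by intensity
complementary = len({p.sector for p in partners}) > 1   # similar or different?
international = len({p.country for p in partners}) > 1  # nationality mix

print(size, intensity, complementary, international)  # 3 16 True True
```

Even simple derived indicators like these become possible once links are stored as records rather than as a single company-to-project connection.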

The relation between networks and success is still somewhat unclear, but is usually understood to exist. Much deeper analysis is still needed of the characteristics of various networks in order to find the real mechanisms by which various types of networking actually contribute to success. The intensity of networking must be accompanied by appropriateness, quality, timeliness, strategic closeness, etc. before we can even begin to approach the best achievable networking status. While lacking a good theory of the optimal network, we still want to register the features we think might be important for how networks influence outcomes.

The other side of measuring network effects is their role as part of an innovation system at regional or national level. While some evidence of this role could be found in the Agency's database, a more systematic analysis of networking, including all relevant players, should take place - based for example on a sector of industry in a region, or on a group of companies and research units supplying and applying a specific technology. Such analysis is always based on time series and panel data on the development of links and nodes in the networks, and on the effect that various R&D activities (including the Agency's decisions) have had on that development.

10.3 Timing

One of the main problems in reporting to policy makers is that the actual results, from society's point of view, will be realised years after the investment. This presents two major measurement problems. First, there is a need to estimate the expected outcome; secondly, there is again an attribution problem in connecting public funding by the Agency to the realised outcome after a long period during which a number of other phenomena have affected that outcome.

The estimation of the expected outcome is typically based on one or two of the three possible sources of information: companies, Agency officers or external experts. The reliability of these sources varies depending on a number of factors, which are discussed later.

One of the major problems in estimating future outcome is again attribution. For example, say two projects are currently under way in a company: one aims at improving one of the products, the other at improving the manufacturing process. How should the expected outcome be divided between these projects?

The other main problem is how to assess the contribution of the public funding to the realised outcome, typically 3 to 15 years after the initial investment in the first R&D project. For example, say the project is to develop a new process which will be used to produce some or all of the company's products in the future, depending on the success of the R&D work. The new process is based on the existing one and differs only in one major phase. After two years another process development project is carried out, concentrating on another part of the process. Then a third project is undertaken to improve some of the products, because the new process makes it possible to do so. How should the outcome of the first project be reported?

Estimates of expected outcome are typically made case by case for specific products, using business estimates. For the company this is simple, since such calculations are done anyway by business unit and by product, with all R&D costs attached; general R&D costs are typically also divided between products or business units. The Agency cannot use this approach to analyse expected outcome unless it collects all this information from all network member companies, which is not practical. Thus the Agency must rely on the basic information coming from the company or, if the Agency is using external experts, on their estimates.

One possible way to approach the timing problem is to collect the same information at the beginning, during execution, at completion (of the R&D project) and after completion (follow-up). Although situations naturally change over time, the analysis can then be based on more than one observation, which improves its reliability considerably.
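The repeated-observation idea above can be sketched as a simple data record: the same indicator captured at each phase, so that later analysis can compare estimates against each other. The phase names, indicator (expected turnover) and figures are purely illustrative.

```python
# The same indicator recorded at four phases of a hypothetical project "P1":
# ex-ante (application), monitoring (during execution), completion, follow-up.
observations = {
    "P1": [
        ("ex-ante",    1_000_000),  # initial estimate at application
        ("monitoring",   800_000),  # revised during execution
        ("completion",   750_000),  # at project end
        ("follow-up",    900_000),  # realised some years later
    ],
}

def revision(history):
    """Ratio of the latest observation to the initial estimate -
    one simple way to exploit multiple observations in analysis."""
    first_value = history[0][1]
    last_value = history[-1][1]
    return last_value / first_value

print(revision(observations["P1"]))  # 0.9
```

Even this crude ratio shows why several observations improve reliability: with only the ex-ante figure, the 10% shortfall against the original estimate would be invisible.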

10.4 Sources of information

There are two main issues of concern regarding sources of information. One is the source itself: its motives and its capacity to assess various issues at various times. The other is the number of observations when aggregating to higher levels of reporting. The first issue concerns the company, the Agency and external experts, including the national/regional level. The second issue concerns proper sampling.

The company is typically used by the Agency to provide most of the information, even though it is well known that the company's motive is to get funding and that it therefore tends to overestimate the value of the project. It is common to all Agencies that public funding covers only part of the costs, thus requiring the company to take real risks; this is one mechanism by which the Agency improves the reliability of company information. The second mechanism is the use of contracts and milestones.

The company tends to overestimate the expected outcome when it is applying for funding; after funding is decided, the estimates may become more realistic. One means of attacking this problem is to agree upon milestones, where funding is tied to achieved partial goals. Another, less strict way is to agree upon the final goals in contract form. Both methods can be used to obtain more realistic estimates of expected outcome from companies.

The other side of this can be seen in a number of ex-post evaluations. Successful companies tend to underestimate the role of public funding if they no longer need it, or do not need it at that point in time. Less successful companies sometimes overestimate the role of public funding and attribute failures partly to it. Companies applying for public funding for other projects at the time of the evaluation tend to overestimate the role of public funding in their earlier projects. Generally, companies in need of public funding tend to overestimate its role, whereas companies not needing it, or having been rejected, tend to underestimate it. This makes ex-post measurement of the role of public funding (e.g. additionality) very difficult if no information is available from the time of project launch and execution.

Some Agencies use external experts for assessing companies. This can be costly, but can also give better information for decision making, performance monitoring and evaluation purposes. To produce a high quality and effective assessment, the Agency must build a high quality expert network around it. The benefit of this approach is that it can be flexible and cover a wide range of technologies. From the Agency's point of view the key question is the reliability of data and thus the reliability of the experts.

One problem that may arise from the use of external experts is that they may have contacts with the company they are assessing or, even worse, with a competitor of that company. This can lead to unreliable data and result in incorrect decisions.

External experts are frequently used for mid-term evaluations. The reason for this is usually credibility in the eyes of the policy maker. The other apparent reason is the learning effect from an external look into the Agency's activities. Evaluations performed by external experts should concentrate on specific issues and be supported as much as possible by the Agency's monitoring and assessment data and performance indicators.

The third possible source of information is the Agency and its officers, who assess the companies for funding purposes. Most Agencies use their officers' assessment of the data given by the company as basic data in their own systems. One approach is to keep the company data separate from the officers' assessment of it; the other is to use the final data agreed upon between the company and the officer.

Basically the Agency officer has no bias in favour of or against the company, so he/she could be considered objective in his/her ex-ante assessment, except on issues concerning the Agency or public funding (e.g. additionality). For evaluation purposes, and especially where legitimation is at stake, the Agency officer tends to overestimate the importance of public funding and the expected outcome, especially in successful cases. The Agency officer is perhaps most reliable as a data source in the early stages and at the monitoring phase.

National/regional exercises that look into the impact on technology policy implementation and the overall national/regional economy are also sources of information for the Agency. The data may not be available at micro level, but can usually be used at an aggregated level. The benefit of this data is that it may be more credible in the eyes of policy makers, especially in the case of national/regional statistics. It is also the best source of any national/regional level data needed by the Agency, since the Agency is not in a position to collect this kind of data without substantial expense.

10.5 Methods of data collection

The basic methods of collecting data for the performance indicators are interviews and questionnaires.

For practical purposes, interviews can be used by the Agency to some degree, depending on its resources, for ex-ante and monitoring purposes. Typically ex-ante assessment is based on an application form with appendices (project plan, financial plan, exploitation plan, company business plan and other related documents) and on at least one interview with the applicant. This is usually when the company is most willing to answer any questions and the Agency is willing to ensure that the investment is based on correct information. The interview is thus usually used to make sure that the basic information given is correctly understood, and to complete it where insufficient. This interview provides much of the necessary information for performance measurement, including all initial estimates of expected outcome.

Some agencies that have the resources can use interviews for monitoring purposes, at least to some extent. This is very useful and can improve the monitoring phase. Other Agencies rely heavily on written documents and questionnaires for monitoring information.

Other uses for interviews are specific cases or specific programmes where the required information goes beyond what can be collected by means of questionnaires. These are not very typical, considering the Agencies' portfolios as a whole.

Questionnaires are perhaps the most typical form of data collection for monitoring and especially for routine ex-post evaluation purposes. They can, however, be complemented by interviews. Another typical use for questionnaires is very large target groups, such as for national statistics purposes. Because of the wide use of questionnaires for various purposes, statistical science has developed many rules related to them. These - including sampling, data reliability, response rates, etc. - are not discussed in this document. One good point of reference is the Oslo Manual, which deals with these issues from the R&D and innovation point of view.

The biggest problem with questionnaires - in addition to the obvious ones - is how to compile questions that are explicit and will not be misunderstood. This is vital to ensuring data reliability and later comparability. A number of techniques have been developed for increasing data reliability, based on duplicating some questions and asking them in different ways, but this cannot easily be used for definite numerical data (e.g. expected turnover, exports, etc.), for which there are other methods that help to make realistic estimates.
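The duplicated-question technique mentioned above can be sketched as a simple consistency check: the same issue is probed twice in different wordings, and the share of agreeing answer pairs is computed. The question ids, answer scale and tolerance are assumptions for illustration.

```python
def consistency(answers, pairs, tolerance=1):
    """Share of duplicated question pairs whose answers agree
    within `tolerance` (e.g. on a 1-5 rating scale)."""
    agree = sum(1 for q1, q2 in pairs
                if abs(answers[q1] - answers[q2]) <= tolerance)
    return agree / len(pairs)

# Hypothetical respondent: Q3/Q17 probe the same issue in different
# wordings, as do Q5/Q12.
answers = {"Q3": 4, "Q17": 5, "Q5": 2, "Q12": 5}
pairs = [("Q3", "Q17"), ("Q5", "Q12")]

print(consistency(answers, pairs))  # 0.5 - one pair disagrees
```

A low consistency score flags a respondent whose answers may be unreliable; as the text notes, this device works for attitudinal questions but not for definite numerical data such as expected turnover.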

The other big problem with questionnaires is that obtaining complicated information reliably requires a large number of questions. This will inevitably reduce the response rate (unless responses are forced by the Agency at the ex-ante or monitoring phase, in which case customer satisfaction is bound to decrease) and eventually also affect data reliability.

Some data can, and perhaps even should, be collected by indirect means instead of direct questions. This is much easier to accomplish in interviews than in questionnaires, which, as was pointed out earlier, tend to become too large. Interviews are a better means of collecting complicated information by indirect methods, but they require a lot of resources.