The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues; for example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact (Scoble et al. 2009). If basic research is to be assessed alongside more applied research, it is important that we are able to at least determine its contribution.

In undertaking excellent research, we anticipate that great things will come, and as such one of the fundamental reasons for undertaking research is that we will generate and transform knowledge that will benefit society as a whole. Reviews and guidance on developing and evidencing impact in particular disciplines include the London School of Economics (LSE) Public Policy Group's impact handbook (LSE n.d.), a review of the social and economic impacts arising from the arts produced by Reeves (Reeves 2002), and reviews of the impact of health research (Kuruvilla et al. 2006; Nason et al. 2008). SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used for demonstration of worth.

Two questions run through this review: What are the methodologies and frameworks that have been employed globally to assess research impact, and how do these compare? And what are the challenges associated with understanding and evaluating research impact?

Collecting this type of evidence is time-consuming, and again, it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group might have dispersed. While, looking forward, we will be able to reduce this problem, identifying, capturing, and storing the evidence in such a way that it can be used in the decades to come is a difficulty that we will need to tackle. By asking academics to consider the impact of the research they undertake and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge.
The justification for a university is that it preserves the connection between knowledge and the zest of life, by uniting the young and the old in the imaginative consideration of learning. At least, this is the function which it should perform for society.

We take a more focused look at the impact component of the UK Research Excellence Framework (REF) taking place in 2014, at some of the challenges of evaluating impact, and at the role that systems might play in capturing the links between research and impact, along with the requirements we have for these systems. The REF will assess three aspects of research: outputs, impact, and environment. Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment, and second, through impact case studies that describe the impact taking place following excellent research within a unit of assessment (REF2014 2011a). This distinction between academic outputs and impact is not so clear in impact assessments outside of the UK, where academic outputs and socio-economic impacts are often viewed as one, to give an overall assessment of value and change created through research. However, it must be remembered that in the case of the UK REF, impact is only considered where it is based on research that has taken place within the institution submitting the case study. Over the past year, there have been a number of new posts created within universities, such as roles dedicated to writing impact case studies, and a number of companies are now offering this as a contract service.

The time lag between research and impact also varies enormously. For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. The point at which impact is assessed matters too: clearly the impact of thalidomide would have been viewed very differently in the 1950s compared with the 1960s or today. The Oxford English Dictionary defines impact as a 'marked effect or influence'; this is clearly a very broad definition.

While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further in enabling these exchange events to be captured and analysed. If knowledge exchange events could be captured, for example, electronically as they occur, or automatically if flagged from an electronic calendar or diary, then far more of these events could be recorded with relative ease. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts to provide a network of data.
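To make this 'network of data' idea concrete, the sketch below shows, in Python, one way knowledge-exchange events might be captured as they occur and linked back to research outputs. It is a minimal illustration: the record types, field names, and identifiers are assumptions for the example, not a schema proposed by SIAMPI or the REF.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Interaction:
    """A single knowledge-exchange event, e.g. flagged from a calendar entry."""
    when: date
    researcher: str
    stakeholder: str                      # external partner in the exchange
    channel: str                          # e.g. "meeting", "secondment", "media"
    related_outputs: list = field(default_factory=list)   # linked output ids

class ImpactNetwork:
    """Collects interactions and links them to the research outputs they cite."""

    def __init__(self):
        self.interactions = []

    def record(self, interaction):
        self.interactions.append(interaction)

    def trail(self, output_id):
        """All captured events touching a given output, oldest first, so the
        path from research towards impact can be reconstructed later."""
        hits = [i for i in self.interactions if output_id in i.related_outputs]
        return sorted(hits, key=lambda i: i.when)

network = ImpactNetwork()
network.record(Interaction(date(2012, 5, 3), "Dr A", "NHS Trust B",
                           "advisory panel", ["output:10.1000/xyz"]))
for event in network.trail("output:10.1000/xyz"):
    print(event.when, event.stakeholder, event.channel)

Capturing the stakeholder and channel alongside each event is what would later allow a case study to evidence who was reached and how, rather than reconstructing this retrospectively from a dispersed user group.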
It is worth considering the degree to which indicators are defined and whether broader definitions with greater flexibility should be provided. An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared rather than impact from specific disciplines. Given that the type of impact we might expect varies according to research discipline, impact-specific challenges present us with the problem that an evaluation mechanism may not fairly compare impact between research disciplines. The case study approach, recommended by the RQF, was combined with significance and reach as criteria for assessment. Researchers were asked to evidence the economic, societal, environmental, and cultural impact of their research within broad categories, which were then verified by an expert panel (Duryea et al. 2007).

Once plans for the new assessment of university research were released, the University and College Union (2011) organized a petition calling on the UK funding councils to withdraw the inclusion of impact assessment from the REF proposals. A key concern here is that we could find that universities which can afford to employ either consultants or impact administrators will generate the best case studies.

A comprehensive assessment of impact itself is not undertaken with SIAMPI, which makes it a less suitable method where showcasing the benefits of research is desirable or where justification of funding based on impact is required. The Payback Framework, by contrast, incorporates both academic outputs and wider societal benefits (Donovan and Hanney 2011) to assess outcomes of health sciences research.

In this article, we draw on a broad range of examples with a focus on methods of evaluation for research impact within Higher Education Institutions (HEIs). Attempts have been made to categorize impact evidence and data; for example, the aim of the MICE Project was to develop a set of impact indicators to enable impact to be fed into a CERIF-based system (MICE n.d.). This might include the citation of a piece of research in policy documents or reference to a piece of research being cited within the media. The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture this would be very valuable.
Professor James Ladyman, at the University of Bristol, a vocal opponent of awarding funding based on the assessment of research impact, has been quoted as saying that inclusion of impact in the REF will create selection pressure, promoting academic research that has more direct economic impact or that is easier to explain to the public (Corbyn 2009). Although it can be envisaged that the range of impacts derived from research in different disciplines is likely to vary, one might question whether it makes sense to compare impacts between disciplines when the range of impact can vary enormously, for example, from business development to cultural changes or saving lives. When considering the impact that is generated as a result of research, a number of authors and government recommendations have advised that a clear definition of impact is required (Duryea, Hochman, and Parfitt 2007; Grant et al. 2009; Russell Group 2009). The understanding of the term impact varies considerably, and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated.

One reason for evaluating impact is to understand the methods and routes by which research leads to impacts, so as to maximize the findings that come out of research and develop better ways of delivering impact. This is particularly recognized in the development of new government policy, where findings can influence policy debate and policy change without recognition of the contributing research (Davies et al. 2007). Although based on the RQF, the REF did not adopt all of the suggestions held within it, for example, the option of allowing research groups to opt out of impact assessment should the nature or stage of research deem it unsuitable (Donovan 2008).

Gathering evidence of the links between research and impact is not only a challenge where that evidence is lacking. The transfer of information electronically can be traced and reviewed to provide data on where and to whom research findings are going.

The Social Return on Investment (SROI) guide (The SROI Network 2012) suggests that 'The language varies ("impact", "returns", "benefits", "value") but the questions around what sort of difference and how much of a difference we are making are the same'. This is a metric that has been used within the charitable sector (Berg and Månsson 2011) and also features as evidence in the REF guidance for panel D (REF2014 2012). Although metrics can provide evidence of quantitative changes or impacts from our research, they are unable to adequately provide evidence of the qualitative impacts that take place and hence are not suitable for all of the impact we will encounter.
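At its core, the SROI calculation is simple arithmetic: the discounted value of the benefits attributable to the work, divided by the investment. The following Python sketch illustrates this under stated assumptions; the figures, the default discount rate, and the single 'deadweight' adjustment (the fraction of benefit that would have occurred anyway, i.e. the baseline) are illustrative choices for the example, not values taken from the SROI guide.

def sroi_ratio(annual_benefits, investment, discount_rate=0.035, deadweight=0.0):
    """Crude SROI: present value of attributable benefits / investment.

    annual_benefits: projected monetized benefit for each future year
    deadweight: fraction of benefit that would have occurred without the
                research (a baseline/control estimate), netted out
    """
    present_value = sum(
        b * (1 - deadweight) / (1 + discount_rate) ** year
        for year, b in enumerate(annual_benefits, start=1)
    )
    return present_value / investment

# Illustrative only: 100k invested, 40k/year of benefit over five years,
# assuming 30% of the benefit would have happened anyway.
print(round(sroi_ratio([40_000] * 5, 100_000, deadweight=0.3), 2))

A ratio above 1 indicates that the monetized benefits exceed the investment; the caveat above still applies, since not all impact can sensibly be monetized.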
There is a risk that we will focus attention towards generating results that enable boxes to be ticked rather than delivering real value for money and innovative research. There is a great deal of interest in collating terms for impact and indicators of impact. It is perhaps assumed here that a positive or beneficial effect will be considered as an impact, but what about changes that are perceived to be negative? It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that it is expected that impact and evidence will vary according to discipline (REF2014 2012).

It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was a unique collection. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained.

Numerous and widely varying models and frameworks for assessing impact exist. Perhaps the most widely used and adapted is the Payback Framework (Wooding et al. 2005; Nason et al. 2008; CAHS 2009; Spaapen et al. 2010; Hanney and González-Block 2011), which can be thought of in two parts: first, a model that allows the research and subsequent dissemination process to be broken into specific components within which the benefits of research can be studied, and second, a multi-dimensional classification scheme into which the various outputs, outcomes, and impacts can be placed (Hanney and González-Block 2011). Husbands-Fealing suggests that, to assist identification of causality for impact assessment, it is useful to develop a theoretical framework to map the actors, activities, linkages, outputs, and impacts within the system under evaluation, showing how later phases result from earlier ones.

Ideally, systems within universities internationally would be able to share data, allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data.
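A precondition for that kind of sharing is agreement on what a transferable evidence record contains. The following Python sketch shows a minimal version of such a record, including the baseline field argued for above; the field names and values are hypothetical illustrations and are not drawn from CERIF or any existing standard.

import json
from dataclasses import dataclass, asdict

@dataclass
class ImpactEvidence:
    """One shareable piece of impact evidence (hypothetical format)."""
    institution: str
    output_id: str     # identifier of the underlying research output, e.g. a DOI
    indicator: str     # what was measured
    value: float       # the measured figure
    baseline: float    # control/baseline captured alongside the metric
    year: int
    source: str        # provenance, so the figure can be verified later

record = ImpactEvidence(
    institution="University X",
    output_id="doi:10.1000/xyz",
    indicator="patients treated under revised protocol",
    value=1250.0,
    baseline=200.0,
    year=2013,
    source="hospital trust annual report",
)

# Serialized form that could move between institutional systems.
print(json.dumps(asdict(record), indent=2))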