East of the sun and west of the moon: Is measuring the impact of public engagement with science a fantasy?
Can the impact of public engagement with science (PES) be identified, measured and reported? In other words, what are the effects, outcomes, products or results of PES and how can these be evidenced? Questions like these have been the subject of much debate amongst PES practitioners, researchers and policy makers. This article draws on debates in the UK and US, where arguments about measuring the impact of different public programmes, from PES to health care to education, have become polarised in the literature. Some, like Slavin (2002, 2008), have argued that positive reforms of health care systems in the US have come as a direct result of the use of impact assessments, in particular the method of randomised controlled trials (RCTs). He argues the same improvements could be made to education systems using these same methods. Others, such as Biesta (2007) and Oakley (2002), question whether research models from medicine can be straightforwardly translated into educational settings.
Regardless of such polarised arguments, there are serious questions about the role of impact studies, especially in PES, not least because funders and policy makers want to know more about what impact their spending has; some have even suggested that funding must be based on evidence of impact (Haynes, Service, Goldacre, & Torgerson, 2012). It is also important to consider the impacts other participants in PES projects may be interested in: what impacts do members of the public, community groups or school students hope will arise from their involvement in PES? From whose perspective is impact measured? What kinds of impact are counted and which are overlooked?
Exploring impact raises difficult questions for the monitoring and evaluation of PES, not to mention PES practice and research. Firstly, there is little agreement about what PES 'is', which suggests any impacts PES activities may have are likely to be varied and diffuse. Secondly, at least three different perspectives on measuring the impact of PES can be identified - instrumental, economic and experimental - and their implications for PES research and practice are not always clear.
The 'umbrella' of public engagement with science
PES has been described as a "multidiscipline" (Irwin & Michael, 2003, p. xi): it is hard to define, constantly changing and highly contextual. Each of the terms 'public', 'engagement' and 'science' has been contested by researchers, policy makers and practitioners (Dillon, 2009; Horst & Michael, 2011; Marres, 2005; Stengers, 2000). The closest PES scholars have come to a compromise is the development of PES frameworks. Some have been extremely complex, such as that developed by Rowe and Frewer (2005), which includes a range of engagement formats informed by different theoretical backgrounds. Others are simple, such as the public engagement triangle (British Science Association, 2010), which presents PES as part transmission and reception and part co-construction of information, opinions and ideas.
A model of PES developed by Trench (2008) provides a useful middle ground for thinking about the umbrella of ideas, practices, publics and theories involved in PES. Trench argues PES can be understood as a spectrum of concepts, ranging from deficit and information provision, to dialogue and communication 'with' rather than 'at' the public, to participation and the building of collaborative engagement projects. This model - as well as the wider range of research on PES - suggests that PES is complex, contextual and varied.
In practice PES includes activities designed to teach people, such as those typical of informal science learning environments (King & Dillon, 2012), activities designed to involve people in political decisions (Horst, 2010), programmes created to 'sell' science (Gregory, Agar, Lock, & Harris, 2007) or activities designed to build knowledge collaboratively (Roth & Lee, 2002). Examples of PES practice may therefore include radio programmes in Malawi, such as Umoyo Nkukambirana, designed to provide information about health research and involve listeners in debates on health topics (McCall, 2012), schools projects in Kenya that aim to strengthen mutual understanding between medical researchers and school students (Davies, Mbete, Fegan, Molyneux, & Kinyanjui, 2012), or projects that ask communities about their health needs through the development of community advisory boards for medical research in Thailand (Cheah et al., 2010).
As you might expect, such differences in practice, context and participants indicate differences in theoretical perspectives, aims, activity design, outcomes and impacts. These differences imply that the criteria for identifying, measuring and theorising the impact of PES could vary significantly from project to project, from country to country and from participant to participant.
To take a step further back, it is important to remember that PES events, whatever form they take, are embedded within a bigger world where most people interact with science through school and the media, if at all. From the perspective of an individual involved in a particular PES project, in the longer term, how can the impact of that one project be distinguished from all the other events, learning experiences, conversations and opinions of their life as a whole? It is also useful to keep in mind the multiple directions impact may have. For example, participating in PES may affect the researchers and facilitators involved in a project in ways that are as important to understand as the impact on participating members of the public.
Three perspectives on impact: Instrumental, economic and experimental
Research on PES suggests, then, that understanding the impact of PES is difficult, whether PES is considered as a disorganised field of policy, research and activity or in the case of a specific project. Why then has exploring impact become a focus for people involved in PES projects and their monitoring and evaluation? The reasons for trying to measure the impact of PES can be grouped into three themes: instrumental reasons, economic reasons and an experimental model of understanding PES.
Instrumental reasons for trying to measure impact are straightforward: we want PES projects to be useful, to do what they aim to do and, at the very least, not to make things worse for anyone involved. In this context, instrumental arguments for studying impact are about how to improve PES processes; in other words, exploring PES impacts is a way to make engagement better. To paraphrase Dewey (1938), people learn from experience, but that does not mean all experiences result in learning. Monitoring, evaluation and impact assessment are important professional tools that help those involved in designing and delivering PES projects to do their job.
Some have argued that PES is a "gold standard" (Felt & Fochler, 2008, p. 489), but this supposedly unquestionable value of PES is, of course, questioned. Many examples of problematic PES projects exist, with criticisms ranging from tokenism, to top-down control and political marketing, to the reproduction of social disadvantage (Burchell, 2007; Dawson, 2012; Rowe, Horlick-Jones, Walls, & Pidgeon, 2005). Clearly, under these circumstances, not all PES projects meet this 'gold standard'.
As Durant (2012) has argued, it is important that the PES community moves on from just 'doing' engagement to doing good quality engagement. While, as argued above, what counts as 'good quality' PES is contested, monitoring, evaluating and exploring impact are key tools for developing quality PES projects and professionalising PES as a field of activity. In addition, a great deal of research on PES already exists that can be used to help design better PES projects (Nisbet & Scheufele, 2009). The point here is that it is not enough just to do research or impact studies; it is important to make use of those research outcomes, however big or small, to continue to develop PES practices.
Economic reasons for investigating the impact of PES are a driving force behind the impact agenda. While the roots of economic impact assessment lie in arguments about accountability and managerialism, in the UK at least, it is easy to understand the importance of being able to show value for money during a global recession. In this sense, impact studies are about legitimation, about demonstrating the value of PES. This rationale does not represent a sudden shift in policy perspectives. As Slavin has suggested, "the accountability movement is hardly new, it has been the dominant education policy focus since the early 1980s" (2002, p. 19). What is new in PES is the combination of accountability related practices for science, education and policy. Elzinga describes this as the "extreme accentuation of accountability" that has resulted from the growth of "megascience" (2012, p. 422).
The background to this situation for PES is not straightforward and has its roots in arguments about the risk society: the unparalleled risks posed by the scale of contemporary science and the need for both public legitimation of science and public accountability for science (Beck, 1992; Fiorino, 1990; Ravetz, 2005; Wynne, 2006). It is important to remember that in this sense, PES is itself a form of accountability for the wider scientific community. As such, there is a certain amount of layering of accountability and legitimation in understanding the issues of impact assessment in PES.
The third theme within discussions of impact and PES is the presence of an experimental model that underlies how impact is understood. While some researchers argue that the impact of any science engagement experience must be understood as part of a nested series of events over a lifetime (Falk & Dierking, 2012; Roth & Van Eijck, 2010), impact in PES is often understood as the identifiable and measurable outcomes of a particular experience. This has been described as a "conveyor-belt" (Macdonald, 2002, p. 219) or "container" model (Leander, Phillips, & Taylor, 2010, p. 332) of understanding engagement events. In the experimental model, PES activities are seen as interventions, interventions which, importantly, are expected to have some effects. This is not unreasonable - people are not involved in PES because they believe it to have no impact.
More controversial is how, when and why impact is researched, and the deficit perspective implicit in the 'container' model. Framing PES events as interventions suggests that PES activities can provide people with interest, power or knowledge such that the effects can be measurable within a limited time frame. One particular method, RCTs, has become emblematic of discussions about the impact of PES. RCTs have been lauded by some (Haynes et al., 2012) and lampooned by others (Biesta, 2007). Framing PES events as interventions with measurable impacts creates a perspective from which some impacts can be defined, identified and measured. It also risks stripping away contexts, differences and the unexpected, as well as potentially attributing too much impact to a specific event. In particular, impact measurement can narrow the focus of PES research, such that only expected impacts are looked for, and as a result, only these impacts are found. This makes it difficult to identify unexpected impacts such as negative or problematic PES outcomes.
Conclusion: Welcome to the middle ground
Exploring the impact of PES activities is an increasingly prioritised feature of monitoring and evaluation. Given the complicated and contested background of the impact agenda, it is worth thinking carefully about what measuring impact means for PES practice and research. It is important to remember the 'so what' arguments about impact studies. For example, arguments about the value of research on impact include assumptions about the extent to which impact studies will be taken into account by policy makers and funders in their decisions about resource distribution. This is a big assumption to make and one that is made by both researchers and policy makers (Haynes et al., 2012; Slavin, 2008). The extent to which this part of the economic rationale for assessing PES impacts holds true is questionable. Given questions like this, the instrumental rationale for exploring the impact of PES, as one aspect of developing a better quality, more professional field of practice, may be a stronger argument to reflect upon.
Investigating impact is a specific aspect of PES research, and, it should be remembered, not the most important or only issue for PES research. That does not mean researching and understanding the impact of PES is unimportant. Exploring the impact of PES is a useful tool within PES research, monitoring and evaluation, with convincing instrumental and economic rationales. The difficulty in measuring the impact of PES is that there is little agreement on what to measure or how to measure it. As Rowe and Frewer have argued: "The difficulty lies in the fact that effectiveness in this domain is not an obvious, unidimensional and objective quality (such as speed or distance) that can be easily identified, described, and then measured" (2004, p. 517). In pragmatic terms, however, this simply suggests that to turn impact studies from fantasy to reality, care and thought must be given to the design of impact studies, and design decisions about methods, underlying assumptions and claims made must be made transparent.
- Beck, U. (1992). Risk society: Towards a new modernity. London, Thousand Oaks, New Delhi: Sage.
- Biesta, G. (2007). Why "what works" won't work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1-22. doi: 10.1111/j.1741-5446.2006.00241.x
- British Science Association. (2010). The public engagement triangle: Science for All public engagement conversational tool (Version 6). London.
- Burchell, K. (2007). UK governmental public dialogue on science and technology, 1998-2007: Consistency, hybridity and boundary work.
- Cheah, P. Y., Lwin, K. M., Phaiphun, L., Maelankiri, L., Parker, M., Day, N. P., . . . Nosten, F. (2010). Community engagement on the Thai-Burmese border: rationale, experience and lessons learnt. International Health, 2(2), 123-129.
- Davies, A., Mbete, B., Fegan, G., Molyneux, S., & Kinyanjui, S. (2012). Seeing 'with my own eyes': Strengthening interactions between researchers and schools. IDS Bulletin, 43(5), 61-67. doi: 10.1111/j.1759-5436.2012.00364.x
- Dawson, E. (2012). Non-participation in public engagement with science: A study of four socio-economically disadvantaged, minority ethnic groups. King's College London, London.
- Dewey, J. (1938). Experience and education. New York: The Macmillan Company.
- Dillon, J. (2009). On scientific literacy and curriculum reform. International Journal of Environmental & Science Education, 4(3), 201-213.
- Durant, J. (2012). The problem is the problem: What would count as a successful problem definition in "PUS research"? Paper presented at the 12th International Public Communication of Science and Technology Conference, 18th-20th April, Florence, Italy.
- Elzinga, A. (2012). Features of the current science policy regime: Viewed in historical perspective. Science and Public Policy, 39(4), 416-428. doi: 10.1093/scipol/scs046
- Falk, J., & Dierking, L. D. (2012). Lifelong learning for adults: The role of free-choice experiences. In B. Fraser, K. Tobin & C. J. McRobbie (Eds.), Second international handbook of science education (pp. 1063-1080). London and New York: Springer.
- Felt, U., & Fochler, M. (2008). The bottom-up meanings of the concept of public participation in science and technology. Science and Public Policy, 35(7), 489-499. doi: 10.3152/030234208x329086
- Fiorino, D. (1990). Citizen participation and environmental risk: A survey of institutional mechanisms. Science, Technology & Human Values, 15(2), 226-243.
- Gregory, J., Agar, J., Lock, S. J., & Harris, S. (2007). Public engagement of science in the private sector: A new form of PR? In M. W. Bauer & M. Bucchi (Eds.), Journalism, science and society (pp. 203-214). New York and Abingdon: Routledge.
- Haynes, L., Service, O., Goldacre, B., & Torgerson, D. (2012). Test, learn, adapt: Developing public policy with randomised controlled trials. London: Cabinet Office Behavioural Insights Team. Retrieved from https://update.cabinetoffice.gov.uk/resource-library/test-learn-adapt-developing-public-policy-randomised-controlled-trials
- Horst, M. (2010). Collective closure? Public debate as the solution to controversies about science and technology. Acta Sociologica, 53(3), 195-211. doi: 10.1177/0001699310374904
- Horst, M., & Michael, M. (2011). On the shoulders of idiots: Re-thinking science communication as 'event'. Science as Culture, 20(3), 283-306. doi: 10.1080/09505431.2010.524199
- Irwin, A., & Michael, M. (2003). Science, social theory and public knowledge. Maidenhead and Philadelphia: Open University Press.
- King, H., & Dillon, J. (2012). Learning in informal settings. In N. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 1905-1908). New York: Springer.
- Leander, K. M., Phillips, N. C., & Taylor, K. H. (2010). The changing social spaces of learning: Mapping new mobilities. Review of Research in Education, 34(1), 329-394. doi: 10.3102/0091732x09358129
- Macdonald, S. (2002). Behind the scenes at the Science Museum. Oxford and New York: Berg.
- Marres, N. (2005). No issue, no public: Democratic deficits after the displacement of politics. PhD monograph, University of Amsterdam, Amsterdam.
- McCall, B. (2012). Profile: MLW optimises community engagement in research. The Lancet, 380(9839), 328.
- Nisbet, M. C., & Scheufele, D. A. (2009). What's next for science communication? Promising directions and lingering distractions. American Journal of Botany, 96(10), 1767-1778. doi: 10.3732/ajb.0900041
- Oakley, A. (2002). Social science and evidence-based everything: The case of education. Educational Review, 54(3), 277-286. doi: 10.1080/0013191022000016329
- Ravetz, J. (2005). The post-normal safety of science. In M. Leach, I. Scoones & B. Wynne (Eds.), Science and citizens: Globalization and the challenge of engagement (pp. 43-53). London: Zed Books.
- Roth, W.-M., & Lee, S. (2002). Scientific literacy as collective praxis. Public Understanding of Science, 11(1), 33-56. doi: 10.1088/0963-6625/11/1/302
- Roth, W.-M., & Van Eijck, M. (2010). Fullness of life as a minimal unit: Science, technology, engineering and mathematics (STEM) learning across the life span. Science Education, 94(6), 1027-1048.
- Rowe, G., & Frewer, L. J. (2004). Evaluating public-participation exercises: A research agenda. Science, Technology & Human Values, 29(4), 512-556. doi: 10.1177/0162243903259197
- Rowe, G., & Frewer, L. J. (2005). A typology of public engagement mechanisms. Science, Technology & Human Values, 30(2), 251-290. doi: 10.1177/0162243904271724
- Rowe, G., Horlick-Jones, T., Walls, J., & Pidgeon, N. (2005). Difficulties in evaluating public engagement initiatives: Reflections on an evaluation of the UK GM Nation? public debate about transgenic crops. Public Understanding of Science, 14(4), 331-352. doi: 10.1177/0963662505056611
- Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15-21. doi: 10.3102/0013189x031007015
- Slavin, R. E. (2008). Perspectives on evidence-based research in education - What works? Issues in synthesizing educational program evaluations. Educational Researcher, 37(1), 5-14. doi: 10.3102/0013189x08314117
- Stengers, I. (2000). The invention of modern science (D. W. Smith, Trans.). Minneapolis: University of Minnesota Press.
- Trench, B. (2008). Towards an analytical framework of science communication models. In D. Cheng, M. Claessens, T. Gascoigne, J. Metcalfe, B. Schiele & S. Shi (Eds.), Communicating science in social contexts (pp. 119-135). Netherlands: Springer.
- Wynne, B. (2006). Public engagement as a means of restoring public trust in science - Hitting the notes, but missing the music? Community Genetics, 9(3), 211-220. doi: 10.1159/000092659