Imke HEDDER, Wissenschaft im Dialog, Germany
Ricarda ZIEGLER, Wissenschaft im Dialog, Germany
Liliann FISCHER, Wissenschaft im Dialog, Germany
In recent years, science communication has become increasingly institutionalized and diversified, as reflected in the growing number of new actors, networks and formats in the field. This trend can also be observed in Germany, where science communicators experiment with new approaches and media channels, while academic, public and political institutions debate the future direction of good science communication practice.
Meeting this standard of good practice requires more than simply increasing the number of science communication activities; these activities must also be effective and continuously improved. This, in turn, requires systematically exploring what such efforts achieve, whom they reach and what impact they have. An essential tool for this examination is meaningful evaluation.
Scholars have repeatedly pointed out that evaluations in our field are still lacking in various respects. The Impact Unit (a project by the organisation Wissenschaft im Dialog, funded by the German Federal Ministry of Education and Research) aims to contribute to an improved evaluation practice in German science communication, starting with an exploration of the status quo: a systematic review of evaluation reports, an online survey of science communicators and a series of workshops with stakeholders from research, practice and funding agencies offer insights into the state of science communication and its evaluation, with an emphasis on practitioners’ perspectives. Based on these analyses, we identified three main challenges to science communication evaluation:
(1) Abstract visions of long-term societal impact are conflated with measurable project aims, and goals and target groups often lack precise definitions. (2) Although many evaluations highlight an interest in the impact of the examined projects, the methods chosen rarely allow scientifically valid impact assessments; the lack of rigorous repeated measures and the reliance on self-report methods are key issues in this regard. (3) The fact that few evaluation processes are made transparent and that evaluations accompanying a project’s process are rarely published indicates a tendency to frame evaluations as the final “success story” of a project rather than as a learning process. This common understanding stands in the way of reflecting on where projects fell short of their expectations and thus hinders a constructive discussion of the actual impact of science communication and its potential for improvement.
These exploratory insights reveal central deficits that must be overcome to establish a meaningful evaluation practice that is evidence-based, systematically planned, scientifically sound and transparent about its process and limitations. The results also point to current needs in the field, including further training, closer collaboration and extrinsic incentives, which can only be addressed through the cooperation of practitioners, researchers, funders and managers of scientific institutions.