Exhibition evaluation explained

Exhibition evaluation is a four-step process with opportunities at each stage to test the effectiveness of an exhibition's messages and interpretive approaches.

Front-end evaluation occurs during the exhibition development stage to gauge audience interest levels and prior knowledge about the subject. It is used to shape themes, target audiences, goals, messages, and interpretive strategies.

The aims are to:

  • help identify the project brief
  • gain an understanding of the potential audiences' prior knowledge and interests, particularly in relation to the exhibition's key concepts
  • test theories about visitor behaviour and learning
  • find out audience needs and how these can be met
  • collect relevant information about audiences and any proposed ideas to help decision making.

The methods used include:

  • focus groups
  • large- and small-scale sample surveys/questionnaires (see the tally sketch after this list)
  • unstructured and semi-structured interviews
  • informal conversations and feedback
  • computer-based and online surveys
  • community days/workshops
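
As a rough illustration of how responses from a small front-end survey might be tallied, the Python sketch below counts hypothetical answers to a prior-knowledge question. The question wording, the four-point scale and the data are all invented for the example.

    from collections import Counter

    # Hypothetical responses to a front-end survey question such as
    # "How much do you already know about the proposed topic?"
    # The scale and the data are invented for illustration.
    responses = [
        "a little", "nothing", "a fair amount", "a little", "a lot",
        "nothing", "a little", "a little", "a fair amount", "nothing",
    ]

    counts = Counter(responses)
    total = len(responses)

    print(f"n = {total}")
    for level in ("nothing", "a little", "a fair amount", "a lot"):
        n = counts.get(level, 0)
        print(f"{level:>14}: {n:2d} ({n / total:.0%})")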

Other resources include:

  • existing market research studies
  • literature reviews
  • evaluation reports for similar projects

Formative evaluation happens during development and production to test exhibition components, such as text, labels, graphics and interactives. Because it takes place during the development stage, the findings can be incorporated into the finished product. Mock-ups of proposed exhibits, texts, and other communication tools are often used.

The aims are to:

  • seek feedback on how well the proposed program communicates its messages
  • produce the optimum program within the limits of what's possible
  • provide insight into learning and communication processes.

The methods used include:

  • small-scale samples of visitors and/or others (a minimum of 15-20 at each stage of testing is optimal)
  • semi-structured interviews
  • cued and non-cued observations
  • 'workshopping' with staff and/or special interest groups

Iterative methodologies are used, incorporating the findings from each round of testing into the next until the developers are satisfied with the item/s being tested.
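
As a loose sketch of this iterative cycle (the 80% comprehension target, the sample size, and the simulated test scores are all assumptions made for the example):

    import random

    COMPREHENSION_TARGET = 0.8  # assumed target: 80% grasp the key message
    SAMPLE_SIZE = 15            # small-scale sample, per the guideline above

    def test_mockup(round_no: int) -> float:
        """Stand-in for showing a mock-up to a small visitor sample and
        recording the proportion who correctly restate its key message.
        Scores are simulated here and tend to improve after revisions."""
        hits = sum(random.random() < 0.5 + 0.1 * round_no
                   for _ in range(SAMPLE_SIZE))
        return hits / SAMPLE_SIZE

    round_no = 0
    while True:
        round_no += 1
        score = test_mockup(round_no)
        print(f"round {round_no}: {score:.0%} comprehension")
        if score >= COMPREHENSION_TARGET:
            break  # developers satisfied with the component
        # ...otherwise revise text/graphics in light of findings, re-test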

Other resources include:

  • literature searches
  • previous evaluations conducted by other institutions
  • consultants and peers

Remedial evaluation is conducted immediately after an exhibition or program opens to see how all parts of the exhibition work together and to make practical suggestions for improvements. It focuses on physical and architectural features, such as lighting, placement of thematic headlines, and entrances and exits, as well as psychological factors, including disorientation, crowds, thematic layout, information overload, fatigue, social activity and so on.

The aims are to:

  • check that the program 'works' in a practical sense
  • determine what maintenance/resources are needed
  • improve the short- or long-term effectiveness of the program for visitors
  • provide some early insights into how visitors use the program.

The methods used include:

  • observations
  • informal feedback from visitors
  • feedback sheets (see the collation sketch after this list)
  • surveys and interviews
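
Purely as an illustration of how such feedback might be collated (the problem reports below are invented), issues can be ranked by how often they are raised so the most pressing fixes surface first:

    from collections import Counter

    # Invented problem reports gathered from observations, feedback
    # sheets and floor staff in the first weeks after opening.
    reports = [
        "labels too dimly lit", "entrance hard to find",
        "crowding at the interactive", "labels too dimly lit",
        "entrance hard to find", "entrance hard to find",
    ]

    # Rank issues by frequency to prioritise remedial work.
    for issue, count in Counter(reports).most_common():
        print(f"{count} report(s): {issue}")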

Other resources include:

  • comments books
  • staff feedback (especially front-of-house and floor staff)

Summative evaluation uses a variety of methods at the conclusion of an exhibition or program to check whether it delivered the intended messages, what learning occurred, how satisfied people were with the program, and how well the marketing strategy performed. It is conducted on the finished exhibit or program and its components, using a combination of internal sources (Project Team, other staff) and external feedback (visitors, special interest groups, others).

The aims are to:

  • give feedback about achievement of objectives
  • provide information on how a program is working overall, how people use it, what they learn from it, or how they are changed
  • inform reports and plans for future projects, suggest research, identify problems with visitor usage, interest and learning, and identify successful strategies, layouts, etc
  • identify the relationship between the program costs and outcomes through a cost/benefit analysis (a worked sketch follows this list).
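
As a toy example of the cost/benefit arithmetic (every figure below is invented; a real analysis would use the project's own cost and outcome measures):

    # All figures are invented for illustration.
    total_cost = 240_000.00  # assumed: design, production, staffing, marketing
    visitors = 80_000        # assumed attendance over the exhibition's run
    revenue = 300_000.00     # assumed: admissions, programs and retail income

    cost_per_visitor = total_cost / visitors
    benefit_cost_ratio = revenue / total_cost

    print(f"cost per visitor:   ${cost_per_visitor:.2f}")  # $3.00
    print(f"benefit/cost ratio: {benefit_cost_ratio:.2f}")  # 1.25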

The methods used include:

  • large scale visitor surveys
  • structured observations to gauge visitor interest and the program's effectiveness in attracting visitors and holding their attention (see the sketch after this list)
  • formal 'testing' with visitors or groups
  • in-depth interviews
  • critical appraisal
  • media/critical reviews
  • visitor numbers/counts
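
For example, structured observation records can be reduced to the standard 'attracting power' and 'holding power' measures; one common formulation is sketched below (the observation data and the 45-second intended viewing time are invented):

    # Invented records for one exhibit component: for each passing
    # visitor, whether they stopped and how many seconds they spent.
    observations = [
        (True, 30), (False, 0), (True, 75), (True, 20), (False, 0),
        (True, 60), (False, 0), (True, 45), (False, 0), (True, 40),
    ]
    INTENDED_VIEWING_TIME = 45  # assumed seconds needed to take in the content

    dwell_times = [secs for stopped, secs in observations if stopped]

    # Attracting power: proportion of passers-by who stop at the component.
    attracting_power = len(dwell_times) / len(observations)

    # Holding power: mean dwell time relative to the intended viewing time.
    holding_power = (sum(dwell_times) / len(dwell_times)) / INTENDED_VIEWING_TIME

    print(f"attracting power: {attracting_power:.0%}")  # 60%
    print(f"holding power:    {holding_power:.2f}")     # 1.00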

Other resources include:

  • comments book
  • public feedback (eg letters)
  • revenue reports
  • statistics - visitor numbers, bookings, etc

Lynda Kelly