• Assessments

    Unlocking the Power of AI for More Creative Assessments

The influence of ChatGPT on higher education has raised questions about assessment validity, while also offering the potential to enhance learning and critical thinking.

As technological progress continues to reshape the educational landscape, preserving academic integrity has become an increasingly pressing concern. On one hand, GenAI offers the potential to revolutionize education at multiple levels. It can enhance personalized learning, streamline administrative tasks, and offer novel ways of explaining complex topics. On the other hand, it poses multiple challenges, such as phantom references, biased training data, and threats to academic integrity, since AI can generate a complete student assessment submission from a single prompt.

However, with thoughtful usage and design, teachers can leverage GenAI as a tool to create more dynamic, engaging, and individualized learning experiences for students. The potential effects are extensive: GenAI stands to change how we design and deliver courses, how we assess and provide feedback, and how we engage students in the learning process, dramatically reshaping teaching and assessment methods.

The following are suggestions for assessments that are resilient to ChatGPT. Other alternatives include collaborative wiki projects, AI-assisted peer review, game-based learning assessments, and augmented reality (AR)-based learning.

You can find a full guide for integrating GenAI into assessment strategies here: Redesigning Assessment with Generative AI: A Guide for Teachers (HKU Portal login required). You can also create your own new assessments for your course.

  • Action Speaks Louder: Using OSCE to assess in non-clinical disciplines

    Objective Structured Clinical Examinations (OSCEs) are a style of examination often used in the health sciences to evaluate competence in a range of skills. Applying OSCEs in Engineering and Law studies is a step towards bridging the gap between theoretical knowledge and real-world application. By setting up scenarios that closely mimic situations students might face in their professional lives, these assessments push students to think and act in a manner that tests their understanding, critical thinking, and practical application of their knowledge.
  • Critical Evaluation of AI-Generated Essays

    Students are given AI-generated essays and are tasked with critiquing them. The assessment revolves around their ability to critically review, identify areas of strength and weakness, provide constructive feedback, and mark the essays based on predefined rubrics.
  • Elevator Pitch

    The Elevator Pitch assessment involves students succinctly presenting their concepts, ideas, or solutions within a limited time frame. The pitch typically lasts between 30 seconds and 2 minutes (the average length of an elevator ride). The goal of an elevator pitch is to communicate an idea or concept briefly yet persuasively, in a way that captures the listener’s interest.
  • Role-Play Assessment

    Role-Play Assessment involves students taking on specific roles in real-world contexts to display their understanding of course concepts. This might include re-enacting a historical event, acting out a business negotiation, simulating a political debate, or even conducting a mock trial. Students must research their roles, understand the perspectives they need to embody, and work within their roles to address the scenario.

Case study: The impact of ChatGPT on engineering education assessments

In a recent study, nine researchers from seven Australian universities benchmarked their assessments against ChatGPT to gain insights into the strengths and weaknesses of assessments used in engineering education. In a subject-by-subject analysis, ChatGPT passed three subjects, failed five, and produced inconclusive results for two. In terms of assessment types, ChatGPT passed four types, failed three, and tied on two.

The researchers also provided a set of recommendations for assessment design. One noteworthy strategy is the use of oral presentation assessments, as ChatGPT cannot participate on behalf of a student in this format. Additionally, experimentation and laboratory work were identified as reliable methods for safeguarding academic integrity.

Reference
  • Nikolic, S., Daniel, S., Haque, R., Belkina, M., Hassan, G. M., Grundy, S., … & Sandison, C. (2023). ChatGPT versus engineering education assessment: a multidisciplinary and multi-institutional benchmarking and analysis of this generative artificial intelligence tool to investigate assessment integrity. European Journal of Engineering Education, 1-56.