ISSN: 2582-9793

Evaluating Students' Open-ended Written Responses with LLMs: Using the RAG Framework for GPT-3.5, GPT-4, Claude-3, and Mistral-Large

Original Research (Published On: 28-Dec-2024)
DOI: https://dx.doi.org/10.54364/AAIML.2024.44177

Jussi Jauhiainen

Adv. Artif. Intell. Mach. Learn., 4(4):3097-3113

Jussi Jauhiainen : Department of Geography and Geology, University of Turku, Vesilinnanmäki 5, 20014 Turku, Finland & Institute of Ecology and the Earth Sciences, University of Tartu, Vanemuise 46, 50010 Tartu, Estonia


Article History: Received on: 05-Oct-24, Accepted on: 21-Dec-24, Published on: 28-Dec-24

Corresponding Author: Jussi Jauhiainen

Email: jusaja@utu.fi

Citation: Jussi Jauhiainen, Agustín Garagorry Guerra (2024). Evaluating Students' Open-ended Written Responses with LLMs: Using the RAG Framework for GPT-3.5, GPT-4, Claude-3, and Mistral-Large. Adv. Artif. Intell. Mach. Learn., 4(4):3097-3113


Abstract

    

Evaluating students' open-ended written examination responses is an essential yet time-intensive task for educators, requiring sustained effort, consistency, and precision. Recent developments in Large Language Models (LLMs) offer a promising opportunity to balance the need for thorough evaluation with efficient use of educators' time. We explore how four LLMs (GPT-3.5, GPT-4, Claude-3, and Mistral-Large) assess university students' open-ended responses to questions about reference material they have studied. Each model was instructed to evaluate 54 responses repeatedly under two conditions: 10 times (10-shot) with a temperature setting of 0.0 and 10 times with a temperature of 0.5, for an expected total of 1,080 evaluations per model and 4,320 evaluations across all models. The Retrieval Augmented Generation (RAG) framework was used to enable the LLMs to process the evaluations. Notable variations emerged in the consistency and grading outcomes of the studied LLMs. These findings underline the need to understand the strengths and weaknesses of using LLMs for educational assessment.
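
As a rough illustration of the evaluation protocol summarized above, the Python sketch below (not the authors' implementation) loops each model over the 54 responses ten times at each temperature setting. The query_model() wrapper, the prompt wording, and the reference_excerpt placeholder are hypothetical stand-ins for the actual RAG pipeline used in the study.

# Minimal sketch of the repeated-evaluation protocol described in the abstract
# (illustrative only, not the authors' code). query_model(), the prompt wording,
# and reference_excerpt are hypothetical; in the study the reference material is
# supplied to each model through a RAG pipeline.

from itertools import product

MODELS = ["gpt-3.5", "gpt-4", "claude-3", "mistral-large"]
TEMPERATURES = [0.0, 0.5]   # two conditions per model
REPEATS = 10                # 10-shot per condition

def query_model(model: str, prompt: str, temperature: float) -> str:
    """Hypothetical wrapper around each provider's chat-completion API."""
    raise NotImplementedError

def evaluate_all(responses: list[str], reference_excerpt: str) -> list[dict]:
    evaluations = []
    for model, temperature in product(MODELS, TEMPERATURES):
        for idx, response in enumerate(responses):   # 54 student responses
            for run in range(REPEATS):
                prompt = (
                    "Grade the student's answer against the reference material.\n\n"
                    f"Reference material:\n{reference_excerpt}\n\n"
                    f"Student answer:\n{response}"
                )
                grade = query_model(model, prompt, temperature)
                evaluations.append({"model": model, "temperature": temperature,
                                    "response": idx, "run": run, "grade": grade})
    # 4 models x 2 temperatures x 54 responses x 10 runs = 4,320 evaluations
    return evaluations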
