AI Versus Human Graders: Assessing the Role of Large Language Models in Higher Education

Mahlatse Ragolane *

Research and Content Development, School of Excellence, Regent Business School (Honoris United Universities), Johannesburg, South Africa.

Shahiem Patel

Regent Business School (Honoris United Universities), Johannesburg, South Africa.

Pranisha Salikram

School of Commerce and Management, Regent Business School (Honoris United Universities), Durban, South Africa.

*Author to whom correspondence should be addressed.


Abstract

While AI grading is seeing increasing use and adoption, traditional educational practices are being forced to adapt and function alongside AI, particularly in assessment grading. Human grading, by contrast, has long been the cornerstone of educational assessment. Traditionally, educators have assessed student work against established criteria, providing feedback intended to support learning and development. While human grading offers nuanced understanding and personalized feedback, it is also subject to limitations such as grading inconsistencies, biases, and significant time demands. This paper explores the role of large language models (LLMs), such as ChatGPT-3.5 and ChatGPT-4, in grading processes in higher education and compares their effectiveness with that of traditional human grading methods. The study uses both qualitative and quantitative methodologies and extends across multiple academic programs and modules, providing a comprehensive assessment of how AI can complement or replace human graders. In Study 1, we focused on (n=195) scripts across (n=3) modules and compared GPT-3.5, GPT-4, and human graders. Manually marked scripts exhibited an average mark difference of 24%. Subsequently, (n=20) scripts were assessed using GPT-4, which provided a more precise evaluation, with an average mark difference of 4%. There were individual instances where marks were higher, but these differences could not simply be attributed to marker judgment. In Study 2, the results of the first study highlighted the need for a comprehensive memorandum; we therefore identified (n=4341) scripts, of which (n=3508) were used. The study found that AI remains efficient when the memorandum is well-structured. Furthermore, it found that while AI excels in scalability, human graders excel in interpreting complex answers, evaluating creativity, and detecting plagiarism. In Study 3, we evaluated formative assessments graded with GPT-4 (Statistics n=602, Business Statistics n=859, and Logistics Management n=522). This third study demonstrated that AI marking tools can effectively manage the demands of formative assessments, particularly in modules where the questions are objective and structured, such as Statistics and Logistics Management. An error identified in Statistics 102, however, underscored the importance of a well-designed memorandum. The study concludes that AI tools can effectively reduce the burden on educators but should be integrated into a hybrid model in which human markers and AI systems work in tandem to achieve fairness, accuracy, and quality in assessments. This paper contributes to ongoing debates about the future of AI in education by emphasizing the importance of a well-structured memorandum and of human discretion in achieving balanced and effective grading solutions.

Keywords: Artificial intelligence, LLMs, ChatGPT, higher education, assessment, AI grading


How to Cite

Ragolane, Mahlatse, Shahiem Patel, and Pranisha Salikram. 2024. “AI Versus Human Graders: Assessing the Role of Large Language Models in Higher Education”. Asian Journal of Education and Social Studies 50 (10):244-63. https://doi.org/10.9734/ajess/2024/v50i101616.