
Study finds AI-generated exam answers frequently evade detection, enabling cheating

A study conducted at the University of Reading reveals that even experienced examiners may fail to detect AI-generated answers in exams, raising concerns about cheating as AI technology advances.

AT-Campus
Updated: 6/28/2024, 11:01:00 AM

A recent study conducted by the University of Reading in the UK has found that even experienced examiners can fail to detect artificial intelligence (AI)-generated exam answers. Strikingly, these AI-generated answers not only went unnoticed but also received higher scores than answers written by real students.


This revelation raises significant concerns regarding the potential for increased cheating as AI technology continues to advance.

The study involved submitting AI-generated answers on behalf of 33 fictitious students for undergraduate psychology exams. These answers, ranging from 200-word short responses to 1,500-word essays, were evaluated by teachers from Reading's School of Psychology and Clinical Language Sciences, who were unaware of the study's nature.

Astoundingly, approximately 94 percent of these AI-generated answers escaped detection in this first real-world blind test of AI detection, underscoring how difficult it is to identify AI use in exams.

Professor Peter Scarfe, who led the project, observed that AI performed notably well in earlier academic years but encountered difficulties in final-year modules. This variability underscores the urgency for educational institutions to devise more robust methods of identifying AI-generated work.

The study, published in the journal PLOS ONE, also revealed that AI-generated responses frequently received higher grades than those produced by genuine students. This outcome raises alarms about the potential for students using AI tools not only to cheat but also to outperform peers who complete exams honestly.

GPT-4, a sophisticated AI model developed by OpenAI, was used to generate the responses for the study. The responses were submitted without the examiners' knowledge, and in most cases the examiners were unable to distinguish AI-generated answers from authentic student work.

The implications for education are profound. This study serves as a critical reminder for educational institutions to adapt to the evolving landscape of AI in education. As AI technologies become more advanced, universities and schools must continuously update their policies and assessment methods to uphold academic integrity.

This issue is particularly pertinent as leading universities, including Russell Group members such as Oxford and Cambridge, commit to integrating AI ethically into teaching and assessment.

Professor Scarfe stressed the importance of embracing AI's "new normal" in education while ensuring its integration enhances educational standards rather than compromises them.

One proposed solution is to revert to supervised, in-person exams to mitigate AI misuse in unsupervised, take-home assessments. However, this approach may not fully address AI's potential misuse in coursework and homework, which often lack direct supervision.

The University of Reading's study highlights the urgent need for educational institutions to reconsider assessment methods in response to AI advancements. By implementing stricter monitoring and adapting to technological progress, schools and universities can better protect the integrity of assessments and maintain a fair academic environment for all students.




FAQs

Why is detecting AI-generated exam answers challenging?

Experienced examiners struggle to distinguish AI-generated answers from those written by real students, as shown by recent research.

What were the findings of the University of Reading study on AI in exams?

The study revealed that AI-generated answers went unnoticed in exams and often received higher grades than those by real students.

How did AI perform across different academic levels in the study?

AI performed well in earlier academic years but faced challenges in final-year modules, indicating variability in its effectiveness.

What are the implications of AI's performance in exams for educational institutions?

Institutions need to adapt assessment methods to effectively detect and mitigate AI use, ensuring fairness and academic integrity.

What recommendations does the study offer for addressing AI's impact on exams?

The study suggests reintroducing supervised, in-person exams and continuously updating assessment policies to counter AI's potential misuse in educational settings.