Oliver Mollitt (210069747) - Newcastle University
With the development of Large Language Models (LLMs), the world has seen a growing overreliance on artificial intelligence. This overreliance carries the risk that students in educational institutions use artificial intelligence in their everyday work without learning any programming skills themselves. Hallucinations, where the response from the LLM is incorrect and potentially harmful, are another serious risk considered in this project. This project uses clear and precise prompts to submit coding assignments, along with the student's code, and receives responses in a readable format that can be displayed and stored in the project's database.
This project aims to create a web application that allows teachers to set programming assignments for students and uses artificial intelligence to help mark the code submissions. The implemented LLM uses carefully tuned prompts to retrieve the code and return accurate, detailed responses to both teachers and students. To help mitigate the risk of hallucinations produced by the LLM, teachers can edit the LLM's feedback before it reaches students, as well as provide a final grade. The results of this project show that artificial intelligence can be a useful tool for teaching programming, and that refining the prompts given to an LLM can significantly increase the accuracy, clarity and precision of its responses. The main conclusion of this dissertation is that using artificial intelligence to help teach programming in educational institutions is feasible, yet delicate.