Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits
Advanced Large Language Models (LLMs) such as GPT-4o and Gemini
1.5 Pro show significant potential for enhancing problem-solving in
coding. This master’s thesis provides a quantitative comparison of these two
models, focusing on their performance in solving algorithmic and data structure
problems while exploring their reasoning capabilities through experimental
research.
The study adopts a quantitative approach, presenting the models with a
diverse set of coding problems sourced from LeetCode, an online platform for
coding challenges. Each solution is rigorously evaluated through unit testing
to determine correctness and by measuring execution time to assess efficiency.
This comprehensive method offers insights into the models’ problem-solving
effectiveness and computational efficiency.
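The evaluation approach described above, unit-testing each solution for correctness and timing its execution for efficiency, can be sketched roughly as follows. The problem (Two Sum), function names, and test cases here are illustrative assumptions, not the thesis's actual harness or benchmark set.

```python
import time

def two_sum(nums, target):
    # Hypothetical candidate solution (as an LLM might generate it):
    # hash-map lookup, O(n) time.
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []

def evaluate(solution, cases):
    """Run unit-test cases against a solution; report correctness and runtime.

    `cases` is a list of ((args...), expected) pairs. Returns (passed, seconds).
    """
    start = time.perf_counter()
    passed = all(solution(*args) == expected for args, expected in cases)
    elapsed = time.perf_counter() - start
    return passed, elapsed

cases = [
    (([2, 7, 11, 15], 9), [0, 1]),
    (([3, 2, 4], 6), [1, 2]),
]
ok, seconds = evaluate(two_sum, cases)
```

In this sketch, `ok` captures the correctness criterion and `seconds` the efficiency criterion, mirroring the two measurements the study combines.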
The findings reveal that GPT-4o outperformed Gemini 1.5 Pro in problem-solving
ability, particularly in handling complex algorithmic tasks. However,
Gemini 1.5 Pro demonstrated slightly better efficiency, achieving marginally
lower execution times on average. Both models significantly outperformed human
programmers in efficiency, producing more optimized solutions. These
results highlight the potential of LLMs like GPT-4o and Gemini 1.5 Pro to
address specific coding challenges effectively, paving the way for their broader
application in computational problem-solving.
2025.
Large Language Models, GPT-4o, Gemini 1.5 Pro, Algorithm and Data Structure Problem-Solving, Quantitative Research