Digitala Vetenskapliga Arkivet

Evaluating the performance of GPT-4o and Gemini 1.5 Pro on algorithms and data structure coding problems
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
2025 (English). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

The advent of advanced Large Language Models (LLMs) like GPT-4o and Gemini 1.5 Pro demonstrates significant potential for enhancing problem-solving in coding. This master's thesis provides a quantitative comparison of these two models, focusing on their performance in solving algorithmic and data structure problems while exploring their reasoning capabilities through experimental research.

The study adopts a quantitative approach, presenting the models with a diverse set of coding problems sourced from LeetCode, an online platform for coding challenges. Each solution is rigorously evaluated through unit testing to determine correctness and by measuring execution time to assess efficiency. This comprehensive method offers insights into the models' problem-solving effectiveness and computational efficiency.

The findings reveal that GPT-4o outperformed Gemini 1.5 Pro in problem-solving ability, particularly in handling complex algorithmic tasks. However, Gemini 1.5 Pro demonstrated slightly better efficiency, achieving marginally lower execution times on average. Both models significantly outperformed human programmers in efficiency, producing more optimized solutions. These results highlight the potential of LLMs like GPT-4o and Gemini 1.5 Pro to address specific coding challenges effectively, paving the way for their broader application in computational problem-solving.
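The evaluation procedure described in the abstract — running each generated solution against unit tests for correctness and timing its execution for efficiency — could be sketched roughly as follows. This is a minimal illustration, not the thesis's actual harness; the function names, the reporting format, and the example LeetCode-style problem are all assumptions for illustration.

```python
import time
from typing import Any, Callable, Iterable, Tuple

def evaluate_solution(solution: Callable,
                      test_cases: Iterable[Tuple[tuple, Any]]) -> dict:
    """Run a candidate solution against unit tests and time each call.

    Hypothetical harness: returns the number of passed tests, the total
    number of tests, and the mean execution time in seconds.
    """
    passed, total, elapsed = 0, 0, 0.0
    for args, expected in test_cases:
        total += 1
        start = time.perf_counter()
        result = solution(*args)
        elapsed += time.perf_counter() - start
        if result == expected:
            passed += 1
    return {
        "passed": passed,
        "total": total,
        "mean_time_s": elapsed / total if total else 0.0,
    }

# Example candidate: a LeetCode-style "two sum" solution, standing in
# for an LLM-generated answer.
def two_sum(nums, target):
    seen = {}
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i

cases = [(([2, 7, 11, 15], 9), [0, 1]),
         (([3, 2, 4], 6), [1, 2])]
report = evaluate_solution(two_sum, cases)
print(report["passed"], "/", report["total"])  # → 2 / 2
```

Correctness and efficiency are deliberately measured in the same pass here; a study comparing models would aggregate such reports across the full problem set.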

Place, publisher, year, edition, pages
2025.
Keywords [en]
Large Language Models, GPT-4o, Gemini 1.5 Pro, Algorithm and Data Structure Problem-Solving, Quantitative Research
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:su:diva-242704
OAI: oai:DiVA.org:su-242704
DiVA, id: diva2:1955595
Available from: 2025-04-30. Created: 2025-04-30.

Open Access in DiVA

fulltext (749 kB), 61 downloads
File information
File name: FULLTEXT01.pdf
File size: 749 kB
Checksum (SHA-512): c42c89a1acdcf42d6f1ea495f2e75462d5538210ecc549dd8d3916ec71a73b32ecb6a03ca23dde91b1d9530bd64def5a648a053fbf7f5062032891570ffecb2f
Type: fulltext. Mimetype: application/pdf

Search in DiVA

By author/editor
Daa, Adam
By organisation
Department of Computer and Systems Sciences
Computer Sciences

Search outside of DiVA

Google, Google Scholar
Total: 61 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 45 hits