Large Language Models (LLMs) have transformed natural language processing by producing coherent and contextually appropriate text. However, their broad adoption raises significant privacy and security concerns, particularly the possibility that sensitive or personally identifiable information can be inferred from a model's outputs. A notable risk in this context is posed by Membership Inference Attacks (MIAs).

This thesis investigates the privacy challenges associated with fine-tuning LLMs, focusing on how fine-tuned models may retain and reveal memorized information from their training data. The research aims to develop secure fine-tuning techniques that yield robust language models capable of mitigating the privacy risks of MIAs. One key approach examined is Differential Privacy (DP), which ensures that the inclusion or exclusion of a single data record has only a minimal impact on the model's output, thereby safeguarding individual privacy.

Using the GPT-2 model and the SPEC5G dataset, this thesis fine-tunes models for a question-answering application (a chatbot). Through empirical evaluations and experiments on benchmark datasets, we assess the effectiveness of differential privacy in protecting against MIAs. The study addresses the trade-off between privacy protection and model performance, aiming to identify practical challenges and improve the implementation of DP in large-scale language models.

The results demonstrate that while Differential Privacy can significantly reduce the risk of MIAs, it often comes at the cost of reduced model accuracy. However, by tuning the privacy parameters and employing DP techniques, this thesis strikes a balance, achieving substantial privacy protection with minimal impact on model performance. These findings contribute to the development of robust and privacy-preserving natural language processing systems, addressing growing concerns over data privacy in the deployment of LLMs.
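
As context for the DP guarantee summarized above, the standard (ε, δ)-differential-privacy definition can be stated as follows; this is the general textbook formulation rather than a result specific to this thesis, with ε and δ denoting the usual privacy parameters:

% A randomized mechanism M is (epsilon, delta)-differentially private if, for any
% two datasets D and D' differing in a single record and any set of outputs S,
\[
  \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta .
\]
% Smaller epsilon and delta mean that adding or removing one record changes the
% output distribution less, which is the property the abstract appeals to.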