The development of artificial intelligence models is today largely in the hands of machine learning engineers, and non-expert users rarely understand how the ‘black box’ works; they can only observe its input and output. This is problematic, as only a limited group of individuals knows how to train an artificial intelligence model that may have the power to make and affect important decisions. Interactive machine learning may offer a solution, as it allows non-expert users to take part in training artificial intelligence models through an interactive machine learning interface. For a non-expert user to use such an interface, it must be usable and easy to understand, which highlights the need for human-computer interaction in its design. A domain that has grown popular for analysis with artificial intelligence is satellite imagery, which has been used to detect both changes to land boundaries and natural disasters. This thesis explores how to design an interactive machine learning interface for non-expert users, enabling them to train artificial intelligence models according to their specific needs. The research question is therefore: How can a human-centered feedback interface for fine-tuning AI models be designed for non-expert users within the image analysis domain of satellite images? The goal of the thesis is to evaluate and test a graphical user interface that allows non-expert users to annotate and correct data in satellite images in order to fine-tune artificial intelligence models. The thesis was conducted with a design science approach combined with a survey approach, using qualitative thematic analysis and three iterations of testing. The first iteration involved evaluating a paper prototype with a domain expert. The second iteration included evaluating a Figma prototype through observation tests with non-experts. Finally, the third iteration involved testing a new Figma prototype using a questionnaire, also with non-experts. The gathered data was transcribed and analyzed with an inductive approach, resulting in several themes, categories, and codes. The thematic analysis revealed key findings such as a high level of user confusion regarding interaction with the results of artificial intelligence models and a lack of understanding of certain icons' functionality. This highlights a greater need for guidance and the importance of following specific design principles when creating interactive machine learning interfaces. The conclusion emphasizes the necessity of integrating design principles and Gestalt laws into interface design to facilitate effective communication and user guidance. When the task's goal is clearly communicated through guiding and priming design, users can perform tasks more effectively. The research question was only partly answered, indicating that further evaluation of the prototype is needed to address it fully.