PhD Scientific Days 2024

Budapest, 9-10 July 2024

Poster Session S - Health Sciences 2.

ScreenGPT – Evaluating Different Customization Techniques for Language Models in Healthcare Applications

Author(s)

Angyal Viola1
1: Semmelweis University, Doctoral School of Health Sciences

Text of the abstract

Introduction: In the field of language modeling, models are commonly distinguished by size as either Small Language Models (SLMs) or Large Language Models (LLMs). Both are generative artificial intelligence systems within the field of Natural Language Processing (NLP) and can be used for similar purposes. When developing healthcare applications with language models, it is important to weigh their size, efficiency, cost, and application area to choose the most suitable one.
Aims: We aimed to develop a preliminary version of our healthcare-related web application, ScreenGPT, and make it available to external testers. We customized it for primary and secondary prevention in healthcare, with a focus on cervical cancer screening, and gathered feedback. We also examined whether a customized LLM or SLM is better suited for this purpose.
Methods: We reviewed SLMs through scientific publications and used the GPT-4 model to test various LLM customization methods. We explored the rapid custom GPT building offered by OpenAI's GPT Store. Additionally, we prepared a training dataset and fine-tuned the GPT-4 model. Finally, we applied prompt engineering for customization. For cervical cancer-related information, we relied on WHO guidelines. We used the Python programming language and Streamlit to make the application testable.
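The fine-tuning step above requires training data in a specific format. The snippet below is a minimal sketch of one training record in OpenAI's chat-style JSONL format for fine-tuning; the system prompt and the example question and answer are illustrative placeholders, not the actual ScreenGPT training data.

```python
import json

# One illustrative training record in the chat-style JSONL format used
# for fine-tuning OpenAI chat models. The content is a made-up example,
# not the actual ScreenGPT dataset.
record = {
    "messages": [
        {"role": "system",
         "content": "You are ScreenGPT, an assistant for cervical cancer "
                    "screening questions, answering according to WHO guidelines."},
        {"role": "user",
         "content": "How often should I attend cervical screening?"},
        {"role": "assistant",
         "content": "Screening intervals depend on your age, the test type, "
                    "and your national programme, so please confirm the "
                    "schedule with your healthcare provider."},
    ]
}

def to_jsonl_line(rec: dict) -> str:
    """Serialize one training record as a single JSONL line."""
    return json.dumps(rec, ensure_ascii=False)

line = to_jsonl_line(record)
```

Each record occupies one line of the training file; the quality and consistency of these examples largely determines the fine-tuned model's behavior, which is why dataset preparation was a critical step.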
Results: In comparison, we found that customizing an LLM, such as the GPT-4 model, is significantly easier than developing an SLM from the beginning. OpenAI's custom GPT builder is intuitive and requires no programming skills, although it offers limited customization options. Fine-tuning the GPT-4 model can be challenging, as the quality of the training dataset is crucial; the highest rates of hallucinations and errors occurred with this technique. Prompt engineering proved to be the most effective method, providing the greatest flexibility while maintaining consistency.
Conclusion: The possibilities for developing customized language models are expanding rapidly, with more options becoming available, sometimes without requiring any programming skills. SLMs are often more cost-effective and can be operated offline. LLMs, however, offer greater customization capabilities, can produce more empathetic responses, and draw on a broader base of information.