First of all: sorry for the poor audio quality; unfortunately, we only noticed it afterwards. I hope the content makes up for it :-)
The development of Large Language Models (LLMs) and the role of Acceptance Test Driven Development (ATDD) are key topics in AI development. David, an expert in the development and quality assurance of AI-based telephone bots for medical practices, shares his experiences and insights into this process. The challenges and approaches to training and testing LLMs are highlighted, including the use of prompt engineering and fine-tuning. Of particular note is the approach of applying ATDD methods to LLM development in order to improve the quality and effectiveness of the models. Another focus is the CPMAI process, a modern approach to developing and implementing AI projects.
“This is relatively demanding. At the end of the day, we have a few components. We first do speech-to-text and then we use a language model on a pure text basis.” - David Faragó
David is a deep learning engineer at Mediform, specializing in fine-tuning large language models, prompt engineering and microservices. He also runs QPR Technologies, a consultancy for innovative quality assurance, and is a member of the steering committee of the GI specialist group Test, Analysis and Verification.
Highlights of this episode:
Further links:
When developing AI-based telephone bots for medical practices, a host of new quality assurance challenges has to be overcome. One solution approach in the testing process is Acceptance Test Driven LLM Development.
David Faragó, a renowned expert in AI and specifically in LLMs, shares his extensive knowledge on the use of Large Language Models. From prompt engineering to fine-tuning foundation models, David covers all aspects. His current project at Mediform aims to develop a telephone bot for medical practices that can communicate with patients in natural language thanks to modern AI. This innovative application represents a significant step forward and demonstrates the potential of LLMs in practice.
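As the quote above suggests, the bot chains speech-to-text with a purely text-based language model. A minimal sketch of such a pipeline, assuming OpenAI's Whisper for transcription and a Hugging Face chat model as stand-ins (Mediform's actual components are not disclosed here):

```python
# Hypothetical sketch of the speech-to-text -> LLM pipeline described in the quote.
# whisper and the Hugging Face model are assumed stand-ins, not Mediform's actual stack.
import whisper
from transformers import pipeline

stt_model = whisper.load_model("base")  # speech-to-text component
llm = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

def handle_call(audio_path: str) -> str:
    """Transcribe the caller's utterance, then answer on a pure text basis."""
    transcript = stt_model.transcribe(audio_path)["text"]
    prompt = (
        "You are a telephone assistant for a medical practice.\n"
        f"Patient: {transcript}\nAssistant:"
    )
    # The pipeline returns the prompt plus the continuation; keep only the reply.
    generated = llm(prompt, max_new_tokens=100)[0]["generated_text"]
    return generated[len(prompt):].strip()
```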
At the heart of David’s work is a carefully designed testing process that focuses on acceptance testing. Through careful analysis of real dialogs and iterative improvements, a high-quality model emerges. David talks about the challenges and approaches of developing LLMs, including dealing with non-determinism and the black-box nature of this technology. By using specialized tools such as EleutherAI’s Language Model Evaluation Harness, the team is able to effectively measure and verify the quality of their models.
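For reference, the Language Model Evaluation Harness can also be driven from Python. A minimal sketch, assuming a recent version (0.4+) of lm-evaluation-harness and a placeholder Hugging Face model and benchmark task rather than Mediform's own evaluation suite:

```python
# Minimal sketch of running EleutherAI's lm-evaluation-harness programmatically.
# Model name and tasks are placeholders, not the project's actual setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                            # Hugging Face backend
    model_args="pretrained=mistralai/Mistral-7B-Instruct-v0.2",
    tasks=["hellaswag"],                                   # placeholder benchmark task
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])  # per-task metrics, e.g. accuracy
```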
Acceptance Test Driven LLM Development is not just a method, but a philosophy. It interweaves agile methods with machine learning, enabling rapid iteration cycles with direct end-user involvement. David explains the process in detail and shows how this approach has made it possible to develop a robust and effective system for medical practices. This approach ensures that the end product meets the exact requirements while remaining flexible for future customization.
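What such an acceptance test can look like in practice is sketched below. The dialog cases, intent names, and pass-rate threshold are illustrative assumptions, and repeating each case several times is one common way of coping with the non-determinism mentioned above; `bot_reply` is a trivial stand-in so the sketch runs on its own.

```python
# Hypothetical acceptance test for the telephone bot, pytest style.
# Intents, utterances, and the 80% threshold are assumptions for illustration only.
import pytest

def bot_reply(utterance: str) -> dict:
    """Placeholder for the real bot; a trivial keyword matcher so the sketch is runnable."""
    text = utterance.lower()
    if "appointment" in text:
        intent = "book_appointment"
    elif "prescription" in text:
        intent = "renew_prescription"
    elif "opening hours" in text:
        intent = "opening_hours"
    else:
        intent = "unknown"
    return {"intent": intent, "answer": "..."}

ACCEPTANCE_CASES = [
    ("I need an appointment for a check-up next week.", "book_appointment"),
    ("Can you renew my prescription for blood pressure medication?", "renew_prescription"),
    ("What are your opening hours on Friday?", "opening_hours"),
]

@pytest.mark.parametrize("utterance,expected_intent", ACCEPTANCE_CASES)
def test_intent_is_recognized_reliably(utterance, expected_intent):
    # Run each case several times and require a pass rate,
    # because an LLM-based system does not answer deterministically.
    runs = 5
    hits = sum(bot_reply(utterance)["intent"] == expected_intent for _ in range(runs))
    assert hits / runs >= 0.8, f"intent recognized in only {hits}/{runs} runs"
```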
Another key element of David’s strategy is the use of CPMAI (Cognitive Project Management for AI), a modern process framework that combines agility with machine learning. This method supports the team at every stage of the development cycle - from understanding the business need to deploying the model. This structured approach allows problems to be identified and resolved quickly, enabling continuous improvement of the system.
The discussion not only highlights the complexity behind the development of Large Language Models, but also the enormous potential of this technology. Innovative approaches such as Acceptance Test Driven LLM Development are opening the door to a new era of AI development - an era in which quality assurance and agile methods go hand in hand. This episode sheds light on the exciting future of AI technology and its many possible applications.