GPT-4 and similar large models are attracting enormous attention across industries. New ways to use these models are unveiled almost weekly.
From generating creative text, translating, correcting grammar, answering questions, and classifying content, to transforming natural language into code in a variety of programming languages, the question is not whether these models will be used, but how much they will change the way we work.
In this session we aim to demystify some of the technology behind these models, showcase what they are used for, and demonstrate essential mechanisms for integrating them into applications.
You will gain a better understanding of base (foundation) models, learn why so-called "instruction-tuned" or "chat" models may still hallucinate, and see how to interact with Large Language Models more effectively.
We will then look at more advanced concepts such as plugins, retrieval-augmented generation, and task planning to understand how they work.
Beyond OpenAI's offerings, we will explore other trends in Large Language Models that you can leverage, including open-source models.
Since this field is evolving rapidly, the specific models featured are subject to change.
This session will feature numerous demos. Finally, it will cover some of the risks, such as the biases present in these models.
You will learn:
- What this new generation of AI models is and what it can accomplish
- Specific relevant use cases and "prompt engineering"
- The mechanisms behind advanced concepts such as "plugins"