In this session, we will build a new .NET Aspire application step-by-step and integrate Microsoft’s Semantic Kernel with Azure OpenAI Service and Azure Cosmos DB. We'll use Azure Cosmos DB's new DiskANN vector indexing and search capabilities to build a generative AI RAG pattern application. We will highlight the major concepts for building these types of applications, and then show how they work together in code.
Learn how to generate embeddings from user input, search vectorized custom data, generate responses from an LLM, manage chat history, and build a semantic cache to improve performance. By the end of this session, you’ll have a solid understanding of each component of the LLM pipeline in a RAG pattern application, so you can create your own AI copilot. A minimal sketch of this pipeline appears after the list below.
You will learn:
- The foundational concepts for building an AI copilot
- How to use Azure Cosmos DB's DiskANN vector indexing and search
- How to build a RAG pattern application
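
As a preview, here is a minimal sketch of the core pipeline steps (embed the user prompt, run a vector search against Cosmos DB, and ground the LLM response in the results), assuming the Microsoft.SemanticKernel Azure OpenAI connectors and the Azure Cosmos DB .NET SDK. The endpoint, keys, deployment names, and the `copilotdb`/`products` database and container are placeholders, and the semantic cache and persisted chat history are omitted; the code covered in the session may differ.

```csharp
// Sketch only: assumes a .NET 8 console project with implicit usings,
// the Microsoft.SemanticKernel and Microsoft.Azure.Cosmos packages,
// and placeholder endpoints/keys/deployment names.
#pragma warning disable SKEXP0001, SKEXP0010 // embedding APIs are experimental in current SK versions
using Microsoft.Azure.Cosmos;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Embeddings;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("gpt-4o", "https://<your-openai-endpoint>", "<api-key>")
    .AddAzureOpenAITextEmbeddingGeneration("text-embedding-3-small", "https://<your-openai-endpoint>", "<api-key>")
    .Build();

var embeddingService = kernel.GetRequiredService<ITextEmbeddingGenerationService>();
var chatService = kernel.GetRequiredService<IChatCompletionService>();

// 1. Generate an embedding for the user's question.
string userPrompt = "How do I enable DiskANN vector indexing?";
ReadOnlyMemory<float> queryVector = await embeddingService.GenerateEmbeddingAsync(userPrompt);

// 2. Vector search over the custom data in Cosmos DB. VectorDistance in the
//    ORDER BY clause is served by the container's DiskANN vector index.
var cosmos = new CosmosClient("https://<your-account>.documents.azure.com", "<cosmos-key>");
Container container = cosmos.GetContainer("copilotdb", "products");

var query = new QueryDefinition(
        "SELECT TOP 5 c.text, VectorDistance(c.vector, @queryVector) AS score " +
        "FROM c ORDER BY VectorDistance(c.vector, @queryVector)")
    .WithParameter("@queryVector", queryVector.ToArray());

var context = new List<string>();
var iterator = container.GetItemQueryIterator<dynamic>(query);
while (iterator.HasMoreResults)
{
    foreach (var item in await iterator.ReadNextAsync())
        context.Add((string)item.text);
}

// 3. Ground the LLM response in the retrieved documents and the chat history.
var history = new ChatHistory("You are a helpful assistant. Answer only from the provided context.");
history.AddUserMessage($"Context:\n{string.Join("\n", context)}\n\nQuestion: {userPrompt}");

var answer = await chatService.GetChatMessageContentAsync(history);
history.AddAssistantMessage(answer.Content ?? string.Empty);
Console.WriteLine(answer.Content);
```

The session walks through each of these steps in more depth, including managing the chat history across turns and adding a semantic cache so that repeated, similar questions can be answered without another LLM round trip.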