Announcement, slides. Did you miss the talk? Check out the recording on YouTube or on my backup (cut from the original stream), or read the write-up of a previous edition of the same talk.
At EuroPython 2024 I talked about RAG: how it works, how it fails, and how to evaluate its performance objectively. I gave an overview of useful open-source tools for RAG evaluation, such as continuous-eval, showed how to use them with Haystack, and then offered some ideas on how to expand your RAG architecture beyond a simple two-step retrieve-then-generate process.
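To ground the "two-step process" mentioned above, here is a minimal sketch of a retrieve-then-generate pipeline in Haystack 2.x. The toy documents, the prompt template, and the model name are assumptions for illustration, and the generator expects an `OPENAI_API_KEY` in the environment:

```python
# A minimal two-step RAG pipeline in Haystack 2.x: BM25 retrieval + LLM generation.
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Index a few toy documents in an in-memory store.
document_store = InMemoryDocumentStore()
document_store.write_documents([
    Document(content="Haystack is an open-source LLM framework."),
    Document(content="RAG combines a retrieval step with a generation step."),
])

# The prompt template stuffs the retrieved documents in front of the question.
template = """Answer the question using only the documents below.

{% for doc in documents %}
{{ doc.content }}
{% endfor %}

Question: {{ question }}
Answer:"""

pipeline = Pipeline()
pipeline.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
pipeline.add_component("prompt_builder", PromptBuilder(template=template))
pipeline.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))  # model name is an assumption
pipeline.connect("retriever.documents", "prompt_builder.documents")
pipeline.connect("prompt_builder.prompt", "llm.prompt")

question = "What is RAG?"
result = pipeline.run({
    "retriever": {"query": question},
    "prompt_builder": {"question": question},
})
print(result["llm"]["replies"][0])
```

The two components before the generator are exactly the "retrieve" half of the pipeline; everything the talk covers about expanding the architecture (query rewriting, reranking, self-correction loops) slots in around these steps.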
Some resources mentioned in the talk:
- Haystack: open-source LLM framework for RAG and beyond.
- continuous-eval by Relari AI (see the metric sketch after this list).
- Build and evaluate RAG with Haystack: https://haystack.deepset.ai/tutorials/35_model_based_evaluation_of_rag_pipelines
- Use continuous-eval with Haystack: https://github.com/relari-ai/examples/blob/main/examples/haystack/simple_rag/app.py
- Perplexity.ai: https://www.perplexity.ai/
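To make the evaluation side concrete, here is a hedged sketch of scoring retrieved contexts with one of continuous-eval's deterministic retrieval metrics. The import path and output keys follow the project's README at the time of the talk and may differ across versions; the example data is invented:

```python
# Score retrieved contexts against ground-truth contexts with continuous-eval.
from continuous_eval.metrics.retrieval import PrecisionRecallF1

# "retrieved_context" would come from your RAG pipeline's retriever;
# "ground_truth_context" comes from your evaluation dataset.
datum = {
    "retrieved_context": [
        "Paris is the capital of France and its largest city.",
        "Lyon is a major city in France.",
    ],
    "ground_truth_context": ["Paris is the capital of France."],
}

metric = PrecisionRecallF1()
# Returns a dict of precision/recall/F1 scores for the retrieved contexts.
print(metric(**datum))
```

Deterministic metrics like this one are cheap to run on every commit, which is what makes continuous, objective RAG evaluation practical; the linked examples show how to wire them into a full Haystack pipeline.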