Announcement, slides. Did you miss the talk? Check out the write-up.


At ODSC East 2024 I talked about RAG: how it works, how it fails, and how to evaluate its performance objectively. I gave an overview of some useful open-source tools for RAG evaluation, showed how to use them with Haystack, and then offered some ideas on how to expand your RAG architecture beyond the simple two-step retrieve-then-generate process.
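For readers who haven't seen it before, the basic two-step RAG loop (retrieve relevant documents, then hand them to a generator) can be sketched in a few lines. This is a deliberately library-agnostic toy: the keyword-overlap retriever and the `generate` stub are hypothetical stand-ins, not Haystack APIs.

```python
# Minimal two-step RAG sketch (hypothetical, library-agnostic).
documents = [
    "Haystack is an open-source framework for building LLM applications.",
    "RAG combines a retriever with a generator to ground answers in documents.",
]

def retrieve(query, docs, top_k=1):
    # Toy retriever: rank documents by keyword overlap with the query.
    # A real system would use BM25 or dense embeddings instead.
    q_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(query, context):
    # Stand-in for an LLM call: a real system would build a prompt
    # from the query plus retrieved context and call a model.
    return f"Answer grounded in: {context[0]}"

def rag(query):
    # Step 1: retrieve. Step 2: generate from the retrieved context.
    context = retrieve(query, documents)
    return generate(query, context)
```

Every failure mode and evaluation hook in the talk attaches to one of these two steps: you can score the retriever's output, the generator's output, or the end-to-end answer.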

Some resources mentioned in the talk:

Plus, a shout-out to a very interesting LLM evaluation library I discovered at ODSC: continuous-eval. Worth checking out, especially if metrics like semantic answer similarity (SAS) or answer correctness are too vague and high-level for your domain.
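To make the "vague and high-level" point concrete: semantic metrics like SAS give you a single similarity score, whereas something as simple as token-level F1 (the classic SQuAD-style answer-correctness proxy) is at least transparent about what it rewards. A minimal sketch:

```python
from collections import Counter

def token_f1(prediction, reference):
    # Token-level F1: a simple, transparent answer-correctness proxy.
    # It rewards word overlap regardless of order, and ignores meaning,
    # which is exactly why domain-specific metrics are often needed.
    pred = prediction.lower().split()
    ref = reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

A metric like this is easy to inspect and debug, but blind to paraphrase; embedding-based scores like SAS fix the paraphrase problem at the cost of interpretability, which is the trade-off libraries like continuous-eval try to address.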