Building tailored RAG pipelines

How we built tailored RAG pipelines for our Agent Assist drafting functionality, in close collaboration with a flagship customer.

Over the past few weeks, we worked hand-in-hand with one of our flagship customers to overhaul the retrieval system behind our Agent Assist drafting functionality.

The process? Meticulous and methodical.

We carefully distilled each step in the reasoning process, mirroring what a skilled human would intuitively do if they had the time. This included fact-checking, synthesizing knowledge across sources, applying critical reasoning, and reranking outputs to ensure the result was both accurate and contextually relevant, whether it came from past tickets, knowledge bases, standard operating procedures, or custom instructions.
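To make that concrete, here is a minimal sketch of what such a staged pipeline can look like. Everything here is illustrative: the function names, the source list, and the scoring are stand-ins, not our production code.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # e.g. "past_tickets", "knowledge_base", "sops", "custom_instructions"
    text: str
    score: float  # retrieval score

def retrieve(query: str, sources: list[str]) -> list[Passage]:
    """Pull candidate passages from each source (stubbed here)."""
    return [Passage(source=s, text=f"relevant content from {s}", score=0.5)
            for s in sources]

def rerank(query: str, passages: list[Passage]) -> list[Passage]:
    """Reorder by contextual relevance to the query, not just raw retrieval score."""
    return sorted(passages, key=lambda p: p.score, reverse=True)

def fact_check(draft: str, passages: list[Passage]) -> bool:
    """Confirm the draft is grounded in retrieved content (stubbed)."""
    return any(p.text in draft for p in passages)

def draft_reply(query: str) -> str:
    sources = ["past_tickets", "knowledge_base", "sops", "custom_instructions"]
    passages = rerank(query, retrieve(query, sources))
    draft = " ".join(p.text for p in passages[:3])  # synthesis step, stubbed
    if not fact_check(draft, passages):
        raise ValueError("draft failed fact-check; escalate to a human agent")
    return draft

print(draft_reply("How do I reset my password?"))
```

The point of the structure, rather than the stubs, is that each stage produces an inspectable intermediate result, so failures can be traced to a specific step instead of a monolithic prompt.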

The result?

✅ Evaluations against calibration dataset? All passing.

✅ Evaluations across remaining dataset? 95% success, no modifications needed.

The second point was particularly exciting: it meant our system could generalize well beyond the examples it was calibrated on, which gives us confidence it will hold up on future, untested scenarios. We will keep monitoring as we collect new feedback, but the blueprint is there.
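For readers who want the shape of that evaluation, here is a toy sketch of the calibration/holdout split. The data, pipeline, and pass criterion are all stand-ins so the snippet runs end to end.

```python
def evaluate(pipeline, dataset) -> float:
    """Fraction of examples where the pipeline's output matches expectations."""
    passed = sum(1 for ex in dataset if pipeline(ex["query"]) == ex["expected"])
    return passed / len(dataset)

# Toy data and pipeline, purely for illustration.
dataset = [{"query": f"q{i}", "expected": f"q{i}"} for i in range(100)]
pipeline = lambda query: query

calibration_set = dataset[:20]  # iterated on until every case passed
holdout_set = dataset[20:]      # untouched during tuning; measures generalization

print(f"calibration: {evaluate(pipeline, calibration_set):.0%}")
print(f"holdout:     {evaluate(pipeline, holdout_set):.0%}")
```

The discipline that matters is keeping the holdout set out of the tuning loop entirely: a high score there is evidence of generalization, not memorization of the calibration cases.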

This wouldn’t have been possible without breaking the problem down into manageable, measurable steps, with heavy use of sampling and structured outputs. Combined with continuous human feedback from our customer, that discipline let us build a retrieval system that generalizes beautifully while staying true to content that continually evolves.
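As an example of what sampling plus structured outputs means in practice, here is a simplified sketch using Pydantic for schema validation. The schema, helper names, and confidence-based selection are illustrative, not our production code.

```python
from pydantic import BaseModel, ValidationError

class GroundedAnswer(BaseModel):
    answer: str
    source_ids: list[str]  # which retrieved passages support the answer
    confidence: float      # self-reported confidence in [0, 1]

def sample_candidates(prompt: str, n: int = 5) -> list[str]:
    """Stub for n independent model samples; in practice this calls an LLM API."""
    return ['{"answer": "Reset it via Settings.", "source_ids": ["t-123"], "confidence": 0.9}'] * n

def best_structured_answer(prompt: str) -> GroundedAnswer:
    candidates = []
    for raw in sample_candidates(prompt):
        try:
            candidates.append(GroundedAnswer.model_validate_json(raw))
        except ValidationError:
            continue  # malformed samples are dropped rather than patched
    if not candidates:
        raise RuntimeError("no sample produced valid structured output")
    return max(candidates, key=lambda c: c.confidence)

print(best_structured_answer("How do I reset my password?"))
```

Because every sample must parse against the schema, each step of the pipeline becomes measurable: we can count how often samples validate, how confident they are, and which sources they cite.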

When designed carefully, a RAG pipeline can go further than we had imagined, and we’re just getting started.