Auryth research lab

Where domain expertise meets AI research

You shouldn't have to take an AI tool's word for it. We publish our methods so you can see exactly why Auryth gives better answers than generic AI — and so the entire industry can build on what we learn.

Our mission

Regulated domains demand a level of precision, temporal awareness, and jurisdictional nuance that generic AI models aren't built for.

General-purpose AI consistently fails on specialist questions — not because the technology is bad, but because it wasn't designed for these domains.

We publish our research because the entire industry benefits when the problems of legal AI are studied openly. Hallucination rates, confidence calibration, source attribution, multilingual retrieval — these are hard problems that deserve serious academic attention.

Core retrieval research

Large language models are powerful but unreliable for high-stakes professional work. Our research focuses on the retrieval layer — the systems that find, verify, and present evidence to the AI. Auryth's products are built on patent-pending technology across five areas of retrieval innovation.

Negative evidence in retrieval

Systems that actively identify when evidence contradicts or fails to support a conclusion.

Calibrated scoring

Confidence measures that correlate with actual accuracy, not just model certainty.

Confidence-gated generation

Output controls that prevent low-confidence answers from reaching users.

Adaptive query routing

Dynamic selection of retrieval strategies based on query characteristics.

Self-improving retrieval systems

Feedback loops that refine accuracy without model retraining.
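The confidence-gated generation idea above can be sketched in a few lines. This is a hypothetical illustration, not Auryth's implementation: the threshold value and abstention message are invented for the example.

```python
# Hypothetical sketch of confidence-gated generation: an answer is only
# released when its calibrated confidence clears a threshold; otherwise
# the system abstains rather than risk a low-confidence answer.
CONFIDENCE_THRESHOLD = 0.75  # illustrative value

def gate_answer(answer: str, confidence: float) -> str:
    """Release the answer only if confidence meets the threshold."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return ("Not enough reliable evidence was found to answer this "
            "confidently. Please consult the cited sources directly.")

print(gate_answer("Yes, under the 2023 rules.", 0.92))
print(gate_answer("Possibly, but sources conflict.", 0.40))
```

The design choice is that abstention is an explicit output, so downstream UI can surface it honestly instead of displaying a shaky answer.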

Research focus areas

Confidence calibration

Can you actually trust the confidence score?

We measure whether our confidence scores match real-world accuracy: when Auryth reports 85% confidence, we test whether 85% of those answers are actually correct.
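The check described above is a standard reliability analysis. Here is a minimal sketch of how such a check can be run; the function names and sample data are illustrative, not Auryth's actual evaluation code.

```python
# Bucket answers by reported confidence, then compare each bucket's
# average confidence with its observed accuracy. In a well-calibrated
# system the two numbers agree in every bucket.
def reliability_table(results, n_bins=10):
    """results: list of (confidence, was_correct) pairs."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in results:
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, correct))
    table = []
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        table.append((avg_conf, accuracy, len(bucket)))
    return table

# Toy data: four answers at 85% confidence, three at 30%.
results = [(0.85, True), (0.85, True), (0.85, False), (0.85, True),
           (0.30, False), (0.30, False), (0.30, True)]
for avg_conf, acc, n in reliability_table(results):
    print(f"confidence {avg_conf:.2f} -> accuracy {acc:.2f} (n={n})")
```

On the toy data the 85%-confidence bucket comes out 75% accurate, i.e. slightly overconfident; that gap is exactly what calibration research tries to close.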

Multilingual legal retrieval

Ask in Dutch, find the answer in French — accurately

In multilingual jurisdictions the same rules exist in several equally authoritative language versions. We research how to find the right provision regardless of which language it's written in.

Temporal versioning

Getting the right rule for the right date

Regulations change constantly. We research methods for tracking which version of a provision was in force on the date that matters — so you never cite outdated rules.

Hallucination detection

Catching fabricated citations before they reach you

How do you catch an AI that confidently cites a non-existent article? We develop methods to verify every citation against real sources before showing you the answer.
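At its simplest, citation verification means resolving every citation an answer makes against an index of real sources. The sketch below is deliberately simplified and hypothetical: the provision index, citation pattern, and article numbers are invented, and a real system would also check that the cited text actually supports the claim, not merely that the citation exists.

```python
import re

# Toy index of real provisions, keyed by citation string (illustrative).
KNOWN_PROVISIONS = {
    "Art. 49 CIR 92": "Deductibility of professional expenses ...",
    "Art. 90 CIR 92": "Miscellaneous income ...",
}

CITATION_PATTERN = re.compile(r"Art\.\s*\d+\s+CIR\s+92")

def verify_citations(answer: str) -> list[tuple[str, bool]]:
    """Extract citations from an answer and flag any that do not
    resolve to a real provision in the index."""
    return [(c, c in KNOWN_PROVISIONS)
            for c in CITATION_PATTERN.findall(answer)]

answer = ("Under Art. 49 CIR 92 these costs are deductible; "
          "see also Art. 312 CIR 92.")
for citation, exists in verify_citations(answer):
    print(citation, "OK" if exists else "NOT FOUND - possible hallucination")
```

An answer containing a citation that fails this lookup can be blocked or flagged before it ever reaches the user.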

Working papers

In preparation

Confidence-calibrated retrieval for regulated domains

Our first working paper examines how to make AI confidence scores genuinely meaningful in professional contexts. It introduces the framework behind Auryth's confidence scoring and describes how we verify accuracy against real domain-specific questions.

Download paper (PDF)

Advisory board

We're building an advisory board of domain practitioners, academics, and AI researchers who share our commitment to transparent, reliable specialist AI.

If you're a researcher working on domain-specific NLP, an academic interested in AI applications in regulated fields, or a practitioner who wants to help shape the next generation of specialist AI tools — we'd love to hear from you.

Partnerships

We're actively exploring partnerships across three areas:

University research centres

Joint projects on legal AI, NLP, and computational law

Professional associations

ITAA, IBR/IRE, and regional accounting bodies

EU research programmes

Digital governance and AI innovation grants

Interested in collaborating?

Whether you're a researcher, academic, or practitioner — we're always open to conversations about advancing legal AI together.

Get in touch