
About

What we do, and why we exist.

Last updated: May 13, 2026

Our mission

Research should not require a PhD to understand.

We aim to democratize access to research, whether you're a non-specialist trying to follow the latest in your field without being overwhelmed by technical language, or a researcher surveying a new domain.

Principles

Two products, one set of principles. What PaperCast and Debrief have in common:

Why PaperCast

Jargon is the obvious tax a paper levies on non-academics. The bigger one is the lack of the heuristics academics build over years to dissect a paper: which sections to trust, which figures to interrogate, what a study's claims are actually worth. Build them over a decade and they become reflex. Without them, the substance of a paper stays out of reach even with the PDF open in front of you.

The fallbacks are thin:

PaperCast is the version of research consumption we wanted for ourselves. A faithful narration of a paper, in plain language, with the source always one tap away. Today it ships at one level: novice. We're working toward a dilution spectrum on the same paper, from layman to practitioner to academic, but that's a goal, not the current state.

Why Debrief

For working researchers, lit review is muscle memory. Which abstracts to skip, which methods sections to read in detail, which references to chase. The constraint is time, not skill.

AI-generated summaries don't fix this. The hallucination risk is too high to fold into a workflow where a wrong-but-fluent claim could end up in your own paper.

Debrief sits next to the existing reflex, not in front of it:

Under the hood

A reasonable question: why isn't this just a paper handed to a frontier model with questions on top? There are two structural reasons it can't be, and a question of what doing it properly actually requires.

Nondeterminism breaks the trust contract

Ask a frontier model the same question twice and you get two slightly different answers. For casual use that's harmless. For research it isn't. A summary that drifts on every regeneration cannot ground a citation; there is no stable answer the model is returning to. It is improvising each pass.
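The drift comes from sampling: at any temperature above zero, the model draws each token from a probability distribution instead of always taking the single highest-scoring one. A minimal sketch of that mechanism, using a toy logit vector and a hand-rolled softmax (illustrative only, not PaperCast's pipeline or any real model):

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    """Pick one token index from raw logits.

    temperature == 0 -> greedy argmax (fully deterministic);
    temperature > 0  -> sample from the temperature-scaled softmax.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Toy "model": one fixed logit vector over a 5-token vocabulary.
logits = [2.0, 1.8, 0.5, 0.1, -1.0]

# Two sampled runs with different random states: the outputs can differ,
# which is the per-regeneration drift described above.
rng_a = random.Random(1)
run_a = [sample_next_token(logits, 0.8, rng_a) for _ in range(5)]
rng_b = random.Random(2)
run_b = [sample_next_token(logits, 0.8, rng_b) for _ in range(5)]

# Greedy runs are identical regardless of random state.
greedy_a = [sample_next_token(logits, 0.0, random.Random(1)) for _ in range(5)]
greedy_b = [sample_next_token(logits, 0.0, random.Random(2)) for _ in range(5)]
```

Pinning temperature to zero removes the sampling step, but production systems rarely run fully greedy, and even then batching and floating-point effects can reintroduce drift.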

Long context is not the workaround

The obvious shortcut once a long-context window is available is to stuff the whole paper in and start interrogating. The failure mode is context rot: the deeper into the conversation, the less attention the model pays to any specific fact. Numbers blur. Claims drift to the wrong section. The fluent, confident wrong answer is the worst kind of failure for a research tool.
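One intuition for why this happens: attention is a softmax over every token in the window, so the weight available to any single fact shrinks as the window fills with other tokens, even when the fact scores higher than each distractor. A toy calculation with made-up scores (an illustration of the softmax arithmetic, not measurements from any real model):

```python
import math

def attention_on_fact(fact_score, distractor_score, n_distractors):
    """Softmax weight a query places on one 'fact' token when it competes
    with n_distractors tokens that each carry a lower score."""
    fact = math.exp(fact_score)
    noise = n_distractors * math.exp(distractor_score)
    return fact / (fact + noise)

# The fact scores higher (3.0) than each distractor (1.0), but its share
# of attention decays as the context fills up.
for n in [10, 100, 1000, 10000]:
    w = attention_on_fact(3.0, 1.0, n)
    print(f"{n:>6} distractor tokens -> {w:.4f} attention on the fact")
```

With these assumed scores, the fact's share falls from roughly 0.42 at 10 distractor tokens to under 0.01 at 1,000: the "numbers blur" failure, in miniature.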

What it actually takes

Three layers, in order:

Team

Sota Institute is a small team of builders and researchers based in California. If you want to work with us, email hello@sotainstitute.io.

Contact

General inquiries: hello@sotainstitute.io

Account requests and bug reports: support@sotainstitute.io