Technology
Precision at scale for public market research
The Hudson Labs Co-Analyst is AI built for institutional workflows, where accuracy is non-negotiable, relevance matters, and long-horizon context is crucial.
Deep AI Expertise
Anyone can say they're an expert. But only one of us wrote the textbook.
Hudson Labs is AI-native rather than AI-adjacent. Our CTO and co-founder Suhas Pai authored Designing Large Language Model Applications (O'Reilly Media), widely used by engineers building real-world AI systems.
Suhas Pai
CTO & Co-Founder, Hudson Labs

Capabilities
Core capabilities
Multi-period financial precision
Financial research requires consistency across time. Most competing tools are thin wrappers around generalist models and inherit those models' accuracy limitations. Hudson Labs maintains accuracy across multi-document, multi-period financial queries.
Guidance identification
Generalist AI systems frequently miss subtle guidance cues. Common failure modes include overlooking forward-looking statements, confusing historical, current, and future tense, and losing conditional guidance in longer responses. Hudson Labs uses specialized models and task-specific logic to consistently surface forward-looking statements.
Materiality (relevance) ranking
Correct information isn't always useful information. As document volume increases, irrelevance becomes as costly as hallucination. Hudson Labs applies proprietary relevance ranking to reduce noise, prioritize information tied to research questions, and preserve full traceability to source.
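To make the idea concrete, here is a deliberately minimal sketch of relevance ranking with source traceability. This is an illustrative toy (simple term-overlap scoring), not Hudson Labs' proprietary ranker; the `Passage` structure and `doc_id` field are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str  # keeps traceability back to the source document
    text: str

def score(question: str, passage: Passage) -> float:
    """Toy relevance signal: fraction of question terms found in the passage."""
    q_terms = set(question.lower().split())
    p_terms = set(passage.text.lower().split())
    return len(q_terms & p_terms) / len(q_terms)

passages = [
    Passage("10-K-2024", "Operating margin expanded due to lower input costs."),
    Passage("10-K-2024", "The company was founded in 1987 in Delaware."),
]
question = "what drove operating margin expansion"

# Rank passages by relevance to the research question; each result still
# carries its doc_id, so every ranked item remains traceable to source.
ranked = sorted(passages, key=lambda p: score(question, p), reverse=True)
```

A production system would replace the overlap score with learned ranking, but the design point is the same: irrelevant-but-correct passages get pushed down while provenance is preserved.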
Verbatim, source-faithful quotes
Generalist AI systems frequently paraphrase while presenting text as quoted, or surface language that doesn't exist in the source material. Hudson Labs retrieves verbatim excerpts only, with direct linkage to underlying documents.
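The verbatim guarantee is simple to state and simple to check: an excerpt either appears word-for-word in the source or it is rejected. A minimal sketch of such a check, assuming whitespace-normalized matching (this illustrates the principle, not Hudson Labs' actual pipeline):

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace so line breaks in a filing don't break matching."""
    return re.sub(r"\s+", " ", text).strip()

def verify_verbatim(quote: str, source: str) -> bool:
    """Return True only if the quote appears word-for-word in the source."""
    return normalize(quote) in normalize(source)

filing = """Revenue for the quarter was $1.2 billion,
an increase of 8% year over year."""

# An exact excerpt passes; a paraphrase is rejected.
assert verify_verbatim("an increase of 8% year over year", filing)
assert not verify_verbatim("an increase of roughly 8%", filing)
```

Gating every displayed quote through a check like this is what makes "zero paraphrase presented as quotation" an enforceable property rather than a hope.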
Architecture
Why Hudson Labs works
01
AI systems architecture over AI models
Effective AI products depend far more on systems, pipelines, and architecture than on the underlying models. Hudson Labs achieves this through context engineering optimized for long-horizon tasks and topic spans, specialized pre-processing and meta-tagging, proprietary retrieval systems, and task-specific system design.
02
Deep, internal AI expertise
Hudson Labs is AI-native rather than AI-adjacent: AI expertise lives in-house, not in outsourced research or thin wrappers around generalist models. Our CTO and co-founder authored a bestselling O'Reilly textbook on large language models, widely used by engineers building real-world AI systems.
03
LLM-powered history
Hudson Labs launched the first fully LLM-powered financial services software application in 2021, when large language models were still largely experimental. Proprietary relevance ranking developed during that period remains a core platform component.
Benchmarks
Hudson Labs vs. general AI models
The Co-Analyst extracts detailed quantitative and qualitative financial data across multiple documents and long time periods, with nearly 100% accuracy and zero hallucination.
On the Vals AI financial retrieval benchmark, leading general-purpose models (GPT-5.2, GPT-5.1, Opus 4.5) score 53–57% accuracy on tasks involving retrieval and analysis of specific financial information.
Source: Vals AI benchmark, December 2025
Philosophy
Design principles
01
Replace grunt work, not critical thinking
We automate the tedious parts of financial research, including data gathering, cross-referencing, and formatting, so analysts can focus on judgment and decision-making.
02
Narrow, purpose-built systems outperform sprawling, error-prone general-purpose ones
Rather than building a general-purpose AI, we build purpose-specific models trained on financial documents. Narrow scope means higher precision.
03
Every output must have a direct and traceable connection to source material
If you cannot verify it, it should not inform a trade. Full transparency is non-negotiable.
See it in action
Experience the difference
Try the Co-Analyst free for 14 days and see how purpose-built financial AI compares to general-purpose tools.