Be cautious with generative AI in equity research

Suhas Pai, CTO of Hudson Labs

Artificial intelligence has changed the way we live and work. Language models power many of the tools we use every day, from auto-suggest on our phones to Google Translate to better versions of spell check.

Huge generative language models like the ones behind ChatGPT and Bing Chat are heralding a new era of innovation. Large language models (LLMs) are incredibly powerful for many tasks but also have significant limitations. Understanding the failure modes of generative AI will help you avoid making career-ending mistakes.

The most serious drawback of generative LLMs is that they frequently make things up. This is often referred to as hallucinating. Unfortunately, the information invented by the models is generally indistinguishable from factual output, which makes these models dangerous to use in a context where your reputation is at stake. For instance, ChatGPT falsely accused a law professor of sexual assault, citing articles and other evidence that never existed.

However, just because generative AI has drawbacks doesn't mean you shouldn't be using AI in your workflow. We've included some tips for using AI effectively below, as well as a list of useful AI-driven tools.

Why generative artificial intelligence doesn’t work for many financial questions

ChatGPT and other models are apt to hallucinate in any context. However, they are most likely to offer up incorrect information in areas where there is limited training data, i.e., limited existing information on the web. Finance is one of the most opaque industries, in part because there is so much to lose and so little to gain from sharing information. Alpha is generated from having an edge over the market, i.e., knowing things that your competitors don't. Much of capital markets knowledge is locked in PDF reports that cost thousands of dollars a year to obtain, or sits safely in a firm's private research management system. Less training data means less "knowledge" and worse answers.

ChatGPT can’t make sense of an audit report

Auditing and accounting are niche subjects with very little plain-language content on the web. It is therefore no surprise that ChatGPT failed miserably when asked to describe a Critical Audit Matter.

Here’s what ChatGPT thinks a CAM is: “A critical audit matter (CAM) is a significant issue identified by the auditor during the audit process”.

WRONG.

A Critical Audit Matter (CAM) is a standard and required part of an audit. Since 2019 (or thereabouts), most publicly listed company audit reports have included a CAM. CAMs describe areas of financial reporting that involve judgment and complexity; they do not indicate errors or issues. For example, Alphabet, Microsoft, Apple, and Walmart all had CAMs in their audit reports last year.

As you might imagine, only really dedicated investors show much interest in CAMs, so they aren't written about frequently. We did write about them here, but there's very little information about them elsewhere on the web.

Unfortunately, some AI-generated misinformation about CAMs has already made it onto the web. It’s possible that this misinformation will be used to train future LLMs, reinforcing their misconceptions. All the more reason for you to use caution when querying these models.

We asked a ChatGPT plugin about ICOFR weaknesses at Tesla and legal action at Uber...

Some claim that giving generative LLMs real-time access to the web yields more factual, up-to-date results. However, the ChatGPT plugin we tried got stuck in a loop when we asked about internal control issues at Tesla. (Note: as of the date of publishing, Tesla had reported none.)

Expect to see similar results wherever search engines can't clearly surface relevant information, which is true for many financial and company-specific queries.

[Screenshot: ChatGPT plugin problems]

We asked Bing Chat about internal control issues at Vinco Ventures, a company that has had many. It first provided a link to a completely random SEC filing and then refused to answer.

[Screenshot: Bing Chat's poor results]

We asked an SEC-filing-specific tool (ChatGPT on the back end) about legal issues at Uber. Here's what we got... Hopefully Uber resolves its [insert trademark synonym] investigations soon!

[Screenshot: generative tool repeating synonyms]

Using AI effectively in your equity research workflow

Just because ChatGPT has drawbacks doesn't mean you shouldn't be using these powerful tools. Hudson Labs language models are specifically trained on SEC filings data, and they answer questions from the filings themselves rather than drawing on internal model memory. We also offer immediate auditability by making the source material available at the click of a button.
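For the curious, the idea of answering from source documents rather than model memory can be sketched in a few lines of Python. This is a minimal illustration only, not Hudson Labs' actual implementation: the keyword-overlap scoring, the toy filing snippets, and the prompt wording are all assumptions for demonstration.

```python
# Sketch of retrieval-grounded question answering: before asking a
# model anything, pull the most relevant passages from the source
# documents and instruct the model to answer ONLY from those
# passages, citing them. This reduces the room for hallucination.

def score(question: str, passage: str) -> int:
    """Crude relevance score: count words shared between question and passage."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

def build_grounded_prompt(question: str, filings: dict[str, str], k: int = 2) -> str:
    """Pick the k most relevant passages and build a prompt that forces
    the model to answer from the quoted sources only."""
    ranked = sorted(filings.items(), key=lambda kv: score(question, kv[1]), reverse=True)
    sources = "\n".join(f"[{name}] {text}" for name, text in ranked[:k])
    return (
        "Answer using ONLY the sources below. Cite the source tag for "
        "every claim. If the sources do not contain the answer, say so "
        "instead of guessing.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

# Toy filing snippets (illustrative, not real filings).
filings = {
    "10-K Item 9A": "Management concluded that internal control over financial reporting was effective.",
    "10-K Item 3": "The company is subject to ongoing legal proceedings and investigations.",
    "10-K Item 1": "The company designs and sells consumer products worldwide.",
}
prompt = build_grounded_prompt("Are there internal control issues?", filings)
print(prompt)
```

The key design choice is the final instruction: telling the model to admit when the sources don't contain the answer is what separates a grounded tool from one that fills gaps with fluent fiction.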

Learn more about how Hudson Labs is disrupting financial research with our unique approach to language model-driven applications: Hudson Labs' language models bring lasting changes to the investment research industry

More investment research tools that effectively use AI

Interested in exploring tools that use AI to make your workflow more efficient and effective? Explore our List of Top AI Tools for Equity Research, which includes tools using generative AI as well as more traditional forms of natural language processing and machine learning.