We're wired to react to bad news - and that messes with our investments. Can AI help?
By Mark Hulbert
No news is good news for stock investors
The stock of a company that is the subject of conflicting news - both positive and negative - performs poorly.
Negativity bias - that practically universal human tendency to give more weight to negative developments - is almost certainly costing you. Many researchers believe this trait is hardwired into our brains for evolutionary reasons. That animal outside your cave could be harmless - or a predator; the survival of the species depends on assuming the worst and taking precautions.
Negativity bias also affects our investment decisions. If we hear news about a company that is equally divided between good and bad, for example, we're likely to be more pessimistic than a simple averaging of that news would justify. The stock of a company that is the subject of conflicting news therefore tends to perform poorly.
That is the finding of a new research study, "Conflicting News, Negativity Bias, and Short-Term Return Predictability." It was conducted by Alok Kumar of the University of Miami and three researchers at the University of Exeter in the U.K.: Linquan Chen, Yao Chen, and Chendi Zhang.
The professors studied millions of news articles since 2000 about publicly traded companies, using a large language model (LLM) to classify each firm's daily news events to determine whether the day's coverage contained conflicting messages. Stocks with conflicting news underperformed both stocks with no conflicting signals and stocks with no news.
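The paper's classification step can be illustrated with a small sketch. Assume (as a simplification of the authors' method) that an LLM has already scored each news item for a firm-day as positive (+1), negative (-1), or neutral (0); a firm-day is then "conflicting" when its coverage contains both positive and negative items. The function and scores below are invented for illustration, not taken from the study:

```python
def classify_firm_day(scores):
    """Label a firm's daily news coverage from per-item sentiment scores.

    Returns one of: 'conflicting', 'positive', 'negative',
    'neutral', or 'no_news'.
    """
    if not scores:
        return "no_news"
    has_pos = any(s > 0 for s in scores)
    has_neg = any(s < 0 for s in scores)
    if has_pos and has_neg:
        # Both positive and negative items on the same day
        return "conflicting"
    if has_pos:
        return "positive"
    if has_neg:
        return "negative"
    return "neutral"

# A Hershey-style article mixing a positive headline with negative details:
print(classify_firm_day([+1, -1, -1]))  # conflicting
print(classify_firm_day([+1, +1]))      # positive
print(classify_firm_day([]))            # no_news
```

The study then compares the subsequent returns of stocks in each bucket; the interesting comparison is "conflicting" versus both "neutral" and "no_news".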
An example of conflicting news that Kumar provided me appeared this fall in the Wall Street Journal, titled "Hershey Raises Outlook as Cocoa Prices Moderate." The headline was positive - but the article contained negative news, noting that Hershey's (HSY) Halloween sales were weak and its third-quarter sales were lower than in the year-earlier period. This article was published on the morning of Oct. 30, Eastern time; Hershey's stock fell 2.5% that day, 0.9% the next and 4.3% the day after that.
It may be surprising that companies that are the subject of conflicting news would underperform, even when the balance of the news is positive. But it makes sense, Kumar said in an interview: because of negativity bias, negative news looms even larger when it arrives alongside positive news.
To illustrate the magnitude of negativity bias's impact, the researchers constructed a hypothetical market-neutral portfolio that bought stocks that the LLM classified as neutral (neither positive nor negative) and shorted an equal dollar amount of stocks for which recent news was conflicting. This portfolio produced a double-digit annualized percentage profit - as much as 29% depending on how the portfolio was constructed.
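The mechanics of such a dollar-neutral portfolio are simple to sketch: each day, go long an equal-weighted basket of neutral-news stocks and short an equal dollar amount of conflicting-news stocks, so the portfolio's return is the long leg's average return minus the short leg's. The one-day returns below are made up for illustration; the paper's 29% figure comes from the researchers' own data, not this example.

```python
def long_short_return(long_returns, short_returns):
    """Daily return of an equal-weighted, dollar-neutral portfolio:
    average return of the long leg minus average return of the short leg."""
    long_leg = sum(long_returns) / len(long_returns)
    short_leg = sum(short_returns) / len(short_returns)
    return long_leg - short_leg

# Invented one-day returns for illustration:
neutral_news = [0.004, 0.001, -0.002]   # long these stocks
conflicting_news = [-0.011, -0.006]     # short these stocks

daily = long_short_return(neutral_news, conflicting_news)
print(round(daily, 4))  # 0.0095, i.e. 0.95% for the day
```

Because the portfolio profits when conflicting-news stocks lag neutral-news stocks, its return isolates the penalty the market attaches to mixed signals, independent of the overall market's direction.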
Can AI save us from negativity bias?
It would be difficult for any of us to construct such a portfolio, since it would require employing an LLM to analyze thousands of news articles each day. Even if we could do that, our portfolio's real-world return would be lower, since the researchers' calculations don't take transaction costs into account - which could be substantial. Nonetheless, the portfolio's impressive return reminds us of the need to cultivate scrupulous objectivity when digesting the financial news.
Might AI be helpful here? Kumar said he's skeptical, since several recent studies have found that LLMs exhibit the same behavioral biases as humans. That isn't surprising: LLMs are "trained" on articles in which analysts interpret the news, and those analysts are themselves prone to negativity bias.
It isn't clear that LLMs can be designed to overcome behavioral biases, according to Philip Resnik, a professor at the University of Maryland. In the September 2025 issue of the journal Computational Linguistics, he wrote:
"For all their power and potential, large language models (LLMs) come with a big catch: They contain harmful biases that can emerge unpredictably in their behavior. Efforts to remove or mitigate large language model biases... have not yet met with anything resembling decisive success, and apparent successes can leave the same problem to emerge in other ways... I take the position that this problem will not and cannot be solved without facing the fact that harmful biases are thoroughly baked into what LLMs are. There is no bug to be fixed here. The problem cannot be avoided in large language models as they are currently conceived, precisely because they are large language models."
Mark Hulbert is a regular contributor to MarketWatch. His Hulbert Ratings tracks investment newsletters that pay a flat fee to be audited. He can be reached at mark@hulbertratings.com
More: Investors' bearishness is often overdone - but their market bubble fears may be spot-on
Also read: These investments for the elite are opening to all - but think twice before saying yes
-Mark Hulbert
This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.
(END) Dow Jones Newswires
December 24, 2025 11:19 ET (16:19 GMT)
Copyright (c) 2025 Dow Jones & Company, Inc.

