I asked ChatGPT for its March Madness picks. It went haywire.
By Brett Arends
But for first-round upsets, the big AI platforms have some hot tips
Caution: Artificial genius at work
In another exciting development from the world of artificial intelligence, industry bellwether ChatGPT - the one now advising the Pentagon - is tipping Marquette, Baylor and Grand Canyon universities as possible Cinderellas in this year's March Madness basketball tournament.
March Madness is famous for its upsets, but victories for Marquette, Baylor and Grand Canyon in 2026 would surely rank among the greatest of all time, because none of these teams is actually in the competition.
"You shouldn't trust that matchup - it was made up," ChatGPT admitted at one point. "You're right to question that - it doesn't make sense, and the honest answer is: That prediction was wrong," it said unhelpfully at another.
What on Earth?
I started out with a simple task: I wanted to let MarketWatch readers know what the main artificial-intelligence engines, or platforms, were predicting for March Madness this year. (For the sake of manageability, I only looked at the men's competition.)
I figured this might be of interest to those filling out their own brackets, as well as to those who wanted to be able to sound knowledgeable talking about the competition - even if they don't know it from a hole in the ground - and to those following the rise of AI, which has become the dominant story on Wall Street during the past year or three.
AI is more written about than understood. Even the people inside the industry don't really understand it, while those outside don't understand it at all. You can read perfectly rational analysis by intelligent, well-informed people who will tell you AI is having almost no impact on the real economy, or that it is having a lot of impact, or that it will kill us all very soon.
So I asked for March Madness picks from six leading AI engines: ChatGPT, Google's $(GOOG)$ $(GOOGL)$ Gemini, Elon Musk's Grok, Anthropic's Claude, privately owned Perplexity and China's DeepSeek.
I started by asking for picks for the first-round games, then for the rest of the bracket.
Five of these handled the challenge reasonably well. More on that below.
ChatGPT? It went completely haywire.
It kept generating random false answers, even while flatly insisting that they were correct and accurate. When I pointed out that its predictions were wrong, and included teams that weren't even in the competition, it admitted I was right, promised to do better next time, and then produced more gibberish.
After seeing its predictions for the Sweet 16 round, I double-checked the data.
"Are these predictions based on the actual brackets announced on Sunday?" I asked.
"No - and I'm not going to pretend otherwise," it replied. "Those Sweet 16 picks are not reliably based on the official 2026 bracket released on Sunday."
Well, at least that's honest.
The most disturbing part wasn't that it made errors. It's that it apparently knowingly produced fake results and falsely pretended they were real and accurate. And when finally challenged with proof that they weren't, it merely accepted the fact and then promised - falsely - to try again.
A spokesperson for OpenAI, the company that developed ChatGPT, said these problems might have been the result of using the free version of ChatGPT, and also said I might be able to improve the performance by changing some of the settings.
None of that, though, explained why (free) ChatGPT pretended to be giving me accurate answers to my questions instead of saying that I needed to pay for an upgrade.
Nor why its competitors - including the one from communist China, DeepSeek - were able to handle a March Madness bracket challenge with ease.
ChatGPT's performance would be alarming enough given that it is the poster child for the AI "revolution." Its parent, OpenAI, is already valued at twice as much as Procter & Gamble $(PG)$ and is eyeing an IPO later this year.
It's even more alarming now that ChatGPT has been signed up as a key AI partner with the Pentagon, among other arms of the security state. What could go wrong?
Perhaps reports that Marquette, Baylor and Grand Canyon aren't in March Madness will soon be branded "fake news."
As someone - possibly Mark Twain - once said, it's not what you don't know that really gets you into trouble, but what you think you know that just ain't so.
Meanwhile, despite these woes, I was able to extract some March Madness picks from the collective wisdom, or otherwise, of the major AI platforms.
If you want to put your money on one team to pull off an upset in the first round, bet on No. 11 seed VCU to beat No. 6 seed North Carolina. Five of the six AI platforms backed VCU (though Claude went for the Tar Heels).
Four of the six favored 12th-seeded McNeese over No. 5 seed Vanderbilt, and No. 10 seed Santa Clara over No. 7 seed Kentucky.
And at least half of the AI engines also picked South Florida over Louisville, UCF over UCLA, Iowa over Clemson, High Point over Wisconsin and Akron over Texas Tech.
For the Final Four and the ultimate winner, they pretty much went with the seeding and conventional wisdom - Duke, Arizona, Michigan and Florida, with Duke eventually winning.
All of this may mean everything, anything or nothing at all. Artificial-intelligence platforms are better at aggregating opinions on the web than they are at any deep thought or creativity.
Meanwhile, I know people make March Madness predictions at their peril, but I'm going to go out on a limb here and predict that neither Marquette, nor Baylor, nor Grand Canyon is going to make it to the Final Four. Call me crazy.
-Brett Arends
This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.
(END) Dow Jones Newswires
March 18, 2026 16:16 ET (20:16 GMT)
Copyright (c) 2026 Dow Jones & Company, Inc.