New AI Sports Betting Report Reveals Shocking Losses
AI · Apr 12, 2026


Editorial Staff

Civic News India

Summary

A recent study has revealed that the world’s most advanced artificial intelligence models are surprisingly bad at sports betting. Researchers tested eight top AI systems, including those from Google, OpenAI, and Anthropic, by having them bet on a full season of English Premier League soccer. Every single model failed to make a profit, with most losing significant amounts of money. This experiment shows that while AI is excellent at writing and coding, it still struggles to understand the unpredictable nature of real-world events.

Main Impact

The failure of these AI models highlights a major gap in current technology. We often think of AI as being smarter than humans because it can process huge amounts of data instantly. However, this study suggests that "processing data" is not the same as reasoning about probability. The inability of these systems to beat a sports betting market shows that AI still struggles to handle risk and uncertainty. This is a wake-up call for industries that want to use AI for financial forecasting or complex decision-making.

Key Details

What Happened

A London-based startup called General Reasoning conducted a study titled "KellyBench." They created a virtual simulation of the 2023–2024 Premier League season. They gave eight leading AI models access to deep historical data, team statistics, and results from previous matches. The models were then asked to place bets on games with the goal of making as much money as possible while managing their risk. Despite having all the information available, the models could not create a winning strategy.

Important Numbers and Facts

The test covered all 380 matches of the Premier League season. The AI models included famous names like OpenAI's GPT-4, Google's Gemini, and Anthropic's Claude. Among all the participants, xAI's Grok, the model created by Elon Musk's company, performed the worst. While some models lost money slowly, others made risky bets that led to fast losses. The study showed that the models often ignored basic bet-sizing mathematics, such as the Kelly criterion from which the study takes its name, leading to their financial downfall in the simulation.
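The bet-sizing rule the study is named after is the Kelly criterion, a standard formula for deciding what fraction of a bankroll to stake so that a gambler with an edge grows their money without risking ruin. A minimal sketch of the formula follows; the numbers are illustrative and are not taken from the report:

```python
def kelly_fraction(p, decimal_odds):
    """Kelly criterion: fraction of bankroll to stake on a bet.

    p: estimated probability of winning.
    decimal_odds: total payout per unit staked (2.0 = even money).
    Returns 0.0 when the bet has no positive edge.
    """
    b = decimal_odds - 1.0          # net profit per unit staked on a win
    f = (b * p - (1.0 - p)) / b     # classic Kelly formula: (bp - q) / b
    return max(f, 0.0)              # never bet on a negative-edge wager

# Illustrative example: a 60% chance at even money
print(kelly_fraction(0.60, 2.0))    # stake 20% of bankroll
```

A model that "ignores" this rule in the way the article describes would either stake far more than the formula allows (risking fast ruin) or bet when the edge is negative, both of which match the loss patterns reported.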

Background and Context

Predicting the outcome of a soccer match is difficult because the sport is low-scoring and often decided by luck. A single red card, a missed penalty, or a lucky bounce can change the entire result. Human betting markets are also very efficient, meaning the odds already reflect most of the available information. For an AI to win, it has to find a pattern that the rest of the world has missed. Currently, AI models are trained mostly on text from the internet. This makes them great at conversation but poor at the type of statistical reasoning needed to win at gambling.
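Market efficiency, mentioned above, can be made concrete: the bookmaker's odds encode an implied probability, and a bet only has positive expected value if the bettor's true estimate beats that implied figure. A small sketch with illustrative numbers (not from the study):

```python
def implied_probability(decimal_odds):
    """Probability implied by decimal odds (ignoring the bookmaker's margin)."""
    return 1.0 / decimal_odds

def expected_value(p, decimal_odds, stake=1.0):
    """Expected profit of a bet: win pays stake*(odds-1), loss forfeits stake."""
    return p * stake * (decimal_odds - 1.0) - (1.0 - p) * stake

# A team priced at decimal odds of 1.8 carries an implied chance of ~55.6%.
print(round(implied_probability(1.8), 3))

# A bettor whose true estimate is only 50% is taking a losing bet:
print(expected_value(0.50, 1.8))    # negative expected value per unit staked
```

This is why "having all the information" is not enough: if the odds already reflect that information, every bet has roughly zero or negative expected value, and the only way to profit is to estimate probabilities more accurately than the market.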

Public or Industry Reaction

The results of the KellyBench report have caused a stir in the tech community. Many experts believe this proves that AI is currently "overhyped" when it comes to practical reasoning. Critics point out that if an AI cannot figure out a soccer game—where the rules and data are clear—it should not be trusted with more important tasks like stock market trading or medical diagnoses. On the other hand, some developers argue that these models were never designed for gambling and that specialized AI, rather than general-purpose bots, would perform better.

What This Means Going Forward

This study will likely push AI companies to change how they train their models. Instead of just teaching them to talk, developers may focus more on "probabilistic reasoning." This means teaching the AI how to understand the chances of something happening and how to protect its resources when things go wrong. For the average person, this is a reminder that AI is not a magic crystal ball. It can summarize a book or write an email, but it cannot yet predict the future or guarantee a win in the world of sports.

Final Take

The world of sports remains one of the few places where human intuition and specialized math still beat the biggest machines in the world. While AI continues to improve in many areas, the "KellyBench" study shows that the real world is far more complex than a line of code. For now, soccer fans and bettors can rest easy knowing that their knowledge of the game is still more valuable than the algorithms running at Google or OpenAI.

Frequently Asked Questions

Which AI model performed the worst in the soccer betting test?

The Grok model, developed by Elon Musk’s company xAI, was identified as the worst performer in the study, failing significantly to manage risk or predict outcomes correctly.

Why did the AI models lose money on soccer matches?

The models struggled with the unpredictability of sports and failed to follow proper risk management strategies. They often made bets that did not align with the actual statistical probability of a team winning.

What was the "KellyBench" study?

KellyBench is a report by the startup General Reasoning that tested how well eight top AI models could analyze data and make profitable decisions during a simulated 2023–24 Premier League season.