AI models from Google, OpenAI, and Anthropic lost money betting on soccer matches over a Premier League season, in a new study suggesting even the most advanced systems struggle to analyze the real world over long periods.
The “KellyBench” report, released this week by AI start-up General Reasoning, highlights the gap between AI’s rapidly advancing capabilities on certain tasks, such as writing software, and its shortcomings on other kinds of real-world problems.
London-based General Reasoning tested eight top AI systems in a virtual re-creation of the 2023–24 Premier League season, providing them with detailed historical data and statistics about each team and previous games. The AIs were instructed to build models that would maximize returns and manage risk.
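The benchmark’s name suggests the Kelly criterion, the standard formula for sizing bets to maximize long-run bankroll growth while managing risk. The report does not publish the models’ code, but a minimal sketch of Kelly stake sizing, assuming a simple single-outcome bet with decimal odds, looks like this:

```python
def kelly_fraction(p: float, decimal_odds: float) -> float:
    """Fraction of bankroll to stake under the Kelly criterion.

    p: the bettor's estimated probability of winning.
    decimal_odds: bookmaker payout per unit staked (stake included),
        so a winning 1-unit bet at odds 3.0 returns 3.0 units.
    """
    b = decimal_odds - 1.0      # net odds: profit per unit staked on a win
    q = 1.0 - p                 # probability of losing
    f = (b * p - q) / b         # classic Kelly formula f* = (bp - q) / b
    return max(f, 0.0)          # a negative edge means: do not bet

# A team priced at 3.0 that the model believes wins 50% of the time
# has a positive edge, so Kelly recommends staking part of the bankroll.
print(kelly_fraction(0.5, 3.0))   # 0.25 -> stake 25% of bankroll
print(kelly_fraction(0.4, 2.0))   # 0.0  -> negative edge, no bet
```

The formula only protects a bettor whose probability estimates are accurate; if the models’ match predictions were miscalibrated, as the study’s losses imply, Kelly sizing cannot rescue the returns.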
