When AI reasoning goes wrong: Microsoft Research shows more tokens can mean more problems

Large language models (LLMs) are increasingly capable of complex reasoning through “inference-time scaling,” a set of techniques that allocate more computational resources during inference to generate answers. However, a new study from Microsoft Research reveals that the effectiveness of these scaling methods isn’t universal. Performance boosts vary…
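One common inference-time scaling method is self-consistency (best-of-n) sampling: draw several independent reasoning chains and take a majority vote, spending more tokens in exchange for (usually) better accuracy. The article's point is that this trade-off does not always pay off. The toy sketch below is purely illustrative and not from the study; `sample_answer` is a hypothetical stand-in for a model that answers correctly 60% of the time.

```python
import random
from collections import Counter

def sample_answer(rng):
    # Hypothetical stand-in for one LLM reasoning chain:
    # returns the correct answer ("42") 60% of the time.
    return "42" if rng.random() < 0.6 else rng.choice(["41", "43"])

def best_of_n(n, seed=0):
    # Inference-time scaling via self-consistency: draw n independent
    # chains and majority-vote. Larger n means more tokens spent.
    rng = random.Random(seed)
    votes = Counter(sample_answer(rng) for _ in range(n))
    return votes.most_common(1)[0][0]

def accuracy(n, trials=500):
    # Empirical accuracy of the n-sample vote over many seeded trials.
    return sum(best_of_n(n, seed=t) == "42" for t in range(trials)) / trials
```

Under these toy assumptions, voting over more chains tends to beat a single sample; the study's finding is that for real models and tasks this gain is uneven, and extra tokens can even hurt.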


Source: https://venturebeat.com/ai/when-ai-reasoning-goes-wrong-microsoft-research-shows-more-tokens-can-mean-more-problems/