- Researchers from top US universities warn that extending pre-training can be detrimental to model performance
- Too much pre-training can deliver worse performance due to something akin to the butterfly effect
- The more a model is pre-trained, the more sensitive it becomes to small changes that can disrupt the end result
Researchers from Carnegie Mellon, Stanford, Harvard, and Princeton are challenging one of AI development's accepted core beliefs: that the more pre-training data, the better the…
Article Source
https://www.techradar.com/pro/catastrophic-overtraining-could-harm-large-language-ai-models-that-are-trained-on-more-data-for-the-sake-of-training