Interview

Despite the billions of dollars spent each year training large language models (LLMs), there remains a sizable gap between building a model and actually integrating it into an application in a way that’s useful.
In principle, fine-tuning and retrieval-augmented generation (RAG) are well-understood methods for expanding the knowledge and capabilities of pre-trained AI models, like Meta’s Llama, Google’s Gemma, or Microsoft’s Phi. In practice, however, things aren’t always so…
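To make the RAG pattern mentioned above concrete, here is a minimal toy sketch: retrieve the document most relevant to a query, then prepend it to the prompt sent to the model. The word-overlap retriever, the `DOCS` corpus, and the function names are all illustrative assumptions; production systems use embedding models and vector stores, and the final prompt would be passed to an actual LLM.

```python
# Toy sketch of the RAG pattern. The retriever is a bag-of-words overlap
# score, standing in for the embedding search a real system would use.
from collections import Counter

# Illustrative corpus -- real deployments retrieve from a document store.
DOCS = [
    "Llama is a family of open-weight language models from Meta.",
    "Gemma is Google's family of lightweight open models.",
    "Phi is a series of small language models from Microsoft.",
]

def score(query: str, doc: str) -> int:
    """Count shared words between query and document (toy relevance score)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Who makes the Gemma models?"))
```

The augmented prompt is what gets sent to the pre-trained model, which is how RAG extends a model's knowledge without retraining it, in contrast to fine-tuning, which updates the model's weights.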
Article Source
https://www.theregister.com/2025/02/23/aleph_alpha_sovereign_ai/