Analytics.gov (AG), developed by GovTech Singapore, is a machine learning platform that helps government agencies implement AI projects efficiently. By serving open-source models through AG's AWS SageMaker Endpoints, agencies can deploy quantised models quickly and at lower cost, reducing the barriers to using large language models (LLMs) for public good.
Quantisation is a technique that reduces the size of AI models by decreasing the number of bits used to store model weights. This lowers memory requirements, speeds up inference, and cuts cost. Quantisation can degrade output quality, but higher-bit quantisation levels such as Q8 often show minimal quality loss while still delivering significant performance gains.
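To make the memory impact concrete, here is a minimal back-of-the-envelope sketch. The 7B parameter count and the bit widths are illustrative assumptions, not figures from the article, and the estimate ignores activation memory and runtime overhead:

```python
# Rough memory estimate for model weights at different quantisation levels.
# Illustrative only: assumes a 7B-parameter model and counts weights alone,
# ignoring the KV cache and inference-engine overhead.

def weight_memory_gb(num_params: float, bits_per_weight: float) -> float:
    """Approximate memory needed to hold the weights, in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 7e9  # assumed 7B-parameter model
for label, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"{label:>4}: ~{weight_memory_gb(params, bits):.1f} GB")
# FP16: ~14.0 GB, Q8: ~7.0 GB, Q4: ~3.5 GB
```

This is why an 8-bit or 4-bit model can fit on a much smaller (and cheaper) GPU instance than its full-precision counterpart.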
AWS SageMaker Endpoints provide a convenient way to host model inference, with features like auto-scaling and zero-downtime updates. By customising containers to run open-source inference engines that support quantised formats such as GGUF or GPTQ, users can deploy quantised LLMs with ease; a minimal deployment sketch follows below. Comparative benchmarks show the performance gains and cost savings of deploying quantised models across different framework and engine combinations.
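As a rough illustration of what hosting such a model on a SageMaker Endpoint can look like with the SageMaker Python SDK, here is a minimal sketch. The container image URI, S3 model path, IAM role, environment variable, and instance type are hypothetical placeholders, and the actual AG setup may differ:

```python
# Minimal sketch: deploy a custom inference container serving a quantised
# (e.g. GGUF) model to a SageMaker real-time endpoint.
from sagemaker.model import Model

role = "arn:aws:iam::123456789012:role/ExampleSageMakerRole"  # hypothetical IAM role

model = Model(
    # Hypothetical custom container image hosting an open-source inference engine
    image_uri="123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/llm-inference:latest",
    # Hypothetical S3 location of the packaged quantised model artefact
    model_data="s3://example-bucket/models/example-7b-q8.tar.gz",
    role=role,
    # Hypothetical environment variable read by the container at startup
    env={"MODEL_FILE": "example-7b.Q8_0.gguf"},
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",          # example GPU instance type
    endpoint_name="quantised-llm-demo",    # hypothetical endpoint name
)
```

Because the endpoint is a managed SageMaker resource, auto-scaling policies and rolling (zero-downtime) updates can then be configured on it like any other SageMaker endpoint.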
AG's commitment to enhancing GenAI capabilities includes plans to integrate with closed-source models such as Azure OpenAI and Vertex AI's Gemini. By providing streamlined access to a diverse range of models, AG empowers agencies to develop efficient and cost-effective AI applications for the public sector.
In conclusion, AG's efforts to facilitate the deployment of quantised LLMs through AWS SageMaker Endpoints offer a practical solution for government agencies seeking to leverage AI technology. By optimising model size, memory requirements, and inference speed, AG aims to support the development of impactful AI applications that benefit society as a whole.
Article Source
https://towardsdatascience.com/applied-llm-quantisation-with-aws-sagemaker-analytics-gov-ab210bd6697d