IBM’s Granite model is known for its exceptional transparency

Generative AI and large language models are changing the way the world operates, with millions of people using them daily. However, the inner workings of these models, including their training data and model weights, often remain opaque. IBM is committed to transparency in its models to help ensure their impact is safe and equitable, and it recently released its Granite models as open source to foster innovation.

Stanford’s Center for Research on Foundation Models released the Foundation Model Transparency Index (FMTI), which evaluates models against 100 indicators of transparency. IBM’s Granite model scored well compared with other popular models. The assessment covers aspects such as data sources, training resources, model components, and licensing details.

IBM’s Granite model outperformed most others in risk mitigation, data handling, and model basics, scoring 64% overall. With perfect marks in the compute and methods categories, it fared better than most of the models under evaluation. IBM’s research team, led by Kush Varshney, presented the report in February, emphasizing the importance of transparency.
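As a rough illustration, and assuming the FMTI aggregate is simply the fraction of the 100 binary indicators a model satisfies (an assumption made for this sketch rather than a detail stated in the article), Granite’s 64% overall score would correspond to 64 indicators met:

% Hedged sketch: assumes equal weighting of 100 binary indicators.
\[ \text{score} \;=\; \frac{\text{indicators satisfied}}{100} \times 100\%, \qquad \frac{64}{100} \times 100\% = 64\%. \]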

IBM believes that transparent models are vital not just ethically but also from a business perspective. Many companies hesitate to adopt AI models because of concerns about how the training data was sourced, processed, and filtered. When a model is transparent, companies can focus on building solutions instead of questioning its reliability.

Overall, IBM’s commitment to transparency in its Granite models has been recognized by the AI community, and its continued dedication to openness could lead to even higher scores in future evaluations. These efforts not only benefit users but also help companies make informed decisions about incorporating AI models into their operations.

Article Source
https://research.ibm.com/blog/ibm-granite-transparency-fmti