At global summit, Google, Meta, and OpenAI commit to developing AI technology responsibly


Some of the biggest names in technology, including Google, Meta, Microsoft, and OpenAI, have come together with other companies to commit to developing artificial intelligence (AI) technology safely. This commitment comes at a time when regulators are struggling to keep up with the rapid innovation and emerging risks in the AI space.

The meeting, the second global AI summit, included companies from China, South Korea, the United Arab Emirates, and elsewhere. Their commitments were backed by a statement from the G7 major economies, the EU, Singapore, Australia, and South Korea. The virtual meeting was co-hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol.

During the meeting, the nations agreed to prioritize AI safety, innovation, and inclusion. South Korean President Yoon emphasized the importance of ensuring the safety of AI to protect society’s well-being and democracy, pointing out concerns about risks like deepfake technology.

Participants highlighted the need for interoperability between governance frameworks, plans for a network of AI safety institutes, and engagement with international organizations to address risks associated with AI. Companies such as Zhipu.ai, Tencent, Meituan, Xiaomi, Amazon, IBM, and Samsung Electronics have all made safety commitments, including publishing safety frameworks, mitigating risks, and ensuring governance and transparency.

Beth Barnes, founder of METR, emphasized the importance of reaching international agreement on “red lines” where AI development could pose unacceptable dangers to public safety. Computer scientist Yoshua Bengio welcomed the commitments but stressed the need for regulation to accompany voluntary commitments.

The debate on AI regulation has shifted from long-term apocalyptic scenarios to more practical concerns, such as the use of AI in areas like medicine and finance. The next meeting will be held in France, with participants including industry leaders like Elon Musk, Eric Schmidt, and Jay Y. Lee.

In summary, the commitment by leading technology companies to develop AI technology safely reflects a recognition of the importance of AI safety, innovation, and inclusion. The meeting highlighted the need for international cooperation, safety frameworks, and governance to address the risks of AI development, and it comes amid a shifting regulatory landscape increasingly focused on practical concerns. Next steps include an in-person ministerial session, with France hosting the next summit.

Article Source
https://nypost.com/2024/05/21/business/google-meta-openai-pledge-to-develop-ai-safely-at-global-summit/