Four leading companies in the field of artificial intelligence have joined forces to create the Frontier Model Forum, an industry body aimed at overseeing the safe development of the most advanced AI models.
The founding members of the forum are OpenAI, Anthropic, Microsoft, and Google, the owner of DeepMind.
The primary focus of the Frontier Model Forum is to ensure the “safe and responsible” development of frontier AI models, meaning models that exceed the capabilities of today’s most advanced systems.
Microsoft President Brad Smith emphasized the importance of tech companies taking responsibility for the safety and security of AI technology and ensuring that it remains under human control.
He stated, “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
The objectives of the forum include promoting research in AI safety, setting standards for evaluating models, advocating for responsible deployment of advanced AI, engaging in discussions with policymakers and academics on trust and safety risks, and exploring the positive applications of AI in areas such as climate crisis mitigation and cancer detection.
Membership in the Frontier Model Forum is open to organizations involved in the development of frontier models.
These models are defined as large-scale machine-learning models that outperform existing advanced models and are capable of performing a wide range of tasks.
The announcement of the Frontier Model Forum comes at a time when efforts to regulate AI technology are gaining momentum.
Recently, tech companies, including the founding members of the forum, made commitments to new AI safeguards after a meeting with US President Joe Biden at the White House.
These safeguards include measures to detect and combat misleading AI-generated content, such as deepfakes, and to allow independent experts to assess AI models.
Despite these efforts, some campaigners remain skeptical, pointing out the tech industry’s history of failing to adhere to self-regulation pledges.
Meta’s recent release of an AI model to the public, for example, drew concerns from experts who compared it to providing a template to build a dangerous weapon.
The Frontier Model Forum recognizes the importance of contributions to AI safety from entities such as the UK government, which is organizing a global summit on AI safety, and the EU, which is introducing the AI Act for comprehensive regulation of the technology.
However, Dr. Andrew Rogoyski from the Institute for People-Centred AI at the University of Surrey warns against “regulatory capture,” where companies’ interests dominate the regulatory process.
He advocates for independent oversight that represents the interests of people, economies, and societies that will be impacted by AI in the future, stressing the need for responsible and ethical AI development.