Google has outlined seven key principles for responsible AI regulation, emphasising the need for balanced legislation that promotes innovation while addressing potential risks.
Describing AI as "too important not to regulate, and too important not to regulate well," the tech giant has put forward a 'Magnificent Seven' framework of principles to guide policymakers:
1. Support responsible innovation
2. Focus on outputs rather than evolving techniques
3. Strike a balance in copyright laws
4. Fill gaps in existing legislation
5. Empower existing agencies
6. Adopt a hub-and-spoke model for regulation
7. Strive for alignment with international standards
The company also praises the U.S. government's thoughtful approach to AI regulation so far, citing efforts such as the bipartisan congressional committee and the Senate's AI Working Group, and highlights three key aspects of these efforts:
1. Recognition of AI's potential in various fields
2. Awareness of AI's economic impact
3. Emphasis on public-private sector collaboration
Kent Walker, President of Global Affairs at Google, stated, "AI is a unique tool, a new general-purpose technology. And as with the steam engine, electricity, or the internet, seizing its potential will require public and private stakeholders to collaborate to bridge the gap from AI theory to productive practice."