Governance Models for Responsible and Controlled AI Systems
Synopsis
Amid growing societal concerns about the potential negative effects of artificial intelligence (AI) systems and the need to ensure that such systems are built and used responsibly, a range of principles for responsible AI and machine learning has emerged and spread across industry, academia, civil society, and governments. Central to these discussions is the need for appropriate governance mechanisms to ensure that AI technologies are not only developed but also used responsibly. Governance encompasses a multitude of aspects, from establishing legal frameworks that impose requirements covering the entire development life cycle of an AI system to the strategic and operational decisions made when executing AI projects. The AI-SMP identifies three major areas of governance: (a) objectives that define outcomes, (b) control and assurance mechanisms that ensure those outcomes are achieved, and (c) privacy and data governance.
