Principles of Agentic AI and Autonomous Decision-Making Systems
Synopsis
A systems-oriented approach to AI Ethics, coupled with a risk-based framework, can inform the development of Agentic AI and Autonomous Decision-Making Systems capable of behaving ethically without the need for Manual Ethical Control. Artificial Intelligence (AI) and autonomous decision-making systems can significantly benefit humanity, as demonstrated by their deployment across critical sectors including healthcare, energy, education, scientific research, and safety-critical activities. However, the development of Agentic AI with the capacity to independently achieve goals in a manner similar to that of humans remains a subject of active, ongoing research.
The decision-making processes of these Agentic AIs differ from traditional statistical decision-making processes, which support Decision Automation and Guidance without requiring Manual Ethical Control. Ethical Control Mechanisms, by contrast, address a distinct aspect of AI safety: they provide a capacity for Ethical Operation that current AI and autonomous decision-making systems do not yet possess.








