The World Health Organization (WHO) recently published its guidance on Ethics & Governance of Artificial Intelligence for Health. Recognizing AI’s potential and current use in public health and medicine, the WHO cautions that “ethical considerations and human rights must be placed at the cent[er] of the design, development, and deployment of AI technologies for health.” Id. at v.
The WHO identifies six principles “used as a basis for governments, technology developers, companies, civil society[,] and inter-governmental organizations to adopt ethical approaches to appropriate use of AI for health.” Id. at xii. Those principles are as follows:
- Protecting human autonomy;
- Promoting human well-being and safety and the public interest;
- Ensuring transparency, explainability, and intelligibility;
- Fostering responsibility and accountability;
- Ensuring inclusiveness and equity; and
- Promoting AI that is responsive and sustainable.
The report further identifies a non-comprehensive classification and examples of AI technologies for health, as well as proposed legal frameworks. AI applications include diagnostics, clinical care, health research, drug development, health system management and planning, public health and surveillance, and outbreak response. Id. at 6–16. “While AI may not replace clinical decision-making, it could improve decisions made by clinicians.” Id. at 16. And any framework governing such AI applications must protect human rights. Id. at 17. For example, data protection ensures privacy and a person’s ability to opt out of any automated process, consistent with informed consent. Id. at 19, 21. More specifically, the WHO proposes a regulatory framework that includes “documentation and transparency, risk management and the life-cycle approach, data quality, analytical and clinical validation, engagement and collaboration, and privacy and data protection.” Id. at 22.
Of course, the United States, through congressional activity and executive orders, has already made, and continues to make, great strides in facilitating AI research, development, use, and oversight. Nixon Peabody will continue to monitor legislative developments and provide practical considerations as AI-focused bills are debated or passed into law and executive orders are issued.