Catalyzing Ethical AI in Global Healthcare: Principles, Challenges, and Pathways Forward

Explore the ethical principles and governance frameworks essential for responsible AI and digital health technology deployment in healthcare, with a particular emphasis on addressing emerging ethical challenges in low- and middle-income countries and marginalized communities.

Michael Friebe, PhD

Digital health technologies, including artificial intelligence (AI), have proliferated worldwide. This paper explores the ethical principles and governance frameworks essential for responsible AI and digital health technology deployment in healthcare systems. Six key ethical principles proposed by the WHO Expert Group guide this discussion: protecting autonomy; promoting human well-being, safety and the public interest; ensuring transparency, explainability and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting responsive and sustainable AI. The paper addresses emerging ethical challenges, especially for low- and middle-income countries and marginalized communities. The concept of "technological solutionism" is presented, highlighting potential pitfalls. Governance, in both the public and private sectors, is vital, with a focus on transparency, data privacy, and collaboration with the private sector. Practical guidance for ministries of health and regulators is summarised to protect patient health and safety, promote transparency, address bias, safeguard privacy, and institute regular review mechanisms. The paper also provides a roadmap for AI health developers for navigating the ethical and governance complexities of AI, aiming to ensure that its benefits are maximized while its risks are mitigated.

INTRODUCTION

Artificial intelligence and related technologies (e.g. machine learning, federated learning) already collect and analyse huge amounts of data along an individual's health journey. This is not limited to actual diagnosis or therapy in a healthcare facility; it also happens in everyday settings and as part of routine health monitoring. There is broad agreement that this use of data will improve significantly over the coming years and decades and will likely transform healthcare as we know it.
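As a brief illustration of the federated learning mentioned above, the following sketch (synthetic data, a plain linear model, and the standard federated averaging idea) shows how model updates, rather than raw patient records, are shared; all names and values are illustrative:

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One client's local gradient-descent step on its private data
    (linear regression); the raw X, y never leave the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """FedAvg: each client trains locally; the server averages the
    resulting weights, weighted by local sample counts."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Synthetic example: three "hospitals", each holding private data
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):          # ten communication rounds
    w = federated_average(w, clients)
```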

Current healthcare provision is almost entirely focussed on the diagnosis and therapy of a health problem. It will need to become more focussed on the prevention and prediction of disease, with subsequent follow-up using precision-medicine approaches and the goal of increasing healthy lifespan.

Digital health technologies have become increasingly prevalent and impactful in both high-income countries (HIC) and low- and middle-income countries (LMIC). These technologies encompass a wide range of applications, including data collection, mobile phone-based health information dissemination, electronic medical records on open-source software platforms, cloud computing, and artificial intelligence (AI) in drug development, health system management, and public health programs.

While the potential benefits of these technologies are immense, they also raise significant ethical and governance challenges.

This paper will explore the ethical principles and governance frameworks that are crucial for the responsible development and deployment of AI and digital health technologies in the healthcare sector. The central focus is on ensuring the protection of human rights, promoting equitable access to healthcare, and fostering transparency and accountability in the use of these technologies.

The ethical landscape surrounding AI and digital health technologies is evolving rapidly, with various guidelines and principles developed to guide their application. However, there is still no global consensus on best practices, and different legal regimes and governance models are associated with each set of principles. Furthermore, the implementation of ethical principles may vary depending on cultural, religious, and social contexts.

The core of this paper is based on the key ethical principles proposed by the WHO Expert Group to guide the development and use of AI technology for health. These principles are: protecting autonomy; promoting human well-being, safety and the public interest; ensuring transparency, explainability and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting responsive and sustainable AI.

Additionally, the emerging ethical challenges related to AI in healthcare will be explored, particularly those relevant to LMIC and marginalized communities. These challenges include the digital divide, data quality and biases, privacy and confidentiality risks, and the availability of treatment options after diagnosis.

"Technological solutionism" and the potential pitfalls of overestimating the benefits of AI, which can divert attention and resources from proven healthcare interventions will also be discussed.

Furthermore, the governance of AI in healthcare, both in the public and private sectors, is a critical aspect to consider. The private sector, including technology companies, plays a central role in the development and delivery of AI for healthcare, posing challenges related to transparency, data collection, and commercialization. Governments must regulate and co-regulate AI technologies in collaboration with the private sector to ensure patient safety and care.

As AI and digital health technologies continue to transform the healthcare landscape, it is essential to navigate the ethical and governance complexities they bring. Striking a balance between harnessing the potential benefits and mitigating the risks is crucial to ensure that these technologies ultimately improve healthcare outcomes, protect individual rights, and promote global health equity.

Laws, policies and principles for regulating and managing the use of AI, and specifically the use of AI for health, are intended to help safeguard human rights.

Ethical principles for the application of AI for health are intended to guide developers, users and regulators in improving and overseeing the design and use of such technologies.

This paper also provides a high-level checklist for assessing data-based digital health technologies, both from a government perspective and for consideration by the actual developers and developing entities.

Human dignity and the inherent worth of humans are the central values upon which all other ethical principles rest.

KEY ETHICAL PRINCIPLES:

The six ethical principles, identified by the WHO Expert Group to guide the development and use of AI technology for health, are:

  1. Protect autonomy;        
  2. Promote human well-being, human safety and the public interest;   
  3. Ensure transparency, explainability and intelligibility;        
  4. Foster responsibility and accountability;        
  5. Ensure inclusiveness and equity;        
  6. Promote artificial intelligence that is responsive and sustainable.

Though ethical principles are universal, their implementation may differ according to cultural, religious and other social contexts.

Several ethical challenges are emerging in the use of AI for health. Ethical issues affect all populations, but some concerns are particularly relevant for LMIC and for marginalized communities in HIC:

  • AI could reinforce the digital divide (between people who have access to digital tools and can use them and those with neither access nor understanding);
  • AI design may suffer from a lack of good-quality data;        
  • Data collected may incorporate clinical biases;        
  • Data privacy and confidentiality risks;        
  • There might be a lack of treatment options after diagnosis.

These challenges must be addressed if AI technologies are to support the achievement of universal health coverage.

TECHNOLOGICAL SOLUTIONISM

Overestimating the benefits of AI and dismissing its challenges can lead to misguided healthcare policies and investments, diverting resources from proven interventions in low- and middle-income countries.        

Successful AI systems in healthcare rely on high-quality, diverse, and representative data, which can speed up diagnosis, improve care quality, and reduce subjective decision-making. "Black box" AI, however, where the system's inputs and processes are not transparent, raises ethical concerns, particularly around data bias and intelligibility.
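One simple way to make such a system more intelligible is to report which inputs drive its predictions. The sketch below uses scikit-learn's permutation importance on synthetic stand-in data; it is a minimal illustration, not a full explainability solution:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical tabular data
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature
# degrade performance on held-out data?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```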

Autonomous decision-making with AI augments human decisions but can also displace humans from knowledge production, raising concerns about the legal delegation of decisions, automation bias, and loss of human control.
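A common design pattern for retaining human control is a confidence gate: the model only ever suggests, low-confidence suggestions are flagged, and a clinician reviews and may override every decision. A minimal sketch, with illustrative labels and threshold:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # suggested finding
    confidence: float  # model's probability estimate
    source: str        # "model" or "clinician"

def gated_decision(label, confidence, clinician_review, threshold=0.90):
    """The model only suggests; below the confidence threshold the
    suggestion is flagged, and the clinician always has the final say."""
    source = "model" if confidence >= threshold else "model/low-confidence"
    suggestion = Decision(label, confidence, source)
    return clinician_review(suggestion)   # human override point

def clinician(s: Decision) -> Decision:
    # Example reviewer: overrides any low-confidence suggestion
    if s.confidence < 0.90:
        return Decision("benign", s.confidence, "clinician")
    return s

print(gated_decision("malignant", 0.62, clinician))
# Decision(label='benign', confidence=0.62, source='clinician')
```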

While AI can aid in predicting diseases and health events, such predictions are inherently uncertain, and AI systems should address societal bias and discrimination, considering factors such as gender, race, and sexual orientation.
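Bias of this kind can be surfaced with a simple stratified audit that compares performance across groups. The sketch below computes per-group sensitivity on synthetic held-out data; the group names and the gap threshold are illustrative assumptions:

```python
import numpy as np
from sklearn.metrics import recall_score

def per_group_recall(y_true, y_pred, groups):
    """Sensitivity for each demographic group; a large gap flags
    potential bias to investigate before deployment."""
    return {g: recall_score(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}

# Synthetic held-out labels, predictions and group membership
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 300)
y_pred = rng.integers(0, 2, 300)
groups = rng.choice(["group_a", "group_b"], 300)

scores = per_group_recall(y_true, y_pred, groups)
gap = max(scores.values()) - min(scores.values())
print(scores, "gap:", round(gap, 3))  # e.g. gap > 0.05 warrants review
```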

The healthcare workforce will require digital and technological proficiency, including the ability to navigate data-rich environments and digital and genomics literacy; this shift is viewed with both optimism and pessimism.

AI AND COMMERCIALISM

The adoption of AI in healthcare, driven by companies ranging from startups to tech giants, is important, but it also comes with significant ethical challenges. Concerns revolve around transparency, data collection, and utilization, as well as the power wielded by some corporate entities.

One major ethical issue involves the collection and exploitation of health data by private companies. Data collected in excess of what is needed can be used for purposes such as developing AI technologies for marketing or creating prediction-based products without the explicit consent of data providers. This raises concerns about individual autonomy, loss of data control, usage of data, profit generation by companies, and the duty of confidentiality, especially in cases of data breaches.

To ensure the well-being of individuals and maintain public, provider, and patient trust, ethical and transparent design of AI technologies for healthcare is crucial. While AI performance is improving, errors can still occur, particularly when algorithms are trained with incomplete or inappropriate data. Lawmakers and regulators must enforce safety rules and frameworks for AI in healthcare and integrate them proactively into technology design and deployment. Additionally, liability rules should align with existing healthcare standards, but considering the unique risks posed by AI technologies, there may be a need for additional obligations and damages.

WHO RECOMMENDATIONS

WHO recommends several key actions for the liability regimes surrounding AI in healthcare. These include ensuring clinical guidelines adapt to evolving AI technologies, supporting national regulatory agencies in evaluating AI for health, and establishing international norms and legal standards to ensure accountability for patient safety. Human rights standards, data protection laws, and ethical principles should also guide AI use in healthcare.

However, the complexity of AI in healthcare necessitates commonly accepted ethical principles, as challenges and risks are not fully understood and may change over time. Many existing principles, laws, and standards cater to high-income countries, making it crucial for low- and middle-income nations to be aware of and adhere to ethical principles while implementing suitable governance.

Health governance involves rule-making functions for achieving national health policy goals, and WHO's global strategy and governance frameworks can contribute to AI in healthcare governance.

Governance of user-generated health data from devices, chatbots, social media, and online communities is intricate due to international boundaries, inconsistent legal regulation, and insufficient self-regulation by technology companies.

Governments should adopt transparent and inclusive impact assessments for AI in healthcare, covering ethics, human rights, safety, and data protection. They should set legal and ethical standards for AI procurement, promote transparency in AI use, and involve a wide range of stakeholders in decision-making. Data collection and use should align with international data protection principles to avoid bias. AI should be used inclusively to avoid exacerbating health and social inequalities, and the risks of using AI should be assessed and mitigated.

What is meant by transparency, data privacy, and risk assessment?

TRANSPARENCY:

The introduction of AI technology in healthcare requires transparency, including disclosing source code, allowing criticism by experts, and sharing details about data, development, and deployment. AI should comply with data protection laws and privacy norms, with regular audits and independent reviews.

Developers must also disclose their data policy and engage with the public for input on design, safety, and security. AI should empower society without worsening existing inequalities, addressing biases, and adapting to social and cultural diversity for broader acceptance.
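One practice consistent with these disclosure recommendations is a published "model card" summarizing a system's data, intended use, evaluation, and limitations. The sketch below is a minimal, hypothetical record; the field names, example values, and URL are illustrative assumptions, not part of the WHO guidance:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal public disclosure record for an AI health tool."""
    name: str
    intended_use: str
    training_data: str          # provenance and known gaps
    evaluation: dict            # metrics, incl. per-group results
    limitations: list = field(default_factory=list)
    data_policy_url: str = ""   # where the public data policy lives
    last_audit: str = ""        # date of most recent independent review

card = ModelCard(
    name="TB screening assistant (example)",
    intended_use="Triage support only; not a standalone diagnosis",
    training_data="Chest X-rays from 3 hospitals; rural sites underrepresented",
    evaluation={"sensitivity": 0.91, "sensitivity_group_b": 0.84},
    limitations=["Not validated on pediatric cases"],
    data_policy_url="https://example.org/data-policy",
    last_audit="2021-06",
)
print(json.dumps(asdict(card), indent=2))
```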

PRIVACY:

Healthcare providers must prevent the re-identification of individuals in datasets and address privacy concerns through measures such as encryption and anonymization. Patient identifiers should be anonymized effectively, and privacy prioritized when considering AI technology in clinical settings. Informed consent must be obtained, data leakage prevented, and users given control over their data. Additional security measures should apply to AI using biometric data, and consent must be secured before data is shared or repurposed.
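As one concrete building block for the anonymization steps above, direct identifiers can be replaced with keyed (salted) hashes so they cannot be recomputed without a secret key. A minimal Python sketch, with the caveat that pseudonymization alone does not prevent re-identification from remaining quasi-identifiers:

```python
import hmac, hashlib, os

# The secret key must be stored separately from the dataset
# (e.g. in a key management service); inline here only for illustration.
SECRET_KEY = os.urandom(32)

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash. Without the key,
    the mapping cannot be recomputed or brute-forced from the data alone."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-12345", "diagnosis": "I10"}
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)  # identifier is now a stable but non-reversible token
```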

RISK ASSESSMENT:

AI should be used in healthcare only if the risk-benefit ratio is positive and if it meets regulatory safety, accuracy, and efficacy requirements. Clinicians should have the ability to override AI decisions. Risk assessment and mitigation are vital throughout development and should be reassessed regularly. Developers should aim to reduce risk while achieving intended outcomes and consider major trade-offs. Regulatory frameworks are evolving to include data protection, security, privacy, equal access, and human autonomy. Robust data management and protection guidelines are essential. The AI technology's compatibility with other healthcare technologies should be ensured, relevant standards adopted, and its outcomes and impact continuously assessed for improvement.
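The regular reassessment called for above can be partly automated by comparing deployed performance against the level demonstrated at approval and escalating degradation to human review. A minimal sketch, with an illustrative tolerance threshold:

```python
def reassess(baseline_sensitivity: float, recent_sensitivity: float,
             tolerance: float = 0.05) -> str:
    """Compare post-deployment performance with the approved baseline;
    degradation beyond tolerance should trigger clinical review and
    possible suspension, keeping the risk-benefit ratio positive."""
    drop = baseline_sensitivity - recent_sensitivity
    if drop > tolerance:
        return "ALERT: performance drift, escalate to clinical review"
    return "OK: within tolerance, continue routine monitoring"

# Example quarterly check against the approval-time baseline
print(reassess(baseline_sensitivity=0.92, recent_sensitivity=0.85))
```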

CONCLUSION:

The rapid proliferation of digital health technologies, particularly artificial intelligence, presents immense opportunities and challenges for healthcare systems worldwide. This paper has examined the ethical principles and governance frameworks necessary to ensure the responsible development and deployment of these technologies.

The six ethical principles proposed by the WHO Expert Group (protecting autonomy; promoting human well-being, safety and the public interest; ensuring transparency, explainability and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting responsive and sustainable AI) provide a robust foundation for guiding the ethical use of AI in healthcare. However, it is crucial to recognize that the application of these principles may vary across cultural and societal contexts.

Ethical challenges are multifaceted, ranging from addressing the digital divide to ensuring data quality, privacy, and the availability of treatment options. These challenges must be addressed comprehensively to harness the full potential of AI in healthcare.

Effective governance, both in the public and private sectors, is vital to ensure the ethical use of AI technologies. Collaboration between governments and technology companies, alongside transparent regulation and oversight, is essential to safeguard patient safety and privacy.

To ensure that the necessary steps are taken, starting with the DESIGN phase of an AI-based system, through the DEVELOPMENT phase and process, and including the subsequent DEPLOYMENT and IMPROVEMENT phases, practical guidance and a checklist are provided below. A developing company should be able to address all the points.
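As a rough illustration (not the author's actual checklist), the sketch below shows how such a phase-by-phase self-assessment could be encoded so that open points are tracked explicitly; the items shown are examples drawn from the principles above:

```python
# Illustrative developer self-assessment; items are examples only.
CHECKLIST = {
    "DESIGN": [
        "Is the intended use and target population defined?",
        "Were patients, clinicians and regulators consulted?",
    ],
    "DEVELOPMENT": [
        "Is training data representative of the deployment setting?",
        "Have biases (gender, race, geography) been measured?",
    ],
    "DEPLOYMENT": [
        "Can clinicians override the system's decisions?",
        "Is the data policy public and consent documented?",
    ],
    "IMPROVEMENT": [
        "Is post-market performance monitored and audited regularly?",
    ],
}

def unresolved(answers: dict) -> list:
    """Return every checklist item not yet answered 'yes'."""
    return [(phase, q) for phase, qs in CHECKLIST.items()
            for q in qs if not answers.get(q, False)]

answers = {"Can clinicians override the system's decisions?": True}
for phase, q in unresolved(answers):
    print(f"[{phase}] open item: {q}")
```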

A similar checklist for Ministries of Health (MOH) and regulators is shown below. In the opinion of the author, it is essential that MOH, including those in LMIC, ask relevant questions and assess the technologies against their own needs, especially with respect to BIASES and the promised solutions. These solutions could be incredibly valuable in resource-poor environments, but just because they work in HIC does not mean they are equally useful in LMIC (or vice versa).

NEXT STEPS

Global Consensus on Ethical Principles: The global healthcare community should strive to establish a consensus on ethical principles for AI in healthcare. This consensus should consider cultural and contextual variations while upholding fundamental human rights.

Capacity Building: LMICs should receive support and capacity-building efforts to ensure they can effectively implement AI technologies in their healthcare systems. This includes access to training, infrastructure, and resources.

Data Quality and Bias Mitigation: Ongoing research and development efforts should focus on improving data quality and addressing biases in AI algorithms. Collaborations between healthcare providers, researchers, and technology companies are crucial in this regard.

Regular Ethical Audits: Healthcare institutions and technology developers should conduct regular ethical audits of AI systems to assess their impact on patients and communities, with a focus on equity and fairness.

Interoperability and Standards: Governments and international bodies should work towards establishing interoperability standards and data-sharing agreements to ensure seamless integration of AI technologies into healthcare systems.

Public Awareness and Engagement: Initiatives to raise public awareness about AI in healthcare and to engage patients and communities in decision-making processes should be prioritized. Public input is essential to ensure that AI technologies align with societal values and expectations.

Research and Innovation: Continued research and innovation in AI for healthcare are essential. This includes exploring novel applications, improving algorithm transparency, and developing AI systems that can adapt to diverse cultural contexts.

The responsible use of AI in healthcare demands a collaborative, ethical, and inclusive approach. By adhering to ethical principles, strengthening governance, and taking concrete next steps, healthcare systems can harness the transformative potential of AI while safeguarding the well-being, autonomy, and rights of patients and communities.

ACKNOWLEDGMENT

This paper and summary are based to a large extent upon the "WHO Guidance on Ethics & Governance of Artificial Intelligence for Health", published in June 2021, and the online course "Ethics and Governance of Artificial Intelligence for Health" (https://openwho.org/courses/ethics-ai).


Michael Friebe, PhD

Professor of HealthTec Innovation at the medical faculty of the OVGU in Magdeburg, Germany, and at AGH UST in Krakow, Poland. Inventor on 100 patents, with >300 scientific papers and >35 Medtec start-ups.