AI Health Ethical Review: A Value Design Methodology

Abstract

As our world becomes more dependent on data, algorithms are increasingly used to inform decisions in areas ranging from finance to human resources. The healthcare sector is no exception, and artificial intelligence systems are becoming ever more widespread there. While AI can help us make better-informed and more efficient decisions, it also raises serious moral and ethical challenges. One of the most significant is trust: when "machine" decision making replaces "human" decision making, patients and healthcare professionals may find it difficult to trust the outcome. Moreover, the "black box" mechanisms of AI systems obscure who is responsible for the decisions made, which can lead to ethical dilemmas. There is also a risk of emotional frustration for patients and healthcare professionals, since AI may be unable to provide the human touch that healthcare often requires. Despite growing attention to these issues in recent years, technical solutions to such complex moral and ethical problems are often developed without regard for the social context or the views of the stakeholders affected by the technology. Furthermore, calls for more ethical and socially responsible AI tend to focus on basic legal principles such as "transparency" and "responsibility" while leaving out the far more problematic domain of human values. To address this problem, the article proposes a "value-sensitive" approach to AI development that can help translate fundamental human rights and values into context-sensitive requirements for AI algorithms. This approach can chart a route from human values to clear and intelligible requirements for AI design, and it can help overcome the ethical obstacles that hinder the responsible deployment of AI in healthcare and everyday life.
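
The "route from human values to requirements" described above can be pictured as a value hierarchy, an idea common in the value-sensitive design literature: an abstract value is specified by context-dependent norms, which are in turn operationalized as concrete, testable design requirements. The Python sketch below is only a hypothetical illustration of that structure; the classes, the "patient autonomy" example, and all requirement wording are assumptions for demonstration, not content from the article.

```python
# Illustrative sketch: one possible representation of a value hierarchy
# (value -> norms -> design requirements). All names and example content
# here are hypothetical, not taken from the article.
from dataclasses import dataclass, field


@dataclass
class DesignRequirement:
    """A concrete, testable property the AI system must satisfy."""
    text: str


@dataclass
class Norm:
    """A context-specific prescription that interprets a value."""
    text: str
    requirements: list[DesignRequirement] = field(default_factory=list)


@dataclass
class Value:
    """An abstract human value or right, e.g. from a rights charter."""
    name: str
    norms: list[Norm] = field(default_factory=list)

    def trace(self) -> list[str]:
        """Return the full value -> norm -> requirement chains for review."""
        return [
            f"{self.name} -> {norm.text} -> {req.text}"
            for norm in self.norms
            for req in norm.requirements
        ]


# Hypothetical example: translating "patient autonomy" for a triage system.
autonomy = Value(
    name="patient autonomy",
    norms=[
        Norm(
            text="patients can understand and contest algorithmic decisions",
            requirements=[
                DesignRequirement(
                    "every triage score ships with a plain-language explanation"
                ),
                DesignRequirement(
                    "a human-review channel exists for any contested score"
                ),
            ],
        )
    ],
)

for chain in autonomy.trace():
    print(chain)
```

Keeping each chain explicit in this way makes the translation auditable: a reviewer can ask, for any requirement, which norm and which value it is supposed to serve, and, conversely, whether a value has been left without any operationalization at all.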

About the authors

Elizaveta Karpova

National Research University "Higher School of Economics"

21/4, Staraya Basmannaya Str., Moscow 105066, Russian Federation

Copyright © Russian Academy of Sciences, 2023
