Ethical aspects of using artificial intelligence technologies: state of the art and prospects for regulation
- Authors: Kozyreva A.A.¹, Tikhomirov I.A.², Devyatkin D.A.³, Sochenkov I.V.³
Affiliations:
- ¹ National Research University «Higher School of Economics»
- ² Russian Science Foundation
- ³ Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences
- Issue: No. 4 (2024)
- Pages: 3-14
- Section: AI-enabled Systems
- URL: https://journals.rcsi.science/2071-8594/article/view/278062
- DOI: https://doi.org/10.14357/20718594240401
- EDN: https://elibrary.ru/FUOZSA
- ID: 278062
Abstract
The paper discusses the ethical aspects of introducing artificial intelligence technologies into various spheres of human activity and provides examples of ethical violations of generally accepted social norms in particular countries. It also introduces a formal definition of discrimination, one of the most common such violations. The paper then outlines prospects for the development and application of artificial intelligence technologies, provided that the negative consequences of these violations are minimized; in particular, it considers methods for reducing discrimination in artificial intelligence technologies for natural language processing and synthesis. Finally, the paper proposes possible regulation of artificial intelligence technologies to ensure compliance with ethical standards.
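The formal definition of discrimination introduced in the paper is not reproduced on this abstract page. As an illustrative sketch only, the snippet below uses one common formalization from the fairness literature cited in the References (e.g., Mehrabi et al., 2021): the demographic-parity gap, i.e., the difference in positive-prediction rates between two protected groups. The function name and sample data are hypothetical.

```python
# A minimal, illustrative sketch (not the paper's formal definition):
# discrimination measured as the demographic-parity gap, i.e. the difference
# in positive-prediction rates between two protected groups.
from typing import Sequence


def demographic_parity_gap(y_pred: Sequence[int], group: Sequence[int]) -> float:
    """Absolute difference in positive-prediction rates between group 0 and group 1."""
    rates = []
    for g in (0, 1):
        members = [y_pred[i] for i, gi in enumerate(group) if gi == g]
        if not members:
            raise ValueError(f"no samples for group {g}")
        rates.append(sum(members) / len(members))
    return abs(rates[0] - rates[1])


if __name__ == "__main__":
    # Hypothetical binary predictions of a classifier and a binary protected attribute.
    y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
    group = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_gap(y_pred, group))  # 0.5 -> the model favours group 0
```

A gap near zero means positive outcomes are distributed roughly evenly across groups; pre-processing methods such as preferential sampling (Kamiran and Calders, see References below) and the in-processing and post-processing approaches listed there aim to reduce this gap while preserving accuracy.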
About the authors
Anna Kozyreva
National Research University «Higher School of Economics»
Corresponding author
Email: an.ksandrovna@yandex.ru
Director of the Center for Coordination of Financial and Administrative Activities
Russia, Moscow
Ilya Tikhomirov
Russian Science Foundation
Email: tia@rscf.ru
Candidate of technical sciences, docent, Head of the office for supporting the Advisory group on scientific and technological development
Russia, Moscow
Dmitry Devyatkin
Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences
Email: devyatkin@isa.ru
Candidate of physical and mathematical sciences, Senior scientific researcher
Russia, Moscow
Ilya Sochenkov
Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences
Email: sochenkov@isa.ru
Candidate of physical and mathematical sciences, Leading researcher
Russia, Moscow
References
- Stobbs N., Hunter D., Bagaric M. Can Sentencing Be Enhanced by the Use of Artificial Intelligence? // Electronic resource. URL: https://eprints.qut.edu.au/115410/2/CLJprooffinal25Nov2017.pdf (accessed 14.05.2024).
- Is artificial intelligence making racial profiling worse? // Electronic resource. URL: https://www.cbsnews.com/news/artificial-intelligence-racial-profiling-2-0-cbsn-originals-documentary/ (accessed 14.05.2024).
- Wu X., Zhang X. Responses to Critiques on Machine Learning of Criminality Perceptions // arXiv preprint arXiv:1611.04135. 2017.
- DataRobot // Electronic resource. URL: https://www.data-robot.com/ (accessed 14.05.2024).
- Homeland security will let computers predict who might be a terrorist on your plane — just don’t ask how it works // Electronic resource. URL: https://theintercept.com/2018/12/03/air-travel-surveillance-homeland-security/ (accessed 14.04.2021).
- The Guardian: Whatever happened to the DeepMind AI ethics board Google promised? // Electronic resource. URL: https://www.theguardian.com/technology/2017/jan/26/google-deepmind-ai-ethics-board (accessed 14.04.2021).
- NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon (FAI) // National Science Foundation // Electronic resource. URL: https://www.nsf.gov/pubs/2019/nsf19571/nsf19571.htm?WT.mc_id=USNSF_25&WT.mc_ev=click (accessed 14.05.2024).
- Kodeks jetiki v sfere II [AI Ethics Code] // Electronic resource. URL: https://ethics.a-ai.ru (accessed 29.05.2024).
- Al'jans v sfere iskusstvennogo intellekta [AI Alliance Russia] // Electronic resource. URL: https://a-ai.ru (accessed 29.05.2024).
- Ugolovnyj kodeks Rossijskoj Federacii ot 13.06.1996 N 63-FZ [Criminal Code of the Russian Federation dated 13.06.1996 N 63-FZ] (ed.14.02.2024).
- Mehrabi N., Morstatter F., Saxena N., Lerman K., Galstyan A. A survey on bias and fairness in machine learning // ACM computing surveys (CSUR). 2021. V. 54. No 6. P. 1-35.
- Kamiran F., Calders T. Classification with no discrimination by preferential sampling // Proc. 19th Machine Learning Conf. Belgium and The Netherlands. Citeseer, 2010. V. 1. No 6.
- Do K., Nguyen D., Le H., Le T., Nguyen D., Harikumar H., Venkatesh S. Revisiting the Dataset Bias Problem from a Statistical Perspective // arXiv preprint arXiv:2402.03577. 2024.
- Mancuhan K., Clifton C. Combating discrimination using Bayesian networks // Artificial intelligence and law. 2014. V. 22. P. 211-238.
- May C., Wang A., Bordia S., Bowman S.R., Rudinger R. On measuring social biases in sentence encoders // arXiv preprint arXiv:1903.10561. 2019.
- Peters M. E., Neumann M., Zettlemoyer L., Yih W. T. Dissecting Contextual Word Embeddings: Architecture and Representation // Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018. P. 1499–1509.
- Devlin J., Chang M. W., Lee K., Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding // Proceedings of NAACL-HLT. 2019. P. 4171-4186.
- Han X., Shen A., Li Y., Frermann L., Baldwin T., Cohn T. Fairlib: A unified framework for assessing and improving fairness // Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2022. P. 60-71.
- Kuzmin G., Yadav N., Smirnov I., Baldwin T., Shelmanov A. Inference-Time Selective Debiasing // arXiv preprint arXiv:2407.19345. 2024.
- Pleiss G., Raghavan M., Wu F., Kleinberg J., Weinberger K. On fairness and calibration // 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 2017. V. 30. URL: https://proceedings.neurips.cc/paper/2017/hash/b8b9c74ac526fffbeb2d39ab038d1cd7-Abstract.html.
- Xian R., Yin L., Zhao H. Fair and optimal classification via post-processing // International Conference on Machine Learning. PMLR, 2023. P. 37977-38012.
- Donini M., Oneto L., Ben-David S., Shawe-Taylor J. S., Pontil M. Empirical risk minimization under fairness constraints // Advances in neural information processing systems (NIPS 2018). Neural Information Processing Systems (NIPS), 2018. V. 32.
- Zafar M. B. et al. Fairness constraints: A flexible approach for fair classification // The Journal of Machine Learning Research. 2019. V. 20. No 1. P. 2737-2778.
- Bordia S., Bowman S. Identifying and Reducing Gender Bias in Word-Level Language Models // In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, 2019. P. 7–15.
- Chen H., Zhu T., Zhang T., Zhou W., Yu P. S. Privacy and Fairness in Federated Learning: on the Perspective of Trade-off // ACM Computing Surveys, 2023. V. 56. No 2. P. 1-37.
- Zhang D. Y., Kou Z., Wang D. FairFL: A fair federated learning approach to reducing demographic bias in privacy-sensitive classification models // 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. P. 1051-1060.
- Chu L., Wang L., Dong Y., Pei J., Zhou Z., Zhang Y. FedFair: Training fair models in cross-silo federated learning // arXiv preprint arXiv:2109.05662, 2021.
- Fadeeva E., Vashurin R., Tsvigun A. et al. LM-Polygraph: Uncertainty Estimation for Language Models // Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2023. P. 446-461.
- Kuzmin G., Vazhentsev A., Shelmanov A. et al. Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability? // Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), 2023. P. 744-770.
- Mukhoti J. et al. Deep deterministic uncertainty: A new simple baseline // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. P. 24384-24394.
- Qian Y., Muaz U., Zhang B., Won Hyun J. Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function // Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, 2019. P. 223-228.
- Garimella A., Amarnath A., Kumar K., Yalla A. P., Anandhavelu N., Chhaya N., Srinivasan B. V. He is very intelligent, she is very beautiful? On mitigating social biases in language modelling and generation // Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021. P. 4534-4545.
- Zhang Y., Wang G., Li C., Gan Z., Brockett C., Dolan B. POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training // Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020. P. 8649-8670.
- Yang K., Liu D., Lei W., Yang B., Xue M., Chen B., Xie J. Tailor: A prompt-based approach to attribute-based controlled text generation // arXiv preprint arXiv:2204.13362. 2022.
- Liu A., Sap M., Lu X., Swayamdipta S., Bhagavatula C., Smith N.A., Choi Y. DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts // In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Association for Computational Linguistics. Online, 2021. V. 1. P. 6691–6706. https://doi.org/10.18653/v1/2021.acl-long.522.
- Zayed A., Mordido G., Shabanian S., Baldini I., Chandar S. Fairness-Aware Structured Pruning in Transformers // Proceedings of the AAAI Conference on Artificial Intelligence, 2024. V. 38. No 20. P. 22484-22492.
- Liu R., Xu G., Jia C., Ma W., Wang L., Vosoughi S. Data Boost: Text Data Augmentation Through Reinforcement Learning Guided Conditional Generation // Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020. P. 9031-9041.
- Liu R., Jia C., Wei J., Xu G., Wang L., Vosoughi S. Mitigating political bias in language models through reinforced calibration // In Proceedings of the AAAI Conference on Artificial Intelligence, 2021. V. 35. P. 14857–14866.
- Fadeeva E., Rubashevskii A., Shelmanov A. Fact-checking the output of large language models via token-level uncertainty quantification // arXiv preprint arXiv:2403.04696. 2024.
- Farquhar S., Kossen J., Kuhn L., Gal Y. Detecting hallucinations in large language models using semantic entropy // Nature. 2024. V. 630. No 8017. P. 625-630.
- Kozyreva A.A. Ispol'zovanie instrumentov mjagkogo prava pri vnedrenii tehnologij iskusstvennogo intellekta [Using soft law tools when introducing artificial intelligence technologies] // Juridicheskij mir [Legal World]. 2021. No 8. P. 34-36.
- Filipova I.A. Iskusstvennyj intellekt: evropejskij podhod k regulirovaniju [Artificial intelligence: European approach to regulation] // Zhurnal zarubezhnogo zakonodatel'stva i sravnitel'nogo pravovedenija [Journal of Foreign Legislation and Comparative Law]. 2023. V. 19. No 2. P. 54-65.
Supplementary files
