Limits of Admissibility of the Participation of Artificial Intelligence in Rendering a Verdict


Abstract

The introduction of artificial intelligence (hereinafter, AI) into criminal proceedings raises problems that legal science has not previously encountered and gives rise to many questions whose answers are far from unambiguous. The aim of the study is to identify and assess the potential risks and the limits of admissibility of using AI in sentencing, and to determine whether AI can help improve the quality of judicial decisions. The article examines the problems that have arisen in several countries where automated systems have been used in court. In particular, the "opacity" of AI is currently one of the most intractable of these problems: it creates significant potential risks in the use of AI in criminal proceedings and, for that reason, impedes its adoption. The dependence of an AI system on its customer and its developer is another potentially dangerous circumstance of its use in the administration of justice. Some factors of the positive influence of AI on the quality of court decisions are also considered. The author concludes that AI can improve the quality of judicial decisions, provided that the limits of admissibility of its use in sentencing are restricted. Auxiliary AI systems, including those proposed by domestic authors, can compensate for a judge's lack of time and free up his or her cognitive resources. Complete replacement of the human judge by AI, however, is dangerous: it can lead to the dehumanization of justice. The legitimization of AI in criminal proceedings should be carried out by the state, since the criminal process is state in nature.

About the authors

Elena S. Papysheva

Ufa University of Science and Technology

Author for correspondence.
Email: papyshev-01@yandex.ru

Cand. Sci. (Law), Associate Professor, Institute of Law

Russian Federation, Ufa

