Neural Network Algorithm for Intercepting Targets Moving along Known Trajectories by a Dubins’ Car

Abstract

The problem of intercepting a target moving along a rectilinear or circular trajectory by a Dubins car is formulated as a time-optimal control problem with an arbitrary direction of the car's velocity at the moment of interception. To solve this problem and to synthesize interception trajectories, neural network methods of reinforcement learning based on the Deep Deterministic Policy Gradient (DDPG) algorithm are used. The resulting control laws and interception trajectories are analyzed and compared with analytical solutions of the interception problem. Mathematical modeling is carried out for target motion parameters that the neural network had not encountered during training, and model experiments are conducted to test the stability of the neural solution. The results demonstrate the effectiveness of neural network methods for synthesizing interception trajectories for the given classes of target motion.
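To make the setting concrete, the sketch below implements the standard Dubins car kinematics (constant speed, bounded turn rate) as a simple simulation environment in which a DDPG-style agent can be trained to minimize the time to interception. This is a minimal illustration only: the observation vector, reward shaping, target parameterization, and the name DubinsInterceptEnv are assumptions made for the example, not the authors' exact formulation.

import numpy as np

# Minimal sketch of a Dubins-car interception environment suitable for DDPG-style
# training. The kinematics follow the standard Dubins model; the observation,
# reward, and target parameters are illustrative assumptions.
class DubinsInterceptEnv:
    def __init__(self, v=1.0, u_max=1.0, dt=0.05, capture_radius=0.1,
                 target_speed=0.5, target_omega=0.0, max_steps=1000):
        self.v = v                        # constant car speed
        self.u_max = u_max                # bound on the turn rate, |u| <= u_max
        self.dt = dt                      # integration step
        self.capture_radius = capture_radius
        self.target_speed = target_speed
        self.target_omega = target_omega  # 0 -> rectilinear target, nonzero -> circular
        self.max_steps = max_steps

    def reset(self):
        self.x, self.y, self.theta = 0.0, 0.0, 0.0            # car pose
        self.tx, self.ty, self.tpsi = 3.0, 2.0, np.pi / 2.0   # target pose
        self.steps = 0
        return self._obs()

    def _obs(self):
        # Observation: target position expressed in the car frame plus relative heading.
        dx, dy = self.tx - self.x, self.ty - self.y
        c, s = np.cos(self.theta), np.sin(self.theta)
        return np.array([c * dx + s * dy, -s * dx + c * dy, self.tpsi - self.theta])

    def step(self, action):
        # action: scalar turn-rate command produced by the actor network.
        u = float(np.clip(action, -self.u_max, self.u_max))
        # Dubins car kinematics: x' = V cos(theta), y' = V sin(theta), theta' = u.
        self.x += self.v * np.cos(self.theta) * self.dt
        self.y += self.v * np.sin(self.theta) * self.dt
        self.theta += u * self.dt
        # Target moves along a known rectilinear or circular trajectory.
        self.tx += self.target_speed * np.cos(self.tpsi) * self.dt
        self.ty += self.target_speed * np.sin(self.tpsi) * self.dt
        self.tpsi += self.target_omega * self.dt
        self.steps += 1
        dist = np.hypot(self.tx - self.x, self.ty - self.y)
        captured = dist < self.capture_radius
        # Time-optimal objective: constant penalty per time step, bonus on capture.
        reward = -self.dt + (10.0 if captured else 0.0)
        done = captured or self.steps >= self.max_steps
        return self._obs(), reward, done, {}

In such a setup, a DDPG actor would map the three-component observation to a turn rate in [-u_max, u_max], with the critic estimating the discounted return; as is typical for DDPG, exploration noise (for example, of the Ornstein-Uhlenbeck type) would be added to the actor output during training.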

About the authors

A. Galyaev

Trapeznikov Institute of Control Sciences, Russian Academy of Sciences

Email: galaev@ipu.ru
Moscow, Russia

A. Medvedev

Trapeznikov Institute of Control Sciences, Russian Academy of Sciences

Email: medvedev.ai18@physics.msu.ru
Moscow, Russia

I. Nasonov

Trapeznikov Institute of Control Sciences, Russian Academy of Sciences

Corresponding author
Email: nasonov.ia18@physics.msu.ru
Moscow, Russia

Copyright © The Russian Academy of Sciences, 2023
