Reinforcement Learning for Model Problems of Optimal Control

Abstract

Functionals of dynamic systems of various types are optimized using modern reinforcement learning methods. The linear resource allocation problem is considered, as well as the optimal consumption problem and its stochastic modifications. Within the reinforcement learning framework, policy gradient methods are used.
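The abstract refers to policy gradient methods applied to an optimal consumption problem. As a rough illustration only (not code from the paper), the sketch below runs plain REINFORCE on a toy discrete-time consumption problem; the wealth dynamics, log utility, constant-fraction Gaussian policy, and all hyperparameters are assumptions chosen for brevity.

```python
# A minimal REINFORCE sketch for a toy optimal consumption problem.
# Everything here (dynamics, utility, policy class, hyperparameters)
# is an illustrative assumption, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

T = 10        # decision horizon
R = 1.05      # gross return on savings
x0 = 1.0      # initial wealth
sigma = 0.1   # exploration noise of the policy
alpha = 0.05  # learning rate
theta = 0.0   # single policy parameter

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for episode in range(2000):
    x = x0
    grads, rewards = [], []
    for t in range(T):
        # Sample a pre-squash action z ~ N(theta, sigma^2), then squash it
        # into a consumption fraction a in (0, 1).
        z = theta + sigma * rng.standard_normal()
        a = sigmoid(z)
        c = a * x                              # consumption this period
        rewards.append(np.log(c + 1e-8))       # log utility of consumption
        grads.append((z - theta) / sigma**2)   # d log pi(z) / d theta for a Gaussian
        x = (x - c) * R                        # wealth dynamics
    # REINFORCE with reward-to-go: weight each score term by the return
    # accumulated from that step onward.
    returns = np.cumsum(rewards[::-1])[::-1]
    theta += alpha * np.dot(returns, grads) / T

print("learned consumption fraction:", sigmoid(theta))
```

With log utility the exact optimum consumes a time-varying fraction of wealth, so this stationary one-parameter policy can only approximate it; in practice one would also subtract a baseline or move to an actor-critic scheme to reduce the variance of the gradient estimate.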

About the authors

S. Semenov

Moscow Institute of Physics and Technology, 141701, Dolgoprudny, Moscow Oblast, Russia

Email: semenov.ss@phystech.edu
Dolgoprudny, Moscow Oblast, Russia

V. Tsurkov

Federal Research Center “Computer Science and Control,” Russian Academy of Sciences, 119333, Moscow, Russia

Corresponding author
Email: tsur@ccas.ru
Moscow, Russia

Copyright statement © S.S. Semenov, V.I. Tsurkov, 2023

Creative Commons License
This article is available under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
