A Method for Autoregressive Modeling of a Speech Signal

Abstract

The problem of autoregressive modeling of a speech signal from discrete Fourier transform data in a sliding observation window of short duration (a few milliseconds) is considered. The stability of the resulting autoregressive model is investigated; to overcome the instability, the envelope of the Schuster periodogram is proposed as the reference spectral estimate. A new autoregressive modeling method is developed in which the spectral envelope is detected by a recirculator applied to the sequence of samples in the frequency domain. An example of its practical implementation is considered, and a full-scale experiment is designed and carried out. The experimental results show a significant gain not only in the stability but also in the accuracy of the autoregressive model of the speech signal.
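For illustration only, the Python sketch below follows the outline given in the abstract rather than the paper's exact procedure: a Schuster periodogram is computed from the DFT of a short windowed frame, its envelope is traced by a first-order recursive accumulator ("recirculator") run over the frequency samples, and the autoregressive coefficients are then recovered from that envelope via the autocorrelation sequence and the Levinson-Durbin recursion. All function names, the smoothing constant rho, the window length, and the model order are illustrative assumptions.

import numpy as np

def schuster_periodogram(frame, n_fft):
    """Schuster periodogram: |DFT|^2 / N of one short windowed frame."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)), n=n_fft)
    return np.abs(spectrum) ** 2 / len(frame)

def recirculator_envelope(pxx, rho=0.9):
    """Trace the spectral envelope with a first-order recursive accumulator
    ("recirculator") run over the frequency samples, forward and backward,
    keeping the larger branch so peaks are not smeared one-sidedly."""
    fwd = np.empty_like(pxx)
    acc = pxx[0]
    for k in range(len(pxx)):
        acc = rho * acc + (1.0 - rho) * pxx[k]
        fwd[k] = acc
    bwd = np.empty_like(pxx)
    acc = pxx[-1]
    for k in range(len(pxx) - 1, -1, -1):
        acc = rho * acc + (1.0 - rho) * pxx[k]
        bwd[k] = acc
    return np.maximum(fwd, bwd)

def ar_from_spectrum(power_spectrum, order):
    """AR(p) coefficients from a one-sided power spectrum: the inverse DFT
    gives the autocorrelation sequence, then Levinson-Durbin recursion."""
    r = np.fft.irfft(power_spectrum)[: order + 1]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / err
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1][:m]
        err *= 1.0 - k * k
    return a, err

# Usage on a synthetic vowel-like frame (two resonances plus noise); the
# 8 kHz rate, 20 ms window, and order 12 are illustrative values only.
fs, n = 8000, 160
t = np.arange(n) / fs
frame = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
frame += 0.05 * np.random.default_rng(0).standard_normal(n)
pxx = schuster_periodogram(frame, n_fft=512)
envelope = recirculator_envelope(pxx, rho=0.9)
a, err = ar_from_spectrum(envelope, order=12)
print("AR coefficients:", np.round(a, 3), "residual power:", float(err))

Feeding the smoothed envelope, rather than the raw periodogram, into the Levinson-Durbin recursion gives a better-conditioned autocorrelation sequence, which illustrates the kind of stability improvement described in the abstract; the paper's actual recirculator structure and parameter choices may differ.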

About the authors

V. V. Savchenko

National Research University Higher School of Economics

Author for correspondence.
Email: vvsavchenko@yandex.ru
Nizhny Novgorod, 603155 Russia


Copyright (c) 2023 V. V. Savchenko
