Learning mechanism for a collective classifier based on competition driven by training examples

Abstract

The purpose of this work is to modify the learning mechanism of a collective classifier so that learning is provided by population dynamics alone, without an external sorting device. A collective classifier is an ensemble of non-identical simple elements that have neither intrinsic dynamics nor adjustable parameters; the classifier learns by adjusting the composition of the ensemble, which in the preceding literature was achieved by selecting ensemble elements with a sorting device. Methods. The population dynamics model of a collective classifier is extended with a “learning subsystem”, which is driven by a sequence of training examples and, in turn, controls the strength of intraspecific competition in the population dynamics. The dynamics of the learning subsystem reduces to a linear mapping with random parameters expressed via the training examples. The solution of this mapping is an asymptotically stationary Markov process; we find its asymptotic expectation analytically and show that its variance vanishes in the limit under the stated assumptions, which permits an approximate deterministic description of the coupled population dynamics based on results available in the preceding literature. Results. We show analytically, and confirm by numerical simulation, that in the course of learning the decision rule of our classifier converges to the Bayesian rule under assumptions essentially in line with the available literature on collective classifiers. The required competitive dynamics is implemented without an external sorting device. Conclusion. We propose a conceptual model of a collective classifier whose learning is fully provided by its own population dynamics.
We expect that our classifier, like the approaches taken in the preceding literature, can be implemented as an ensemble of living cells equipped with synthetic genetic circuits, once a mechanism of population dynamics with synthetically controlled intraspecific competition becomes available.
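To make the described mechanism concrete, the following toy Python sketch couples the two ingredients named in the abstract: a linear mapping with random parameters driven by training examples (whose solution settles near its expectation with small variance), and intraspecific competition whose strength that mapping controls. All function names, parameter values, the step size `eps`, and the simple logistic competition law are our illustrative assumptions, not the authors' actual model.

```python
import random

def learn_competition(examples, eps):
    """Toy "learning subsystem": the linear mapping
    w <- (1 - eps) * w + eps * x, driven by training examples x.
    Its solution is a Markov process whose mean tends to E[x] and
    whose fluctuations shrink as eps -> 0 (vanishing variance)."""
    w = 0.0
    for x in examples:
        w = (1.0 - eps) * w + eps * x
    return w

def equilibrium_abundance(c, n0=0.1, dt=0.01, steps=20000):
    """Euler integration of the logistic law dn/dt = n * (1 - c * n):
    intraspecific competition of strength c sets the equilibrium n = 1/c,
    so the learned c determines the element's weight in the ensemble."""
    n = n0
    for _ in range(steps):
        n += dt * n * (1.0 - c * n)
    return n

random.seed(1)
# Hypothetical training examples with mean 2.0 (class statistics assumed).
examples = [random.gauss(2.0, 0.5) for _ in range(20000)]
c = learn_competition(examples, eps=0.01)  # learned strength, close to 2.0
n = equilibrium_abundance(c)               # resulting abundance, close to 1/c
print(abs(c - 2.0) < 0.15 and abs(n - 1.0 / c) < 1e-3)
```

In the paper the competition strength feeds a multi-species population model whose equilibrium composition realizes the decision rule; this sketch retains only the single-element version of that feedback loop.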

About the authors

A. Sutyagin

Lobachevsky State University of Nizhny Novgorod

603950 Nizhny Novgorod, Gagarin Avenue, 23

Oleg Kanakov

Lobachevsky State University of Nizhny Novgorod

ORCID iD: 0000-0001-9041-2209
SPIN code: 3677-7338
Scopus Author ID: 8958614400
Researcher ID: J-9418-2013
603950 Nizhny Novgorod, Gagarin Avenue, 23

References

  1. Aivazyan S. A., Buchstaber V. M., Enyukov I. S., Meshalkin L. D. Applied Statistics: Classification and Dimensionality Reduction. Moscow: Finansy i Statistika, 1989. 608 p. (in Russian).
  2. Alpaydin E. Introduction to Machine Learning. Fourth Edition. Cambridge, Massachusetts: MIT Press, 2020. 683 p.
  3. Sutyagin A. A., Kanakov O. I. Collective classifier learning method based on competition in the coexistence regime // Izvestiya VUZ. Applied Nonlinear Dynamics. 2021. Vol. 29, no. 2. P. 220–239. doi: 10.18500/0869-6632-2021-29-2-220-239 (in Russian).
  4. Didovyk A., Kanakov O. I., Ivanchenko M. V., Hasty J., Huerta R., Tsimring L. Distributed classifier based on genetically engineered bacterial cell cultures // ACS Synthetic Biology. 2015. Vol. 4, no. 1. P. 72–82. doi: 10.1021/sb500235p.
  5. Kanakov O., Kotelnikov R., Alsaedi A., Tsimring L., Huerta R., Zaikin A., Ivanchenko M. Multiinput distributed classifiers for synthetic genetic circuits // PLoS ONE. 2015. Vol. 10, no. 5. P. e0125144. doi: 10.1371/journal.pone.0125144.
  6. Goh B. S. Global stability in many-species systems // The American Naturalist. 1977. Vol. 111, no. 977. P. 135–143. doi: 10.1086/283144.
  7. Gnedenko B. V. A Course in Probability Theory. Moscow: LENAND, 2022. 456 p. (in Russian).