Dataless Black-Box Model Comparison



At a time when training new machine learning models is extremely time-consuming and resource-intensive, and when selling these models or access to them is more popular than ever, it is important to consider ways of protecting these models against theft. In this paper, we present a method for estimating the similarity or distance between two black-box models. Our approach does not depend on knowledge of the specific training data and may therefore be used to identify copies of, or stolen, machine learning models. It can also be applied to detect license violations in the use of datasets. We validate the proposed method empirically on the CIFAR-10 and MNIST datasets using convolutional neural networks, generative adversarial networks, and support vector machines. We show that it clearly distinguishes between models trained on different datasets. Theoretical foundations of our work are also given.
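The abstract does not specify the paper's actual algorithm, but the core idea of dataless black-box comparison can be illustrated with a minimal sketch: query both models with random probe inputs and use their rate of disagreement as a distance. The function `blackbox_distance`, the probe distribution, and the toy classifiers below are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def blackbox_distance(model_a, model_b, n_probes=1000, input_shape=(32,), seed=0):
    """Estimate a distance between two black-box classifiers.

    Hypothetical approach (the abstract does not describe the paper's
    algorithm): query both models on random probe inputs and return the
    fraction of probes on which their predicted labels disagree.
    """
    rng = np.random.default_rng(seed)
    probes = rng.standard_normal((n_probes, *input_shape))
    preds_a = model_a(probes)  # only black-box access: inputs -> labels
    preds_b = model_b(probes)
    return float(np.mean(preds_a != preds_b))

# Toy black boxes: simple threshold classifiers on one feature.
model_1 = lambda x: (x[:, 0] > 0).astype(int)
model_2 = lambda x: (x[:, 0] > 0).astype(int)   # exact copy of model_1
model_3 = lambda x: (x[:, 1] > 0).astype(int)   # unrelated decision rule

print(blackbox_distance(model_1, model_2))  # 0.0 -> likely a copy
print(blackbox_distance(model_1, model_3))  # close to 0.5 -> unrelated
```

A distance near zero suggests one model is a copy (or close derivative) of the other, while a distance near chance level suggests independently trained models; the paper's method refines this intuition with theoretical grounding.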

About the authors

C. Theiss

Computer Vision Group

Corresponding author.
Email: christoph.theiss@uni-jena.de
Jena, Germany

C. Brust

Computer Vision Group

Email: christoph.theiss@uni-jena.de
Jena, Germany

J. Denzler

Computer Vision Group

Email: christoph.theiss@uni-jena.de
Jena, Germany


版权所有 © Pleiades Publishing, Ltd., 2018