I'm building an information system for domain-specific models.
There are two actors: a trainer and a data provider. Suppose the data provider is concerned that the trainer might steal the data.
How can I let the trainer train on the entire dataset while preventing them from stealing it?
Under the most restrictive data-sharing policy, I think a single sample is enough to give to the trainer. Since neural networks are defined by tensor shapes, all the trainer really needs to know are the input shape and the output shape.
So, slicing along the first dimension, a.k.a. the sample dimension (axis=0), I can allow the trainer to take just one sample. The trainer can prototype the deep learning architecture on that single sample and then hand the architecture back to me to be trained on the full dataset.
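To illustrate the idea, here is a minimal sketch of that axis=0 slice. The dataset shapes (1000 MNIST-like images with 10-class labels) are hypothetical; the point is that slicing `[:1]` keeps every per-sample dimension intact, so the trainer learns the shapes without receiving the data:

```python
import numpy as np

# Hypothetical full dataset held by the data provider:
# 1000 samples, each a 28x28 grayscale image with a 10-class one-hot label.
X_full = np.zeros((1000, 28, 28), dtype=np.float32)
y_full = np.zeros((1000, 10), dtype=np.float32)

# Slice along axis=0 (the sample dimension) so the trainer receives
# exactly one sample while the per-sample shape is fully preserved.
X_shared = X_full[:1]
y_shared = y_full[:1]

print(X_shared.shape)  # (1, 28, 28)
print(y_shared.shape)  # (1, 10)
```

Using `[:1]` rather than `[0]` keeps the leading batch dimension, so the trainer's code runs unchanged when it is later pointed at the full array.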
To summarize, I (the system) am just a middleman facilitating the interaction (user interfacing) between the trainer and the data provider. Is there a clever way to solve this issue?
I'm open to a specific tech-stack solution, e.g., an AWS/GCP/Azure feature that handles this, but I prefer a generic approach like giving only one sample to the trainer.
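The one-sample handoff described above could be sketched as the following protocol: the trainer writes a model factory that depends only on shapes, and the middleman, who alone holds the full data, runs the training loop. Everything here (the `build_model` factory, the linear "architecture", the gradient-descent `train` loop) is a hypothetical stand-in for a real framework, just to show that the full dataset never crosses to the trainer's side:

```python
import numpy as np

# --- Trainer's side: sees only shapes (from the one shared sample) ---
def build_model(input_shape, output_shape):
    """Hypothetical model factory supplied by the trainer.

    A single linear layer stands in for a real architecture; the key
    point is that it is constructed from shapes alone, not data."""
    rng = np.random.default_rng(0)
    n_in = int(np.prod(input_shape))
    n_out = int(np.prod(output_shape))
    return {"W": rng.normal(size=(n_in, n_out)) * 0.01}

# --- Middleman's side: holds the full data and runs training ---
def train(model, X, y, lr=0.1, epochs=50):
    """Plain gradient descent on mean squared error."""
    Xf = X.reshape(len(X), -1)
    for _ in range(epochs):
        pred = Xf @ model["W"]
        grad = Xf.T @ (pred - y) / len(X)
        model["W"] -= lr * grad
    return model

# Synthetic full dataset; it never leaves the middleman.
rng = np.random.default_rng(1)
X_full = rng.normal(size=(100, 4))
y_full = X_full @ np.ones((4, 2))

# Trainer hands over the architecture; middleman trains it.
model = build_model(input_shape=(4,), output_shape=(2,))
model = train(model, X_full, y_full)
```

The separation of `build_model` (trainer) from `train` (middleman) is the whole scheme: the trainer's deliverable is code, not a trained artifact, so the data provider's exposure is limited to the single shared sample.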