I'm trying to create a text-based game using AI. I was playing around with text classification AutoML from Vertex AI just to learn AI, and then pick the best solution for my use case. Is it possible that I can train my data online (using any cloud provider) with a text classification model, and then export the already trained model (as a script or in some other form, I don't know what the output looks like) to be consumed inside an offline game application? All the options I see so far use API endpoints, but I want it to be totally offline, so each game client has its own version of the trained model and the dataset.

Thanks

1 Answer

Is it possible that I can train my data online (using any cloud provider) with a text classification model, and then export the already trained model (as a script or in some other form, I don't know what the output looks like) to be consumed inside an offline game application?

Yes, this is possible in principle, and commonly done. All mainstream machine learning libraries, such as TensorFlow, PyTorch etc., support saving model parameters to disk and loading them later, perhaps on another machine. Even within a single library there can be multiple formats for this, depending on whether you want to save the raw parameters and define the model's structure in a script, or have the full model structure and parameters encoded together in the saved file.
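As a rough sketch of both options, here is how it might look in PyTorch (one of the libraries mentioned above); the `TextClassifier` class and file names are hypothetical placeholders, not anything a specific service exports:

```python
import torch
import torch.nn as nn

# Hypothetical model definition. With the "parameters only" approach, this
# class must also exist in the game client so the weights can be loaded back.
class TextClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=64, num_classes=5):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids, offsets):
        return self.fc(self.embedding(token_ids, offsets))

model = TextClassifier()

# Option 1: save raw parameters only; the model structure lives in your code.
torch.save(model.state_dict(), "classifier_weights.pt")

# Option 2: save structure and parameters together as TorchScript, which can
# later be loaded without the Python class definition being present.
scripted = torch.jit.script(model)
scripted.save("classifier_full.pt")
```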

The downloaded model most likely will not run by itself - saved files from trained ML models are typically data only, not executables. You will need to write code within your game client that loads the model and runs it when required.
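For example, the client-side code could look roughly like the following sketch (continuing the hypothetical PyTorch example above; the stand-in tokeniser is not real preprocessing, which would have to match whatever was used during training):

```python
import torch

# Load the saved TorchScript model once, e.g. at game start-up.
model = torch.jit.load("classifier_full.pt")
model.eval()

def classify(text: str) -> int:
    """Return the predicted class index for a piece of player input."""
    # Placeholder tokenisation -- in practice use the same tokenizer and
    # vocabulary that were used when the model was trained.
    token_ids = torch.tensor([hash(w) % 10_000 for w in text.lower().split()])
    offsets = torch.tensor([0])
    with torch.no_grad():
        logits = model(token_ids, offsets)
    return int(logits.argmax(dim=-1).item())

print(classify("open the door"))
```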

The specifics of how to do this depend on the libraries and services you are using. But in principle this is no different from creating any other asset online and transferring it for use in some other software. For instance, the same issues would arise if your service helped you design 3D assets for a graphical game.

You will need to research what is available to you in your chosen language and service (or vice-versa, look for choices that make this part work best for you). There is no one-size-fits-all solution.

In addition, you may want to check what resources will be available on game clients, and whether the model can be made to run efficiently given those resources. Dedicated machine learning training services typically run on fast GPUs. Once you are no longer training and the model only needs to perform inference, which uses far fewer resources than training, simply using the CPU may still be fast enough for your game. It seems you are not creating something that needs to run fast in real time, so this will probably be fine, but it is something you should check early on.
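One quick way to sanity-check this is to time inference on a machine similar to your target clients. A minimal sketch, again assuming the hypothetical TorchScript file from above:

```python
import time
import torch

model = torch.jit.load("classifier_full.pt")
model.eval()

token_ids = torch.randint(0, 10_000, (20,))  # a fake 20-token input
offsets = torch.tensor([0])

runs = 1000
with torch.no_grad():
    model(token_ids, offsets)  # warm-up call
    start = time.perf_counter()
    for _ in range(runs):
        model(token_ids, offsets)
    elapsed = time.perf_counter() - start

print(f"average CPU inference time: {elapsed / runs * 1000:.3f} ms")
```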

There is also nothing stopping you from using an API-based service to access the trained model. This has very different characteristics, including needing to account for the cost of the service, but for some use cases it could still be a good fit. For example, for large models that need powerful machines to run, this may be the only choice that allows clients to use the model.
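For contrast, the API-based approach looks more like the following, where the endpoint URL, API key and request/response shape are purely hypothetical and would depend on the service you pick:

```python
import requests

# Hypothetical hosted-model endpoint; every call needs network access
# and is typically billed per request.
ENDPOINT = "https://example.com/v1/models/text-classifier:predict"
API_KEY = "YOUR_API_KEY"

def classify_remote(text: str):
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"instances": [{"content": text}]},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["predictions"][0]

print(classify_remote("open the door"))
```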

Neil Slater