INFO: To update the model periodically with new data, set up a cron job that calls `pio train` followed by `pio deploy`. The engine continues to serve prediction results while re-training is in progress. After training completes, `pio deploy` automatically shuts down the existing engine server and brings up a new process on the same port.
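As a sketch, such a cron job might look like the following crontab entry. The engine directory and the `pio` binary path are placeholders; adjust them for your installation:

```shell
# Hypothetical crontab entry: re-train and re-deploy the engine nightly at 2 AM.
# "cd" into the engine directory first, since pio commands run relative to it.
# /path/to/MyEngine and /usr/local/bin/pio are assumptions for this example.
0 2 * * * cd /path/to/MyEngine && /usr/local/bin/pio train && /usr/local/bin/pio deploy
```

Chaining the two commands with `&&` ensures `pio deploy` only runs if training succeeded, so a failed training run never tears down the currently serving engine.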

INFO: **Note that if you import a *large* data set** and training appears to take forever or gets stuck, the executors most likely do not have enough memory. In that case, it is recommended to set up a Spark standalone cluster and to specify more driver and executor memory when training. Please see the [FAQ here](/resources/faq/#engine-training) for instructions.
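For illustration, arguments placed after `--` on the `pio train` command line are passed through to `spark-submit`, so memory can be raised like this (the memory sizes and master URL below are example values, not recommendations):

```shell
# Pass spark-submit options through pio train after the "--" separator.
# spark://localhost:7077 points at an assumed Spark standalone master;
# 8G / 4G are placeholder sizes to be tuned for your data set.
pio train -- --master spark://localhost:7077 --driver-memory 8G --executor-memory 4G
```

The same `--` passthrough also works with `pio deploy` if the deployed engine needs more memory at serving time.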
