##

<pre>
from accelerate import Accelerator

accelerator = Accelerator()
dataloader, model, optimizer, scheduler = accelerator.prepare(
    dataloader, model, optimizer, scheduler
)

for batch in dataloader:
    optimizer.zero_grad()
    inputs, targets = batch
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)
    optimizer.step()
    scheduler.step()

+accelerator.save_state("checkpoint_dir")
+accelerator.load_state("checkpoint_dir")</pre>
##
To save or load a checkpoint, `Accelerator` provides the `save_state` and `load_state` methods.
These methods save or load the state of the model, optimizer, and scheduler, as well as the random states and
any custom registered objects, from the main process on each device to the given folder.

**This API is designed to save and resume training states only from within the same Python script or training setup.**
##
To learn more, check out the related documentation:
- <a href="https://huggingface.co/docs/accelerate/v0.15.0/package_reference/accelerator#accelerate.Accelerator.save_state" target="_blank">`save_state` reference</a>
- <a href="https://huggingface.co/docs/accelerate/v0.15.0/package_reference/accelerator#accelerate.Accelerator.load_state" target="_blank">`load_state` reference</a>
- <a href="https://github.com/huggingface/accelerate/blob/main/examples/by_feature/checkpointing.py" target="_blank">Example script</a>