##
<pre>
+from accelerate import Accelerator
+accelerator = Accelerator()
+dataloader, model, optimizer, scheduler = accelerator.prepare(
+    dataloader, model, optimizer, scheduler
+)

for batch in dataloader:
    inputs, targets = batch
-    inputs = inputs.to(device)
-    targets = targets.to(device)
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
-    loss.backward()
+    accelerator.backward(loss)
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()</pre>
##
Everything in `accelerate` revolves around the `Accelerator` class. To use it, first instantiate one.
Then call `.prepare()`, passing in the PyTorch objects you would normally train with. It returns the
same objects, placed on the correct device and wrapped for distributed training if needed. You can
then train as usual, except that you call `accelerator.backward(loss)` instead of `loss.backward()`.
Also note that you no longer need to call `model.to(device)` or `inputs.to(device)`; device placement
is handled automatically by `accelerator.prepare()`.
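
For reference, here is a minimal sketch of what the full loop might look like end to end. The toy
`Linear` model, random `TensorDataset`, and hyperparameters are placeholder assumptions; only the
`Accelerator` calls mirror the diff above.
<pre>
# A minimal, self-contained training loop with accelerate.
# The model, data, and hyperparameters are hypothetical stand-ins.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Toy data and model stand in for your own.
dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
dataloader = DataLoader(dataset, batch_size=8)
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
loss_function = torch.nn.MSELoss()

# prepare() returns the objects in the order they were passed,
# placed on the right device and wrapped for distributed use.
dataloader, model, optimizer, scheduler = accelerator.prepare(
    dataloader, model, optimizer, scheduler
)

for batch in dataloader:
    inputs, targets = batch  # already on the correct device
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()</pre>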

##
To learn more, check out the related documentation:
- <a href="https://huggingface.co/docs/accelerate/basic_tutorials/migration" target="_blank">Migrating to 🤗 Accelerate</a>
- <a href="https://huggingface.co/docs/accelerate/package_reference/accelerator" target="_blank">The Accelerator</a>