justheuristic committed "Update README.md"
Commit: c13bbb6 • Parent(s): 6f7b9df
README.md CHANGED
@@ -33,7 +33,7 @@ As a result, the larger batch size you can fit, the more efficient you will trai
 
 ### Where can I train for free?
 
-You can train fine in colab, but if you get a
+You can train fine in colab, but if you get a K80, it's probably best to switch to other free gpu providers: [kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [aws sagemaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, this is the same notebook [running in kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) using a more powerful P100 instance.
 
 
 ### Can I use this technique with other models?