a.scherbin committed
Commit • d1c182e
Parent(s): 1c1ad30
Fix grammar
README.md CHANGED
@@ -21,11 +21,11 @@ Please, keep in mind that acceleration by device latency differs from acceleration
 | baseline | 302.39 B | 1.0 | 3.381 |
 | ENOT optimized | 120.95 B | 2.5 | 3.386 |
 
-You can use `Baseline_model.pth`
+You can use `Baseline_model.pth` or `ENOT_optimized_model.pth` in the original repo by loading a model as the generator in the following way:
 ```python
 generator = torch.load("ENOT_optimized_model.pth")
 ```
 
-
+Each of these two files contains a model object saved by `torch.save`, so you can load them only from the original repository root because of the imports.
 
 If you want to book a demo, please contact us: enot@enot.ai.
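The note added in this commit says the checkpoints can only be loaded from the original repository root: because `torch.save` pickled the whole model object rather than a plain `state_dict`, unpickling needs the repository's modules (which define the generator class) to be importable. Below is a minimal sketch of a loading script under that assumption; `REPO_ROOT` is a hypothetical path to a local clone of the original repository, not something provided by this repo.

```python
import sys

import torch

# Assumption: REPO_ROOT is a hypothetical path to a local clone of the
# original repository; its modules define the generator class that was
# pickled into the checkpoint by torch.save.
REPO_ROOT = "/path/to/original/repo"
sys.path.insert(0, REPO_ROOT)

# torch.load unpickles the full model object, so the defining modules must
# be importable; otherwise unpickling fails with ModuleNotFoundError.
generator = torch.load(f"{REPO_ROOT}/ENOT_optimized_model.pth", map_location="cpu")
generator.eval()
```

Running the same two lines of the README snippet directly from the repository root achieves the same thing; the `sys.path` insertion is only needed when loading the checkpoint from elsewhere.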