sharpenb committed
Commit 5a31bd5
1 Parent(s): 158a91b

Upload folder using huggingface_hub (#3)


- 8b5ebc647dac15a4680be8b0ad5bbf3474aed6b94d6384b3d19fe8a2decf3c0c (c180ed0012557675b2964bd9e873db046bac8855)
- ca92ecf136f31e8e4840c04c37bb63dc21f096893981aad170cdab601c7be94c (6b31a35d84be2f8f45c83d3ed4d370923c83e703)

Files changed (3)
  1. README.md +4 -3
  2. model/optimized_model.pkl +2 -2
  3. plots.png +0 -0
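
Per the commit title, this commit was produced by a huggingface_hub folder upload. Below is a hypothetical sketch of how such an upload is typically issued, shown via the `huggingface-cli` equivalent rather than the Python API; the repo id and local folder path are placeholders, not values from this repository.

```bash
# Hypothetical sketch: push a local folder to a Hub repo (placeholder names).
pip install huggingface_hub
huggingface-cli login   # authenticate with a write token first
huggingface-cli upload your-org/your-smashed-model ./local_model_folder . \
  --commit-message "Upload folder using huggingface_hub"
```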
README.md CHANGED
@@ -37,16 +37,17 @@ metrics:
 ![image info](./plots.png)
 
 **Important remarks:**
-- The quality of the model output might slightly vary compared to the base model. There might be minimal quality loss.
+- The quality of the model output might slightly vary compared to the base model.
 - These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in config.json and are obtained after a hardware warmup. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...).
 - You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
+- Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
 
 ## Setup
 
 You can run the smashed model with these steps:
 
-0. Check cuda, torch, packaging requirements are installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. For packaging and torch, run `pip install packaging torch`.
-1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take 15 minutes to install.
+0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
+1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
 ```bash
 pip install pruna-engine[gpu]==0.6.0 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
 ```
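
For convenience, steps 0 and 1 of the updated setup section can be run as a single shell snippet. This is only a sketch restating the commands from the diff above; the `python --version` check is added here as an illustrative way to confirm the Python 3.10 requirement, and a Linux machine with conda available is assumed.

```bash
# Sketch of setup steps 0 and 1 above (assumes Linux with conda available).

# Step 0: check the runtime requirements named in the README.
python --version   # README asks for Python 3.10
nvcc --version     # README asks for CUDA 12.1.0; if missing, install it with:
# conda install nvidia/label/cuda-12.1.0::cuda

# Step 1: install pruna-engine with its GPU extra and the extra index URLs from the README.
pip install pruna-engine[gpu]==0.6.0 \
    --extra-index-url https://pypi.nvidia.com \
    --extra-index-url https://pypi.ngc.nvidia.com \
    --extra-index-url https://prunaai.pythonanywhere.com/
```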
model/optimized_model.pkl CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:96e3b4105faf330e5afc18f445e7ca53170f5ec790f5676ebdb5e449026571b4
-size 2743425658
+oid sha256:7b54023c1ffb297433bae149f737d6eb661fe88c84b199ca694db6cc3c31eeec
+size 2743426225
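
The `optimized_model.pkl` entry above is a Git LFS pointer (only its sha256 and size change in this commit), so a plain clone yields the pointer file rather than the pickle itself. A minimal sketch of fetching and checking the object, assuming the repository has been cloned and `git-lfs` is installed:

```bash
# Sketch: fetch and verify the LFS-tracked pickle inside a clone of this repo.
git lfs install                                      # one-time LFS setup
git lfs pull --include="model/optimized_model.pkl"   # download the ~2.7 GB object
sha256sum model/optimized_model.pkl                  # should match the oid in the pointer above
```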
plots.png CHANGED