---
license: apache-2.0
library_name: pruna-engine
thumbnail: >-
  https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg
metrics:
  - memory_disk
  - memory_inference
  - inference_latency
  - inference_throughput
  - inference_CO2_emissions
  - inference_energy_consumption
---

# Simply make AI models cheaper, smaller, faster, and greener!

## Results

*(Results figure: benchmark metrics for the smashed model.)* The results above are for a 768x768 image size with 4-step inference on an A100. Dynamic image sizes are supported.
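For reference, a minimal sketch of how the latency metric could be reproduced, assuming the model has been loaded as shown in the Setup section below; the timing code and prompt are illustrative, not the benchmark harness used for the results above:

```python
# Illustrative latency measurement, assuming `smashed_model` is loaded as in Setup below.
import time

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
start = time.perf_counter()
smashed_model(prompt, num_inference_steps=4)  # 4-step inference, matching the benchmark setting
print(f"Inference latency: {time.perf_counter() - start:.2f} s")
```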

## Setup

You can run the smashed model by:

1. Install and import the `pruna-engine` (version 0.2.9) package. See PyPI for details on the package:

   ```bash
   pip install pruna-engine==0.2.9 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com
   ```

2. Download the model files, for example with the Hugging Face CLI:

   ```bash
   mkdir SimianLuo-LCM_Dreamshaper_v7-turbo-tiny-green-smashed
   huggingface-cli download PrunaAI/SimianLuo-LCM_Dreamshaper_v7-turbo-tiny-green-smashed --local-dir SimianLuo-LCM_Dreamshaper_v7-turbo-tiny-green-smashed --local-dir-use-symlinks False
   ```

   Alternatively, you can download them manually, or programmatically via the `huggingface_hub` API (see the sketch after this list).

3. Load the model.
4. Run the model.
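As referenced in step 2, here is a minimal sketch of a programmatic download using the `huggingface_hub` Python API, assuming `huggingface_hub` is installed:

```python
# Programmatic alternative to the huggingface-cli download in step 2.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="PrunaAI/SimianLuo-LCM_Dreamshaper_v7-turbo-tiny-green-smashed",
    local_dir="SimianLuo-LCM_Dreamshaper_v7-turbo-tiny-green-smashed",
)
```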

You can achieve this by running the following code:

```python
from pruna_engine.PrunaModel import PrunaModel  # Step (1): install and import the `pruna-engine` package.

model_path = "SimianLuo-LCM_Dreamshaper_v7-turbo-tiny-green-smashed/model"  # Step (2): specify the downloaded model path.
smashed_model = PrunaModel.load_model(model_path)  # Step (3): load the model.
smashed_model("Self-portrait oil painting, a beautiful cyborg with golden hair, 8k", num_inference_steps=4)[0]  # Step (4): run the model.
```
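As a follow-up, the sketch below shows how the generated output might be saved, assuming the call returns PIL images as typical diffusers-style pipelines do; the `[0]` indexing and `.save()` usage are assumptions, not documented `pruna-engine` behavior:

```python
# Assumed usage: treat the first returned element as a PIL image and save it.
image = smashed_model(
    "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
    num_inference_steps=4,
)[0]
image.save("output.png")  # PIL images expose .save(); adjust if the return type differs.
```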

## Configurations

The configuration info is in `config.json`.
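If you want to inspect it, a minimal sketch, assuming `config.json` sits in the download directory from the Setup section:

```python
# Minimal sketch: pretty-print the smashing configuration.
import json

with open("SimianLuo-LCM_Dreamshaper_v7-turbo-tiny-green-smashed/config.json") as f:
    config = json.load(f)
print(json.dumps(config, indent=2))
```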

## License

We follow the same license as the original model. Please check the license of the original model before using this model.

## Want to compress other models?

- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here.