shreyajn committed · Commit f666d88 · verified · Parent(s): 7d468b9

Upload README.md with huggingface_hub

Files changed (1): README.md (+16 -33)
README.md CHANGED
@@ -37,8 +37,8 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 162.955 ms | 8 - 444 MB | FP16 | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 151.842 ms | 10 - 26 MB | FP16 | NPU | [Unet-Segmentation.so](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 154.294 ms | 6 - 8 MB | FP16 | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 151.839 ms | 10 - 29 MB | FP16 | NPU | [Unet-Segmentation.so](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.so)
 
 
 
@@ -99,10 +99,10 @@ python -m qai_hub_models.models.unet_segmentation.export
 ```
 Profile Job summary of Unet-Segmentation
 --------------------------------------------------
-Device: SA8255 (Proxy) (13)
-Estimated Inference Time: 157.83 ms
-Estimated Peak Memory Range: 9.41-27.04 MB
-Compute Units: NPU (51) | Total (51)
+Device: Snapdragon X Elite CRD (11)
+Estimated Inference Time: 135.74 ms
+Estimated Peak Memory Range: 9.39-9.39 MB
+Compute Units: NPU (52) | Total (52)
 
 
 ```
@@ -123,29 +123,13 @@ in memory using the `jit.trace` and then call the `submit_compile_job` API.
 import torch
 
 import qai_hub as hub
-from qai_hub_models.models.unet_segmentation import Model
+from qai_hub_models.models.unet_segmentation import
 
 # Load the model
-torch_model = Model.from_pretrained()
 
 # Device
 device = hub.Device("Samsung Galaxy S23")
 
-# Trace model
-input_shape = torch_model.get_input_spec()
-sample_inputs = torch_model.sample_inputs()
-
-pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
-
-# Compile model on a specific device
-compile_job = hub.submit_compile_job(
-    model=pt_model,
-    device=device,
-    input_specs=torch_model.get_input_spec(),
-)
-
-# Get target model to run on-device
-target_model = compile_job.get_target_model()
 
 ```
 
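Note: the `+` side of this hunk leaves the import line truncated and drops the model-loading and trace/compile calls, even though `target_model` and `torch_model` are still referenced in the later steps. For reference, a minimal sketch of the complete flow, reconstructed from the removed lines above (the `Model` class and the call sequence come from the old side of this diff):

```python
import torch

import qai_hub as hub
from qai_hub_models.models.unet_segmentation import Model

# Load the pretrained UNet segmentation model
torch_model = Model.from_pretrained()

# Cloud-hosted target device
device = hub.Device("Samsung Galaxy S23")

# Trace the model with its sample inputs
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(
    torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]
)

# Compile for the target device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Retrieve the compiled model that the profile and inference
# steps below refer to as `target_model`
target_model = compile_job.get_target_model()
```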
@@ -158,10 +142,10 @@ provisioned in the cloud. Once the job is submitted, you can navigate to a
 provided job URL to view a variety of on-device performance metrics.
 ```python
 profile_job = hub.submit_profile_job(
-    model=target_model,
-    device=device,
-)
-
+    model=target_model,
+    device=device,
+)
+
 ```
 
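Besides viewing metrics at the job URL, the profile results can also be fetched in code. A minimal sketch, assuming the standard `qai_hub` job methods (`wait()`, `download_profile()`, and the `url` attribute are assumptions, not shown in this diff):

```python
# Hedged sketch: these qai_hub job methods are assumed,
# not taken from this diff.
profile_job.wait()                        # block until profiling completes
profile = profile_job.download_profile()  # fetch the on-device profile data
print(f"View metrics at: {profile_job.url}")
```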
 Step 3: **Verify on-device accuracy**
@@ -171,12 +155,11 @@ on sample input data on the same cloud hosted device.
 ```python
 input_data = torch_model.sample_inputs()
 inference_job = hub.submit_inference_job(
-    model=target_model,
-    device=device,
-    inputs=input_data,
-)
-
-on_device_output = inference_job.download_output_data()
+    model=target_model,
+    device=device,
+    inputs=input_data,
+)
+on_device_output = inference_job.download_output_data()
 
 ```
 With the output of the model, you can compute like PSNR, relative errors or
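(The context line above is cut off at the hunk boundary.) As an illustration of the comparison the README describes, a hedged sketch computing PSNR between the on-device output and the local torch output; the layout returned by `download_output_data()` and the wrapper model's call signature are assumptions:

```python
import numpy as np
import torch

# Reference output from the local torch model (assumes the wrapper
# is callable with the same sample inputs used above)
torch_out = torch_model(*[torch.tensor(data[0]) for _, data in input_data.items()])
expected = torch_out.detach().numpy()

# First output tensor from the device run (assumed layout:
# dict of output name -> list of per-batch arrays)
actual = list(on_device_output.values())[0][0]

# Peak signal-to-noise ratio in dB
mse = np.mean((expected - actual) ** 2)
psnr = 10 * np.log10(np.max(np.abs(expected)) ** 2 / mse)
print(f"PSNR: {psnr:.2f} dB")
```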