---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: nemo
---
# **Cosmos Tokenizer**: A suite of image and video tokenizers

[**Website**](https://research.nvidia.com/labs/dir/cosmos-tokenizer) | [**Code**](https://github.com/NVIDIA/Cosmos-Tokenizer) | **Video**

# Model Overview

## Description:
**Cosmos Tokenizer** is a suite of visual tokenizers for images and videos that delivers various compression rates while maintaining high reconstruction quality. Cosmos Tokenizer can serve as an effective and efficient building block in both diffusion-based and autoregressive models for image and video generation.

Our tokenizers come in two types: **Continuous** (C) and **Discrete** (D), each with **Image** (I) and **Video** (V) variants:
* Continuous tokenizers encode visual data into continuous latent embeddings, as used in latent diffusion models such as [Stable Diffusion](https://github.com/CompVis/stable-diffusion). These embeddings are suitable for models that generate data by sampling from continuous distributions.
* Discrete tokenizers encode visual data into discrete latent codes, mapping them into quantized indices, as used in autoregressive transformers such as [VideoPoet](https://sites.research.google/videopoet/). This discretization is required for models that generate data by optimizing a cross-entropy loss, such as GPT-style models.

|                  | Continuous ( C )    | Discrete ( D )      |
| -----------------|---------------------|---------------------|
| **Images ( I )** | Cosmos-Tokenizer-CI | Cosmos-Tokenizer-DI |
| **Videos ( V )** | Cosmos-Tokenizer-CV | Cosmos-Tokenizer-DV |

Given an image or a video, Cosmos Tokenizer outputs either continuous latents or discrete tokens. It achieves spatial compression rates of 8x8 or 16x16 and temporal compression factors of 4x or 8x, resulting in a total compression factor of up to 2048x (= 8x16x16). Cosmos Tokenizer delivers 8x more total compression than state-of-the-art (SOTA) methods while maintaining higher image quality and running up to 12x faster than the best available SOTA tokenizers.
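
The spatial and temporal factors compose multiplicatively. As a quick, illustrative sanity check of the arithmetic (the configuration labels below simply mirror the model-name suffixes):

```python
# Illustrative check of how the quoted compression factors compose
# (plain arithmetic; not part of the Cosmos-Tokenizer API).
configs = {
    "4x8x8": (4, 8, 8),      # 4x temporal, 8x8 spatial
    "8x8x8": (8, 8, 8),
    "8x16x16": (8, 16, 16),  # the most aggressive video setting
}
for name, (t, h, w) in configs.items():
    print(name, "-> total compression:", t * h * w)
# "8x16x16" gives 8 * 16 * 16 = 2048, matching the "up to 2048x" figure.
```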

**Model Developer**: NVIDIA

## Model Versions

The initial release (v1.0) of Cosmos Tokenizer includes the following tokenizers:
* **Continuous Tokenizers**
  * Continuous Image (CI) Tokenizer
    * [Cosmos-Tokenizer-CI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI8x8) (8x8 spatial compression)
    * [Cosmos-Tokenizer-CI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI16x16) (16x16 spatial compression)
  * Continuous Video (CV) Tokenizer
    * [Cosmos-Tokenizer-CV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV4x8x8) (4x temporal compression, 8x8 spatial compression)
    * [Cosmos-Tokenizer-CV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x8x8) (8x temporal compression, 8x8 spatial compression)
    * [Cosmos-Tokenizer-CV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x16x16) (8x temporal compression, 16x16 spatial compression)
* **Discrete Tokenizers**
  * Discrete Image (DI) Tokenizer
    * [Cosmos-Tokenizer-DI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI8x8) (8x8 spatial compression)
    * [Cosmos-Tokenizer-DI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI16x16) (16x16 spatial compression)
  * Discrete Video (DV) Tokenizer
    * [Cosmos-Tokenizer-DV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV4x8x8) (4x temporal compression, 8x8 spatial compression)
    * [Cosmos-Tokenizer-DV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x8x8) (8x temporal compression, 8x8 spatial compression)
    * [Cosmos-Tokenizer-DV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x16x16) (8x temporal compression, 16x16 spatial compression)

### License/Terms of Use:
[NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

Under the NVIDIA Open Model License, NVIDIA confirms:

* Models are commercially usable.
* You are free to create and distribute Derivative Models.
* NVIDIA does not claim ownership of any outputs generated using the Models or Derivative Models.

## Model Architecture:

We designed Cosmos Tokenizer using a lightweight, computationally efficient architecture with a temporally causal design. Specifically, we employ causal temporal convolution and causal temporal attention layers to preserve the natural temporal order of video frames, enabling seamless tokenization of images and videos with a single unified network architecture. The encoder and decoder form a symmetric pair, mirroring each other. The encoder starts with a 2-level [Haar wavelet](https://link.springer.com/book/10.1007/978-3-319-04295-4) transform layer, which down-samples inputs by a factor of 4 in both the spatial and temporal dimensions; likewise, the decoder ends with an inverse wavelet transform. We employ the vanilla autoencoder (AE) formulation to model the latent space of the continuous tokenizers. For the discrete tokenizers, we adopt [Finite-Scalar-Quantization](https://openreview.net/forum?id=8ishA3LxN8) (FSQ) as the latent-space quantizer.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/638fb8cf2380ffd99caf8c2a/gQH5n9iCEtqZc7uutUwdL.jpeg)
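
To make the FSQ idea concrete, here is a minimal NumPy sketch of finite scalar quantization. The per-channel levels `(8, 8, 8, 5, 5, 5)` are an assumption chosen so their product is 64,000 (a ~64K codebook), and the uniform-grid snapping below is a simplification for illustration, not the library's implementation:

```python
import numpy as np

# Minimal FSQ sketch (illustration only, not the Cosmos Tokenizer implementation).
# Assumed per-channel levels whose product gives the codebook size:
levels = np.array([8, 8, 8, 5, 5, 5])  # 8*8*8*5*5*5 = 64,000 codes

rng = np.random.default_rng(0)
z = np.tanh(rng.standard_normal(6))    # bounded continuous latent, each channel in (-1, 1)

# Snap each channel to one of L uniformly spaced values in [-1, 1].
digits = np.round((z + 1) / 2 * (levels - 1)).astype(int)  # per-channel ints in [0, L-1]
q = digits / (levels - 1) * 2 - 1                          # quantized continuous code

# Combine the per-channel digits into a single token index (mixed radix).
index = 0
for d, L in zip(digits, levels):
    index = index * L + d

assert 0 <= index < levels.prod()  # one of 64,000 possible discrete codes
```

Because each channel is quantized independently onto a small fixed grid, FSQ needs no learned codebook and no auxiliary commitment losses, which is part of its appeal over classic vector quantization.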

## Input/Output Specifications

### Encoder
* **Input**
  * **Types:** Images or Videos
  * **Format:** RGB (Red, Green, Blue)
  * **Resolution:**
    * Minimum: 256px (shorter side)
    * Maximum: Up to 4K
  * **Video Length:** Up to 8 seconds for 1080p videos (bounded by A100 80GB GPU memory; higher resolutions support shorter durations)

* **Output**
  * **Types:** Tokens
    * Continuous Image/Video Tokenizers: Continuous-valued feature vectors
    * Discrete Image/Video Tokenizers: Integer indices

### Decoder
* **Input**
  * **Types:** Tokens from encoder

* **Output**
  * **Types:** Images or Videos (matching input type)
  * **Format:** RGB (Red, Green, Blue)
  * **Resolution:** Same as input resolution
  * **Video Length:** Same as input video length

## Software Integration (Required For NVIDIA Models Only):
**Runtime Engine(s):**
* [Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer)
* [NeMo](https://github.com/NVIDIA/NeMo) (please install the latest version from the GitHub main branch)

**Supported Hardware Microarchitecture Compatibility:**
* NVIDIA Ampere (e.g., A100)
* NVIDIA Hopper (e.g., H100)

Note: We have only tested Cosmos Tokenizer with BF16 precision on Ampere and Hopper GPUs. On older NVIDIA GPUs (e.g., NVIDIA Volta), you may need to switch to FP32 precision.

**Operating System(s):**
* Linux (We have not tested on other operating systems.)

# Usage
Inference Engines:
* [Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) (PyTorch)
* [NeMo](https://github.com/NVIDIA/NeMo)

## Inference with `Cosmos-Tokenizer` (PyTorch)
### Step-1: Installation of `Cosmos-Tokenizer`
Note: Currently, the `Cosmos-Tokenizer` code is only supported on Linux.

- Clone `Cosmos-Tokenizer` from the GitHub repo [github.com/NVIDIA/Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer).

```bash
git clone https://github.com/NVIDIA/Cosmos-Tokenizer.git
cd Cosmos-Tokenizer
```
- Install dependencies:

```bash
pip3 install -r requirements.txt
apt-get install -y ffmpeg
```

- Preferably, build a Docker image using the provided Dockerfile:
```bash
docker build -t cosmos-docker -f Dockerfile .
# Run the container as:
docker run --gpus all -it --rm -v /home/${USER}:/home/${USER} \
    --workdir ${PWD} cosmos-docker /bin/bash
```

### Step-2: Download Pre-trained Checkpoints
- Create a local directory for the pre-trained checkpoints and download them from Hugging Face:

```python
from huggingface_hub import login, snapshot_download
import os

# You can get a Hugging Face token from https://huggingface.co/settings/tokens
login(token="<YOUR-HF-TOKEN>", add_to_git_credential=True)

# Specify the tokenizers you want to download.
model_names = [
    "Cosmos-Tokenizer-CI8x8",
    "Cosmos-Tokenizer-CI16x16",
    "Cosmos-Tokenizer-CV4x8x8",
    "Cosmos-Tokenizer-CV8x8x8",
    "Cosmos-Tokenizer-CV8x16x16",
    "Cosmos-Tokenizer-DI8x8",
    "Cosmos-Tokenizer-DI16x16",
    "Cosmos-Tokenizer-DV4x8x8",
    "Cosmos-Tokenizer-DV8x8x8",
    "Cosmos-Tokenizer-DV8x16x16",
]
for model_name in model_names:
    hf_repo = "nvidia/" + model_name
    local_dir = "pretrained_ckpts/" + model_name
    os.makedirs(local_dir, exist_ok=True)
    print(f"downloading {model_name} to {local_dir}...")
    snapshot_download(repo_id=hf_repo, allow_patterns=["*.jit"], local_dir=local_dir)
```

- Under each checkpoint directory `pretrained_ckpts/<model-name>`, we provide the encoder, decoder, and full autoencoder JIT models.

```bash
├── pretrained_ckpts/
│   ├── Cosmos-Tokenizer-DV8x8x8/
│   │   ├── encoder.jit
│   │   ├── decoder.jit
│   │   ├── autoencoder.jit
│   ...
```

### Step-3: Run Inference
You can use the following example commands to encode and decode images or videos. The same commands work for both continuous and discrete tokenization: simply pass the appropriate JIT-compiled checkpoint to `checkpoint_enc`, `checkpoint_dec`, or the full autoencoder checkpoint to `checkpoint`.

```python
import torch
from cosmos_tokenizer.image_lib import ImageTokenizer

model_name = "Cosmos-Tokenizer-DI8x8"
input_tensor = torch.randn(1, 3, 512, 512).to('cuda').to(torch.bfloat16)  # [B, C, H, W]
encoder = ImageTokenizer(checkpoint_enc=f'pretrained_ckpts/{model_name}/encoder.jit')
(indices, codes) = encoder.encode(input_tensor)
torch.testing.assert_close(indices.shape, (1, 64, 64))
torch.testing.assert_close(codes.shape, (1, 6, 64, 64))

# The input tensor can be reconstructed by the decoder as:
decoder = ImageTokenizer(checkpoint_dec=f'pretrained_ckpts/{model_name}/decoder.jit')
reconstructed_tensor = decoder.decode(indices)
torch.testing.assert_close(reconstructed_tensor.shape, input_tensor.shape)
```

The `indices` tensor has shape `(1, 64, 64)` and contains integer values in the range `[1..64K]`.
The `codes` tensor contains the pre-quantization continuous latents with shape `(1, 6, 64, 64)`, where `C=6` is the number of FSQ levels.

**Note**: More inference usage commands, including both TorchScript (JIT) and PyTorch inference APIs on real images and videos, can be found in our GitHub repository [github.com/NVIDIA/Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer).
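
If you want to sanity-check output shapes before loading any checkpoints, the expected index grid can be derived from the tokenizer name alone. The helper below is hypothetical (not part of the `cosmos_tokenizer` API) and assumes the causal convention that `T` input frames map to `1 + (T - 1) / temporal` latent frames, per the temporally causal design described above:

```python
import re

# Hypothetical helper (not part of the Cosmos-Tokenizer API): predict the
# discrete-index shape for a batch of one input from the tokenizer name.
def expected_index_shape(model_name, height, width, frames=None):
    spec = model_name.split("-")[-1]                 # e.g. "DV4x8x8" or "DI8x8"
    factors = [int(f) for f in re.findall(r"\d+", spec)]
    if len(factors) == 2:                            # image tokenizer: HxW only
        sh, sw = factors
        return (1, height // sh, width // sw)        # (B, h, w)
    temporal, sh, sw = factors                       # video tokenizer: TxHxW
    t = 1 + (frames - 1) // temporal                 # assumed causal convention
    return (1, t, height // sh, width // sw)         # (B, t, h, w)

print(expected_index_shape("Cosmos-Tokenizer-DI8x8", 512, 512))        # (1, 64, 64)
print(expected_index_shape("Cosmos-Tokenizer-DV4x8x8", 512, 512, 9))   # (1, 3, 64, 64)
```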

## Inference with NeMo

### Step-1: Install NeMo
Install NeMo from the GitHub `main` branch following the instructions [here](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#pip-from-a-source-branch).

### Step-2: Run Inference
Run the following code to tokenize an input tensor:

```python
import torch
from nemo.collections.common.video_tokenizers.cosmos_vision_tokenizer import CausalVideoTokenizer

model_name = "Cosmos-Tokenizer-DI8x8"
model = CausalVideoTokenizer.from_pretrained(model_name)
input_tensor = torch.randn(1, 3, 512, 512).to('cuda').to(torch.bfloat16)
(indices, codes) = model.encode(input_tensor)
```

# Evaluation

## Tokenization Performance Comparison
We have extensively evaluated the **Cosmos Tokenizer** suite on various image and video benchmark datasets.
For the image tokenizers, we follow prior art and evaluate on MS-COCO 2017, ImageNet-1K, FFHQ, and CelebA-HQ. Specifically, we use the MS-COCO 2017 validation subset of 5,000 images, the ImageNet-1K validation subset of 50,000 images, an FFHQ subset of 10,000 images, and a CelebA-HQ subset of 14,645 images as the image evaluation benchmarks.

| Tokenizer | Compression Ratio | Quantization | PSNR (MS-COCO) | SSIM (MS-COCO) | rFID (MS-COCO) | PSNR (ImageNet-1K) | SSIM (ImageNet-1K) | rFID (ImageNet-1K) | PSNR (FFHQ) | SSIM (FFHQ) | rFID (FFHQ) | PSNR (CelebA-HQ) | SSIM (CelebA-HQ) | rFID (CelebA-HQ) |
|----------------------|-------------------|--------------|----------------|----------------|----------------|---------------------|---------------------|---------------------|----------------|----------------|----------------|-------------------|-------------------|-------------------|
| Open-MAGVIT2 | 16×16 | LFQ | 30.06 | 0.502 | 6.649 | 29.62 | 0.398 | 2.701 | 31.77 | 0.774 | 1.994 | 32.36 | 0.844 | 2.865 |
| LlamaGen | 8×8 | VQ | 30.71 | 0.616 | **4.123** | 30.28 | 0.498 | **1.403** | 33.39 | 0.868 | 0.701 | 34.82 | 0.937 | 0.502 |
| LlamaGen | 16×16 | VQ | 29.93 | 0.491 | 6.077 | 29.81 | 0.448 | 1.657 | 31.58 | 0.772 | 1.366 | 32.18 | 0.837 | 1.113 |
| Cosmos-Tokenizer-DI | 8×8 | FSQ | **31.74** | **0.730** | 4.564 | **31.73** | **0.725** | 1.841 | **35.35** | **0.892** | **0.555** | **37.77** | **0.948** | **0.261** |
| Cosmos-Tokenizer-DI | 16×16 | FSQ | 30.74 | 0.591 | 12.252 | 30.69 | 0.582 | 6.529 | 33.17 | 0.808 | 7.663 | 33.86 | 0.854 | 5.953 |

* We compare with the state-of-the-art discrete image tokenizers [Open-MAGVIT2](https://github.com/TencentARC/Open-MAGVIT2) and [LlamaGen](https://github.com/FoundationVision/LlamaGen).
* Evaluation metrics:
  * Peak Signal-to-Noise Ratio (PSNR)
  * Structural Similarity (SSIM)
  * Reconstruction Fréchet Inception Distance (rFID)
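
For reference, PSNR as used above can be sketched in a few lines. This is the generic definition for images scaled to `[0, 1]`, not the evaluation harness used for the table:

```python
import numpy as np

# Generic PSNR definition: PSNR = 10 * log10(MAX^2 / MSE); higher is better.
def psnr(reference, reconstruction, max_val=1.0):
    mse = np.mean((np.asarray(reference) - np.asarray(reconstruction)) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                                    # "reference" image
noisy = np.clip(img + rng.normal(0.0, 0.01, img.shape), 0.0, 1.0)  # mild corruption
print(f"{psnr(img, noisy):.1f} dB")  # roughly 40 dB for sigma = 0.01
```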

## Runtime Comparison

The following table shows the number of parameters and the average encoding and decoding time per image or video frame, measured on a single A100 80GB GPU. For comparison, we also list the parameters and average speeds of prior state-of-the-art tokenizers with the same compression ratio.

| Tokenizer            | Resolution | Compression Ratio | Parameters | Time (ms) |
|----------------------|------------|-------------------|------------|-----------|
| LlamaGen             | 1024x1024  | 8×8               | 70M        | 475       |
| Cosmos-Tokenizer-DI  | 1024x1024  | 8×8               | 79M        | **64.2**  |

Note: We benchmarked the runtime for images under 8×8 compression and videos under 4×8×8 compression. Tokenizers with different compression ratios are not included in this comparison.

## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below. Please report security vulnerabilities or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

### Bias

Field | Response
:---------------------------------------------------------------------------------------------------|:---------------
Participation considerations from adversely impacted groups ([protected classes](https://www.senate.ca.gov/content/protected-classes)) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | None

### Explainability

Field | Response
:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------
Intended Application & Domain: | Tokenization of images and videos
Model Type: | Auto-Encoder
Intended Users: | Generative AI developers for image and video generation models
Output: | Images/Videos and Latent Tokens
Describe how the model works: | Compresses and decompresses visual input (image/video).
Technical Limitations: | Some visual information (such as small text) may not be reconstructed accurately by the model.
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Reconstruction Fréchet Video Distance (rFVD), Reconstruction Fréchet Inception Distance (rFID), Latency
Potential Known Risks: | The tokenizer can encode and reconstruct all forms of input, including what may be considered toxic, offensive, or indecent.
Licensing: | [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

### Privacy
Field | Response
:----------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------
Generatable or reverse engineerable personal information? | No
Protected class data used to create this model? | None Known
Was consent obtained for any personal data used? | None Known
How often is the dataset reviewed? | Before Release
Is a mechanism in place to honor data subject rights of access or deletion of personal data? | Not Applicable
If personal data was collected for the development of the model, was it collected directly by NVIDIA? | Not Applicable
If personal data was collected for the development of the model by NVIDIA, do you maintain or have access to disclosures made to data subjects? | Not Applicable
If personal data was collected for the development of this AI model, was it minimized to only what was required? | Not Applicable
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | Not Applicable

### Safety

Field | Response
:---------------------------------------------------|:----------------------------------
Model Application(s): | Tokenization of images and videos
Describe the life-critical impact (if present). | None Known
Use Case Restrictions: | See [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)
Model and dataset restrictions: | The principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face and may become available in cloud providers' model catalogs.

### Plus Plus (++) Promise

We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:
* Verified to comply with current applicable disclosure laws, regulations, and industry standards.
* Verified to comply with applicable privacy labeling requirements.
* Annotated to describe the collector/source (NVIDIA or a third party).
* Characterized for technical limitations.
* Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
* Reviewed before release.
* Tagged for known restrictions and potential safety implications.

# Core Contributors
Fitsum Reda, Jinwei Gu, Xian Liu, Songwei Ge, Ting-Chun Wang, Haoxiang Wang, Ming-Yu Liu