PommesPeter committed
Commit 61b5a5d
1 Parent(s): ba76561

Update README.md

Files changed (1):
1. README.md (+6 -6)
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
 
 # Lumina-Next-T2I
 
-The `Lumina-Next-T2I` model that uses Next-DiT with a 2B parameters model as well as using [Gemma-2B](https://huggingface.co/google/gemma-2b) as a text encoder. Compared with `Lumina-T2I`, it has faster inference speed, richer generation style, and more multilingual support, etc.
+The `Lumina-Next-T2I` model uses Next-DiT with a 2B parameters model as well as using [Gemma-2B](https://huggingface.co/google/gemma-2b) as a text encoder. Compared with `Lumina-T2I`, it has faster inference speed, richer generation style, and more multilingual support, etc.
 
 Our generative model has `Next-DiT` as the backbone, the text encoder is the `Gemma` 2B model, and the VAE uses a version of `sdxl` fine-tuned by stabilityai.
 
@@ -19,7 +19,7 @@ Our generative model has `Next-DiT` as the backbone, the text encoder is the `Ge
 
 ## 📰 News
 
-- [2024-5-28] 🚀🚀🚀 We updated the `Lumina-Next-T2I` model for supporting 2K Resolution image generation.
+- [2024-5-28] 🚀🚀🚀 We updated the `Lumina-Next-T2I` model to support 2K Resolution image generation.
 
 - [2024-5-16] ❗❗❗ We have converted the `.pth` weights to `.safetensors` weights. Please pull the latest code to use `demo.py` for inference.
 
@@ -51,7 +51,7 @@ On some outdated distros (e.g., CentOS 7), you may also want to check that a lat
 gcc --version
 ```
 
-Downloading Lumina-T2X repo from github:
+Downloading Lumina-T2X repo from GitHub:
 
 ```bash
 git clone https://github.com/Alpha-VLLM/Lumina-T2X
@@ -125,9 +125,9 @@ To ensure that our generative model is ready to use right out of the box, we pro
 pip install -e .
 ```
 
-2. Prepare the pretrained model
+2. Prepare the pre-trained model
 
-⭐⭐ (Recommanded) you can use huggingface_cli downloading our model:
+⭐⭐ (Recommended) you can use huggingface_cli to download our model:
 
 ```bash
 huggingface-cli download --resume-download Alpha-VLLM/Lumina-Next-T2I --local-dir /path/to/ckpt
@@ -213,7 +213,7 @@ e.g. Demo command:
 
 ```bash
 cd lumina_next_t2i
-lumina_next infer -c "config/infer/settings.yaml" "a snow man of ..." "./outputs"
+lumina_next infer -c "config/infer/settings.yaml" "a snowman of ..." "./outputs"
 ```
 
 ### Web Demo
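
Read together, the changed hunks trace one install-and-inference workflow. Below is a minimal end-to-end sketch assembled only from commands that appear in the diff above; the `cd Lumina-T2X` step between cloning and installing is an assumption not shown in the hunk context, `/path/to/ckpt` is the README's own placeholder, and the prompt is left truncated exactly as in the source:

```bash
# Clone the repo and install it in editable mode
# (the cd into the repo root is assumed, not shown in the diff context).
git clone https://github.com/Alpha-VLLM/Lumina-T2X
cd Lumina-T2X
pip install -e .

# Download the pre-trained checkpoint via huggingface-cli, the route the
# README recommends; /path/to/ckpt is the README's placeholder path.
huggingface-cli download --resume-download Alpha-VLLM/Lumina-Next-T2I --local-dir /path/to/ckpt

# Run the demo inference command from the last hunk; the prompt is
# truncated ("...") in the source and kept as-is here.
cd lumina_next_t2i
lumina_next infer -c "config/infer/settings.yaml" "a snowman of ..." "./outputs"
```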