ZHANGYUXUAN-zR committed
Commit ffe84b9 (verified) · Parent: ce2ae2e

Update README.md

Files changed (1): README.md (+4 −2)
README.md CHANGED

```diff
@@ -51,6 +51,8 @@ GLM-Image supports both text-to-image and image-to-image generation within a sin
 + Text-to-image: generates high-detail images from textual descriptions, with particularly strong performance in information-dense scenarios.
 + Image-to-image: supports a wide range of tasks, including image editing, style transfer, multi-subject consistency, and identity-preserving generation for people and objects.
 
+> You can find the full GLM-Image model implementation in the [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm_image) and [diffusers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/glm_image) libraries.
+
 ## Showcase
 
 ### T2I with dense text and knowledge
@@ -88,7 +90,7 @@ image = pipe(
     prompt=prompt,
     height=32 * 32,
     width=36 * 32,
-    num_inference_steps=30,
+    num_inference_steps=50,
     guidance_scale=1.5,
     generator=torch.Generator(device="cuda").manual_seed(42),
 ).images[0]
@@ -112,7 +114,7 @@ image = pipe(
     image=[image],  # can pass multiple images for multi-image-to-image generation, e.g. [image, image1]
     height=33 * 32,  # must set height even if it is the same as the input image
     width=32 * 32,  # must set width even if it is the same as the input image
-    num_inference_steps=30,
+    num_inference_steps=50,
     guidance_scale=1.5,
     generator=torch.Generator(device="cuda").manual_seed(42),
 ).images[0]
```
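The snippets in this diff express image dimensions as multiples of 32 (e.g. `height=32 * 32`, `width=36 * 32`), which suggests the pipeline expects 32-pixel granularity. A minimal sketch of a helper that snaps arbitrary target sizes to that grid, assuming the multiple-of-32 requirement holds (`snap_to_multiple` is a hypothetical name, not part of the diffusers API):

```python
def snap_to_multiple(value: int, base: int = 32) -> int:
    """Round a pixel dimension to the nearest multiple of `base`, with a floor of one unit."""
    return max(base, round(value / base) * base)

# Reproduce the sizes used in the diff's text-to-image call:
print(snap_to_multiple(1020))  # -> 1024, i.e. 32 * 32 (height)
print(snap_to_multiple(1150))  # -> 1152, i.e. 36 * 32 (width)
```

Values snapped this way can then be passed as `height=` and `width=` in the `pipe(...)` calls shown above.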