Cene655 committed
Commit ce6b3ea
1 Parent(s): 8684f74

Update README.md

Files changed (1): README.md (+6 -53)
README.md CHANGED
@@ -36,49 +36,12 @@ We release our two models:
 
  Weights of the model are loaded internally, but if you want to change them you can use the following examples:
 
- ```python
- from huggingface_hub import hf_hub_download
- from kandinsky3 import get_T2I_unet, get_T5encoder, get_movq, Kandinsky3T2IPipeline
-
- unet_path = hf_hub_download(
-     repo_id="ai-forever/Kandinsky3.0", filename='weights/kandinsky3.pt'
- )
- movq_path = hf_hub_download(
-     repo_id="ai-forever/Kandinsky3.0", filename='weights/movq.pt'
- )
- unet, null_embedding, projections_state_dict = get_T2I_unet(device, unet_path, fp16=fp16)
- processor, condition_encoders = get_T5encoder(device, text_encode_path, projections_state_dict, fp16=fp16)
- movq = get_movq(device, movq_path, fp16=fp16)
- t2i_pipe = Kandinsky3T2IPipeline(device, unet, null_embedding, processor, condition_encoders, movq, fp16=fp16)
- ```
-
- ```python
- from huggingface_hub import hf_hub_download
- from kandinsky3 import get_inpainting_unet, get_T5encoder, get_movq, Kandinsky3InpaintingPipeline
-
- inpainting_unet_path = hf_hub_download(
-     repo_id="ai-forever/Kandinsky3.0", filename='weights/kandinsky3_inpainting.pt', cache_dir=cache_dir
- )
- movq_path = hf_hub_download(
-     repo_id="ai-forever/Kandinsky3.0", filename='weights/movq.pt'
- )
-
- unet, null_embedding, projections_state_dict = get_inpainting_unet(device, inpainting_unet_path, fp16=fp16)
- processor, condition_encoders = get_T5encoder(device, text_encode_path, projections_state_dict, fp16=fp16)
- movq = get_movq(device, movq_path, fp16=False)  # MoVQ doesn't work properly in fp16 for inpainting
- pipe = Kandinsky3InpaintingPipeline(device, unet, null_embedding, processor, condition_encoders, movq, fp16=fp16)
- ```
-
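As an aside on the `fp16=False` caveat in the removed snippet above: half precision keeps only 10 fraction bits, which is often why decoder-side modules misbehave in fp16. A stdlib-only sketch (unrelated to the actual MoVQ code) that round-trips values through IEEE 754 half precision:

```python
import struct

def to_fp16(x: float) -> float:
    # Round-trip a Python float through IEEE 754 half precision
    # (struct's 'e' format, available since Python 3.6).
    return struct.unpack('e', struct.pack('e', x))[0]

# Only 10 fraction bits survive, so values are coarsely rounded:
print(to_fp16(1 / 3))  # 0.333251953125, not 0.3333...
# Above 2048 the spacing between representable fp16 values exceeds 1:
print(to_fp16(2048.0) == to_fp16(2049.0))  # True
```

This loss of precision is one plausible reason the README pins MoVQ to fp32 for inpainting.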
  ## Installing
 
  To install the repo, first create a conda environment:
 
  ```
- conda create -n kandinsky -y python=3.8;
- source activate kandinsky;
- pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu113/torch_stable.html;
- pip install -r requirements.txt;
  ```
  The exact dependencies were captured with `pip freeze` and can be found in `exact_requirements.txt`.
 
@@ -89,23 +52,13 @@ Check our jupyter notebooks with examples in `./examples` folder
  ### 1. text2image
 
  ```python
- from kandinsky3 import get_T2I_pipeline
-
- t2i_pipe = get_T2I_pipeline('cuda', fp16=True)
-
- image = t2i_pipe("A cute corgi lives in a house made out of sushi.")
- ```
-
- ### 2. inpainting
-
- ```python
- from kandinsky3 import get_inpainting_pipeline
-
- inp_pipe = get_inpainting_pipeline('cuda', fp16=True)
-
- image = ...  # PIL Image
- mask = ...   # NumPy array (HxW). Set 1 where the image should be masked
- image = inp_pipe("A cute corgi lives in a house made out of sushi.", image, mask)
  ```
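The removed inpainting example leaves `image` and `mask` as placeholders. As a hedged, stdlib-only illustration (the region coordinates here are hypothetical, and the real pipeline expects a NumPy HxW array), a binary mask marking a rectangular region could be built like this:

```python
H, W = 8, 10
top, left, bottom, right = 2, 3, 6, 7  # hypothetical region to inpaint

# 1 marks pixels the pipeline should repaint, 0 keeps the original image.
mask = [[1 if (top <= y < bottom and left <= x < right) else 0
         for x in range(W)]
        for y in range(H)]

print(sum(map(sum, mask)))  # 16 masked pixels
```

Something like `np.array(mask)` (dtype as the pipeline requires) would then turn it into the HxW array the removed snippet describes.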
  ## Examples of generations
 
  Weights of the model are loaded internally, but if you want to change them you can use the following example:
  ## Installing
 
  To install the repo, first create a conda environment:
 
  ```
+ pip install git+https://github.com/ai-forever/diffusers_kandinsky3.git
  ```
  The exact dependencies were captured with `pip freeze` and can be found in `exact_requirements.txt`.
 
  ### 1. text2image
 
  ```python
+ from diffusers import KandinskyV3Pipeline
+ import torch
+
+ pipe = KandinskyV3Pipeline.from_pretrained('kandinsky-community/kandinsky-3', torch_dtype=torch.float16)
+ pipe = pipe.to('cuda')
+
+ image = pipe("A cute corgi lives in a house made out of sushi.").images[0]
  ```
  ## Examples of generations