VincentGOURBIN committed
Commit c59fcfd • Parent: 35e6482

Update README.md

README.md CHANGED
@@ -10,6 +10,8 @@ tags:
 license: apache-2.0
 language:
 - en
+datasets:
+- VincentGOURBIN/FluxPrompting
 ---
 
 # mlx-community/Llama-3.2-3B-Fluxed
@@ -25,15 +27,40 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("mlx-community/Llama-3.2-3B-Fluxed")
 
-prompt="hello"
+
+model_id = "mlx-community/Llama-3.2-3B-Fluxed"
+
+model, tokenizer = load(model_id)
+
+user_need = "a toucan coding on a mac"
+
+system_message = """
+You are a prompt creation assistant for FLUX, an AI image generation model. Your mission is to help the user craft a detailed and optimized prompt by following these steps:
+
+1. **Understanding the User's Needs**:
+   - The user provides a basic idea, concept, or description.
+   - Analyze their input to determine essential details and nuances.
+
+2. **Enhancing Details**:
+   - Enrich the basic idea with vivid, specific, and descriptive elements.
+   - Include factors such as lighting, mood, style, perspective, and specific objects or elements the user wants in the scene.
+
+3. **Formatting the Prompt**:
+   - Structure the enriched description into a clear, precise, and effective prompt.
+   - Ensure the prompt is tailored for high-quality output from the FLUX model, considering its strengths (e.g., photorealistic details, fine anatomy, or artistic styles).
+
+Use this process to compose a detailed and coherent prompt. Ensure the final prompt is clear and complete, and write your response in English.
+
+Ensure that the final part is a synthesized version of the prompt.
+"""
 
 if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
-    messages = [{"role": "user", "content": prompt}]
+    messages = [{"role": "system", "content": system_message},
+                {"role": "user", "content": user_need}]
     prompt = tokenizer.apply_chat_template(
         messages, tokenize=False, add_generation_prompt=True
     )
 
-response = generate(model, tokenizer, prompt=prompt, verbose=True)
-```
+response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=1000)
+```
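
The new `datasets:` metadata links the model card to its fine-tuning data. For readers who want to look at that dataset, here is a minimal sketch using the standard Hugging Face `datasets` API; the diff does not specify split or column names, so the code only prints whatever the Hub returns:

```python
# Minimal sketch: inspect the dataset referenced by the new metadata.
# Assumes only the standard `datasets` API; split and column names
# are not given in the diff, so nothing is hard-coded here.
from datasets import load_dataset

ds = load_dataset("VincentGOURBIN/FluxPrompting")

print(ds)  # DatasetDict: available splits and their features

first_split = next(iter(ds.values()))
print(first_split[0])  # peek at one record
```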
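The updated snippet only exercises the chat-template branch of the `hasattr` guard, and the system message asks the model to end its answer with a synthesized version of the prompt. The sketch below fills in both loose ends; the plain-string fallback and the `extract_final_prompt` helper are illustrative assumptions, not part of the model card:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3.2-3B-Fluxed")

user_need = "a toucan coding on a mac"
system_message = "..."  # the FLUX prompt-crafting instructions from the card

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_need},
    ]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
else:
    # Assumed fallback: plain concatenation when no chat template exists.
    prompt = f"{system_message}\n\nUser request: {user_need}\n\nPrompt:"

response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=1000)

def extract_final_prompt(text: str) -> str:
    """Hypothetical helper: the system message says the final part of the
    answer is the synthesized prompt, so return the last non-empty paragraph."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return paragraphs[-1] if paragraphs else text.strip()

print(extract_final_prompt(response))
```

Taking the last paragraph is only a heuristic; if the model's output format drifts, anchoring on an explicit delimiter requested in the system message would be more robust.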
|