---
base_model: VincentGOURBIN/Llama-3.2-3B-Fluxed
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- mlx
license: apache-2.0
language:
- en
datasets:
- VincentGOURBIN/FluxPrompting
---

# mlx-community/Llama-3.2-3B-Fluxed

The model [mlx-community/Llama-3.2-3B-Fluxed](https://huggingface.co/mlx-community/Llama-3.2-3B-Fluxed) was converted to MLX format from [VincentGOURBIN/Llama-3.2-3B-Fluxed](https://huggingface.co/VincentGOURBIN/Llama-3.2-3B-Fluxed) using mlx-lm version **0.19.3**.
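
Such a conversion can be reproduced with mlx-lm's `convert` helper. Below is a minimal sketch, assuming the mlx-lm 0.19.x Python API; the local output path `Llama-3.2-3B-Fluxed-mlx` is illustrative:

```python
from mlx_lm import convert

# Convert the original Hugging Face checkpoint to MLX format.
# mlx_path is an illustrative local output directory; set quantize=True
# (optionally with q_bits / q_group_size) to produce a quantized variant.
convert(
    "VincentGOURBIN/Llama-3.2-3B-Fluxed",
    mlx_path="Llama-3.2-3B-Fluxed-mlx",
    quantize=False,
)
```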

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model_id = "mlx-community/Llama-3.2-3B-Fluxed"

model, tokenizer = load(model_id)

user_need = "a toucan coding on a mac"

system_message = """
You are a prompt creation assistant for FLUX, an AI image generation model. Your mission is to help the user craft a detailed and optimized prompt by following these steps:

1. **Understanding the User's Needs**:
   - The user provides a basic idea, concept, or description.
   - Analyze their input to determine essential details and nuances.

2. **Enhancing Details**:
   - Enrich the basic idea with vivid, specific, and descriptive elements.
   - Include factors such as lighting, mood, style, perspective, and specific objects or elements the user wants in the scene.

3. **Formatting the Prompt**:
   - Structure the enriched description into a clear, precise, and effective prompt.
   - Ensure the prompt is tailored for high-quality output from the FLUX model, considering its strengths (e.g., photorealistic details, fine anatomy, or artistic styles).

Use this process to compose a detailed and coherent prompt. Ensure the final prompt is clear and complete, and write your response in English.

Ensure that the final part is a synthesized version of the prompt.
"""

# Build the prompt with the model's chat template when one is available;
# otherwise fall back to the raw request so `prompt` is always defined.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_need},
    ]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
else:
    prompt = user_need

response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=1000)
```
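
For interactive use, the response can also be streamed as it is generated. Below is a minimal sketch, assuming the mlx-lm 0.19.x `stream_generate` API, which yields plain text segments (later releases yield response objects instead):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/Llama-3.2-3B-Fluxed")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "a toucan coding on a mac"}],
    tokenize=False,
    add_generation_prompt=True,
)

# Print each decoded text segment as soon as it is available.
for segment in stream_generate(model, tokenizer, prompt=prompt, max_tokens=1000):
    print(segment, end="", flush=True)
print()
```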