# VincentGOURBIN/Llama-3.2-3B-Fluxed

The model VincentGOURBIN/Llama-3.2-3B-Fluxed was converted to MLX format from meta-llama/Llama-3.2-3B-Instruct using mlx-lm version 0.19.3.
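For reference, this kind of conversion is typically done with mlx-lm's `mlx_lm.convert` entry point. The exact flags used for this checkpoint are not documented here, so the following is only a sketch (output path is an assumption):

```bash
# Hypothetical sketch: convert the base model to MLX format.
# The output path and any quantization settings are assumptions.
python -m mlx_lm.convert \
    --hf-path meta-llama/Llama-3.2-3B-Instruct \
    --mlx-path Llama-3.2-3B-mlx
```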

LoRA-trained and fused with this dataset: VincentGOURBIN/FluxPrompting.
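The card does not spell out the training commands; a minimal sketch of a LoRA fine-tune followed by a fuse step with mlx-lm might look like this (paths, iteration count, and the JSONL export of the dataset are assumptions):

```bash
# Hypothetical sketch: LoRA fine-tune on the FluxPrompting data,
# assuming it has been exported to mlx-lm's JSONL format.
python -m mlx_lm.lora \
    --model meta-llama/Llama-3.2-3B-Instruct \
    --train \
    --data path/to/FluxPrompting-jsonl \
    --iters 1000

# Fuse the trained adapters back into the base weights.
python -m mlx_lm.fuse \
    --model meta-llama/Llama-3.2-3B-Instruct \
    --adapter-path adapters \
    --save-path Llama-3.2-3B-Fluxed
```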

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("VincentGOURBIN/Llama-3.2-3B-Fluxed")

# Basic idea to be expanded into a detailed FLUX prompt.
input_text = "an aquarium of clownfish"

guide_instructions = """
You are a prompt creation assistant for FLUX, an AI image generation model. Your mission is to help the user craft a detailed and optimized prompt by following these steps:

1. **Understanding the User's Needs**:
    - The user provides a basic idea, concept, or description.
    - Analyze their input to determine essential details and nuances.

2. **Enhancing Details**:
    - Enrich the basic idea with vivid, specific, and descriptive elements.
    - Include factors such as lighting, mood, style, perspective, and specific objects or elements the user wants in the scene.

3. **Formatting the Prompt**:
    - Structure the enriched description into a clear, precise, and effective prompt.
    - Ensure the prompt is tailored for high-quality output from the FLUX model, considering its strengths (e.g., photorealistic details, fine anatomy, or artistic styles).

4. **Translations (if necessary)**:
    - If the user provides a request in another language, translate it into English for the prompt and transcribe it back into their language for clarity.

Use this process to compose a detailed and coherent prompt. Ensure the final prompt is clear and complete, and write your response in English.

Ensure that the final part is a synthesized version of the prompt.
"""

# Combine the guide instructions with the user's idea.
prompt = f"{guide_instructions}\n\nUser input: \"{input_text}\""

# Use the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=False, max_tokens=4000)
print(response)
```
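For a quick smoke test without writing any Python, the same model can also be driven from the command line via mlx-lm's generation entry point (the short prompt here is just a stand-in for the full instruction block above):

```bash
# Hypothetical one-off generation from the shell.
python -m mlx_lm.generate \
    --model VincentGOURBIN/Llama-3.2-3B-Fluxed \
    --prompt "an aquarium of clownfish" \
    --max-tokens 500
```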