|
--- |
|
tags: |
|
- text-to-image |
|
- diffusers |
|
- autotrain |
|
inference: true |
|
--- |
|
|
|
# ART Text-to-Image Generation using stabilityai/stable-diffusion-xl-base-1.0 |
|
|
|
This repository contains code and instructions for using the `adrenex/outfitt2i` model with Hugging Face's Diffusers library to generate images from textual descriptions. The model relies on diffusion-based image synthesis to produce high-quality images from the provided text prompts.
|
|
|
![1](https://huggingface.co/Falah/Iyad_Radi_SDXL1.0_Lora/resolve/main/12.png) |
|
![2](https://huggingface.co/Falah/Iyad_Radi_SDXL1.0_Lora/resolve/main/2.png) |
|
![3](https://huggingface.co/Falah/Iyad_Radi_SDXL1.0_Lora/resolve/main/3.png) |
|
![4](https://huggingface.co/Falah/Iyad_Radi_SDXL1.0_Lora/resolve/main/4.png) |
|
![5](https://huggingface.co/Falah/Iyad_Radi_SDXL1.0_Lora/resolve/main/6.png) |
|
![6](https://huggingface.co/Falah/Iyad_Radi_SDXL1.0_Lora/resolve/main/8.png) |
|
|
|
## Model Information |
|
|
|
- Tags: |
|
- text-to-image |
|
- diffusers |
|
- autotrain |
|
|
|
## Inference |
|
|
|
To use this model for generating images from text prompts, follow these steps: |
|
|
|
1. **Environment Setup:** |
|
Make sure you have Python installed on your system. You can also use a virtual environment for isolation. |
|
|
|
2. **Install Dependencies:** |
|
Install the required Python packages by running the following command: |
|
```bash |
|
pip install -r requirements.txt |
|
``` |
|
|
|
3. **Usage:**
|
|
|
```python |
|
from diffusers import DiffusionPipeline
import torch

# Initialize the DiffusionPipeline with the SDXL base model
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.to("cuda")

# Load the LoRA weights from this repository into the pipeline
pipe.load_lora_weights("adrenex/outfitt2i", weight_name="pytorch_lora_weights.safetensors")

# Text prompt for image generation
prompt = "photo of Iyad Radi with cat in the pool"

# Generate an image
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
|
``` |
|
|
|
4. **Generated Images:**

   The pipeline returns PIL images rather than saving files automatically; write them to a directory such as `output_images` yourself, as shown in the sketch below.
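
The snippet below is a minimal sketch of that saving step. It assumes the `pipe` object from the usage example above is already loaded on a CUDA device; the `output_images` directory name, the fixed seed, and the two-image batch are illustrative choices, not behavior built into this model.

```python
from pathlib import Path

import torch

# `pipe` is the DiffusionPipeline loaded in the usage example above.

# Illustrative output directory; created on demand (an assumption, not a built-in default)
output_dir = Path("output_images")
output_dir.mkdir(exist_ok=True)

# Optional: fix the random seed so the same prompt reproduces the same image
generator = torch.Generator(device="cuda").manual_seed(42)

prompt = "photo of Iyad Radi with cat in the pool"
result = pipe(
    prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
    num_images_per_prompt=2,  # generate two variations in one call
    generator=generator,
)

# result.images is a list of PIL.Image objects; save each one to disk
for i, img in enumerate(result.images):
    img.save(output_dir / f"image_{i}.png")
```

Fixing the generator seed makes a given prompt reproducible, which is handy when comparing different guidance scales or step counts.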
|
|
|
## Application in Art and Cinema Industry |
|
|
|
This model can be especially useful in the art and film production industries for creating visuals from textual descriptions. In the case of Iyad Radi, an Iraqi actor and comedian, the tool can help visualize character designs, scenes, and concepts before actual production. Directors, artists, and producers can use the generated images as references to plan and visualize their projects effectively.
|
|
|
## Credits |
|
|
|
- This repository was created and is maintained by Falah.G.Saleih.
|
|
|
## Disclaimer |
|
|
|
Please note that the generated images are driven by the input text prompts and can vary from run to run. The model's behavior is shaped by its training data, so it may not always produce accurate or desired results.
|
|
|
Feel free to experiment, provide feedback, and contribute to this repository if you'd like to enhance its functionality! |
|
--- |