

This repository is publicly accessible, but you have to accept the conditions to access its files and content.

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.

  1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
  2. If possible, please do not use this model for commercial purposes; if you do, at least give credit :)

By requesting access to this repository, you accept that your contact information (email address and username) may be shared with the model authors.


Lyoko Diffusion v2-1 Model Card

Updates in version 2.1:

  • Rebased onto the Stable Diffusion 2.1 base model (make sure to place the .yml config file next to the .ckpt)
  • Added main characters (use in prompt: Jeremy2D, Odd2D, Aelita2D, Ulrich2D, Yumi2D)
  • Added side characters (use in prompt: Sisi2D, William2D, Jim2D)
  • Added CGI characters (use in prompt: AelitaCGI, OddCGI, UlrichCGI, YumiCGI)
  • Experimental tokens: CLCrab, CLTower, CLDesert, CLIce, CLForest
  • For better results, use the negative prompt: ugly, ugly eyes, missing pupils
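For example, a prompt combining these tokens might look like this (illustrative wording only; the tokens come from the lists above):

Prompt: 2DLyoko style, Aelita2D and Odd2D standing in a snowy landscape
Negative prompt: ugly, ugly eyes, missing pupils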

You can use this model with the AUTOMATIC1111 Stable Diffusion web UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases

Samples

  • 2DLyoko art style, main characters: sample1
  • 2D characters in CGI style: sample2
  • CGI-style characters: sample3
  • Other subjects: sample4

This model lets users generate images in the styles of the TV show Code Lyoko, in both its 2D and CGI formats. To switch between styles, add the style token to your prompt: "CGILyoko style" for CGI and "2DLyoko style" for 2D. If you want to support my future projects, you can do so via https://ko-fi.com/madiator2011 or by using my model on RunPod with my referral link https://runpod.io?ref=vfker49t

This model was trained thanks to the support of the Runpod.io team.

Diffusers

from diffusers import StableDiffusionPipeline
import torch

# Repository ID as listed in this card
model_id = "Madiator2011/Lyoko-Diffusion-v1.1"

# Load the fp16 weights and move the pipeline to the GPU
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

image.save("astronaut_rides_horse.png")
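Building on the snippet above, here is a sketch of a Lyoko-specific call. The style and character tokens come from the lists earlier in this card, negative_prompt is a standard StableDiffusionPipeline argument, and the exact prompt wording is only an illustration:

# "2DLyoko style" or "CGILyoko style" selects the art style;
# character tokens such as Aelita2D or OddCGI select a character.
prompt = "2DLyoko style, Aelita2D standing in a forest"
negative_prompt = "ugly, ugly eyes, missing pupils"  # recommended above

image = pipe(prompt, negative_prompt=negative_prompt).images[0]
image.save("aelita_2d.png")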

For more detailed instructions, use cases, and examples in JAX, follow the instructions here.

Uses

Direct Use

The model is intended for research purposes only. Possible research areas and tasks include:

  • Safe deployment of models which have the potential to generate harmful content.
  • Probing and understanding the limitations and biases of generative models.
  • Generation of artworks and use in design and other artistic processes.
  • Applications in educational or creative tools.
  • Research on generative models.

Excluded uses are described below.

Misuse, Malicious Use, and Out-of-Scope Use

Note: This section is taken from the DALLE-MINI model card, but applies in the same way to Stable Diffusion v1.

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

Out-of-Scope Use

The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.

Misuse and Malicious Use

Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

  • Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
  • Intentionally promoting or propagating discriminatory content or harmful stereotypes.
  • Impersonating individuals without their consent.
  • Sexual content without consent of the people who might see it.
  • Mis- and disinformation
  • Representations of egregious violence and gore
  • Sharing of copyrighted or licensed material in violation of its terms of use.
  • Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.

Limitations and Bias

Limitations

  • The model does not achieve perfect photorealism
  • The model cannot render legible text
  • The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
  • Faces and people in general may not be generated properly.
  • The model was trained mainly with English captions and will not work as well in other languages.
  • The autoencoding part of the model is lossy
  • The model was trained on the large-scale dataset LAION-5B, which contains adult material and is not fit for product use without additional safety mechanisms and considerations.
  • No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at https://rom1504.github.io/clip-retrieval/ to possibly assist in the detection of memorized images.

Bias

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of LAION-2B(en), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.

Safety Module

The intended use of this model is with the Safety Checker in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the CLIPTextModel after generation of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
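As a rough sketch of how this surfaces in Diffusers (assuming the pipeline from the example above, which loads the safety checker by default), each generated image comes back with a flag indicating whether the checker replaced it:

# pipe(...) returns a StableDiffusionPipelineOutput with per-image NSFW flags
result = pipe("2DLyoko style, Aelita2D portrait")
for i, (img, flagged) in enumerate(zip(result.images, result.nsfw_content_detected or [])):
    if flagged:
        print(f"Image {i} was flagged by the safety checker and replaced with a black image.")
    else:
        img.save(f"output_{i}.png")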
