
# API Inference

generated from stablediffusionapi.com

## Get API Key

Get your API key from Stable Diffusion API. No payment is needed.

Replace the key in the code below and set model_id to "Graphic Art".

Coding in PHP, Node, Java, etc.? Have a look at the docs for more code examples: View docs

Model link: [View model](https://stablediffusionapi.com/models/Graphic%20Art)

Credits: View credits

View all models: View Models

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

# Request payload: replace "Your_API_key" with your own key and keep
# model_id set to "Graphic Art".
payload = json.dumps({
    "key": "Your_API_key",
    "model_id": "Graphic Art",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

# Send the generation request and print the raw JSON response.
response = requests.post(url, headers=headers, data=payload)

print(response.text)
```
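
The snippet above only prints the raw JSON response. Continuing from it, here is a minimal sketch of saving the generated images, assuming the response body contains a `status` field and, on success, an `output` list of image URLs; check the API docs for the exact schema.

```python
# Minimal sketch of handling the response. The field names ("status",
# "output") are assumptions about the v4 dreambooth response format;
# consult the API docs for the authoritative schema.
result = response.json()

if result.get("status") == "success":
    image_urls = result.get("output", [])
    for i, image_url in enumerate(image_urls):
        image_data = requests.get(image_url).content
        with open(f"graphic_art_{i}.png", "wb") as f:
            f.write(image_data)
        print(f"Saved {image_url} -> graphic_art_{i}.png")
else:
    # Queued or failed requests return a different status; inspect the body.
    print("Request not completed:", result)
```

If the request is queued rather than completed immediately, the response may report a non-success status instead of a final image list, so inspect the body before retrying.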

Use this coupon code to get 25% off: DMGG0RBN