Hi @ezzdev, this was a demo Space for the computer vision models. You can use the images in your project. Generating images outside ethical bounds is at your own risk (not safe for work).
Three of them were trained with Tensor Art and the rest with SD on RunPod. Using them with the appropriate base model will give good results, but they are still at an early training stage; better versions may come in the future.
I have just created a step-by-step procedure, with a Colab demo link attached in the repository's README.md.
Repo: https://github.com/prithivsakthiur/how-to-run-huggingface-spaces-on-local-machine-demo
Thanks for reading!
Models:
SG161222/RealVisXL_V4.0
stabilityai/sdxl-turbo
SG161222/RealVisXL_V2.0
runwayml/stable-diffusion-v1-5
Corcelio/mobius
fluently/Fluently-XL-Final
These are interesting models you can make use of for text-to-image generation.
If you want to run Hugging Face Spaces on your local machine, stay tuned to the repo https://github.com/PRITHIVSAKTHIUR/How-to-run-huggingface-spaces-on-local-machine-demo; I will update the step-by-step process there ASAP!
- Thank you
You mentioned that you have experience with JuggernautXL, Rav Animated, SDXL, Realistic Vision, DreamShaper, and LoRA for generating images locally, and that you have an RTX 3060 for processing/acceleration.
Collection Zero consists of Spaces running on ZeroGPU (NVIDIA A100) hardware; DALLE 4K and MidJourney are just quick names I kept for the trend.
Kindly state clearly what you need me to do or how I can guide you, since you mentioned experience with Automatic1111 or ComfyUI on your local hardware.
So, are you trying to run Spaces on your local hardware, or something else?
Hi @mk230580!
You asked for high-resolution image quality with fast computation. For your case, I came up with the idea of accelerating on a T4 GPU. Yes, we know the NVIDIA A100 Tensor Core GPU is unmatched in power for high-performance computing (HPC) tasks, but apart from that you can use the T4 as a hardware accelerator. You also asked how to run this outside of Hugging Face: use a T4 in Google Colab or any other compatible workspace. The A100 is also available in Colab, but you must be a premium user.
Running on a local system works the same way; you just need an HF token to pass for login:

# Authenticate with Hugging Face
from huggingface_hub import login

# Log in to Hugging Face using the provided token
hf_token = '---pass your hugging face access token---'
login(hf_token)
Visit my Colab space for an instance of running outside of HF.
Hardware accelerator: T4 GPU
As we know, the A100 and L4 are available in Colab for premium users or at a cost; the T4 is free for a certain amount of computation, so I went with it. On local hardware, you know what to do...
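Before loading any model, it is worth confirming that the runtime really has the accelerator you selected. A minimal sketch, assuming PyTorch is installed (as it is by default in Colab):

```python
import torch

# Check which accelerator the runtime actually provides before loading a model
if torch.cuda.is_available():
    # On a free-tier Colab GPU runtime this is typically a Tesla T4
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU found; switch the Colab runtime type to a GPU accelerator")
```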
Second thing: the amount of detail in your prompt also shapes the results. See higher-detail prompts at https://huggingface.co/spaces/prithivMLmods/Top-Prompt-Collection, or on freeflo.ai or PromptHero, for more detailed results.
Colab link (example with stabilityai/sdxl-turbo):
https://colab.research.google.com/drive/1zYj5w0howOT3kiuASjn8PnBUXGh_MvhJ#scrollTo=Ok9PcD_kVwUI
You can use various models like RealVisXL_V4.0, SDXL-Turbo, and more for better results.
**After passing the access token, remove your token before sharing with others.**
Hi Yasirkh,
Yes! You can run any Python-based SDK in Google Colab with the appropriate model and its api_url, using the requests library. For example, if you are trying to run a text-to-image model, you can do it with the Inference API.
For example:
import requests

API_URL = "----------your api addr goes here---------"
headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.content

image_bytes = query({
    "inputs": "Astronaut riding a horse",
})
# You can open the image with PIL.Image, for example:
import io
from PIL import Image

image = Image.open(io.BytesIO(image_bytes))
You can find your access token under HF Settings >> Access Tokens; paste it into headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"} in place of the x's.
Example: headers = {"Authorization": "Bearer hf_ABC1234567890xyz9876543210"}. Also install the required PyPI libraries.
Then add the Gradio Blocks for whatever you need the interface function to perform.
For your information, this is not the original MidJourney model; I named the Space "MidJourney" because it performs similar work. Give it a try and let me know whether you got it working. One more thing: never commit/push code while access tokens are visible; use secret keys or environment variables (when working in repos).
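A minimal sketch of keeping the token out of committed code by reading it from an environment variable (the variable name HF_TOKEN here is just a convention; in a Space you would add it under Settings > Repository secrets, which are exposed to the app as environment variables):

```python
import os

# Read the token from the environment so it never appears in the source code
hf_token = os.environ.get("HF_TOKEN", "")
headers = {"Authorization": f"Bearer {hf_token}"}

if not hf_token:
    print("Warning: HF_TOKEN is not set; API calls will fail with 401")
```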
If you face any difficulties, you can reply to me again; I will surely help you with the logic, or I will share the Colab link to make the case easier.
Try it in Google Colab, a Jupyter notebook, DataSpell, or even VS Code, wherever you find it easiest.
- Thank you!
It's time to share the collection of prompts with rich parametric detail that produce the most detailed, flawless images.
You can check out the collection at: prithivMLmods/Top-Prompt-Collection
More than 200 highly detailed prompts have been used in the Spaces.
@prithivMLmods
Thank you for reading!
Huggingface APK Update v0.0.4
1. Fixed pinch-to-zoom.
2. Added swipe gestures.
3. Fixed auto-rotate.
4. Updated app identifiers.
Download the app now!
Huggingface v0.0.4 download
Link: https://drive.google.com/file/d/1xEiH7LMdP14fBG-xDuSqKje5TRLV1PuS/view?usp=sharing
Like, Share, Follow
Huggingface for Android
Median (Go Native) plugin:
version 0.0.1
prithivMLmods/Huggingface-Android-App