Yatharth Sharma
YaTharThShaRma999
AI & ML interests
None yet
Organizations
None yet
YaTharThShaRma999's activity
ComfyUI support?
4
#30 opened 9 days ago by JeriffCheng
Can inference support multi-GPU?
1
#11 opened 9 days ago by loong
lmdeploy: 'language_model.model.layers.0.feed_forward.w1.weight'
2
#1 opened 13 days ago by BenjaminTT
FAQ: Does lightning-lora favor real people? Or does it affect the comic book model?
4
#57 opened 18 days ago by lyk0013
Is StableDiffusion3Pipeline.from_single_file not supported in a local process?
4
#157 opened 19 days ago by shawn-kang
Why is it taking so much time to run? Will I be able to get a video or not?
2
#80 opened 24 days ago by shobhitagnihotri
GGUF Model not loading at all
2
#30 opened 3 months ago by jdhadljasnajd
Examples?
#2 opened 26 days ago by YaTharThShaRma999
Wait, a 4x13b model?
3
#1 opened 7 months ago by mirek190
How does this version compare with Paligemma?
1
#6 opened about 1 month ago by ojeokelias
Fine-tune and LoRA instructions when?
2
#29 opened about 1 month ago by Vigilence
wow impressive super fast!!! insanely BAD HANDS
2
#1 opened about 1 month ago by froilo
What is the difference between Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers-Distilled and the non-distilled version?
1
#1 opened about 1 month ago by balikita
Girl lying on the grass
24
#89 opened about 1 month ago by DIOcheng
torch.cuda.OutOfMemoryError: CUDA out of memory. How to run on a graphics card with limited memory?
7
#94 opened about 1 month ago by zhengjx
This LLM seems to be trolling me??
3
#9 opened about 1 month ago by skynet24
Access error: Couldn't connect to the Hub: 403 Client Error.
3
#45 opened about 1 month ago by Guytron
What are the t5xxl models required for, and what are the differences apart from the sizes? Thanks
2
#42 opened about 1 month ago by tetsujin007
Why are the results of using ControlNet and the IP-Adapter not very good?
2
#11 opened about 1 month ago by michaelj
Rename README.md to README.md A man is driving down a country road when he realizes his car is running out of gas. He spots a farm in the distance and decides to stop there for help. The farmer, a friendly man, offers him some gas and starts a conversation. "Did you know," says the farmer, "that we have a cow here on the farm that can tell jokes?" "Really?" the man responds, incredulous. "I'd love to see that." The farmer leads him to the barn, where the cow is standing. He whispers something in the cow's ear, and then the cow looks at the man and says: "Mooo... do you know why the cow went to space? To see the Milky Way!" The man, impressed, thanks the farmer and asks, "How did you manage to teach the cow to tell jokes?" The farmer smiles and says, "Oh, that's easy. You just have to make sure she always has a captivated audience. After all, who doesn't love good farm stories?" The man laughs, thanks the farmer again, and goes on his way, thinking that it was the most surreal and amusing experience he'd ever had.
3
#29 opened about 2 months ago by cabradapets
What does this model do?
3
#2 opened about 2 months ago by MayensGuds
It’s 6m parameters but phi 3 small?
#2 opened about 2 months ago by YaTharThShaRma999
When will this be open?
7
#1 opened about 2 months ago by YaTharThShaRma999
I would like to access the model, please
12
#1 opened about 2 months ago by ngbien83
What was this?
#5 opened about 2 months ago by YaTharThShaRma999
How to Accelerate Audio Generation to Real-Time Speeds
1
#4 opened 2 months ago by samarthshrivas
Running the model on a Tesla T4
6
#2 opened about 2 months ago by xiaoajie738
What kind of GPU is needed to run this model locally on-prem?
1
#8 opened about 2 months ago by eliastick
Is there a way to use it in StableDiffusionXLControlNetImg2ImgPipeline?
1
#1 opened 2 months ago by michaelj
Upload 056.jpg
1
#76 opened 2 months ago by beetlespinner
Is this SSD-1B or a pruned version?
2
#2 opened 2 months ago by YaTharThShaRma999
Quality compared to PixArt alpha/sigma and Hyper-SD?
#1 opened 2 months ago by YaTharThShaRma999
Performance Degradation After Weight Update
7
#18 opened 3 months ago by evilperson068
Is this a model similar to Hyper-SD? Can it run in the same way?
2
#2 opened 3 months ago by guicen
What is going on with your vocab?
3
#10 opened 2 months ago by xzuyn
Feedback
1
#31 opened 3 months ago by YaTharThShaRma999
Parameters, and what LoRA?
1
#1 opened 3 months ago by YaTharThShaRma999
Trying to use llama-2-7b-chat.Q4_K_M.gguf with/without tensorflow weights
1
#33 opened 3 months ago by cgthayer
Love the project; when will CFG Hyper-SD come?
12
#24 opened 3 months ago by brandostrong
I believe this might be a pretrained model?
1
#4 opened 3 months ago by ccibeekeoc42
Reducing Latency in a Locally Hosted Model
1
#8 opened 3 months ago by anshulchandel
Question about quality
1
#4 opened 3 months ago by YaTharThShaRma999
invalid magic number 00000000
8
#1 opened 10 months ago by BigDeeper
Waiting for Meta-Llama-3-8B-Instruct-gguf
1
#29 opened 3 months ago by anuragrawal
How do you estimate the number of GPUs required to run this model?
1
#29 opened 3 months ago by vishjoshi
What are the differences between this and Qwen/CodeQwen1.5-7B?
6
#5 opened 3 months ago by Kalemnor
Model is paraphrasing text instead of citing it verbatim
3
#7 opened 3 months ago by sszymczyk
Quantization for more than 8 bits?
3
#25 opened 3 months ago by ibalampanis
Very slow response on LM Studio with these settings
3
#4 opened 4 months ago by yassersharaf
Issue loading
1
#2 opened 3 months ago by baelof
Today's version of llama.cpp results in an error
9
#4 opened 7 months ago by LaferriereJC
Necessary material for llama2
7
#27 opened 12 months ago by Samitoo
No module named 'optimum'
1
#1 opened 4 months ago by deeplearner123
Hugging Face version coming?
5
#2 opened 6 months ago by ctranslate2-4you
CUDA error - the provided PTX was compiled with an unsupported toolchain
12
#23 opened 9 months ago by melindmi
Unable to convert `llama-2-70b-chat.ggmlv3.q4_K_M.bin` to GGUF
2
#12 opened 11 months ago by barha
CUDA out of memory
18
#4 opened 4 months ago by RedAISkye
Fix support for SGLang inference
11
#2 opened 5 months ago by aliencaocao