Iamexperimenting
AI & ML interests
None yet
Organizations
None yet
Iamexperimenting's activity
saving, loading, and running inference with the Gemma model
13
#64 opened 2 months ago by Iamexperimenting
gemma-2b with multi-GPU
3
#44 opened about 1 month ago by Iamexperimenting
evaluation loss not calculated during training?
2
#43 opened about 1 month ago by Iamexperimenting
pretraining Gemma on a domain dataset
7
#41 opened about 2 months ago by Iamexperimenting
Very high loss compared to Keras
5
#46 opened 3 months ago by tanimazsin130
Don't download, Google scuttled this model
9
#77 opened about 2 months ago by Tom-Neverwinter
FSDP with Nvidia GPU
1
#84 opened about 1 month ago by Iamexperimenting
Fine-tune a Gemma model for question answering
16
#62 opened 2 months ago by Iamexperimenting
description for special tokens
1
#21 opened 2 months ago by Iamexperimenting
Fine-tuning with SQL coder
5
#14 opened 3 months ago by Iamexperimenting
instruction fine-tuning template
2
#57 opened 3 months ago by Iamexperimenting
About supervised fine-tuning
4
#9 opened 3 months ago by aniketjha1304
context length higher than 100K
2
#13 opened 3 months ago by Iamexperimenting
schema considerations and warnings
5
#3 opened 9 months ago by nobitha
How large can database schemas be?
2
#1 opened 7 months ago by PankajShukla
Can you share fine-tuning scripts and datasets?
2
#8 opened 3 months ago by TomPei
Context-Window
3
#6 opened 3 months ago by HuggySSO
Adding buffer memory to a Q&A application
#92 opened 6 months ago by Iamexperimenting
Dolly answers on its own.
1
#15 opened 9 months ago by Iamexperimenting
question answering using llama
1
#4 opened 11 months ago by Iamexperimenting
question answering using llama
1
#7 opened 11 months ago by Iamexperimenting
What happens when I pass an out-of-vocabulary word to this model?
1
#7 opened 11 months ago by Iamexperimenting
Can anyone help me get a prompt template for a question-answering model?
2
#54 opened 11 months ago by Iamexperimenting
Can anyone help me get a prompt template for a question-answering model?
1
#18 opened 11 months ago by Iamexperimenting
Unable to trace the model using TorchScript
1
#3 opened 12 months ago by Iamexperimenting
int8 model consumes the same GPU memory as the default model
2
#15 opened 12 months ago by Iamexperimenting
understanding LLMs
1
#14 opened about 1 year ago by Iamexperimenting
Question-answering model using Dolly
13
#59 opened about 1 year ago by Iamexperimenting