# Barun (bapatra)

**AI & ML interests:** Natural Language Processing, large model scaling, alignment research, multimodality
## bapatra's activity
- **auto_map in config.json doesn't contain Phi3SmallForSequenceClassification** (1 comment): #13, opened about 1 month ago by kyeongpil
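Several of the threads below concern the `auto_map` section of `config.json`, which tells Transformers which remote-code class each `Auto*` entry point should load. A minimal sketch of the change being requested, assuming hypothetical module and class names (`modeling_phi3_small.Phi3SmallForSequenceClassification` is illustrative, not taken from the actual repository):

```python
import json

# Hypothetical config.json fragment: an existing auto_map plus the
# sequence-classification entry the discussions ask for. The module paths
# below are assumptions for illustration, not the repository's real values.
config = {
    "auto_map": {
        "AutoConfig": "configuration_phi3_small.Phi3SmallConfig",
        "AutoModelForCausalLM": "modeling_phi3_small.Phi3SmallForCausalLM",
    }
}

# Add the classifier mapping so AutoModelForSequenceClassification
# can resolve the remote-code class when trust_remote_code=True.
config["auto_map"]["AutoModelForSequenceClassification"] = (
    "modeling_phi3_small.Phi3SmallForSequenceClassification"
)

print(json.dumps(config, indent=2))
```

With this entry in place, `AutoModelForSequenceClassification.from_pretrained(...)` can dispatch to the custom class instead of raising a lookup error.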
- **Add the classifiers to the auto_map** (1 comment): #76, opened 23 days ago by mrm196
- **Enabled the AutoModelForSequenceClassification in the auto_map**: #22, opened 26 days ago by mrm196
- **Ensure the query_states and key_states remain in bf16** (1 comment): #21, opened 26 days ago by mrm196
- **Keep getting AssertionError: Flash Attention is not available when loading the model** (1 comment): #7, opened about 1 month ago by Complete-your-profile
- **Phi 3 small crashing error** (3 comments): #12, opened about 1 month ago by aravindpai
- **Crash in fine-tuning** (4 comments): #14, opened about 1 month ago by tanliboy
- **How should data be packed?** (2 comments): #16, opened about 1 month ago by shiyue
- **What pad token should I use for fine-tuning?** (1 comment): #10, opened about 1 month ago by faizsameerahmed96
- **Shared memory error** (5 comments): #15, opened about 1 month ago by marktenenholtz
- **Update tokenization_phi3_small.py** (1 comment): #18, opened about 1 month ago by damajercakms
- **Update tokenization_phi3_small.py** (1 comment): #14, opened about 1 month ago by damajercakms
- **RuntimeError: FlashAttention only support fp16 and bf16 data type, during fine-tuning** (7 comments): #11, opened about 1 month ago by faizsameerahmed96
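The dtype-related threads above (keeping `query_states`/`key_states` in bf16, and the `FlashAttention only support fp16 and bf16 data type` error) share the same root cause: FlashAttention kernels reject fp32 inputs. A minimal sketch of the explicit cast discussed in those threads, using dummy tensors with arbitrary shapes rather than the model's real activations:

```python
import torch

# FlashAttention kernels accept only fp16/bf16 inputs. Activations created
# in fp32 must be cast before the attention call. The (batch, heads, seq,
# head_dim) shape below is illustrative only.
query_states = torch.randn(1, 8, 16, 64)  # torch.randn defaults to float32
key_states = torch.randn(1, 8, 16, 64)

# Cast both to bf16 so downstream attention kernels see a supported dtype.
query_states = query_states.to(torch.bfloat16)
key_states = key_states.to(torch.bfloat16)
```

The same principle applies at load time: passing `torch_dtype=torch.bfloat16` to `from_pretrained` keeps the whole model in a FlashAttention-compatible dtype instead of the fp32 default.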
- **Where can we download the Phi-3 small?** (1 comment): #11, opened about 1 month ago by sebastienbo
- **Why a different architecture from mini and medium?** (5 comments): #5, opened about 2 months ago by winddude
- **Target_module of this phi-3-small model** (8 comments): #3, opened about 2 months ago by hackint0sh
- **Flash Attention error during inference** (5 comments): #7, opened about 2 months ago by hackint0sh
- **Is it possible that this is a small model of GPT-3.5?** (1 comment): #6, opened about 2 months ago by Trangle
- **Inference not working in any environment** (1 comment): #5, opened about 2 months ago by LoreVitCon
- **Add attention_bias to make TGI work**: #5, opened about 2 months ago by philschmid
- **Add attention_bias to make TGI work**: #2, opened about 2 months ago by philschmid
- **Add attention_bias to make TGI work**: #3, opened about 2 months ago by philschmid
- **Add attention_bias to make TGI work**: #4, opened about 2 months ago by philschmid
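The four PRs above add an `attention_bias` field to `config.json` so that Text Generation Inference can parse the model configuration. A sketch of the shape of that change, with other config fields elided and the chosen value (`False`, i.e. no bias in the attention projections) stated as an assumption rather than taken from the actual PRs:

```python
import json

# Hypothetical minimal config before the change; real configs carry many
# more fields (vocab_size, num_attention_heads, ...), elided here.
config = {"model_type": "phi3small", "hidden_size": 4096}

# The PRs add an explicit attention_bias flag so TGI's config parser
# finds the field instead of failing. False (no bias) is an assumption.
config["attention_bias"] = False

print(json.dumps(config, indent=2))
```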
- **Please add a GGUF version!** (1 comment): #2, opened about 2 months ago by Anderson452
- **Tokenizer question** (2 comments): #2, opened about 2 months ago by psinger
- **Upload 3 files**: #1, opened about 2 months ago by bapatra