Björn Plüster
bjoernp
AI & ML interests: None yet
Recent Activity
liked a dataset 7 days ago: microsoft/orca-agentinstruct-1M-v1
liked a dataset 14 days ago: wyu1/Leopard-Instruct
liked a model about 1 month ago: osunlp/UGround
Organizations
bjoernp's activity
Can you share how you converted this? · 7 · #1 opened 5 months ago by bjoernp
Hf safetensors version · 9 · #3 opened 5 months ago by ehartford
use_flash_attention_2=True · 3 · #9 opened 7 months ago by TillFetzer
leo-mistral-hessianai-7b-chat for privateGPT · 3 · #8 opened 7 months ago by Dodo124
Update tokenizer_config.json · #1 opened 7 months ago by bjoernp
Problems with flash-attention2 · 1 · #13 opened 9 months ago by omaer0
Loss function? · 1 · #10 opened 12 months ago by narvind2003
No multi GPU inference support? · 8 · #4 opened 12 months ago by dataautogpt3
Llama2 vs Mistral · 1 · #2 opened 12 months ago by lightningRalf
Add languages · #8 opened 12 months ago by lbourdois
Missing module/classes: from transformers.cache_utils import Cache, DynamicCache · 1 · #7 opened 12 months ago by panopstor
changed "tokenizer" typo to be the one we create. · #4 opened 12 months ago by dyngnosis
Which transformers version is being used here? · 2 · #6 opened 12 months ago by Promptengineering
Flash dependency (locks out non-NVIDIA GPUs) · 3 · #4 opened 12 months ago by Thalesian
Update modeling_moe_mistral.py · #5 opened 12 months ago by bjoernp
Trying to quantize. Running into the issue below. Any suggestions? · 1 · #5 opened 12 months ago by BigDeeper
small readme fix · #1 opened 12 months ago by jphme
Update modeling_moe_mistral.py · 2 · #1 opened 12 months ago by bjoernp
AWQ variant · 4 · #2 opened 12 months ago by SebastianBodza