Community Tab
Start discussions and open PRs in the Community Tab.
Ideas to improve fine-tuned BLOOM-560m for dialogue using the LIGHT dataset · #176 opened 4 days ago by andrewnoel
How to use BLOOM for text summarization? · 2 replies · #172 opened 13 days ago by ankit5678
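For the summarization question above (#172): BLOOM is a plain causal language model, so summarization is usually framed as a prompt rather than a dedicated task head. A minimal sketch, assuming the transformers library and using the small bigscience/bloom-560m checkpoint so the example stays runnable (the article text is a placeholder):

    # Minimal sketch: summarization by prompting BLOOM with a "TL;DR:" cue.
    # bigscience/bloom-560m is used only to keep the example small and runnable.
    from transformers import pipeline

    generator = pipeline("text-generation", model="bigscience/bloom-560m")

    article = "Your long input text goes here..."
    prompt = f"{article}\n\nTL;DR:"

    out = generator(prompt, max_new_tokens=60, do_sample=False)
    summary = out[0]["generated_text"][len(prompt):].strip()
    print(summary)

Few-shot variants (prepending one or two example article/summary pairs) tend to give more reliable summaries than the bare "TL;DR:" cue.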
Learn how to prompt Bloom properly! Create chatbots with different personalities, use Bloom as a Linux terminal, generate code, and more! · #171 opened 14 days ago by NigelTheMaker
BloomTokenizerFast does not exist · 4 replies · #170 opened 15 days ago by hiddenchamp
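On #170: BloomTokenizerFast only exists in newer transformers releases, so the import fails on older installs. A minimal sketch of the usual workaround, assuming an up-to-date transformers, is to let AutoTokenizer resolve the right class from the model repo:

    # Sketch: load the BLOOM tokenizer without importing BloomTokenizerFast directly.
    # AutoTokenizer picks the correct (fast) tokenizer class for the repo.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
    ids = tokenizer("Hello, BLOOM!", return_tensors="pt")
    print(ids["input_ids"])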
Something between BLOOM-176B and BLOOM-7B1? · 1 reply · #169 opened 15 days ago by gameveloster
How do I run Bloom? · #168 opened 15 days ago by xGaBx
Eats up all RAM + 163 GB of swap · 4 replies · #167 opened 18 days ago by LuvIsBadToTheBone
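On #167: a plain from_pretrained call tries to materialize every checkpoint shard in host memory, which is what exhausts RAM and swap. A minimal sketch of one mitigation, assuming accelerate is installed alongside transformers; the offload folder path is just an illustrative choice:

    # Sketch: avoid exhausting host RAM by letting Accelerate place and offload
    # the weights (GPU first, then CPU, then disk) instead of loading everything
    # into memory at once.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained(
        "bigscience/bloom",
        device_map="auto",          # spread layers over the available devices
        torch_dtype=torch.bfloat16, # half the memory of fp32 weights
        offload_folder="./offload", # spill whatever does not fit to disk
    )
    tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")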

How can I pretrain BLOOM? · 3 replies · #166 opened 18 days ago by fmf1287
Looking at what makes ChatGPT special... · 6 replies · #165 opened 18 days ago by Ioulaum
Download size · 2 replies · #163 opened 23 days ago by zz99mz
How can I train Bloom on a specific set of texts? · 3 replies · #162 opened 28 days ago by boomer22
What does it take to self-host Bloom? How much money would that cost? · 6 replies · #161 opened about 1 month ago by damc
CUDA error while running bloom-accelerate-inference.py | RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasLtMatmul · #160 opened about 1 month ago by rsoods
How can I make Bloom stop generating when it should? · #159 opened about 1 month ago by lewiswu1209
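On #159: generate() stops only at the model's EOS token or the token budget, so a common pattern is to trim the output at a caller-chosen stop string. A minimal sketch, using the small bigscience/bloom-560m checkpoint and a hypothetical Q/A-style stop marker:

    # Sketch: stop BLOOM's output at a caller-chosen stop string.
    # generate() halts on EOS or max_new_tokens; any text after the stop
    # marker is trimmed in post-processing.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "bigscience/bloom-560m"  # small checkpoint for illustration
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    prompt = "Q: What is BLOOM?\nA:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=80, do_sample=False)
    text = tokenizer.decode(output[0], skip_special_tokens=True)

    stop = "\nQ:"  # stop before the model starts a new question
    answer = text[len(prompt):].split(stop)[0].strip()
    print(answer)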
What is the best way to run Bloom-176B locally at interactive speeds? · 6 replies · #156 opened about 1 month ago by aeva
A way to run inference on and fine-tune BLOOM-176B from Google Colab or locally · 2 replies · #152 opened about 2 months ago by borzunov
Problem using the bloom-1b3 version · 2 replies · #151 opened about 2 months ago by dionatandiego11
BLOOM API inference · 3 replies · #150 opened about 2 months ago by Matuesz
Prompt tuning in Bloom for long-form text generation · 8 replies · #149 opened about 2 months ago by info2000
Code generation · #147 opened about 2 months ago by celestialme
Commercial Use... · 1 reply · #146 opened about 2 months ago by Siyam
Could I generate sample disinformation for research purposes? · 1 reply · #145 opened about 2 months ago by Infinity1337
Batching token length · 3 replies · #144 opened about 2 months ago by mishavee
How does Hugging Face have so many hosted APIs running at once? · 12 replies · #132 opened 3 months ago by mishavee
Change seed in the Inference API · 4 replies · #131 opened 3 months ago by imwide
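On #131: whether the hosted Inference API exposes a seed parameter is exactly what the thread asks; when reproducing sampled outputs locally with transformers, the seed can be fixed with set_seed. A minimal sketch:

    # Sketch: make sampled generations reproducible locally by fixing the RNG seed.
    from transformers import pipeline, set_seed

    generator = pipeline("text-generation", model="bigscience/bloom-560m")

    set_seed(42)  # same seed -> same sample for the same prompt and parameters
    print(generator("The seed controls", max_new_tokens=20, do_sample=True))

    set_seed(42)  # resetting it reproduces the previous output
    print(generator("The seed controls", max_new_tokens=20, do_sample=True))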
What GPU power do you need just to run Bloom, not fine-tune it? · 1 reply · #130 opened 3 months ago by mishavee
Where can I find a script to fine-tune Bloom? · 4 replies · #129 opened 3 months ago by mishavee
Suggest a cloud GPU service for fine-tuning Bloom. · 4 replies · #128 opened 3 months ago by mishavee
How large exactly is Bloom when loading all the checkpoints into GPU RAM? · 3 replies · #127 opened 3 months ago by mishavee
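On #127 (and the similar #109 further down): a rough, weights-only estimate follows directly from parameter count times bytes per parameter; activations, the KV cache and framework overhead come on top. A small worked example:

    # Rough, weights-only estimate for BLOOM-176B: memory ~ parameters x bytes/parameter.
    params = 176e9  # ~176 billion parameters
    for dtype, bytes_per_param in [("fp32", 4), ("bf16/fp16", 2), ("int8", 1)]:
        print(f"{dtype}: ~{params * bytes_per_param / 1e9:.0f} GB")
    # fp32 ~ 704 GB, bf16 ~ 352 GB, int8 ~ 176 GB of weights alone.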
Paraphrasing with Bloom · 6 replies · #125 opened 3 months ago by mishavee
Code generation with Bloom · 3 replies · #123 opened 3 months ago by SummerSigh
Text summarization with Bloom · 9 replies · #122 opened 3 months ago by mishavee
Locally run instance on an RTX 3090 - Performance? · 9 replies · #119 opened 4 months ago by byeai
Separate training data by country · 1 reply · #117 opened 4 months ago by wponhf
Dropout · 4 replies · #116 opened 4 months ago by Muennighoff
How can I train Bloom? · 4 replies · #111 opened 4 months ago by s3rgio27
How much GPU memory is needed? · 4 replies · #109 opened 5 months ago by mazib
Speed of the hosted Inference API for the interactive playground · 16 replies · #107 opened 5 months ago by pai4451
How to do fine-tuning? · 4 replies · #105 opened 5 months ago by nora1008
Support for the Korean language · 7 replies · #104 opened 5 months ago by MasBakr
Is few-shot performance optimization possible? (keep the initial prompt's encoded state) · #101 opened 5 months ago by Saiyan
Unable to load Bloom on an EC2 instance · 1 reply · #99 opened 5 months ago by viniciusguimaraes
I am doing a project where I need to feed Bloom more than 1000 tokens. Is there a paid API with a higher token limit? · 1 reply · #95 opened 5 months ago by rexer3000
"Temperature needs to be >0" error · 2 replies · #94 opened 5 months ago by sleven
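On #94: temperature scales (divides) the logits during sampling, so zero is rejected by the API. The usual alternatives are a small positive temperature or greedy decoding with sampling turned off, as in this local sketch:

    # Sketch: temperature only applies when sampling; for deterministic output,
    # disable sampling instead of sending temperature=0.
    from transformers import pipeline

    generator = pipeline("text-generation", model="bigscience/bloom-560m")

    # Greedy (deterministic) decoding: no temperature involved.
    print(generator("BLOOM is", max_new_tokens=20, do_sample=False))

    # Sampling: temperature must be > 0 because it divides the logits.
    print(generator("BLOOM is", max_new_tokens=20, do_sample=True, temperature=0.7))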

Why does Bloom like mattresses so much? · 1 reply · #90 opened 5 months ago by aaronhance
Getting log probabilities from the Inference API? · 12 replies · #89 opened 5 months ago by Brendan
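On #89: whether the hosted Inference API returns token log-probabilities is what the thread asks; locally, generate() can return per-step scores from which log-probabilities are straightforward to compute. A minimal sketch with the small bigscience/bloom-560m checkpoint:

    # Sketch: recover per-token log-probabilities from a local generate() call
    # by requesting the step-wise scores and applying log-softmax to them.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "bigscience/bloom-560m"  # small checkpoint for illustration
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    out = model.generate(
        **inputs, max_new_tokens=5, do_sample=False,
        return_dict_in_generate=True, output_scores=True,
    )

    gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
    for tok, step_scores in zip(gen_tokens, out.scores):
        logprobs = torch.log_softmax(step_scores[0], dim=-1)
        print(repr(tokenizer.decode([int(tok)])), float(logprobs[tok]))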

Can Bloom-176B really be evaluated on normal hardware at a rate of 3 minutes per token? · 25 replies · #87 opened 5 months ago by Philimipp
How to use the Bloom InferenceApi with Colab? · #82 opened 6 months ago by Kareem-Gamal
Querying Bloom from the Hugging Face Inference API · 7 replies · #81 opened 6 months ago by sbreit
From Megatron GPT-2 or GPT-3? · 2 replies · #75 opened 6 months ago by jmassot
