---
license: apache-2.0
inference: false
datasets:
- CohereForAI/xP3x
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- DataProvenanceInitiative/Commercially-Verified-Licenses
- CohereForAI/aya_evaluation_suite
language:
- afr
- amh
- ara
- aze
- bel
- ben
- bul
- cat
- ceb
- ces
- cym
- dan
- deu
- ell
- eng
- epo
- est
- eus
- fin
- fil
- fra
- fry
- gla
- gle
- glg
- guj
- hat
- hau
- heb
- hin
- hun
- hye
- ibo
- ind
- isl
- ita
- jav
- jpn
- kan
- kat
- kaz
- khm
- kir
- kor
- kur
- lao
- lav
- lat
- lit
- ltz
- mal
- mar
- mkd
- mlg
- mlt
- mon
- mri
- msa
- mya
- nep
- nld
- nor
- nso
- nya
- ory
- pan
- pes
- pol
- por
- pus
- ron
- rus
- sin
- slk
- slv
- smo
- sna
- snd
- som
- sot
- spa
- sqi
- srp
- sun
- swa
- swe
- tam
- tel
- tgk
- tha
- tur
- twi
- ukr
- urd
- uzb
- vie
- xho
- yid
- yor
- zho
- zul
metrics:
- accuracy
- bleu
---
# Aya-101-GGUF
This repo contains GGUF-format model files for Cohere For AI's [Aya-101](https://huggingface.co/CohereForAI/aya-101) model, quantized using Hugging Face's [candle](https://github.com/huggingface/candle) framework.
## How to use with the candle quantized T5 example
Visit the [candle quantized T5 example](https://github.com/huggingface/candle/tree/main/candle-examples/examples/quantized-t5) for more detailed instructions.
Clone the candle repo:

```bash
git clone https://github.com/huggingface/candle.git
cd candle/candle-examples
```
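The run command below resolves `--model-id` against the Hugging Face Hub, so a separate download is not required. If you nonetheless want to fetch or inspect the files ahead of time, here is a minimal sketch assuming the `huggingface_hub` CLI (not part of candle) is installed; the target directory name is only an example:

```bash
# Optional: pre-download the quantized weights and config from the Hub.
# Assumes the huggingface_hub CLI is installed: pip install -U "huggingface_hub[cli]"
# "./aya-101-GGUF" is an arbitrary local directory name.
huggingface-cli download kcoopermiller/aya-101-GGUF \
  aya-101.Q2_K.gguf config.json \
  --local-dir ./aya-101-GGUF
```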
Run the following command:

```bash
cargo run --example quantized-t5 --release -- \
  --model-id "kcoopermiller/aya-101-GGUF" \
  --weight-file "aya-101.Q2_K.gguf" \
  --config-file "config.json" \
  --prompt "भारत में इतनी सारी भाषाएँ क्यों हैं?" \
  --temperature 0
```
Note: this example runs on the CPU.
Available weight files:
- aya-101.Q2_K.gguf
- aya-101.Q3_K.gguf
- aya-101.Q4_0.gguf
- aya-101.Q4_1.gguf
- aya-101.Q4_K.gguf
- aya-101.Q5_0.gguf
- aya-101.Q5_1.gguf
- aya-101.Q5_K.gguf
- aya-101.Q6_K.gguf
- aya-101.Q8_0.gguf
- aya-101.Q8_1.gguf
- aya-101.Q8_K.gguf
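Higher-bit quantizations are larger downloads but generally stay closer to the original model's quality. To try a different file, point `--weight-file` at any entry above; for example, assuming the same setup as the command shown earlier:

```bash
# Same invocation as above, but with the 6-bit K-quant weights
# (larger file, typically better output quality than Q2_K).
cargo run --example quantized-t5 --release -- \
  --model-id "kcoopermiller/aya-101-GGUF" \
  --weight-file "aya-101.Q6_K.gguf" \
  --config-file "config.json" \
  --prompt "भारत में इतनी सारी भाषाएँ क्यों हैं?" \
  --temperature 0
```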