---
license: llama2
---
# Llama-2-Amharic
Uses Llama-2-7b (not the chat variant) as the base model.
You can use the provided inference demo script to run inference. Note that it expects to be executed in the context of llama-recipes (https://github.com/facebookresearch/llama-recipes/tree/main/src/llama_recipes/inference).
Ensure that you replace the Llama-2 tokenizer with the Llama-2-Amharic tokenizer.
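If you are not using the llama-recipes demo script, the model and tokenizer can also be loaded directly with `transformers`. The sketch below is a minimal, unofficial example; `MODEL_PATH` and `TOKENIZER_PATH` are placeholders for wherever you have stored this model's weights and the Llama-2-Amharic tokenizer.

```python
# Minimal loading sketch (placeholder paths, not the official script).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

MODEL_PATH = "path/to/llama-2-amharic"                 # placeholder: this model's weights
TOKENIZER_PATH = "path/to/llama-2-amharic-tokenizer"   # must be the Amharic tokenizer, not the stock Llama-2 one

tokenizer = LlamaTokenizer.from_pretrained(TOKENIZER_PATH)
model = LlamaForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # assumed; pick a dtype/device setup that fits your hardware
    device_map="auto",
)
model.eval()
```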
To avoid hallucinations, use a low top-k.
Use the format `"$system\nHuman: $prompt\nAssistant [Amharic] : "`.
Specify Amharic or English in the prompt to control the response language; the prompt itself can be in either language. A generation sketch using this format follows the example below.
Example:
"Below is an interaction between a human and an AI fluent in English and Amharic, providing reliable and informative answers.
The AI is supposed to answer test questions from the human with short responses saying just the answer and nothing else.
Human: [Amharic question text]
Assistant [Amharic] : "
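Continuing from the loading sketch above, the snippet below shows one way to assemble the prompt in the required format and sample with a low top-k. The decoding settings (`top_k=10`, `temperature=0.7`, `max_new_tokens=100`) are illustrative assumptions, not values recommended by the authors, and the question string is a placeholder for your own Amharic or English prompt.

```python
# Build the prompt as "$system\nHuman: $prompt\nAssistant [Amharic] : ".
system = (
    "Below is an interaction between a human and an AI fluent in English and Amharic, "
    "providing reliable and informative answers. The AI is supposed to answer test questions "
    "from the human with short responses saying just the answer and nothing else."
)
question = "..."  # placeholder: your Amharic or English question

full_prompt = f"{system}\nHuman: {question}\nAssistant [Amharic] : "

inputs = tokenizer(full_prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,
        top_k=10,          # low top-k, per the hallucination note above
        temperature=0.7,   # assumed value; tune as needed
    )

# Decode only the newly generated tokens, dropping the echoed prompt.
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(answer)
```

As noted above, specifying English instead of Amharic in the prompt (for example in the `Assistant [...]` tag) steers the model toward an English response.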
Cite:
```
@misc{andersland2024amharic,
title={Amharic LLaMA and LLaVA: Multimodal LLMs for Low Resource Languages},
author={Michael Andersland},
year={2024},
eprint={2403.06354},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```