
Llama-2-Amharic

Uses Llama-2-7b as the base model (not the chat variant).

You can run inference with the provided inference demo script. Note that it expects to be executed in the context of llama-recipes (https://github.com/facebookresearch/llama-recipes/tree/main/src/llama_recipes/inference).

Ensure that you replace the Llama-2 tokenizer with the Llama-2-Amharic tokenizer.
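
For reference, below is a minimal loading sketch using Hugging Face transformers rather than the llama-recipes demo script. The repo id and the assumption that the Amharic tokenizer ships in this repo are mine; adjust paths to your setup.

```python
# Minimal loading sketch (assumptions: this Hub repo ships both the fine-tuned
# weights and the Amharic tokenizer, and the repo id below is correct).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "iocuydi/llama-2-amharic-3784m"  # assumed repo id

# Load the Amharic tokenizer here rather than the stock Llama-2 tokenizer.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit the 7B model on one GPU
    device_map="auto",          # requires `accelerate`
)
model.eval()
```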

To reduce hallucinations, use a low top-k value when sampling.
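
Continuing the loading sketch above, a generation call along these lines illustrates the idea; only the low top-k guidance comes from this card, the specific numbers are assumptions, and `prompt` is expected to follow the format described below.

```python
# Sample with a low top-k to keep the output grounded.
# `model` and `tokenizer` come from the loading sketch above; `prompt` should
# follow the format described below. The numeric values are illustrative only.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    do_sample=True,
    top_k=5,            # keep top-k low, per the guidance above
    temperature=0.7,    # assumed; tune as needed
    max_new_tokens=256,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```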

Use the prompt format "$system\nHuman: $prompt\nAssistant [Amharic] : "

Specify Amharic or English in the prompt to control the response language. The prompt itself can be written in either language.
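
A small helper that assembles this template could look like the sketch below; the function name is made up, and treating the bracketed tag as the place to select Amharic vs. English output is my reading of the note above.

```python
def build_prompt(system: str, prompt: str, response_language: str = "Amharic") -> str:
    r"""Assemble "$system\nHuman: $prompt\nAssistant [<language>] : ".

    `response_language` is assumed to be either "Amharic" or "English".
    """
    return f"{system}\nHuman: {prompt}\nAssistant [{response_language}] : "
```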

Example:

"Below is an interaction between a human and an AI fluent in English and Amharic, providing reliable and informative answers. The AI is supposed to answer test questions from the human with short responses saying just the answer and nothing else.

Human: αŠ αˆ›αˆ­αŠ› αˆ˜αŠ“αŒˆαˆ­ α‹¨αˆšα‰½αˆ αˆ›αˆ½αŠ• αˆ˜αˆ›αˆͺα‹« αˆžα‹΄αˆαŠ• αˆˆαˆ›αˆ΅α‰°α‹‹α‹ˆα‰… αˆ¨αŒ…αˆ α‰₯ሎግ ጻፍፒ α‰’α‹«αŠ•αˆ΅ 3 αŠ αŠ•α‰€αŒΎα‰½ αˆ˜αˆ†αŠ• αŠ αˆˆα‰£α‰Έα‹α’

Assistant [Amharic] : "

(The Amharic question asks the model to write a long blog post, at least three paragraphs, introducing a machine learning model that can speak Amharic.)
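
Putting it together, the example above could be run roughly as follows, reusing the loading and `build_prompt` sketches; slicing off the echoed prompt tokens is an assumption about how the raw output is structured.

```python
system = (
    "Below is an interaction between a human and an AI fluent in English and Amharic, "
    "providing reliable and informative answers. The AI is supposed to answer test "
    "questions from the human with short responses saying just the answer and nothing else."
)
question = "αŠ αˆ›αˆ­αŠ› αˆ˜αŠ“αŒˆαˆ­ α‹¨αˆšα‰½αˆ αˆ›αˆ½αŠ• αˆ˜αˆ›αˆͺα‹« αˆžα‹΄αˆαŠ• αˆˆαˆ›αˆ΅α‰°α‹‹α‹ˆα‰… αˆ¨αŒ…αˆ α‰₯ሎግ ጻፍፒ α‰’α‹«αŠ•αˆ΅ 3 αŠ αŠ•α‰€αŒΎα‰½ αˆ˜αˆ†αŠ• αŠ αˆˆα‰£α‰Έα‹α’"

prompt = build_prompt(system, question)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, do_sample=True, top_k=5, max_new_tokens=512)

# Keep only the newly generated tokens, dropping the echoed prompt.
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```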

Cite:

@misc{andersland2024amharic,
      title={Amharic LLaMA and LLaVA: Multimodal LLMs for Low Resource Languages}, 
      author={Michael Andersland},
      year={2024},
      eprint={2403.06354},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}