---
license: mit
language:
- en
- kn
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- bilingual
- kannada
- english
---

(This repo contains the sharded version of the [original](https://huggingface.co/Cognitive-Lab/Ambari-7B-base-v0.1) Ambari-7B model.)

# Ambari-7B-Base-v0.1 (sharded)

## Overview

Ambari-7B-Base-v0.1 is the first bilingual English/Kannada model in the Ambari series, developed and released by [Cognitivelab.in](https://www.cognitivelab.in/). Built on Meta's Llama 2, this 7B-parameter model is the outcome of the pretraining stage, which involved training on approximately 500 million new Kannada tokens.

## Usage

To use the Ambari-7B-Base-v0.1 model, you can follow the example code below:

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

# Load the pretrained model and its tokenizer
model = LlamaForCausalLM.from_pretrained('Cognitive-Lab/Ambari-7B-Base-v0.1')
tokenizer = LlamaTokenizer.from_pretrained('Cognitive-Lab/Ambari-7B-Base-v0.1')

# Tokenize a Kannada prompt ("Explain the history of Kannada in detail")
prompt = "ಕನ್ನಡದ ಇತಿಹಾಸವನ್ನು ವಿವರವಾಗಿ ತಿಳಿಸಿ"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a continuation (max_length caps the total token count, prompt included)
generate_ids = model.generate(inputs.input_ids, max_length=30)
decoded_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

print(decoded_output)
```
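
Because this repository hosts a sharded checkpoint, the weights can also be loaded in reduced precision and mapped automatically across available devices. The sketch below is a minimal example, assuming `accelerate` is installed (required for `device_map="auto"`); it uses the original repo id from the snippet above, so substitute this repo's id to load the sharded weights, and adjust the precision to your hardware.

```python
# A minimal sketch of memory-friendly loading (assumes `accelerate` is
# installed; swap in this repo's id to load the sharded checkpoint).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    'Cognitive-Lab/Ambari-7B-Base-v0.1',
    torch_dtype=torch.float16,  # half precision roughly halves memory use
    device_map="auto",          # place layers across available GPUs/CPU
)
tokenizer = AutoTokenizer.from_pretrained('Cognitive-Lab/Ambari-7B-Base-v0.1')

inputs = tokenizer("ಕನ್ನಡದ ಇತಿಹಾಸವನ್ನು ವಿವರವಾಗಿ ತಿಳಿಸಿ", return_tensors="pt").to(model.device)
generate_ids = model.generate(inputs.input_ids, max_new_tokens=100)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0])
```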

**Important:** This is a base (foundation) model and is not designed for independent use. We strongly advise finetuning it on your particular task(s) of interest before deploying it in a production environment, and adapting the code above to your specific use case.
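
As one possible starting point for such finetuning, the sketch below attaches LoRA adapters using the `peft` library. This is not an official recipe from the model authors; the rank, target modules, and other hyperparameters are illustrative assumptions to adapt per task.

```python
# Illustrative LoRA setup with the `peft` library (an assumption, not the
# authors' recipe; hyperparameters and target modules are placeholders).
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

model = LlamaForCausalLM.from_pretrained('Cognitive-Lab/Ambari-7B-Base-v0.1')

lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trained

# From here, train on your task-specific data, e.g. with transformers.Trainer.
```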