anamikac2708 committed
Commit 1d8689b
1 Parent(s): 8ba406f

Update README.md

Files changed (1):
  1. README.md +50 -4
README.md CHANGED
@@ -1,7 +1,7 @@
---
language:
- en
- license: apache-2.0
+ license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
@@ -14,9 +14,55 @@ base_model: unsloth/llama-3-8b-bnb-4bit
# Uploaded model

- **Developed by:** anamikac2708
- - **License:** apache-2.0
+ - **License:** cc-by-nc-4.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit

- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library, using the open-source finance dataset https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset developed for finance applications by the FinLang team; a minimal training sketch is shown below.
+
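+ The following is a minimal, illustrative sketch of an Unsloth + TRL fine-tuning setup, not the exact recipe used here; the hyperparameters, LoRA configuration, split name, and the dataset's `text` column are assumptions:
+
+ ```
+ # Illustrative only: hyperparameters, LoRA settings, and the dataset column are assumptions.
+ from datasets import load_dataset
+ from transformers import TrainingArguments
+ from trl import SFTTrainer
+ from unsloth import FastLanguageModel
+
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name="unsloth/llama-3-8b-bnb-4bit",  # 4-bit quantized base model
+     max_seq_length=2048,
+     load_in_4bit=True,
+ )
+ model = FastLanguageModel.get_peft_model(  # attach LoRA adapters for memory-efficient training
+     model,
+     r=16,
+     lora_alpha=16,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+ )
+
+ dataset = load_dataset("FinLang/investopedia-instruction-tuning-dataset", split="train")
+
+ trainer = SFTTrainer(
+     model=model,
+     tokenizer=tokenizer,
+     train_dataset=dataset,
+     dataset_text_field="text",  # assumed name of the formatted-prompt column
+     max_seq_length=2048,
+     args=TrainingArguments(
+         per_device_train_batch_size=2,
+         gradient_accumulation_steps=4,
+         learning_rate=2e-4,
+         num_train_epochs=1,
+         output_dir="outputs",
+     ),
+ )
+ trainer.train()
+ ```
+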
+ The model was then converted to Q5_K_M GGUF format using llama.cpp (https://github.com/ggerganov/llama.cpp); a rough sketch of that conversion flow follows. This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses.
+
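+ A rough sketch of the llama.cpp conversion and quantization flow (script and binary names such as `convert-hf-to-gguf.py` and `quantize` vary across llama.cpp versions, and the checkpoint path is a placeholder):
+
+ ```
+ # Clone and build llama.cpp, then convert the merged fp16 checkpoint to GGUF and quantize to Q5_K_M.
+ git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make
+ python convert-hf-to-gguf.py /path/to/merged-model --outfile model-f16.gguf
+ ./quantize model-f16.gguf model-Q5_K_M.gguf Q5_K_M
+ ```
+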
+ ## How to Get Started with the Model
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+ 1. Install llama-cpp-python:
+
+ ```
+ # CUDA (cuBLAS) build; drop CMAKE_ARGS for a CPU-only install. The leading "!" is notebook syntax.
+ ! CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
+ ```
+
+ 2. Run the model:
+
+ ```
+ from llama_cpp import Llama
+ from transformers import AutoTokenizer
+
+ # Build a Llama-3 prompt from the system and user turns; the assistant turn is left out for the model to generate.
+ tokenizer = AutoTokenizer.from_pretrained('meta-llama/Meta-Llama-3-8B')
+ example = [{'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\n try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\n CONTEXT:\n D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\n', 'role': 'system'}, {'content': ' In which universities did the individual obtain their academic qualifications?\n', 'role': 'user'}, {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'}]
+ prompt = tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True)
+
+ # Download the quantized weights from the Hub and load them with llama-cpp-python.
+ llm = Llama.from_pretrained(
+     repo_id="anamikac2708/Llama3-8b-finetuned-investopedia-q5_k_m_gguf",
+     filename="*Q5_K_M.gguf",
+     verbose=False
+ )
+
+ output = llm(
+     prompt,
+     max_tokens=256,  # Generate up to 256 tokens
+     stop=["<|eot_id|>"],  # Llama-3 end-of-turn token
+     echo=True,  # Whether to echo the prompt
+ )
+
+ print(output['choices'][0]['text'])
+ ```
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+ Coming soon!
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+ This model is a quick demonstration that the base model can easily be fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking into ways to make the model respect guardrails more reliably, allowing deployment in environments that require moderated outputs.
+
+ ## License
+
+ Since non-commercial datasets were used for fine-tuning, we release this model under cc-by-nc-4.0.