ar08 committed on
Commit 8e5c333
1 Parent(s): 28b8d67

Update README.md

Files changed (1)
  1. README.md +89 -1
README.md CHANGED
@@ -14,4 +14,92 @@ tags:
 
 - **Developed by:** ar08
 - **License:** apache-2.0
- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+ # USAGE
+ ```python
+ # pip install llama-cpp-python
+ # Install transformers from source - only needed for versions <= v4.34
+ # pip install git+https://github.com/huggingface/transformers.git
+ # pip install accelerate
+
+ # Instantiate the model
+ from llama_cpp import Llama
+
+ my_awesome_llama_model = Llama(model_path="./MY_AWESOME_MODEL")
+
+ # Define the generation parameters
+ prompt = "This is a prompt"
+ max_tokens = 100
+ temperature = 0.3
+ top_p = 0.1
+ echo = True
+ stop = ["Q", "\n"]
+
+ # Run inference
+ model_output = my_awesome_llama_model(
+     prompt,
+     max_tokens=max_tokens,
+     temperature=temperature,
+     top_p=top_p,
+     echo=echo,
+     stop=stop,
+ )
+ final_result = model_output["choices"][0]["text"].strip()
+
+ # The same flow, wrapped in a reusable helper:
+
+ # GLOBAL VARIABLES
+ my_model_path = "your downloaded model name/path"
+ CONTEXT_SIZE = 512
+
+ # LOAD THE MODEL
+ model = Llama(model_path=my_model_path,
+               n_ctx=CONTEXT_SIZE)
+
+ def generate_text_from_prompt(user_prompt,
+                               max_tokens=100,
+                               temperature=0.3,
+                               top_p=0.1,
+                               echo=True,
+                               stop=["Q", "\n"]):
+     # Run inference with the given parameters
+     model_output = model(
+         user_prompt,
+         max_tokens=max_tokens,
+         temperature=temperature,
+         top_p=top_p,
+         echo=echo,
+         stop=stop,
+     )
+     return model_output
+
+ if __name__ == "__main__":
+     my_prompt = "What do you think about the inclusion policies in Tech companies?"
+     model_response = generate_text_from_prompt(my_prompt)
+     print(model_response)
+ ```
+ # Chat template:
+ ```
+ "instruction": "Summarize the given article in 200 Words.",
+ "input": "https://www.bbc.com/news/world-51461830",
+ "output": "The recent protests in Belarus are part of a growing wave of discontent against the rule of Alexander Lukashenko, who has led the country since 1994. Demonstrations against his government have been ongoing since August, and come in the wake of an election widely seen as rigged. As police and security forces have cracked down on protesters, the movement has only grown, with hundreds of thousands of people taking to the streets to call for new leadership. Human rights activists have accused the government of using extreme violence in order prosecute protesters, leading to an international outcry and sanctions from the European Union. As the situation in Belarus continues to unfold, the international community is monitoring the situation closely - hoping for a peaceful resolution, while also standing firm against the oppressive tactics employed by the Belarusian state."
+
+ "instruction": "Classify the following shape according to simple geometry.",
+ "input": "<Picture Attached>",
+ "output": "Triangle"
+
+ "instruction": "Given a recent news article, write a headline for it.",
+ "input": "https://www.news.example.com/article-1",
+ "output": "\"NASA Launches a Mars 2020 Mission to Search for Signs of Ancient Life\""
+ ```
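
The records above are instruction/input/output examples. As a rough illustration only, here is a minimal sketch of how one such record could be folded into a single prompt string for the usage code above; the `build_prompt` helper, its field labels, and the separators are assumptions, since the model card does not pin down an exact prompt layout.

```python
# Minimal sketch (assumed field labels, not specified by the model card):
# turn one instruction/input record into a single prompt string.

def build_prompt(instruction: str, model_input: str = "") -> str:
    """Assemble an instruction and optional input into one prompt string."""
    if model_input:
        return (
            f"Instruction: {instruction}\n"
            f"Input: {model_input}\n"
            f"Output:"
        )
    return f"Instruction: {instruction}\nOutput:"


# Example use with the helper defined in the USAGE section above
# (uncomment once the model has been loaded):
# prompt = build_prompt(
#     "Given a recent news article, write a headline for it.",
#     "https://www.news.example.com/article-1",
# )
# print(generate_text_from_prompt(prompt))
```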