ar08 committed on
Commit f02ebe5
1 Parent(s): 47722b7

Update README.md

Files changed (1): README.md (+41 −111)
README.md CHANGED
@@ -1,3 +1,4 @@
  ---
  language:
  - en
@@ -10,130 +11,59 @@ tags:

  ---

- # Uploaded model
-
  - **Developed by:** ar08
  - **License:** apache-2.0
- # USAGE
- ```python
- # pip install llama-cpp-python
- # Install transformers from source - only needed for versions <= v4.34
- # pip install git+https://github.com/huggingface/transformers.git
- # pip install accelerate
-
- # Instantiate the model
- from llama_cpp import Llama
-
- my_awesome_llama_model = Llama(model_path="./MY_AWESOME_MODEL")
-
- prompt = "This is a prompt"
- max_tokens = 100
- temperature = 0.3
- top_p = 0.1
- echo = True
- stop = ["Q", "\n"]
-
- # Define the parameters
- model_output = my_awesome_llama_model(
-     prompt,
-     max_tokens=max_tokens,
-     temperature=temperature,
-     top_p=top_p,
-     echo=echo,
-     stop=stop,
- )
- final_result = model_output["choices"][0]["text"].strip()
-
- # GLOBAL VARIABLES
- my_model_path = "your downloaded model name/path"
- CONTEXT_SIZE = 512
-
- # LOAD THE MODEL
- model = Llama(model_path=my_model_path, n_ctx=CONTEXT_SIZE)
-
- def generate_text_from_prompt(user_prompt,
-                               max_tokens=100,
-                               temperature=0.3,
-                               top_p=0.1,
-                               echo=True,
-                               stop=["Q", "\n"]):
-     # Define the parameters
-     model_output = model(
-         user_prompt,
-         max_tokens=max_tokens,
-         temperature=temperature,
-         top_p=top_p,
-         echo=echo,
-         stop=stop,
-     )
-     return model_output
-
- if __name__ == "__main__":
-     my_prompt = "What do you think about the inclusion policies in Tech companies?"
-     model_response = generate_text_from_prompt(my_prompt)
-     print(model_response)
- ```
- # Chat template:
- ```
- "instruction": "Summarize the given article in 200 Words.",
- "input": "https://www.bbc.com/news/world-51461830",
- "output": "The recent protests in Belarus are part of a growing wave of discontent against the rule of Alexander Lukashenko, who has led the country since 1994. Demonstrations against his government have been ongoing since August, and come in the wake of an election widely seen as rigged. As police and security forces have cracked down on protesters, the movement has only grown, with hundreds of thousands of people taking to the streets to call for new leadership. Human rights activists have accused the government of using extreme violence in order to prosecute protesters, leading to an international outcry and sanctions from the European Union. As the situation in Belarus continues to unfold, the international community is monitoring the situation closely - hoping for a peaceful resolution, while also standing firm against the oppressive tactics employed by the Belarusian state."
-
- "instruction": "Classify the following shape according to simple geometry.",
- "input": "<Picture Attached>",
- "output": "Triangle"
-
- "instruction": "Given a recent news article, write a headline for it.",
- "input": "https://www.news.example.com/article-1",
- "output": "\"NASA Launches a Mars 2020 Mission to Search for Signs of Ancient Life\""
- ```
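The records above are instruction/input/output triples. As a minimal sketch, such a record can be rendered into a single prompt string before it is passed to the model; the `format_record` helper and its template wording below are illustrative assumptions, not something this repository ships:

```python
def format_record(record):
    """Render an instruction/input/output record as a single prompt string."""
    parts = [f"Instruction: {record['instruction']}"]
    # The "input" field is optional in records of this style
    if record.get("input"):
        parts.append(f"Input: {record['input']}")
    parts.append(f"Output: {record['output']}")
    return "\n".join(parts)


example = {
    "instruction": "Classify the following shape according to simple geometry.",
    "input": "<Picture Attached>",
    "output": "Triangle",
}
print(format_record(example))
```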
 
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- llama
- gguf
---

# Uploaded Model

- **Developed by:** ar08
- **License:** apache-2.0

## USAGE

To use this model, follow the steps below:

1. **Install the necessary packages:**
```shell
# Install llama-cpp-python
pip install llama-cpp-python

# Install transformers from source - only needed for versions <= v4.34
pip install git+https://github.com/huggingface/transformers.git

# Install accelerate
pip install accelerate
```

2. **Instantiate the model:**
```python
from llama_cpp import Llama

# Define the model path
my_model_path = "your_downloaded_model_name/path"
CONTEXT_SIZE = 512

# Load the model
model = Llama(model_path=my_model_path, n_ctx=CONTEXT_SIZE)
```
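Before loading, it can help to sanity-check the model path, since `Llama` expects a concrete `.gguf` file rather than a repository directory. A minimal sketch, where the `resolve_gguf_path` helper is an illustrative assumption and not part of this repo:

```python
from pathlib import Path


def resolve_gguf_path(path_str):
    """Return a Path to a .gguf file, or raise with a clear message.

    If path_str is a directory, pick the first *.gguf file inside it.
    """
    path = Path(path_str)
    if path.is_dir():
        candidates = sorted(path.glob("*.gguf"))
        if not candidates:
            raise FileNotFoundError(f"No .gguf file found in {path}")
        return candidates[0]
    if path.suffix != ".gguf":
        raise ValueError(f"Expected a .gguf file, got: {path.name}")
    return path
```

With a helper like this, `Llama(model_path=str(resolve_gguf_path(my_model_path)), n_ctx=CONTEXT_SIZE)` fails early with a readable error instead of a loader backtrace.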
 
 
 
 
3. **Generate text from a prompt:**
```python
def generate_text_from_prompt(user_prompt, max_tokens=100, temperature=0.3, top_p=0.1, echo=True, stop=["Q", "\n"]):
    # Define the parameters
    model_output = model(
        user_prompt,
        max_tokens=max_tokens,
        temperature=temperature,
        top_p=top_p,
        echo=echo,
        stop=stop,
    )

    return model_output["choices"][0]["text"].strip()


if __name__ == "__main__":
    my_prompt = "What do you think about the inclusion policies in Tech companies?"
    model_response = generate_text_from_prompt(my_prompt)
    print(model_response)
```
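The call above returns an OpenAI-style completion dictionary, and the last line of the function pulls the generated text out of it. A minimal sketch of that extraction step using a mocked response, so it runs without a model file; the dictionary contents here are illustrative (real responses also carry fields such as `id` and `usage`):

```python
# Mocked response with the same nested shape as a completion result
mock_output = {
    "choices": [
        {"text": "  This is a generated answer.  ", "finish_reason": "stop"}
    ]
}


def extract_text(model_output):
    # Take the first choice and strip surrounding whitespace
    return model_output["choices"][0]["text"].strip()


print(extract_text(mock_output))
```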
 