---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- llama
- gguf
datasets:
- yahma/alpaca-cleaned
library_name: transformers
pipeline_tag: text-generation
---

# Uploaded Model

- **Developed by:** ar08
- **License:** apache-2.0

## Usage

To use this model, follow the steps below:

1. **Install the necessary packages:**
   ```bash
   # Install llama-cpp-python
   pip install llama-cpp-python

   # Install transformers from source (only needed for released versions <= v4.34)
   pip install git+https://github.com/huggingface/transformers.git

   # Install accelerate
   pip install accelerate
   ```

2. **Instantiate the model:**
   ```python
   from llama_cpp import Llama

   # Path to the downloaded GGUF file (one way to fetch it is sketched after this list)
   my_model_path = "your_downloaded_model_name/path"
   CONTEXT_SIZE = 512  # context window size, in tokens

   # Load the model
   model = Llama(model_path=my_model_path, n_ctx=CONTEXT_SIZE)
   ```

3. **Generate text from a prompt:**
   ```python
   def generate_text_from_prompt(user_prompt, max_tokens=100, temperature=0.3, top_p=0.1, echo=True, stop=["Q", "\n"]):
       # Run inference with the given sampling parameters
       model_output = model(
           user_prompt,
           max_tokens=max_tokens,
           temperature=temperature,
           top_p=top_p,
           echo=echo,
           stop=stop,
       )

       return model_output["choices"][0]["text"].strip()

   if __name__ == "__main__":
       my_prompt = "What do you think about the inclusion policies in Tech companies?"
       model_response = generate_text_from_prompt(my_prompt)
       print(model_response)
   ```
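
Step 2 assumes the GGUF file is already on disk. If it is not, one way to fetch it is with the `huggingface_hub` library; this is a minimal sketch, and the `repo_id` and `filename` values below are placeholders, not the actual names for this repository:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo ID and file name -- substitute the actual values
# for this model's repository and its GGUF file.
my_model_path = hf_hub_download(
    repo_id="ar08/<this-repo>",
    filename="<model-file>.gguf",
)
```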
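
The helper in step 3 returns the whole completion at once. `llama-cpp-python` also supports token-by-token streaming by passing `stream=True` to the same call; a minimal sketch, reusing the `model` instance from step 2:

```python
# Stream the completion token by token instead of waiting for the full text.
for chunk in model(
    "Q: Name the planets in the solar system. A:",
    max_tokens=100,
    stop=["Q:", "\n"],
    stream=True,
):
    # Each streamed chunk mirrors the shape of the non-streaming response.
    print(chunk["choices"][0]["text"], end="", flush=True)
print()
```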

