---
license: mit
language:
- en
---

## Model Details
- Finetuned and capable of running on a laptop

### Model Description

Capable of running on a low-end **laptop**.

- **Developed by:** [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0/tree/main)
- **Finetuned from model [optional]:** [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0/tree/main)

## Uses
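The example below relies on the `llama-cpp-python` bindings. A minimal setup sketch, assuming the package is installed from PyPI and the model weights are downloaded as a GGUF file (the exact filename will depend on the quantization you pick):

```shell
# Install the Python bindings for llama.cpp
pip install llama-cpp-python

# Download the model weights (filename shown is an example; pick your quantization)
huggingface-cli download TinyLlama/TinyLlama-1.1B-Chat-v1.0
```

Pass the path to the downloaded GGUF file as `model_path` in the snippet below.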
```python
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/llama",
    # n_gpu_layers=-1, # Uncomment to use GPU acceleration
    # seed=1337,       # Uncomment to set a specific seed
    # n_ctx=2048,      # Uncomment to increase the context window
)
output = llm(
    "Q: Name the planets in the solar system? A: ",  # Prompt
    max_tokens=32,      # Generate up to 32 tokens; set to None to generate up to the end of the context window
    stop=["Q:", "\n"],  # Stop generating just before the model would generate a new question
    echo=True           # Echo the prompt back in the output
)  # Generate a completion; can also call create_completion
print(output)
```