sourabhdattawad committed
Commit 2d38b88
1 Parent(s): 084dae6

Update README.md

Files changed (1): README.md (+19 -1)
README.md CHANGED
@@ -26,8 +26,26 @@ output = llm(
   stop=["Q:", "\n"], # Stop generating just before the model would generate a new question
   echo=True # Echo the prompt back in the output
 )
+output
+```
+```
+Llama.generate: prefix-match hit
+
+llama_print_timings: load time = 7770.49 ms
+llama_print_timings: sample time = 100.16 ms / 40 runs ( 2.50 ms per token, 399.35 tokens per second)
+llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
+llama_print_timings: eval time = 35214.73 ms / 40 runs ( 880.37 ms per token, 1.14 tokens per second)
+llama_print_timings: total time = 35895.91 ms / 41 tokens
+{'id': 'cmpl-01e2feb3-c0ff-4a6e-8ca4-b8bf2172da01',
+ 'object': 'text_completion',
+ 'created': 1713912080,
+ 'model': 'meta-llama-3-8b-instruct.Q8_0.gguf',
+ 'choices': [{'text': 'Q: Name the planets in the solar system? A: 1. Mercury, 2. Venus, 3. Earth, 4. Mars, 5. Jupiter, 6. Saturn, 7. Uranus, 8. Neptune.',
+   'index': 0,
+   'logprobs': None,
+   'finish_reason': 'stop'}],
+ 'usage': {'prompt_tokens': 13, 'completion_tokens': 40, 'total_tokens': 53}}
 ```
-
 
 ## Google Colab
 
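For context on the lines added above: the block shows the OpenAI-style completion dict that llama-cpp-python returns. Below is a minimal sketch of how such a call is made and how to pull out just the generated text; the model path and prompt here are illustrative assumptions (the filename simply matches the 'model' field shown in the diff).

```
from llama_cpp import Llama  # assumes llama-cpp-python is installed

# Hypothetical local path; the filename matches the 'model' field shown above.
llm = Llama(model_path="./meta-llama-3-8b-instruct.Q8_0.gguf")

output = llm(
    "Q: Name the planets in the solar system? A: ",  # prompt
    max_tokens=64,          # cap the number of generated tokens
    stop=["Q:", "\n"],      # stop before the model starts a new question
    echo=True,              # echo the prompt back in the output
)

# output is an OpenAI-style completion dict; the generated text lives in
# output["choices"][0]["text"].
print(output["choices"][0]["text"])
```

Because `echo=True`, the text in `choices[0]['text']` contains the echoed prompt followed by the model's answer, as seen in the `'choices'` entry of the output above.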