---
pipeline_tag: text-generation
inference: true
widget:
  - text: 'def factorial(n):'
    example_title: Factorial
    group: Python
  - text: 'def recur_fibo(n):'
    example_title: Recursive Fibonacci
    group: Python
license: llama2
library_name: transformers
tags:
  - text-generation
  - code
  - text-generation-inference
language:
  - en
---

# lemur-70b-chat-v1

<p align="center">
  <img src="https://huggingface.co/datasets/OpenLemur/assets/resolve/main/lemur_icon.png" width="300" height="300" alt="Lemur">
</p>

## Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [License](#license)

## Model Summary

- **Repository:** [OpenLemur/lemur-v1](https://github.com/OpenLemur/lemur-v1)
- **Project Website:** [xlang.ai](https://www.xlang.ai/)
- **Paper:** [Coming soon](https://www.xlang.ai/)
- **Point of Contact:** [mail@xlang.ai](mailto:mail@xlang.ai)

## Use

### Setup

First, install the libraries listed in `requirements.txt` from the [GitHub repository](https://github.com/OpenLemur/lemur-v1):

```bash
pip install -r requirements.txt
```

### Generation

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenLemur/lemur-70b-chat-v1")
# device_map="auto" requires the accelerate package;
# load_in_8bit=True additionally requires bitsandbytes
model = AutoModelForCausalLM.from_pretrained("OpenLemur/lemur-70b-chat-v1", device_map="auto", load_in_8bit=True)

# Text generation example
prompt = "What's a lemur's favorite fruit?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_length=50, num_return_sequences=1)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)

# Code generation example
prompt = "Write a Python function to merge two sorted lists into one sorted list without using any built-in sort functions."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_length=200, num_return_sequences=1)
generated_code = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_code)
```
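
Since lemur-70b-chat-v1 is a chat-tuned model, it will generally respond better when the prompt follows the conversation template the model was trained on rather than a bare string. As a minimal sketch, assuming a ChatML-style template with `<|im_start|>`/`<|im_end|>` markers (an assumption — verify against the model's actual chat template, e.g. via `tokenizer.apply_chat_template`, before relying on it), a prompt-building helper could look like:

```python
def build_chat_prompt(system_message: str, user_message: str) -> str:
    """Assemble a single-turn chat prompt.

    Assumes a ChatML-style template; check the model's own chat template
    for the authoritative format.
    """
    return (
        f"<|im_start|>system\n{system_message}\n<|im_end|>\n"
        f"<|im_start|>user\n{user_message}\n<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Build a prompt and inspect it
prompt = build_chat_prompt(
    "You are a helpful, respectful, and honest assistant.",
    "What's a lemur's favorite fruit?",
)
print(prompt)
```

The resulting string can then be passed to `tokenizer(...)` and `model.generate(...)` exactly as in the plain-prompt examples above.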

## License

The model is licensed under the Llama 2 community license agreement.