shrimantasatpati committed
Commit 0e7f3ba
1 Parent(s): 7c16aeb

Uploaded files

Files changed (4)
  1. main.py +40 -0
  2. readme.md +56 -0
  3. requirements.txt +6 -0
  4. streamlit_app.py +36 -0
main.py ADDED
@@ -0,0 +1,40 @@
"""Phi 2 Inference.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/1qGlsJwf-rAF06cTMJU56iM9z3k_HZngS
"""

"""## Tokenizer and Model Prep"""

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/phi-2",
    trust_remote_code=True
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)

prompt = """Give me a list of 13 words that have 9 letters."""

with torch.no_grad():
    token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=512,
        do_sample=True,
        temperature=0.3
    )

# Decode only the newly generated tokens, skipping the echoed prompt.
output = tokenizer.decode(output_ids[0][token_ids.size(1):])

print(output)
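The slice `output_ids[0][token_ids.size(1):]` is needed because `generate` returns the prompt token ids followed by the newly generated ids; decoding from the prompt length onward yields only the completion. The same idea in plain Python lists (the token id values below are made up for illustration):

```python
# generate() echoes the prompt ids before the new ids, so slicing past
# len(prompt_ids) keeps only the completion tokens for decoding.
prompt_ids = [101, 2054, 2003]               # hypothetical prompt token ids
generated = prompt_ids + [2025, 1037, 3231]  # hypothetical generate() output
new_tokens = generated[len(prompt_ids):]
print(new_tokens)  # [2025, 1037, 3231]
```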
readme.md ADDED
@@ -0,0 +1,56 @@
# Microsoft Phi 2 Streamlit App

Welcome to the Microsoft Phi 2 Streamlit App, a user-friendly application that harnesses the Microsoft Phi 2 language model for text generation. Built with Streamlit, the app lets users interact with Phi 2 effortlessly and generate contextually rich text from their prompts.

## Getting Started

1. **Clone the Repository:**
   ```bash
   git clone https://github.com/your-username/microsoft-phi2-streamlit-app.git
   cd microsoft-phi2-streamlit-app
   ```

2. **Install Dependencies:**
   ```bash
   pip install -r requirements.txt
   ```

3. **Run the App:**
   ```bash
   streamlit run streamlit_app.py
   ```
   This command launches the app in your default web browser.

## Usage

1. Enter your text prompt in the provided text area.
2. Click the "Generate Output" button to start text generation based on your prompt.
3. Explore the output generated by the Microsoft Phi 2 language model.

## Customize and Explore

Feel free to explore and customize the app to suit your needs: experiment with different prompts, adjust generation parameters, or integrate additional features. The codebase is open for further customization and serves as a starting point for various text generation applications.

## Demonstration

* Start screen

  ![img.png](img.png)

* The output

  ![img_1.png](img_1.png)

## Acknowledgments

This app utilizes the Microsoft Phi 2 language model, and we appreciate the Microsoft Research team's contributions to advancing language model capabilities. Special thanks to the Streamlit development team for providing an excellent framework for building interactive web applications.

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Issues and Contributions

If you encounter any issues or have suggestions for improvements, please feel free to open an issue or submit a pull request. Your contributions are highly valued!

Enjoy generating text with Microsoft Phi 2!
requirements.txt ADDED
@@ -0,0 +1,6 @@
transformers
sentencepiece
accelerate
bitsandbytes
einops
streamlit
streamlit_app.py ADDED
@@ -0,0 +1,36 @@
import streamlit as st
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the Phi 2 model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/phi-2",
    trust_remote_code=True
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    device_map="auto",
    trust_remote_code=True
)

# Streamlit UI
st.title("Microsoft Phi 2 Streamlit App")

# User input prompt
prompt = st.text_area("Enter your prompt:", """Write a story about Nasa""")

# Generate output based on user input
if st.button("Generate Output"):
    with torch.no_grad():
        token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
        output_ids = model.generate(
            token_ids.to(model.device),
            max_new_tokens=512,
            do_sample=True,
            temperature=0.3
        )

    # Decode only the newly generated tokens, skipping the echoed prompt.
    output = tokenizer.decode(output_ids[0][token_ids.size(1):])
    st.text("Generated Output:")
    st.write(output)
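Both scripts sample with `do_sample=True, temperature=0.3`; a temperature below 1 sharpens the token distribution so sampling stays close to greedy decoding. A minimal pure-Python sketch of temperature scaling (the logit values are made up):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before softmax: T < 1 sharpens the
    # distribution toward the top token, T > 1 flattens it toward uniform.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs_sharp = softmax_with_temperature([2.0, 1.0, 0.5], 0.3)
probs_flat = softmax_with_temperature([2.0, 1.0, 0.5], 1.0)
# The top token's probability grows as temperature drops.
print(probs_sharp[0] > probs_flat[0])  # True
```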