import streamlit as st
from utils import *

if True:
    st.markdown("""
    ## With ChatGPT 
    
    #### 1. Set up the OpenAI config in your notebook with the code snippet below:
    
    ```python
    import openai ## you need to install this with "pip install openai" 
    import os

    from dotenv import load_dotenv
    load_dotenv() # read local .env file

    openai.api_key = os.getenv('<Your OpenAI API key name>') # this assigns your key to the variable "openai.api_key"
    ```
    
        - You might want to check in your own notebook that the key loaded correctly with `print(openai.api_key)` -- a quick sanity check is sketched below.
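
    A minimal sketch of what this can look like (the variable name `OPENAI_API_KEY` is only an example -- use whatever name you put in your own `.env` file):

    ```python
    # Example .env file contents (one line, no quotes around the value):
    #   OPENAI_API_KEY=sk-...your key...
    import os
    import openai
    from dotenv import load_dotenv

    load_dotenv()                                # read the local .env file
    openai.api_key = os.getenv('OPENAI_API_KEY') # use the same name you chose in .env
    print(openai.api_key is not None)            # True means the key was loaded
    ```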
    
    #### 2. Define a Python function to make our life easier later on
    
    ```python
    def get_completion(prompt, model="gpt-3.5-turbo"):
        messages = [{"role": "user", "content": prompt}]
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,
            temperature=0,  # this is the degree of randomness of the model's output
        )

        content = response.choices[0].message["content"]  # This is the actual reply from ChatGPT

        ## This is extra data we can grab from the API: how many tokens we used in this call
        token_dict = {
            'prompt_tokens': response['usage']['prompt_tokens'],
            'completion_tokens': response['usage']['completion_tokens'],
            'total_tokens': response['usage']['total_tokens'],
        }

        ## This is another important piece of data we will use later on
        moderation_output = openai.Moderation.create(input=prompt)["results"][0]
        return content, token_dict, moderation_output
    ```
    - The above function has two inputs:
        - `prompt` is a string that will be passed to ChatGPT
        - `model` lets us select which ChatGPT model we wish to talk to
    - The function returns three pieces of information: the AI's reply, the token usage info, and the moderation output (see the quick usage sketch below).
        - we will only focus on the reply (`content`) this week
        - we will use the other two in the coming weeks
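
    A quick usage sketch (assuming the setup from step 1 has already run in the same notebook):

    ```python
    content, token_dict, moderation_output = get_completion("Say hello in five words.")
    print(content)            # the model's reply
    print(token_dict)         # e.g. {'prompt_tokens': ..., 'completion_tokens': ..., 'total_tokens': ...}
    print(moderation_output)  # moderation flags for the prompt (we will use these in a later week)
    ```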
    
    #### 3. Our example text to summarise, a product review in this case
    """)
    
    st.code(
    '''
    prod_review = """
    Got this panda plush toy for my daughter's birthday, who loves it and takes it everywhere.
    It's soft and super cute, and its face has a friendly look. It's a bit small for what I paid though.
    I think there might be other options that are bigger for the same price.
    It arrived a day earlier than expected, so I got to play with it myself before I gave it to her.
    """
    '''
    , language = 'python')
    
    st.write("##")
    
    st.markdown("""
    #### 4. Now we add our own requirements to the prompt, asking the bot to follow our instructions specifically
    """)
    st.code('''
    numberOfWords = '20' # Here we specify the maximum number of words we want in the reply
    topics = "weather" # Here we pick the specific topic we want it to focus on
    
    ## Here is our instruction -- note how we pass our specific requirements in through the f-string variables
    prompt = f"""
        Your task is to generate a short summary of a given text.
        Summarize the text below, delimited by triple backticks,
        in at most {numberOfWords} words,
        and focusing on any aspects that mention {topics}.
        
        Review: ```{prod_review}```
        """
    ''', language = 'python')
    
    st.markdown("""
    Now, if you run the code below, you should get a response (assuming everything so far is correct)...
    
    ```python
    response, _, _ = get_completion(prompt) # we use '_' to ignore the other two outputs
    print(response)
    ```
    """)
    
    st.markdown("""
    #### 5. A more specific prompt
    """)
    st.code('''
    numberOfWords = '20'
    topics = "marry"

    prompt = f"""
    Your task is to generate a short summary of the text below, delimited by triple backticks, in at most {numberOfWords} words.
    First, extract the relevant information and create a list of keywords, without responding yet.
    Then, check whether {topics} is in your list. If it is not, just respond that there are no relevant topics about {topics} to summarise.
    If it is in your list, focus on any aspects that mention {topics}.
    Review: ```{prod_review}```
    """
    ''', language = 'python')
    
    st.markdown("""
        - The prompt above is the template we will use to build our first app next week.
        - The idea is to create an app that simply passes the user's input into those pre-defined variables (see the sketch below).
        - That way, the user only needs a very simple interface to get the desired output.
    """)
    
    
    st.markdown("""
    ## With Hugging Chat 
    """)
    st.code('''
    # We need to install the hugchat library first
    ## In your notebook, copy and execute the following line
    !pip install hugchat
    ''')
    st.markdown(" - Now we can call the Hugging chat API and let it do the same job as ChatGPT")
    st.code('''
    from hugchat.login import Login
    from hugchat import hugchat
    ## email and passwd are the email address and password you signed up to Hugging Face with
    email = "<your Hugging Face email>"
    passwd = "<your Hugging Face password>"
    sign = Login(email, passwd)
    
    ## Just copy these two lines and execute them; don't worry about the details too much
    cookies = sign.login()
    chatbot = hugchat.ChatBot(cookies=cookies.get_dict())
    
    ## get the response from Hugging Chat
    res = chatbot.chat(prompt)
    print(res)
    ''', language = 'python')
    
    st.markdown("""
        - As you can see, Hugging Chat can do the summarization job.
        - But when it comes to satisfying our specific requirements, it has limitations compared to ChatGPT.
        - ChatGPT is more advanced and follows human instructions more closely.
        - The advantage of Hugging Chat is that it is free, so you can run as many experiments as you wish without it costing you money.
        - In our app, Hugging Chat could be a good alternative to ChatGPT in some cases (a possible way to switch between them is sketched below).
    """)