Omnibus committed on
Commit 3dc7ae1 · verified · 1 Parent(s): 7aa5fad

Update app.py

Files changed (1)
  1. app.py +3 -23
app.py CHANGED
@@ -7,26 +7,6 @@ control_json={'control':'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRS
  string_json={'control':'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN','char':'OPQRSTUVWXYZ','leng':50}
  cont_list=list(string_json['control'])
 
- 
- 
- 
- text="""
- I asked Generative AI Models about their context window. Their response was intriguing.
- The context window for a large language model (LLM) like OpenAI’s GPT refers to the maximum amount of text the model can consider at any one time when generating a response. This includes both the prompt provided by the user and the model’s generated text.
- In practical terms, the context window limits how much previous dialogue the model can “remember” during an interaction. If the interaction exceeds the context window, the model loses access to the earliest parts of the conversation. This limitation can impact the model’s consistency in long conversations or complex tasks.
- I asked Generative AI Models about their context window. Their response was intriguing.
- The context window for a large language model (LLM) like OpenAI’s GPT refers to the maximum amount of text the model can consider at any one time when generating a response. This includes both the prompt provided by the user and the model’s generated text.
- In practical terms, the context window limits how much previous dialogue the model can “remember” during an interaction. If the interaction exceeds the context window, the model loses access to the earliest parts of the conversation. This limitation can impact the model’s consistency in long conversations or complex tasks.
- I asked Generative AI Models about their context window. Their response was intriguing.
- The context window for a large language model (LLM) like OpenAI’s GPT refers to the maximum amount of text the model can consider at any one time when generating a response. This includes both the prompt provided by the user and the model’s generated text.
- In practical terms, the context window limits how much previous dialogue the model can “remember” during an interaction. If the interaction exceeds the context window, the model loses access to the earliest parts of the conversation. This limitation can impact the model’s consistency in long conversations or complex tasks.
- I asked Generative AI Models about their context window. Their response was intriguing.
- The context window for a large language model (LLM) like OpenAI’s GPT refers to the maximum amount of text the model can consider at any one time when generating a response. This includes both the prompt provided by the user and the model’s generated text.
- In practical terms, the context window limits how much previous dialogue the model can “remember” during an interaction. If the interaction exceeds the context window, the model loses access to the earliest parts of the conversation. This limitation can impact the model’s consistency in long conversations or complex tasks.
- I asked Generative AI Models about their context window. Their response was intriguing.
- The context window for a large language model (LLM) like OpenAI’s GPT refers to the maximum amount of text the model can consider at any one time when generating a response. This includes both the prompt provided by the user and the model’s generated text.
- In practical terms, the context window limits how much previous dialogue the model can “remember” during an interaction. If the interaction exceeds the context window, the model loses access to the earliest parts of the conversation. This limitation can impact the model’s consistency in long conversations or complex tasks.
- """
  def get_sen_list(text):
      sen_list=[]
      blob = TextBlob(text)
@@ -42,7 +22,7 @@ def proc_sen(sen_list,cnt):
          n=ea.split("/")
          if n[1] == "NN":
              noun_box1.append(n[0])
-     json_object={'sentence':sen_list[cnt],'noun_phrase':noun_p,'nouns':noun_box1}
+     json_object={'sen_num':cnt,'sentence':sen_list[cnt],'noun_phrase':noun_p,'nouns':noun_box1}
      return json_object
 
  def proc_nouns(sen_list):
@@ -158,7 +138,7 @@ def find_query(query,sen,nouns):
              noun_box[str(nl)].append(ea_n)
      for ea in noun_box.values():
          for vals in ea:
-             sen_box.append(sen[vals]['sentence'])
+             sen_box.append({'sen_num':sen[vals]['sen_num'],'sentence':sen[vals]['sentence']})
      return noun_box,sen_box
 
  with gr.Blocks() as app:
@@ -168,7 +148,7 @@ with gr.Blocks() as app:
      query=gr.Textbox(label="Search query")
      search_btn=gr.Button("Search")
      out_box=gr.Textbox(label="Results")
-     sen_box=gr.Textbox(label="Sentences")
+     sen_box=gr.JSON(label="Sentences")
      with gr.Row():
          with gr.Column(scale=2):
              sen=gr.JSON(label="Sentences")
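In short, this commit drops the hard-coded sample text and threads a sentence index (`sen_num`) through the pipeline, so search hits can be shown as structured JSON instead of bare strings. A minimal sketch of the before/after shape of `sen_box` (the sample sentences below are hypothetical; the key names follow the diff):

```python
# Hypothetical per-sentence records, shaped like proc_sen's new json_object.
sen = [
    {'sen_num': 0, 'sentence': 'First sentence.'},
    {'sen_num': 1, 'sentence': 'Second sentence.'},
]

# Before this commit: find_query collected bare sentence strings.
sen_box_before = [s['sentence'] for s in sen]

# After this commit: each hit keeps its index alongside the text, which is
# why the output component switches from gr.Textbox to gr.JSON.
sen_box_after = [
    {'sen_num': s['sen_num'], 'sentence': s['sentence']} for s in sen
]

print(sen_box_after)
```

Keeping the index with each hit lets the UI point back at which sentence in the source document matched, rather than just echoing its text.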