Spaces: Runtime error

Sharathhebbar24 committed • Commit 97d35d2 • Parent(s): 29f0bd6

Upload 4 files

- README.md +152 -13
- app.py +183 -0
- requirements.txt +2 -0
- source_code.py +127 -0
README.md
CHANGED
@@ -1,13 +1,152 @@
# AI21 Models

1. Jurassic is AI21 Labs' production-ready family of large language models (LLMs), powering natural language AI in thousands of live applications.
2. Jurassic-2 (J2) is our top-notch series of state-of-the-art Large Language Models. As the new generation following Jurassic-1, J2 not only improves upon the previous series in every aspect, but it also offers new features and capabilities that put it in a league of its own.
3. It offers six models: `j2-large-instruct`, `j2-grande-chat`, `j2-jumbo-chat`, `j2-light`, `j2-mid`, and `j2-ultra`.
4. It offers various task-specific APIs such as `Paraphrase`, `Summarize`, `Summarize By Segment`, `Text Segmentation`, `Grammatical Error Corrections`, `Text Improvements`, `Library Answer`, `Library Search`, and `Contextual Answers`.

## Important Links

The AI21 platform offers users $90 of free usage credit.

- Website: https://studio.ai21.com/
- Docs: https://docs.ai21.com/docs/task-specific
- SDK: https://docs.ai21.com/reference/python-sdk

## Paraphrase

### Features

#### Choose a style that fits your needs

**You have the choice between five different styles:**

- General - fresh and creative ways to rephrase sentences. Offer them to your users.
- Casual - convey a friendlier and more accessible tone for the right audience.
- Formal - present your words in a more professional way.
- Short - express your messages clearly and concisely.
- Long - expand your sentences to give more detail, nuance and depth.

#### Adding context to the paraphrase

You can paraphrase only part of the text while keeping the surrounding text unchanged by specifying a range within the text.
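As a sketch of how these options might be assembled for a Paraphrase call through the Python SDK: the five style strings come from the list above and `style="general"` matches this repo's `source_code.py`, but the `startIndex`/`endIndex` range parameters and the helper itself are illustrative assumptions, not taken from this repo.

```python
# Hypothetical helper: build keyword arguments for a Paraphrase request.
# The five styles are the ones listed above; startIndex/endIndex are an
# assumption based on the "paraphrase part of the text" feature.
VALID_STYLES = {"general", "casual", "formal", "short", "long"}


def build_paraphrase_kwargs(text, style="general", start=None, end=None):
    if style not in VALID_STYLES:
        raise ValueError(f"style must be one of {sorted(VALID_STYLES)}")
    kwargs = {"text": text, "style": style}
    if start is not None and end is not None:
        # Only the [start:end) span is paraphrased; surrounding text is kept.
        kwargs["startIndex"] = start
        kwargs["endIndex"] = end
    return kwargs


# Usage (requires an API key):
# response = ai21.Paraphrase.execute(**build_paraphrase_kwargs("Hello there", style="formal"))
```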

## Grammatical Error Correction

### Features

#### Fix every grammar error

Allow your users to write with flawless grammar, including tenses, verb additions, word order, and everything else you forgot since high-school English lessons. No more wondering if it's who or whom.

| Before | After |
| ---- | ----- |
| I'm is going | I'm going |
| I'm going go | I'm going to go |
| I was in the there | I was there |

#### Find all the missing words

It difficult read a sentence like this, isn't it? The GEC API will make it easier for you to do this.

| Before | After |
| ---- | ----- |
| This soup very tasty | This soup is very tasty |

#### Punctuation: Punctuation, Punctuation!

Give your users peace of mind without having to worry about double whitespaces, incorrect punctuation, or answering the most annoying question - do I need a hyphen here?

| Before | After |
| ---- | ----- |
| Are you going to be there! | Are you going to be there? |
| Hi you | Hi, you |

#### Take the spelling bee by storm

A spell check, but so much more: capitalizing, correcting words that sound the same but are spelled differently, and fixing errors caused by typos.

| Before | After |
| ---- | ----- |
| I'm not aloud to go | I'm not allowed to go |
| i think i tough i saw you tryt | i think i thought i saw you try |
| Let it god | Let it go |

#### Avoid repetition of repetitive word repetitions

Do you know the song "I will will always love you"? How about "I will always love you you"? Doesn't have the same ring, does it? With the GEC API you can make sure you'll always have a hit.

| Before | After |
| ---- | ----- |
| Gimme Gimme Gimme a man after midnight | Gimme a man after midnight |

#### Communicate the right message without using any wrong words

Did you ever use words with similar sounds or spellings? The GEC API makes sure you won't!

| Before | After |
| ---- | ----- |
| At times, my job can be quite monogamous | At times, my job can be quite monotonous |
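The corrections returned by the GEC endpoint carry a `suggestion` plus `startIndex`/`endIndex` character offsets (that shape is how this repo's `source_code.py` reads them). One safe way to apply them is from the end of the string backwards, so earlier edits don't shift the offsets of later ones - a minimal sketch with made-up correction data:

```python
# Minimal sketch: apply GEC-style corrections (suggestion + character offsets)
# back-to-front so later offsets stay valid. The sample corrections below are
# invented for illustration; real ones come from ai21.GEC.execute(text=...).
def apply_corrections(text, corrections):
    for c in sorted(corrections, key=lambda c: c["startIndex"], reverse=True):
        # Splice by offset rather than str.replace, which would hit
        # every occurrence of the corrected span.
        text = text[: c["startIndex"]] + c["suggestion"] + text[c["endIndex"]:]
    return text


fixed = apply_corrections(
    "i was in the there",
    [
        {"startIndex": 0, "endIndex": 1, "suggestion": "I"},
        {"startIndex": 6, "endIndex": 18, "suggestion": "there"},
    ],
)
# fixed == "I was there"
```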

## Text Improvements

### Features

#### Speak with great fluency

Let your users express themselves more fluently, phrasing the same message in a natural way.

| Before | After |
| ---- | ----- |
| Affiliated with the profession of project management, I have ameliorated myself with a different set of hard skills as well as soft skills. | Being involved in the profession of project management, I have developed a different set of hard skills as well as soft skills. |

#### Specificity

Make it easier for users to be more precise by recommending a more specific word to use within the context.

| Before | After |
| ---- | ----- |
| Good sleep | Full night's sleep |
| I ate a good pizza | I ate a tasty/delicious/yummy pizza |

#### Enrich the text with variety

Allow your users to avoid repeating the same words over and over.

| Before | After |
| ---- | ----- |
| Positive energy balance means that you consume more energy than you burn. With the right types of foods this could mean muscle gain, with the wrong types it could mean fat gain. | Positive energy balance means that you consume more energy than you burn. With the right types of foods this could mean muscle gain, with the wrong types it could result in fat gain. |

#### Write simply with short sentences

Advise your users how to avoid long and convoluted sentences by splitting them into short ones.

| Before | After |
| ---- | ----- |
| In addition, it is essential to build trust in their relationships, so they can start having efficient communications, so it will allow them to give feedback and call their peers on their performance without the fear of interpersonal conflicts. | In addition, it is essential to build trust in their relationships, so they can start having efficient communications. This will allow them to give feedback and call their peers on their performance without the fear of interpersonal conflicts. |

#### Conciseness

Make it easier for your users to be concise.

| Before | After |
| ---- | ----- |
| We will arrive home in a period of five days | We will arrive home in five days |
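Each feature above corresponds to an identifier in the `types` list passed to the Improvements endpoint; the identifiers below are the ones this repo's `source_code.py` sends, while the lookup helper itself is a hypothetical convenience, not part of the AI21 SDK.

```python
# Map the feature names above to the request identifiers used in
# ai21.Improvements.execute(..., types=[...]) in this repo's source_code.py.
IMPROVEMENT_TYPES = {
    "fluency": "fluency",
    "specificity": "vocabulary/specificity",
    "variety": "vocabulary/variety",
    "short sentences": "clarity/short-sentences",
    "conciseness": "clarity/conciseness",
}


def types_for(*features):
    """Translate readable feature names into API `types` identifiers."""
    return [IMPROVEMENT_TYPES[f] for f in features]


# Usage (requires an API key):
# ai21.Improvements.execute(text="...", types=types_for("fluency", "conciseness"))
```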

## Summarize

### Summarization Models

#### Summarize

The /summarize API takes a piece of text, or fetches text from a given URL, and generates grounded summaries that remain faithful to the original document (i.e. no external information is added during the process). The summaries are formatted as bullet lists, following the original text flow.

#### Summarize by Segment

The /summary-by-segment API takes a piece of text, or fetches text from a given URL, and breaks it into logical segments, returning summarized content for each segment rather than one overall summary. This method is particularly useful for enabling users to read the original text faster and more efficiently: they can skim where possible and pay more attention where needed.

## Text Segmentation

### Features

#### Different types

In addition to working with free text, this API can also work directly with your favorite (or least favorite) webpage URLs! No need to spend time and effort scraping text yourself - just input the required URL and let the summarization begin.

Note: if the webpage you are trying to summarize is behind a paywall or has restricted access, your call will fail and result in an error.
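This repo's `source_code.py` calls the endpoint as `ai21.Summarize.execute(source=..., sourceType="TEXT")`; for webpage input the documented alternative is a URL source. A small illustrative helper that picks the source type (the helper is an assumption, not SDK code):

```python
# Hypothetical helper: decide whether the input should be sent as raw TEXT
# or as a URL for the API to fetch. sourceType="TEXT" matches the call in
# source_code.py; sourceType="URL" is the documented webpage variant.
def summarize_kwargs(source):
    source_type = "URL" if source.startswith(("http://", "https://")) else "TEXT"
    return {"source": source, "sourceType": source_type}


# Usage (requires an API key):
# response = ai21.Summarize.execute(**summarize_kwargs("https://example.com/article"))
# print(response.summary)
```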
app.py
ADDED
@@ -0,0 +1,183 @@
import streamlit as st

from source_code import text_completion, chat, GEC, paraphrase, contextual_answer, summarize, improvements

st.title("AI21 Studio Jurassic")

st.image(
    image="assets/jurassic.jpg",  # forward slashes keep the path portable across OSes
    caption="Jurassic",
)

st.sidebar.title("Select your preferred tasks")

jurassic_models = [
    "j2-light",
    "j2-ultra",
    "j2-mid",
]
tasks = [
    "Generic",
    "Specific",
]

# task = st.sidebar.selectbox(
#     label="Select your Model",
#     options=tasks,
# )

disabled = False
# if task == "Generic":
#     disabled = False

# task_disable = False
# if task == "Specific":
#     task_disable = True


# generic_tasks = [
#     "Text Completion",
#     # "Chat"
# ]

specific_tasks = [
    # "Contextual Answers",
    "Paraphrase",
    "Summarize",
    "Grammatical Error Corrections",
    "Text Improvements",
]

# choose_task = generic_tasks
# if task == "Specific":
#     choose_task = specific_tasks

choose_task = specific_tasks

choose = st.sidebar.selectbox(
    label="Select your tasks",
    options=choose_task,
)

# model = st.sidebar.selectbox(
#     label="Select your Model",
#     options=jurassic_models,
#     disabled=disabled,
# )

# numResults = st.sidebar.number_input(
#     label="Select Number of results",
#     min_value=1,
#     max_value=5,
#     value=1,
#     disabled=disabled,
# )

# maxTokens = st.sidebar.number_input(
#     label="Max Number of Tokens to generate",
#     min_value=32,
#     max_value=2048,
#     value=200,
#     step=2,
#     disabled=disabled,
# )

# temperature = st.sidebar.slider(
#     label="Temperature",
#     min_value=0.1,
#     max_value=1.0,
#     value=0.5,
#     step=0.1,
#     disabled=disabled,
# )

# topP = st.sidebar.slider(
#     label="Top P",
#     min_value=0.1,
#     max_value=1.0,
#     value=0.6,
#     step=0.1,
#     disabled=disabled,
# )

# topKReturn = st.sidebar.slider(
#     label="Top K",
#     min_value=1,
#     max_value=10,
#     value=5,
#     step=1,
#     disabled=disabled,
# )

# context = st.sidebar.text_input(
#     label="Context",
# )

# if choose == "Chat":
#     question = st.chat_input(key="Question")
#     if context is None:
#         context = "Everything"
#     # template = f"<|system|>\nYou are an intelligent chatbot with expertise in {context}.</s>\n<|user|>\n{question}.\n<|assistant|>"
#     template = f"{context}\n{question}"
#     if "messages" not in st.session_state:
#         st.session_state.messages = []
#     for message in st.session_state.messages:
#         with st.chat_message(message.get('role')):
#             st.write(message.get("content"))
#     st.session_state.messages.append(
#         {
#             "role": "user",
#             "content": f"Question: {question}"
#         }
#     )
#     if question:
#         result = chat(model, template, numResults, maxTokens, temperature, topKReturn, topP)
#
#         if question.lower() == "clear":
#             del st.session_state.messages
#
#         st.session_state.messages.append(
#             {
#                 "role": "assistant",
#                 "content": result
#             }
#         )
#         with st.chat_message('user'):
#             st.write(f"Context: {context}\n\nQuestion: {question}")
#         with st.chat_message('assistant'):
#             st.markdown(result)

question = st.text_area(label="Question")
# if context is None:
#     context = "Everything"
# template = f"<|system|>\nYou are an intelligent chatbot with expertise in {context}.</s>\n<|user|>\n{question}.\n<|assistant|>"

# if choose == "Text Completion":
#     if question:
#         result = text_completion(model, template, numResults, maxTokens, temperature, topKReturn, topP)
#         st.markdown(result)
# if choose == "Contextual Answers":
#     if question:
#         result = contextual_answer(context, question)
#         st.markdown(result)
if choose == "Paraphrase":
    if question:
        result = paraphrase(question)
        st.markdown(result)
elif choose == "Summarize":
    if question:
        result = summarize(question)
        st.markdown(result)
elif choose == "Grammatical Error Corrections":
    if question:
        result = GEC(question)
        st.markdown(result)
elif choose == "Text Improvements":
    if question:
        result = improvements(question)
        st.markdown(result)
requirements.txt
ADDED
@@ -0,0 +1,2 @@
ai21
streamlit
source_code.py
ADDED
@@ -0,0 +1,127 @@
import os

import ai21

ai21.api_key = os.getenv("HF_KEY")

# Shared penalty settings, reused by both the completion and chat calls.
_PENALTY = {
    "scale": 1,
    "applyToNumbers": True,
    "applyToPunctuations": True,
    "applyToStopwords": True,
    "applyToWhitespaces": True,
    "applyToEmojis": True,
}


def text_completion(model, prompt, numResults, maxTokens, temperature, topKReturn, topP):
    response = ai21.Completion.execute(
        model=model,
        prompt=prompt,
        numResults=numResults,
        maxTokens=maxTokens,
        temperature=temperature,
        topKReturn=topKReturn,
        topP=topP,
        presencePenalty=_PENALTY,
        countPenalty=_PENALTY,
        frequencyPenalty=_PENALTY,
        stopSequences=[],
    )
    # Completion responses carry the generated text under `completions`,
    # not `suggestions` (which is the Paraphrase response shape).
    return response.completions[0].data.text


def chat(model, messages, numResults, maxTokens, temperature, topKReturn, topP):
    response = ai21.Chat.execute(
        model=model,
        messages=messages,
        numResults=numResults,
        maxTokens=maxTokens,
        temperature=temperature,
        topKReturn=topKReturn,
        topP=topP,
        presencePenalty=_PENALTY,
        countPenalty=_PENALTY,
        frequencyPenalty=_PENALTY,
        stopSequences=[],
    )
    # Chat responses carry the generated text under `outputs`.
    return response.outputs[0].text


def GEC(text):
    response = ai21.GEC.execute(text=text)
    # Apply corrections back-to-front and splice by offset: str.replace would
    # hit every occurrence of the span, and applying corrections front-to-back
    # would shift the indices of the later ones.
    for c in sorted(response.corrections, key=lambda c: c.startIndex, reverse=True):
        text = text[:c.startIndex] + c.suggestion + text[c.endIndex:]
    return text


def summarize(text):
    response = ai21.Summarize.execute(source=text, sourceType="TEXT")
    return response.summary


def improvements(text):
    response = ai21.Improvements.execute(
        text=text,
        types=[
            'fluency',
            'vocabulary/specificity',
            'vocabulary/variety',
            'clarity/short-sentences',
            'clarity/conciseness',
        ],
    )
    # Same index-safe, reverse-order replacement as in GEC(), taking the
    # first suggestion for each improvement.
    for imp in sorted(response.improvements, key=lambda i: i.startIndex, reverse=True):
        text = text[:imp.startIndex] + imp.suggestions[0] + text[imp.endIndex:]
    return text


def paraphrase(text):
    response = ai21.Paraphrase.execute(text=text, style="general")
    return response.suggestions[0].text


def contextual_answer(context, question):
    response = ai21.Answer.execute(context=context, question=question)
    # Answer responses return the text in `answer`.
    return response.answer