import gradio as gr
import pandas as pd
from graph_utils import *  # local helper module; provides text_to_graph, used by the RE interface below
from transformers import pipeline

languages = pd.read_csv("model_lang.csv", names=["Lang_acr"])
def check_lang(lang_acronym):
    if lang_acronym in languages["Lang_acr"].to_list():
        return "True"
    else:
        return "False"
title = "DReAM" | |
description_main = """
This Space allows you to test a set of LLMs tuned to perform different tasks over dream reports.
Three main tasks are available:
- Named Entity Recognition (NER), with an English-only model that generates the identified characters.
- Sentiment Analysis (SA), with two English-only models (one for multi-label classification, and one for generation) and a large multilingual model for multi-label classification.
- Relation Extraction (RE), with an English-only model that identifies the relevant characters and the relations between them, following the Activity feature of the Hall and Van de Castle framework.
All models have been tuned on data annotated following the Hall and Van de Castle framework. More details can be found on each model's page. For more technical and theoretical details, see [Bertolini et al., 2024](https://aclanthology.org/2024.clpsych-1.7/) and [Bertolini et al., 2024](https://www.sciencedirect.com/science/article/pii/S1389945723015186?via%3Dihub).
Use the current interface to check whether a language is included in the multilingual SA model, using language acronyms (e.g. "it" for Italian). The tabs above will direct you to each model you can query.
If you want to use the models outside the Space, you can easily do so via [DReAMy](https://github.com/lorenzoscottb/DReAMy).
"""
description_L = """
An XLM-R large model, pre-trained on 94 languages and tuned on emotion-annotated DreamBank English data. See the original model [card](https://huggingface.co/xlm-roberta-large) for the list of supported languages.
"""
description_S = """
A BERT-base-cased model pre-trained on English-only text and tuned on annotated DreamBank English data.
"""
description_G = """
A T5 model tuned to perform text generation and predict emotions, as well as the characters experiencing them.
"""
description_R = """
A T5 model tuned to perform text generation and predict the characters in a report, together with the (Activity) relations between them.
"""
description_GNER = """
A T5 model tuned to perform text generation and predict which characters are present in the report. Note that, in the Hall and Van de Castle framework, the character list never includes the dreamer. Hence, if you (willingly or not) enter a report with no reference to another character, the model will/should (correctly) produce an empty string. Moreover, the produced list of characters (CHAR) may be longer than the one produced by the SA model, as not all characters are necessarily associated with emotions.
"""
examples = [
    "I was followed by the blue monster but was not scared. I was calm and relaxed.",
    # Italian and Polish renderings of the sentence above, for the multilingual SA model.
    "Ero seguito dal mostro blu, ma non ero spaventato. Ero calmo e rilassato.",
    "Śledził mnie niebieski potwór, ale się nie bałem. Byłem spokojny i zrelaksowany.",
]

#############################
interface_words = gr.Interface(
    fn=check_lang,
    inputs="text",
    outputs="text",
    title=title,
    description=description_main,
    examples=["en", "it", "pl"],
    cache_examples=True,
)

#############################
pipe_L = pipeline(
    "text-classification",
    model="DReAMy-lib/xlm-roberta-large-DreamBank-emotion-presence",
    max_length=300,
    return_all_scores=True,
    truncation="do_not_truncate",
)
def predictL(text):
    t = pipe_L(text)
    t = {d["label"]: d["score"] for d in t[0]}
    return t
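# Illustrative sketch of the reshaping done in predictL (label names and scores are made
# up, not the model's actual output):
#   pipe_L("...")   -> [[{"label": "happiness", "score": 0.91}, {"label": "anger", "score": 0.03}, ...]]
#   predictL("...") -> {"happiness": 0.91, "anger": 0.03, ...}
# gr.Label renders such a {label: score} dict as a ranked list of confidences.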
interface_model_L = gr.Interface(
    fn=predictL,
    inputs="text",
    outputs=gr.Label(),
    title="SA Large Multilingual",
    description=description_L,
    examples=examples,
    cache_examples=True,
)

#############################
pipe_S = pipeline(
    "text-classification",
    model="DReAMy-lib/bert-base-cased-DreamBank-emotion-presence",
    max_length=300,
    return_all_scores=True,
    truncation="do_not_truncate",
)
def predict(text):
    t = pipe_S(text)
    t = {d["label"]: d["score"] for d in t[0]}
    return t
interface_model_S = gr.Interface(
    fn=predict,
    inputs="text",
    outputs=gr.Label(),
    title="SA Base English-Only",
    description=description_S,
    examples=["I was followed by the blue monster but was not scared. I was calm and relaxed."],
    cache_examples=True,
)

#############################
# interface_model_G = gr.Interface.load(
#     "models/DReAMy-lib/t5-base-DreamBank-Generation-Emot-Char",
#     examples=examples_g,
#     title="SA Generation",
# )

#############################
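# text_to_graph comes from the local graph_utils module; judging from the outputs declared
# below, it is assumed to return an (html_graph, extracted_text) pair for a given report.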
interface_model_RE = gr.Interface(
    text_to_graph,
    inputs=gr.Textbox(label="Text", placeholder="Enter a text here."),
    outputs=[gr.HTML(label="Extracted graph"), gr.Textbox(label="Extracted text")],
    examples=[
        "I was skating on the outdoor ice pond that used to be across the street from my house. I was not alone, but I did not recognize any of the other people who were skating around. I went through my whole repertoire of jumps, spires, and steps-some of which I can do and some of which I'm not yet sure of. They were all executed flawlessly-some I repeated, some I did only once. I seemed to know that if I went into competition, I would be sure of coming in third because there were only three contestants. Up to that time I hadn't considered it because I hadn't thought I was good enough, but now since everything was going so well, I decided to enter.",
        "I was talking on the telephone to the father of an old friend of mine (boy, 21 years old). We were discussing the party the Saturday night before to which I had invited his son as a guest. I asked him if his son had a good time at the party. He told me not to tell his son that he had told me, but that he had had a good time, except he was a little surprised that I had acted the way I did.",
        "I was walking alone with my dog in a forest.",
    ],
    title=title,
    description=description_R,
    cache_examples=True,
)

#############################
pipe_N = pipeline(
    "text2text-generation",
    model="DReAMy-lib/t5-base-DreamBank-Generation-NER-Char",
    max_length=300,
    truncation="do_not_truncate",
)
def predictN(text):
    t = pipe_N(text)
    t = t[0]["generated_text"]
    return t
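# Illustrative behaviour (the exact output string is an assumption, not taken from the model):
#   predictN("I was followed by the blue monster but was not scared.") -> e.g. "blue monster"
#   predictN("I was walking alone.") -> "" (no character other than the dreamer; see the
#   NER description above)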
interface_model_N = gr.Interface(
    fn=predictN,
    inputs="text",
    outputs="text",
    title="NER",
    description=description_GNER,
    examples=["I was followed by the blue monster but was not scared. I was calm and relaxed."],
    cache_examples=True,
)

#############################
gr.TabbedInterface(
    [interface_words, interface_model_N, interface_model_L, interface_model_S, interface_model_RE],
    ["Main", "NER", "SA Multilingual", "SA English", "RE"],
).launch()