Update README.md
README.md CHANGED
@@ -108,7 +108,7 @@ Here you can see the structure of the single sample in the present dataset.
Here we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer we choose the prompt with the lowest perplexity.
Moreover, for each subtask we define a description that is prepended to the prompts, which the model needs in order to understand the task.

-Description of the task: "Data una frase
+Description of the task: "Data una frase e un'entità, indica se tale entità rappresenta un luogo, un'organizzazione o una persona.\n\n"

### Cloze style:
@@ -122,7 +122,7 @@ Label (**Organizzazione**): "Data la frase: '{{text}}'\nL'entità {{target_entit
### MCQA style:
```txt
-Data la frase:
+Data la frase: \"{{text}}\"\nDomanda: A quale tipologia di entità appartiene \"{{target_entity}}\" nella frase precedente?\nA. Luogo\nB. Organizzazione\nC. Persona\nRisposta:
```
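
To make the selection rule concrete, the sketch below scores each candidate answer by the perplexity of the fully assembled prompt (task description + filled MCQA template + answer option) and returns the lowest-scoring one. It is a minimal sketch under stated assumptions: the model name `GroNLP/gpt2-small-italian` is an arbitrary stand-in for an Italian causal LM, and appending the label text after "Risposta:" is an illustrative choice, not a detail taken from this repository.

```python
# Sketch of perplexity-based answer selection (assumptions noted above).
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "GroNLP/gpt2-small-italian"  # hypothetical Italian causal LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

DESCRIPTION = (
    "Data una frase e un'entità, indica se tale entità rappresenta "
    "un luogo, un'organizzazione o una persona.\n\n"
)
MCQA_TEMPLATE = (
    'Data la frase: "{text}"\n'
    'Domanda: A quale tipologia di entità appartiene "{target_entity}" '
    "nella frase precedente?\n"
    "A. Luogo\nB. Organizzazione\nC. Persona\nRisposta:"
)


def perplexity(prompt: str) -> float:
    # Passing labels=input_ids makes the model return the mean token
    # cross-entropy; exponentiating it gives the prompt's perplexity.
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=ids, labels=ids).loss
    return math.exp(loss.item())


def classify(text: str, target_entity: str) -> str:
    prompt = DESCRIPTION + MCQA_TEMPLATE.format(text=text, target_entity=target_entity)
    # One candidate prompt per option; the model's answer is the one
    # whose completed prompt has the lowest perplexity.
    labels = ["Luogo", "Organizzazione", "Persona"]
    return min(labels, key=lambda lab: perplexity(prompt + " " + lab))


print(classify("Giovanni lavora a Roma.", "Roma"))  # e.g. "Luogo"
```

The same scheme carries over to the cloze-style prompts: one filled template per label, with the lowest-perplexity candidate taken as the model's answer.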
## Results