---
language:
- ur
metrics:
- accuracy
library_name: transformers
tags:
- text-generation-inference
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This model card describes a fine-tuned byT5 model for the task of semantic parsing.

## Model Details

We started from a pre-trained byt5-base model and fine-tuned it on the Parallel Meaning Bank (PMB) dataset of DRS-text pairs.
Furthermore, we enriched the gold_silver flavors of PMB (release 5.0.0) with different augmentation strategies.
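
For reference, the sketch below shows one way this kind of fine-tuning can be set up with the Hugging Face `Seq2SeqTrainer`. The base checkpoint `google/byt5-base`, the CSV file names, the `text`/`drs` column names, and all hyperparameters are illustrative assumptions, not the exact configuration used to train this model.

```python
# Minimal fine-tuning sketch (illustrative only): assumes DRS-text pairs exported
# from PMB 5.0.0 into CSV files with "text" and "drs" columns.
from datasets import load_dataset
from transformers import (
    ByT5Tokenizer,
    T5ForConditionalGeneration,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

tokenizer = ByT5Tokenizer.from_pretrained("google/byt5-base")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-base")

# Hypothetical files holding sentence/DRS pairs extracted from the PMB release.
data = load_dataset("csv", data_files={"train": "pmb_train.csv", "validation": "pmb_dev.csv"})

def preprocess(batch):
    # Byte-level tokenization of the input sentences and the target DRS strings.
    model_inputs = tokenizer(batch["text"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["drs"], max_length=512, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = data.map(preprocess, batched=True, remove_columns=data["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="byt5-semantic-parser",  # hypothetical output directory
    learning_rate=1e-4,                 # illustrative hyperparameters
    per_device_train_batch_size=8,
    num_train_epochs=5,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```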

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

To run a quick inference with the model, use the code below.

```python
from transformers import ByT5Tokenizer, T5ForConditionalGeneration

# Initialize the tokenizer and model
tokenizer = ByT5Tokenizer.from_pretrained('saadamin2k13/italian_augmented_semantic_parser', max_length=512)
model = T5ForConditionalGeneration.from_pretrained('saadamin2k13/italian_augmented_semantic_parser')

# Example sentence (Urdu: "This car is black.")
example = "یہ کار کالی ہے۔"

# Tokenize and prepare the input
x = tokenizer(example, return_tensors='pt', padding=True, truncation=True, max_length=512)['input_ids']

# Generate the output sequence (max_length raised so long outputs are not truncated)
output = model.generate(x, max_length=512)

# Decode and print the predicted output
pred_text = tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(pred_text)
```
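
The printed output is the model's predicted meaning representation for the input sentence (a Discourse Representation Structure, as in the PMB's DRS-text pairs).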