rasta committed on
Commit 0b1d8d2
1 Parent(s): 2a2573c

Update README.md

Files changed (1): README.md (+63 -1)

README.md CHANGED
@@ -8,4 +8,66 @@ pipeline_tag: text2text-generation
  tags:
  - health
  - FHIR
- ---
+ ---
+
+ # bart-large
+
+ This model is a fine-tuned version of [bart-large](https://huggingface.co/facebook/bart-large) on a manually created dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.40
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 64
+ - eval_batch_size: 64
+ - seed: 42
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - num_epochs: 3
+
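If training was driven by the Hugging Face `Trainer` (an assumption — the card does not say how the run was launched), the hyperparameters listed above would map onto a `TrainingArguments` configuration roughly like the sketch below; `output_dir` is a placeholder, not taken from the card.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters as a
# TrainingArguments config; output_dir is a placeholder, not from the card.
training_args = TrainingArguments(
    output_dir="bart-fhir-question",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    num_train_epochs=3,
)
```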
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | - | 1.0 | 47 | 4.5156 |
+ | ... | ... | ... | ... |
+ | - | 10 | 490 | 0.4086 |
+
+
+ ## How to use
+
+ ```python
+ from transformers import BartForConditionalGeneration, BartTokenizer
+
+ # Load the fine-tuned model and its tokenizer from the Hugging Face model hub
+ model = BartForConditionalGeneration.from_pretrained('rasta/BART-FHIR-question')
+ tokenizer = BartTokenizer.from_pretrained('rasta/BART-FHIR-question')
+
+ def generate_text(input_text):
+     # Tokenize the input text
+     input_tokens = tokenizer(input_text, return_tensors='pt')
+
+     # Move the input tokens to the same device as the model
+     input_tokens = input_tokens.to(model.device)
+
+     # Generate text using the fine-tuned model
+     output_tokens = model.generate(**input_tokens)
+
+     # Decode the generated tokens to text
+     output_text = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
+
+     return output_text
+
+ input_text = "List all procedures with reason reference to resource with ID 24680135."
+ output_text = generate_text(input_text)
+ print(output_text)
+ ```
+
+ ### Framework versions
+
+ - Transformers 4.18.0
+ - Pytorch 1.11.0+cu113
+ - Datasets 2.1.0
+ - Tokenizers 0.12.1