mgalkin committed
Commit 60cbe88
1 Parent(s): 2f20352

Update model card

Files changed (1):
  1. README.md +5 -3
README.md CHANGED

@@ -47,9 +47,9 @@ from ultra.eval import test
 model = UltraLinkPrediction.from_pretrained("mgalkin/ultra_3g")
 dataset = CoDExSmall(root="./datasets/")
 test(model, mode="test", dataset=dataset, gpus=None)
-# Expected results
-# mrr: 0.472035
-# hits@10: 0.66849
+# Expected results for ULTRA 3g
+# mrr: 0.472
+# hits@10: 0.668
 ```
 
 * You can also **fine-tune** ULTRA on each graph, please refer to the [github repo](https://github.com/DeepGraphLearning/ULTRA#run-inference-and-fine-tuning) for more details on training / fine-tuning
@@ -102,6 +102,8 @@ test(model, mode="test", dataset=dataset, gpus=None)
 </tr>
 </table>
 
+Fine-tuning ULTRA on specific graphs brings, on average, further 10% relative performance boost both in MRR and Hits@10. See the paper for more comparisons.
+
 **ULTRA 50g Performance**
 
 ULTRA 50g was pre-trained on 50 graphs, so we can't really apply the zero-shot evaluation protocol to the graphs.
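For readers unfamiliar with the reported metrics: MRR and Hits@10 are both averages over the per-query rank that the model assigns to the correct entity. A minimal sketch of how they are computed, using made-up ranks (not produced by ULTRA):

```python
# Hypothetical per-query ranks of the correct entity; illustrative only.
ranks = [1, 2, 5, 12, 3, 1, 40, 7, 2, 15]

# MRR: mean of reciprocal ranks (higher is better, max 1.0).
mrr = sum(1.0 / r for r in ranks) / len(ranks)

# Hits@10: fraction of queries where the correct entity ranks in the top 10.
hits_at_10 = sum(r <= 10 for r in ranks) / len(ranks)

print(f"mrr: {mrr:.3f}")
print(f"hits@10: {hits_at_10:.3f}")
```

Link-prediction evaluations such as the one in `ultra.eval.test` typically report these as filtered metrics, i.e. other known true entities are excluded before ranking.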