anjandash committed
Commit
60d4c73
1 Parent(s): 2b0488a

Updated README model card

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -1,7 +1,7 @@
  ## GraphCodeBERT model

- GraphCodeBERT is a graph-based pre-trained model based on the Transformer architecture for programming language, which also considers data-flow information along with code sequences. GraphCodeBERT consists of 12 layers, 768 dimensional hidden states, and 12 attention heads. The maximum sequence length for the model is 512. The model is trained on the CodeSearchNet dataset, which includes 2.3M functions with document pairs for six programming languages.
+ GraphCodeBERT is a graph-based pre-trained model based on the Transformer architecture for programming languages, which also considers data-flow information along with code sequences. GraphCodeBERT consists of 12 layers, 768-dimensional hidden states, and 12 attention heads. The maximum sequence length used by the authors is 512, but longer sequences can also be passed. The model is trained on the CodeSearchNet dataset, which includes 2.3M functions paired with documentation across six programming languages.

  More details can be found in the [paper](https://arxiv.org/abs/2009.08366) by Guo et al.

- **Disclaimer:** The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face community members.
+ **Disclaimer:** The team releasing GraphCodeBERT did not write a model card for this model, so this model card has been written by Hugging Face community members.
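
For context, here is a minimal usage sketch with the 🤗 Transformers library, showing the architecture and sequence-length details described in the card. The model ID `microsoft/graphcodebert-base` is an assumption (the canonical upstream checkpoint), not something stated in this commit; substitute this repository's own model ID if it differs.

```python
# Minimal sketch (not from the original card): assumes a RoBERTa-style
# checkpoint compatible with microsoft/graphcodebert-base.
from transformers import AutoModel, AutoTokenizer

model_id = "microsoft/graphcodebert-base"  # assumed canonical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# The architecture described above: 12 layers, 768-d hidden states, 12 heads.
cfg = model.config
print(cfg.num_hidden_layers, cfg.hidden_size, cfg.num_attention_heads)

# Encode a code snippet, truncating to the 512-token length used in pre-training.
code = "def max(a, b): return a if a > b else b"
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```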