---
license: mit
---
# Vietnamese Legal Text BERT

#### Table of contents
1. [Introduction](#introduction)
2. [Using Vietnamese Legal Text BERT](#transformers)
   - [Installation](#install2)
   - [Pre-trained models](#models2)
   - [Example usage](#usage2)

# <a name="introduction"></a> Introduction

`hmthanh/VietnamLegalText-SBERT` is built on the pre-trained PhoBERT models, the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam).
## <a name="transformers"></a> Using Vietnamese Legal Text BERT with `transformers`

### Installation <a name="install2"></a>

- Install `transformers` with pip: `pip install transformers`
- Install `tokenizers` with pip: `pip install tokenizers`
### Pre-trained models <a name="models2"></a>

Model | #params | Arch. | Max length | Pre-training data
---|---|---|---|---
`hmthanh/VietnamLegalText-SBERT` | 135M | base | 256 | 20GB of texts
### Example usage <a name="usage2"></a>

```python
import torch
from transformers import AutoModel, AutoTokenizer

phobert = AutoModel.from_pretrained("hmthanh/VietnamLegalText-SBERT")
tokenizer = AutoTokenizer.from_pretrained("hmthanh/VietnamLegalText-SBERT")

# Input text is word-segmented, PhoBERT-style ("_" joins the syllables of a
# multi-syllable word); the sentence means "We are researchers."
sentence = 'Chúng_tôi là những nghiên_cứu_viên .'

input_ids = torch.tensor([tokenizer.encode(sentence)])

with torch.no_grad():
    features = phobert(input_ids)  # features.last_hidden_state: (batch, seq_len, hidden)
```
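Since the model name suggests a Sentence-BERT style model, the per-token features above are typically condensed into a single sentence vector. A minimal mean-pooling sketch with dummy tensors, so it runs without downloading the model (the pooling strategy and the shapes are illustrative assumptions, not something this repo documents):

```python
import torch

def mean_pooling(token_embeddings, attention_mask):
    # Expand the mask over the hidden dimension so padded positions contribute 0
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    summed = (token_embeddings * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)  # avoid division by zero
    return summed / counts  # (batch, hidden) sentence embeddings

# Dummy batch: 2 sentences, 4 tokens, hidden size 8; the second sentence
# has 2 real tokens and 2 padding tokens
token_embeddings = torch.randn(2, 4, 8)
attention_mask = torch.tensor([[1, 1, 1, 1], [1, 1, 0, 0]])

embeddings = mean_pooling(token_embeddings, attention_mask)
print(embeddings.shape)  # torch.Size([2, 8])
```

With the real model, `features.last_hidden_state` and the tokenizer's `attention_mask` would play the roles of `token_embeddings` and `attention_mask` here.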