Sentence Similarity
sentence-transformers
PyTorch
Transformers
Japanese
luke
feature-extraction
akiFQC committed
Commit 7059446
1 Parent(s): 3dddb71

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ datasets:
 
 # GLuCoSE (General Luke-based Contrastive Sentence Embedding)-base-Japanese
 
-[日本語のREADME/Japanese README](https://huggingface.co/pkshatech/GLuCoSE-base-ja)
+[日本語のREADME/Japanese README](https://huggingface.co/pkshatech/GLuCoSE-base-ja/blob/main/README_JA.md)
 
 GLuCoSE (General LUke-based COntrastive Sentence Embedding, "glucose") is a Japanese text embedding model based on [LUKE](https://github.com/studio-ousia/luke). In order to create a general-purpose, user-friendly Japanese text embedding model, GLuCoSE has been trained on a mix of web data and various datasets associated with natural language inference and search. This model is not only suitable for sentence vector similarity tasks but also for semantic search tasks.
 - Maximum token count: 512
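
For reference, here is a minimal sketch of how this embedding model could be used for sentence similarity via the sentence-transformers library (one of the tags above). The model id `pkshatech/GLuCoSE-base-ja` is taken from the README link in the diff; the example sentences and the choice of cosine similarity are illustrative assumptions, not instructions from this model card.

```python
# Minimal usage sketch (assumption): load GLuCoSE with sentence-transformers
# and compare two Japanese sentences by cosine similarity.
from sentence_transformers import SentenceTransformer, util

# Model id taken from the README link above; loading through SentenceTransformer
# is assumed from the sentence-transformers tag on the model card.
model = SentenceTransformer("pkshatech/GLuCoSE-base-ja")

# Illustrative inputs (the model's maximum token count is 512, per the README).
sentences = [
    "今日は天気が良いです。",  # "The weather is nice today."
    "本日は晴天です。",        # "It is sunny today."
]

# Encode both sentences into dense vectors.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```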