---
license: cc-by-nc-4.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- generated_from_trainer
datasets:
- squad
- newsqa
- LLukas22/cqadupstack
- LLukas22/fiqa
- LLukas22/scidocs
- deepset/germanquad
- LLukas22/nq
---

# all-MiniLM-L12-v2-embedding-all

This model is a fine-tuned version of [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) on the following datasets: [squad](https://huggingface.co/datasets/squad), [newsqa](https://huggingface.co/datasets/newsqa), [LLukas22/cqadupstack](https://huggingface.co/datasets/LLukas22/cqadupstack), [LLukas22/fiqa](https://huggingface.co/datasets/LLukas22/fiqa), [LLukas22/scidocs](https://huggingface.co/datasets/LLukas22/scidocs), [deepset/germanquad](https://huggingface.co/datasets/deepset/germanquad), [LLukas22/nq](https://huggingface.co/datasets/LLukas22/nq).

## Usage (Sentence-Transformers)

Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('LLukas22/all-MiniLM-L12-v2-embedding-all')
embeddings = model.encode(sentences)
print(embeddings)
```

A sketch of how the resulting embeddings can be compared for sentence similarity is given at the end of this card.

## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2E-05
- per device batch size: 60
- effective batch size: 120
- seed: 42
- optimizer: AdamW with betas (0.9, 0.999) and eps 1E-08
- weight decay: 1E-02
- number of epochs: 4
- mixed_precision_training: bf16

## Training results

| Epoch | Train Loss | Validation Loss |
| ----- | ---------- | --------------- |
| 0     | 0.0655     | 0.055           |
| 1     | 0.0549     | 0.051           |
| 2     | 0.049      | 0.0481          |
| 3     | 0.0451     | 0.0471          |

## Evaluation results

| Epoch | top_1 | top_3 | top_5 | top_10 | top_25 |
| ----- | ----- | ----- | ----- | ------ | ------ |
| 0     | 0.537 | 0.697 | 0.753 | 0.812  | 0.867  |
| 1     | 0.538 | 0.699 | 0.755 | 0.814  | 0.872  |
| 2     | 0.544 | 0.705 | 0.761 | 0.818  | 0.876  |
| 3     | 0.544 | 0.703 | 0.759 | 0.817  | 0.874  |

## Framework versions

- Transformers: 4.25.1
- PyTorch: 1.13.0+cu116
- PyTorch Lightning: 1.8.6
- Datasets: 2.7.1
- Tokenizers: 0.13.1
- Sentence Transformers: 2.2.2

## Additional Information

This model was trained as part of my master's thesis, **"Evaluation of transformer based language models for use in service information systems"**. The source code is available on [GitHub](https://github.com/LLukas22/Master).
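
## Computing sentence similarity

The model is tagged for sentence similarity, so its embeddings are typically compared with cosine similarity. Below is a minimal sketch building on the usage example above; the sentences are purely illustrative, and `util.cos_sim` is the generic cosine-similarity helper shipped with sentence-transformers rather than anything specific to this model.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative sentences only; replace with your own texts
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('LLukas22/all-MiniLM-L12-v2-embedding-all')

# Encode the sentences into dense vectors, returned as a torch tensor
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities; scores[i][j] compares sentence i with sentence j
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```

Higher scores indicate more semantically similar sentences; the diagonal of the resulting matrix is always 1.0, since each sentence is compared with itself.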