
zenml/finetuned-all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
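
Because the final Normalize() module L2-normalizes every embedding, cosine similarity reduces to a plain dot product. A minimal sketch illustrating this (the input sentences are arbitrary examples):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("zenml/finetuned-all-MiniLM-L6-v2")
emb = model.encode(["first example sentence", "second example sentence"])

# Embeddings come out unit-length thanks to the Normalize() module ...
print(np.allclose(np.linalg.norm(emb, axis=1), 1.0))  # True

# ... so the dot product of two embeddings is their cosine similarity
print(emb @ emb.T)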

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("zenml/finetuned-all-MiniLM-L6-v2")
# Run inference
sentences = [
    'How do I register and connect an S3 artifact store in ZenML using the interactive mode?',
    "hich Resource Name to use in the interactive mode:zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles\n\nzenml service-connector list-resources --resource-type s3-bucket --resource-id s3://zenfiles\n\nzenml artifact-store connect s3-zenfiles --connector aws-multi-type\n\nExample Command Output\n\n$ zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles\n\nRunning with active workspace: 'default' (global)\n\nRunning with active stack: 'default' (global)\n\nSuccessfully registered artifact_store `s3-zenfiles`.\n\n$ zenml service-connector list-resources --resource-type s3-bucket --resource-id zenfiles\n\nThe  's3-bucket' resource with name 'zenfiles' can be accessed by service connectors configured in your workspace:\n\n┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓\n\n┃             CONNECTOR ID             │ CONNECTOR NAME       │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃\n\n┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────┼────────────────┨\n\n┃ 4a550c82-aa64-4a48-9c7f-d5e127d77a44 │ aws-multi-type       │ 🔶 aws         │ 📦 s3-bucket  │ s3://zenfiles  ┃\n\n┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────┼────────────────┨\n\n┃ 66c0922d-db84-4e2c-9044-c13ce1611613 │ aws-multi-instance   │ 🔶 aws         │ 📦 s3-bucket  │ s3://zenfiles  ┃\n\n┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────┼────────────────┨\n\n┃ 65c82e59-cba0-4a01-b8f6-d75e8a1d0f55 │ aws-single-instance  │ 🔶 aws         │ 📦 s3-bucket  │ s3://zenfiles  ┃\n\n┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛\n\n$ zenml artifact-store connect s3-zenfiles --connector aws-multi-type\n\nRunning with active workspace: 'default' (global)\n\nRunning with active stack: 'default' (global)\n\nSuccessfully connected artifact store `s3-zenfiles` to the following resources:",
    "Azure Container Registry\n\nStoring container images in Azure.\n\nThe Azure container registry is a container registry flavor that comes built-in with ZenML and uses the Azure Container Registry to store container images.\n\nWhen to use it\n\nYou should use the Azure container registry if:\n\none or more components of your stack need to pull or push container images.\n\nyou have access to Azure. If you're not using Azure, take a look at the other container registry flavors.\n\nHow to deploy it\n\nGo here and choose a subscription, resource group, location, and registry name. Then click on Review + Create and to create your container registry.\n\nHow to find the registry URI\n\nThe Azure container registry URI should have the following format:\n\n<REGISTRY_NAME>.azurecr.io\n\n# Examples:\n\nzenmlregistry.azurecr.io\n\nmyregistry.azurecr.io\n\nTo figure out the URI for your registry:\n\nGo to the Azure portal.\n\nIn the search bar, enter container registries and select the container registry you want to use. If you don't have any container registries yet, check out the deployment section on how to create one.\n\nUse the name of your registry to fill the template <REGISTRY_NAME>.azurecr.io and get your URI.\n\nHow to use it\n\nTo use the Azure container registry, we need:\n\nDocker installed and running.\n\nThe registry URI. Check out the previous section on the URI format and how to get the URI for your registry.\n\nWe can then register the container registry and use it in our active stack:\n\nzenml container-registry register <NAME> \\\n\n--flavor=azure \\\n\n--uri=<REGISTRY_URI>\n\n# Add the container registry to the active stack\n\nzenml stack update -c <NAME>\n\nYou also need to set up authentication required to log in to the container registry.\n\nAuthentication Methods",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
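
Because the model was trained with MatryoshkaLoss over the dimensions 384, 256, 128, and 64 (see Training Details), you can trade a little accuracy for faster search by truncating embeddings to one of the smaller dimensions. A minimal sketch using the truncate_dim option of Sentence Transformers:

from sentence_transformers import SentenceTransformer

# Load the model with embeddings truncated to 256 of the 384 dimensions
model = SentenceTransformer("zenml/finetuned-all-MiniLM-L6-v2", truncate_dim=256)

embeddings = model.encode(["How do I register an S3 artifact store?"])
print(embeddings.shape)
# (1, 256)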

Evaluation

Metrics

Information Retrieval (dim_384)

Metric Value
cosine_accuracy@1 0.3133
cosine_accuracy@3 0.6145
cosine_accuracy@5 0.7169
cosine_accuracy@10 0.7892
cosine_precision@1 0.3133
cosine_precision@3 0.2048
cosine_precision@5 0.1434
cosine_precision@10 0.0789
cosine_recall@1 0.3133
cosine_recall@3 0.6145
cosine_recall@5 0.7169
cosine_recall@10 0.7892
cosine_ndcg@10 0.5579
cosine_mrr@10 0.4829
cosine_map@100 0.4907

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.2892
cosine_accuracy@3 0.6145
cosine_accuracy@5 0.7108
cosine_accuracy@10 0.7651
cosine_precision@1 0.2892
cosine_precision@3 0.2048
cosine_precision@5 0.1422
cosine_precision@10 0.0765
cosine_recall@1 0.2892
cosine_recall@3 0.6145
cosine_recall@5 0.7108
cosine_recall@10 0.7651
cosine_ndcg@10 0.5394
cosine_mrr@10 0.4655
cosine_map@100 0.4739

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.2831
cosine_accuracy@3 0.5482
cosine_accuracy@5 0.6506
cosine_accuracy@10 0.7169
cosine_precision@1 0.2831
cosine_precision@3 0.1827
cosine_precision@5 0.1301
cosine_precision@10 0.0717
cosine_recall@1 0.2831
cosine_recall@3 0.5482
cosine_recall@5 0.6506
cosine_recall@10 0.7169
cosine_ndcg@10 0.5068
cosine_mrr@10 0.4386
cosine_map@100 0.4479

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.241
cosine_accuracy@3 0.4699
cosine_accuracy@5 0.5843
cosine_accuracy@10 0.6807
cosine_precision@1 0.241
cosine_precision@3 0.1566
cosine_precision@5 0.1169
cosine_precision@10 0.0681
cosine_recall@1 0.241
cosine_recall@3 0.4699
cosine_recall@5 0.5843
cosine_recall@10 0.6807
cosine_ndcg@10 0.4531
cosine_mrr@10 0.3807
cosine_map@100 0.3891
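
The four blocks above report the same retrieval metrics at each Matryoshka dimension (384, 256, 128, and 64; their cosine_map@100 values match the corresponding columns in the training logs below). Metrics of this kind are typically produced with the InformationRetrievalEvaluator from Sentence Transformers. A hedged sketch, where queries, corpus, and relevant_docs are hypothetical stand-ins for an evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("zenml/finetuned-all-MiniLM-L6-v2")

# Hypothetical evaluation split: query/corpus ids mapped to texts, plus relevance labels
queries = {"q1": "How do I register an S3 artifact store in ZenML?"}
corpus = {
    "d1": "zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles",
    "d2": "The Azure container registry is a container registry flavor built into ZenML.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_384")
results = evaluator(model)  # dict of metrics such as "dim_384_cosine_map@100"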

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,490 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
            positive              anchor
    type    string                string
    min     9 tokens              23 tokens
    mean    21.23 tokens          237.64 tokens
    max     49 tokens             256 tokens
  • Samples:
    Sample 1
    positive: How can you leverage MLflow for tracking and visualizing experiment results in ZenML?
    anchor: MLflow

    Logging and visualizing experiments with MLflow.

    The MLflow Experiment Tracker is an Experiment Tracker flavor provided with the MLflow ZenML integration that uses the MLflow tracking service to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).

    When would you want to use it?

    MLflow Tracking is a very popular tool that you would normally use in the iterative ML experimentation phase to track and visualize experiment results. That doesn't mean that it cannot be repurposed to track and visualize the results produced by your automated pipeline runs, as you make the transition toward a more production-oriented workflow.

    You should use the MLflow Experiment Tracker:

    if you have already been using MLflow to track experiment results for your project and would like to continue doing so as you are incorporating MLOps workflows and best practices in your project through ZenML.

    if you are looking for a more visually interactive way of navigating the results produced from your ZenML pipeline runs (e.g. models, metrics, datasets)

    if you or your team already have a shared MLflow Tracking service deployed somewhere on-premise or in the cloud, and you would like to connect ZenML to it to share the artifacts and metrics logged by your pipelines

    You should consider one of the other Experiment Tracker flavors if you have never worked with MLflow before and would rather use another experiment tracking tool that you are more familiar with.

    How do you deploy it?

    The MLflow Experiment Tracker flavor is provided by the MLflow ZenML integration, you need to install it on your local machine to be able to register an MLflow Experiment Tracker and add it to your stack:

    zenml integration install mlflow -y

    The MLflow Experiment Tracker can be configured to accommodate the following MLflow deployment scenarios:
    Sample 2
    positive: What are the required integrations for running pipelines with a Docker-based orchestrator in ZenML?
    anchor: ctivated by installing the respective integration:Integration Materializer Handled Data Types Storage Format bentoml BentoMaterializer bentoml.Bento .bento deepchecks DeepchecksResultMateriailzer deepchecks.CheckResult , deepchecks.SuiteResult .json evidently EvidentlyProfileMaterializer evidently.Profile .json great_expectations GreatExpectationsMaterializer great_expectations.ExpectationSuite , great_expectations.CheckpointResult .json huggingface HFDatasetMaterializer datasets.Dataset , datasets.DatasetDict Directory huggingface HFPTModelMaterializer transformers.PreTrainedModel Directory huggingface HFTFModelMaterializer transformers.TFPreTrainedModel Directory huggingface HFTokenizerMaterializer transformers.PreTrainedTokenizerBase Directory lightgbm LightGBMBoosterMaterializer lgbm.Booster .txt lightgbm LightGBMDatasetMaterializer lgbm.Dataset .binary neural_prophet NeuralProphetMaterializer NeuralProphet .pt pillow PillowImageMaterializer Pillow.Image .PNG polars PolarsMaterializer pl.DataFrame , pl.Series .parquet pycaret PyCaretMaterializer Any sklearn , xgboost , lightgbm or catboost model .pkl pytorch PyTorchDataLoaderMaterializer torch.Dataset , torch.DataLoader .pt pytorch PyTorchModuleMaterializer torch.Module .pt scipy SparseMaterializer scipy.spmatrix .npz spark SparkDataFrameMaterializer pyspark.DataFrame .parquet spark SparkModelMaterializer pyspark.Transformer pyspark.Estimator tensorflow KerasMaterializer tf.keras.Model Directory tensorflow TensorflowDatasetMaterializer tf.Dataset Directory whylogs WhylogsMaterializer whylogs.DatasetProfileView .pb xgboost XgboostBoosterMaterializer xgb.Booster .json xgboost XgboostDMatrixMaterializer xgb.DMatrix .binary

    If you are running pipelines with a Docker-based orchestrator, you need to specify the corresponding integration as required_integrations in the DockerSettings of your pipeline in order to have the integration materializer available inside your Docker container. See the pipeline configuration documentation for more information.
    Sample 3
    positive: What is the difference between the stack component settings at registration time and runtime for ZenML?
    anchor: ettings to specify AzureML step operator settings.Difference between stack component settings at registration-time vs real-time

    For stack-component-specific settings, you might be wondering what the difference is between these and the configuration passed in while doing zenml stack-component register --config1=configvalue --config2=configvalue, etc. The answer is that the configuration passed in at registration time is static and fixed throughout all pipeline runs, while the settings can change.

    A good example of this is the MLflow Experiment Tracker, where configuration which remains static such as the tracking_url is sent through at registration time, while runtime configuration such as the experiment_name (which might change every pipeline run) is sent through as runtime settings.

    Even though settings can be overridden at runtime, you can also specify default values for settings while configuring a stack component. For example, you could set a default value for the nested setting of your MLflow experiment tracker: zenml experiment-tracker register --flavor=mlflow --nested=True

    This means that all pipelines that run using this experiment tracker use nested MLflow runs unless overridden by specifying settings for the pipeline at runtime.

    Using the right key for Stack-component-specific settings

    When specifying stack-component-specific settings, a key needs to be passed. This key should always correspond to the pattern: .

    For example, the SagemakerStepOperator supports passing in estimator_args. The way to specify this would be to use the key step_operator.sagemaker

    @step(step_operator="nameofstepoperator", settings= {"step_operator.sagemaker": {"estimator_args": {"instance_type": "m7g.medium"}}})

    def my_step():

    ...

    # Using the class

    @step(step_operator="nameofstepoperator", settings= {"step_operator.sagemaker": SagemakerStepOperatorSettings(instance_type="m7g.medium")})

    def my_step():

    ...

    or in YAML:

    steps:

    my_step:
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            384,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
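
In code, this configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A minimal sketch of how such a loss is constructed (the base model name is taken from this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Train the same embedding at several truncated sizes at once
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[384, 256, 128, 64])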
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
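
A hedged reconstruction of these non-default values as SentenceTransformerTrainingArguments (output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Placeholder output_dir; all other values mirror the list above
args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-all-MiniLM-L6-v2",
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)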

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: True
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
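
Tying the pieces above together, a fine-tuning run of this kind is typically driven by the SentenceTransformerTrainer. A minimal sketch, assuming `args` and `loss` are the objects sketched in the sections above, with a one-row hypothetical stand-in for the unnamed 1,490-sample dataset:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical stand-in for the dataset with "positive" and "anchor" columns
train_dataset = Dataset.from_dict({
    "positive": ["How do I register an S3 artifact store in ZenML?"],
    "anchor": ["zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles"],
})

# With eval_strategy="epoch" you would also pass an eval_dataset or evaluator
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()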

Training Logs

Epoch    Step   dim_128_cosine_map@100   dim_256_cosine_map@100   dim_384_cosine_map@100   dim_64_cosine_map@100
0.6667   1      0.4153                   0.4312                   0.4460                   0.3779
2.0      3      0.4465                   0.4643                   0.4824                   0.3832
2.6667   4      0.4479                   0.4739                   0.4907                   0.3891
  • The last row (epoch 2.6667, step 4) denotes the saved checkpoint; its cosine_map@100 values match the Evaluation metrics reported above.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}