SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
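
The three modules above amount to running BertModel, mean-pooling the token embeddings over non-padding tokens, and L2-normalizing the result. The sketch below reproduces that pipeline with plain transformers calls so the module list is concrete; it assumes the repository exposes the underlying BertModel weights in the usual sentence-transformers layout, and the example sentence is made up.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Lysandrec/MNLP_M2_document_encoder")
bert = AutoModel.from_pretrained("Lysandrec/MNLP_M2_document_encoder")

# (0) Transformer: tokenize and run BertModel (max_seq_length=384)
encoded = tokenizer(["An example sentence."], padding=True, truncation=True,
                    max_length=384, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling: mean over non-padding tokens (pooling_mode_mean_tokens=True)
mask = encoded["attention_mask"].unsqueeze(-1).float()
mean_pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit-length vectors, so dot product equals cosine similarity
sentence_embedding = F.normalize(mean_pooled, p=2, dim=1)  # (batch, 384)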

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Lysandrec/MNLP_M2_document_encoder")
# Run inference
sentences = [
    'For some constant $b$, if the minimum value of \\[f(x)=\\dfrac{x^2-2x+b}{x^2+2x+b}\\] is $\\tfrac12$, what is the maximum value of $f(x)$?',
    "<page_title> Second degree polynomial </page_title> <path> Quadratic_function > Graph of the univariate function > Vertex > Maximum and minimum points </path> <section_title> Maximum and minimum points </section_title> <content> Using calculus, the vertex point, being a maximum or minimum of the function, can be obtained by finding the roots of the derivative: f ( x ) = a x 2 + b x + c ⇒ f ′ ( x ) = 2 a x + b {\\displaystyle f(x)=ax^{2}+bx+c\\quad \\Rightarrow \\quad f'(x)=2ax+b} x is a root of f '(x) if f '(x) = 0 resulting in x = − b 2 a {\\displaystyle x=-{\\frac {b}{2a}}} with the corresponding function value f ( x ) = a ( − b 2 a ) 2 + b ( − b 2 a ) + c = c − b 2 4 a , {\\displaystyle f(x)=a\\left(-{\\frac {b}{2a}}\\right)^{2}+b\\left(-{\\frac {b}{2a}}\\right)+c=c-{\\frac {b^{2}}{4a}},} so again the vertex point coordinates, (h, k), can be expressed as ( − b 2 a , c − b 2 4 a ) . {\\displaystyle \\left(-{\\frac {b}{2a}},c-{\\frac {b^{2}}{4a}}\\right).} </content>",
    '<page_title> Lagrangian multiplier </page_title> <path> Lagrange_multiplier > Examples > Example 1 </path> <section_title> Example 1 </section_title> <content> Evaluating the objective function f at these points yields f ( 2 2 , 2 2 ) = 2 , f ( − 2 2 , − 2 2 ) = − 2 . {\\displaystyle f\\left({\\tfrac {\\sqrt {2\\ }}{2}},{\\tfrac {\\sqrt {2\\ }}{2}}\\right)={\\sqrt {2\\ }}\\ ,\\qquad f\\left(-{\\tfrac {\\sqrt {2\\ }}{2}},-{\\tfrac {\\sqrt {2\\ }}{2}}\\right)=-{\\sqrt {2\\ }}~.} Thus the constrained maximum is 2 {\\displaystyle \\ {\\sqrt {2\\ }}\\ } and the constrained minimum is − 2 {\\displaystyle -{\\sqrt {2}}} . </content>',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
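
Because the model is intended as a document encoder for retrieval, the same API can also rank candidate passages for a query. A minimal sketch follows; the query and passages are made up for illustration.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Lysandrec/MNLP_M2_document_encoder")

query = "What is the vertex of a quadratic function?"  # made-up query
passages = [
    "<content> The vertex of a parabola is its maximum or minimum point. </content>",  # made-up passage
    "<content> Astatine is a chemical element with the symbol At. </content>",          # made-up passage
]

query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)

# Cosine similarity between the query and each passage (higher means more relevant)
scores = model.similarity(query_embedding, passage_embeddings)  # shape [1, 2]
best_passage = passages[scores.argmax().item()]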

Training Details

Training Dataset

Unnamed Dataset

This training dataset was synthetically generated. For each question from the source Q/A dataset (Lysandrec/MNLP_M2_rag_dataset), relevant passages were retrieved from a large document corpus (Lysandrec/MNLP_M2_rag_documents).

  • A positive_passage was identified from the retrieved candidates, typically one containing the answer to the question. If no definitive positive was found, the top retrieved passage was often selected.
  • Hard_negative_passages were selected from other highly-ranked (but not positive) retrieved documents for the same question.
  • Random_negative_passages were sampled from the broader document corpus, ensuring they differed from the selected positive and hard negative passages. This process produced (query, positive_passage, negative_passage) triplets used for training; an illustrative sketch of the mining procedure appears after the loss description below.
  • Size: 100,000 training samples
  • Columns: query (a question), positive_passage (a relevant retrieved document), and negative_passage (an irrelevant or less relevant retrieved document)
  • Approximate statistics based on the first 1000 samples:
    • query (string): min 13 tokens, mean 64.12 tokens, max 214 tokens
    • positive_passage (string): min 64 tokens, mean 205.95 tokens, max 384 tokens
    • negative_passage (string): min 63 tokens, mean 178.5 tokens, max 384 tokens
  • Samples (each sample lists its query, positive_passage, and negative_passage):

    Sample 1
    query: The average of first five prime numbers greater than 61 is?<br>A. A)32.2<br>B. B)32.98<br>C. C)74.6<br>D. D)32.8<br>E. E)32.4
    positive_passage: <page_title> 61 (number) </page_title> <path> 61_(number) > In mathematics </path> <section_title> In mathematics </section_title> <content> 61 is: the 18th prime number. a twin prime with 59. a cuban prime of the form p = x3 − y3/x − y, where x = y + 1. the smallest proper prime, a prime p which ends in the digit 1 in base 10 and whose reciprocal in base 10 has a repeating sequence with length p − 1. In such primes, each digit 0, 1, ..., 9 appears in the repeating sequence the same number of times as does each other digit (namely, p − 1/10 times). </content>
    negative_passage: <page_title> Astatine </page_title> <path> Element_85 > Characteristics > Chemical </path> <section_title> Chemical </section_title> <content> In comparison, the value of Cl (349) is 6.4% higher than F (328); Br (325) is 6.9% less than Cl; and I (295) is 9.2% less than Br. The marked reduction for At was predicted as being due to spin–orbit interactions. The first ionization energy of astatine is about 899 kJ mol−1, which continues the trend of decreasing first ionization energies down the halogen group (fluorine, 1681; chlorine, 1251; bromine, 1140; iodine, 1008). </content>

    Sample 2
    query: A charitable association sold an average of 66 raffle tickets per member. Among the female members, the average was 70 raffle tickets. The male to female ratio of the association is 1:2. What was the average number E of tickets sold by the male members of the association<br>A. A)50<br>B. B)56<br>C. C)58<br>D. D)62<br>E. E)66
    positive_passage: <page_title> RSA number </page_title> <path> RSA_numbers </path> <section_title> Summary </section_title> <content> Cash prizes of varying size, up to US$200,000 (and prizes up to $20,000 awarded), were offered for factorization of some of them. The smallest RSA number was factored in a few days. Most of the numbers have still not been factored and many of them are expected to remain unfactored for many years to come. </content>
    negative_passage: <page_title> Peer learning </page_title> <path> Peer_learning > Connections with other practices > Connectivism </path> <section_title> Connectivism </section_title> <content> Yochai Benkler explains how the now-ubiquitous computer helps us produce and process knowledge together with others in his book, The Wealth of Networks. George Siemens argues in Connectivism: A Learning Theory for the Digital Age, that technology has changed the way we learn, explaining how it tends to complicate or expose the limitations of the learning theories of the past. In practice, the ideas of connectivism developed in and alongside the then-new social formation, "massive open online courses" or MOOCs. Connectivism proposes that the knowledge we can access by virtue of our connections with others is just as valuable as the information carried inside our minds. </content>

    Sample 3
    query: Find prime numbers \(a, b, c, d, e\) such that \(a^4 + b^4 + c^4 + d^4 + e^4 = abcde\).
    positive_passage: <page_title> Pythagorean triangle </page_title> <path> Primitive_Pythagorean_triple > Special cases and related equations > The Jacobi–Madden equation </path> <section_title> The Jacobi–Madden equation </section_title> <content> The equation, a 4 + b 4 + c 4 + d 4 = ( a + b + c + d ) 4 {\displaystyle a^{4}+b^{4}+c^{4}+d^{4}=(a+b+c+d)^{4}} is equivalent to the special Pythagorean triple, ( a 2 + a b + b 2 ) 2 + ( c 2 + c d + d 2 ) 2 = ( ( a + b ) 2 + ( a + b ) ( c + d ) + ( c + d ) 2 ) 2 {\displaystyle (a^{2}+ab+b^{2})^{2}+(c^{2}+cd+d^{2})^{2}=((a+b)^{2}+(a+b)(c+d)+(c+d)^{2})^{2}} There is an infinite number of solutions to this equation as solving for the variables involves an elliptic curve. Small ones are, a , b , c , d = − 2634 , 955 , 1770 , 5400 {\displaystyle a,b,c,d=-2634,955,1770,5400} a , b , c , d = − 31764 , 7590 , 27385 , 48150 {\displaystyle a,b,c,d=-31764,7590,27385,48150} </content>
    negative_passage: Pythagorean triple Descartes' Circle Theorem For the case of Descartes' circle theorem where all variables are squares, 2 ( a 4 + b 4 + c 4 + d 4 ) = ( a 2 + b 2 + c 2 + d 2 ) 2 {\displaystyle 2(a^{4}+b^{4}+c^{4}+d^{4})=(a^{2}+b^{2}+c^{2}+d^{2})^{2}} Euler showed this is equivalent to three simultaneous Pythagorean triples, ( 2 a b ) 2 + ( 2 c d ) 2 = ( a 2 + b 2 − c 2 − d 2 ) 2 {\displaystyle (2ab)^{2}+(2cd)^{2}=(a^{2}+b^{2}-c^{2}-d^{2})^{2}} ( 2 a c ) 2 + ( 2 b d ) 2 = ( a 2 − b 2 + c 2 − d 2 ) 2 {\displaystyle (2ac)^{2}+(2bd)^{2}=(a^{2}-b^{2}+c^{2}-d^{2})^{2}} ( 2 a d ) 2 + ( 2 b c ) 2 = ( a 2 − b 2 − c 2 + d 2 ) 2 {\displaystyle (2ad)^{2}+(2bc)^{2}=(a^{2}-b^{2}-c^{2}+d^{2})^{2}} There is also an infinite number of solutions, and for the special case when a + b = c {\displaystyle a+b=c} , then the equation simplifi...
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
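
The exact mining code is not included in this card; the following is only an illustrative sketch of the triplet construction described above. The retrieve(query, top_k) and contains_answer(passage, answer) helpers are hypothetical stand-ins for the retrieval system over Lysandrec/MNLP_M2_rag_documents and the positive-selection heuristic.

import random

def build_triplets(qa_pairs, corpus, retrieve, contains_answer, top_k=20):
    """Illustrative triplet mining, not the card's actual pipeline.

    qa_pairs: list of (question, answer) pairs from the source Q/A dataset
    corpus: list of all candidate passages
    retrieve: hypothetical function (query, top_k) -> ranked list of passages
    contains_answer: hypothetical function (passage, answer) -> bool
    """
    triplets = []
    for question, answer in qa_pairs:
        candidates = retrieve(question, top_k)

        # Positive: a retrieved passage containing the answer, else the top-ranked hit
        positives = [p for p in candidates if contains_answer(p, answer)]
        positive = positives[0] if positives else candidates[0]

        # Hard negatives: other highly-ranked candidates that are not the positive
        hard_negatives = [p for p in candidates if p != positive]

        # Random negatives: sampled from the corpus, distinct from the passages above
        pool = [p for p in corpus if p != positive and p not in hard_negatives]
        random_negatives = random.sample(pool, k=min(2, len(pool)))

        for negative in hard_negatives[:2] + random_negatives:
            triplets.append({"query": question,
                             "positive_passage": positive,
                             "negative_passage": negative})
    return triplets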
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
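
A minimal sketch of how the triplet columns, the TripletLoss configuration reported above, and these non-default hyperparameters fit together with the SentenceTransformerTrainer API. The triplet rows and the output directory are placeholders; the actual training script is not part of this card.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric
from sentence_transformers.training_args import MultiDatasetBatchSamplers

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder rows; the real dataset has 100,000 (query, positive_passage, negative_passage) triplets
train_dataset = Dataset.from_dict({
    "query": ["example question"],
    "positive_passage": ["relevant passage"],
    "negative_passage": ["irrelevant passage"],
})

# TripletLoss with the parameters reported in the Training Dataset section
loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

# Non-default hyperparameters from this card; everything else stays at its default
args = SentenceTransformerTrainingArguments(
    output_dir="MNLP_M2_document_encoder",  # placeholder output directory
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=1,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()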

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.3199 500 4.0855
0.6398 1000 3.9274
0.9597 1500 3.9199

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}