# PubMedBERT BioNLI LoRA
PubMedBERT BioNLI LoRA is a biomedical Natural Language Inference (NLI) model fine-tuned with LoRA adapters.
It classifies the relationship between biomedical text pairs as entailment, contradiction, or neutral, and is optimized for validating chain-of-thought reasoning.
## Training Details
- Base model: pritamdeka/PubMedBERT-MNLI-MedNLI
- Fine-tuning datasets: BioASQ + MedNLI
- Objective: 3-class NLI (entailment / neutral / contradiction)
- Method: LoRA parameter-efficient fine-tuning (see the sketch after this list)
- Hardware: Apple MPS (Metal backend)
- Hyperparameters:
  - Epochs: 4
  - Learning rate: 1e-5
  - Batch size: 8
  - Max length: 256
  - Gradient accumulation: 2
  - Warmup ratio: 0.1
  - Label smoothing: 0.05
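For orientation, here is a minimal sketch of what the LoRA setup might look like with the `peft` library. The rank, alpha, dropout, and target modules below are illustrative assumptions; the card does not specify them.

```python
# Minimal LoRA fine-tuning sketch using the peft library.
# NOTE: r, lora_alpha, lora_dropout, and target_modules are assumed
# values for illustration; this card does not document them.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "pritamdeka/PubMedBERT-MNLI-MedNLI", num_labels=3
)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,         # sequence classification head
    r=8,                                # adapter rank (assumed)
    lora_alpha=16,                      # scaling factor (assumed)
    lora_dropout=0.1,                   # adapter dropout (assumed)
    target_modules=["query", "value"],  # BERT attention projections (assumed)
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only adapters + head are trainable
```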
## Results
| Metric    | Value  |
|-----------|--------|
| Accuracy  | 90.39% |
| Macro F1  | 0.9036 |
| Eval Loss | 0.2673 |
Probabilities are calibrated with isotonic regression (`calibration/isotonic.pkl`) for more reliable confidence estimates.
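As a sketch of how the calibrator could be applied, the snippet below assumes `calibration/isotonic.pkl` holds one fitted scikit-learn `IsotonicRegression` per class; the actual pickle format is not documented in this card.

```python
# Apply isotonic calibration to raw softmax probabilities.
# ASSUMPTION: the pickle contains a list of per-class fitted
# sklearn.isotonic.IsotonicRegression objects; the real format of
# calibration/isotonic.pkl may differ.
import pickle
import numpy as np

with open("calibration/isotonic.pkl", "rb") as f:
    calibrators = pickle.load(f)

def calibrate(probs: np.ndarray) -> np.ndarray:
    """Rescale each class column, then renormalize rows to sum to 1."""
    cal = np.column_stack(
        [calibrators[i].predict(probs[:, i]) for i in range(probs.shape[1])]
    )
    return cal / cal.sum(axis=1, keepdims=True)
```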
## Usage
### Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model = AutoModelForSequenceClassification.from_pretrained("Bam3752/PubMedBERT-BioNLI-LoRA")
tokenizer = AutoTokenizer.from_pretrained("Bam3752/PubMedBERT-BioNLI-LoRA")

premise = "Aspirin reduces the risk of myocardial infarction."
hypothesis = "Aspirin prevents heart attacks."

# Encode the premise/hypothesis pair and run a forward pass.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits.softmax(-1).cpu().numpy()
print(probs)  # [neutral, contradiction, entailment]
```
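To turn the probabilities into a label, take the argmax and look it up in the model config. The `id2label` mapping should match the ordering in the comment above, but it is safer to read it from the config than to hard-code it:

```python
# Map the argmax index back to a human-readable label.
# Check model.config.id2label rather than hard-coding the
# [neutral, contradiction, entailment] order.
pred = probs.argmax(-1).item()
print(model.config.id2label[pred])
```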