ONNX export of the voxreality/src_ctx_and_term_nllb_600M model

Model inference example:

from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

model_path = 'voxreality/src_ctx_and_term_nllb_600M_onnx'

model = ORTModelForSeq2SeqLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

max_length = 100
src_lang = 'eng_Latn'
tgt_lang = 'deu_Latn'
context_text = 'This is an optional context sentence.'
target_term = 'text'
sentence_text = 'Text to be translated.'

# NLLB tokenizers prepend a source-language token, so set it explicitly.
tokenizer.src_lang = src_lang

# Model input: context, source sentence, and target term joined by the separator token.
input_text = f'{context_text} {tokenizer.sep_token} {sentence_text} {tokenizer.sep_token} {target_term}'

# Force the target-language token as the first generated token.
# (tokenizer.lang_code_to_id has been removed in recent transformers releases;
# convert_tokens_to_ids works across versions.)
forced_bos_token_id = tokenizer.convert_tokens_to_ids(tgt_lang)

output = model.generate(
    **tokenizer(input_text, return_tensors='pt'),
    forced_bos_token_id=forced_bos_token_id,
    max_length=max_length
)

output_text = tokenizer.batch_decode(output, skip_special_tokens=True)[0]

print(output_text)
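The f-string above joins the optional context, the sentence to translate, and the target term with the tokenizer's separator token. A minimal sketch of that formatting as a standalone helper (the name `build_input` and the hard-coded `'</s>'` separator, NLLB's default sep token, are assumptions for illustration; the real code should use `tokenizer.sep_token` as shown in the example):

```python
def build_input(context: str, sentence: str, term: str, sep: str = '</s>') -> str:
    """Join context, sentence, and target term with the separator token.

    Empty parts (e.g. no context sentence) are skipped so the separator
    is only placed between the fields that are actually present.
    """
    parts = [p for p in (context, sentence, term) if p]
    return f' {sep} '.join(parts)


print(build_input('This is an optional context sentence.',
                  'Text to be translated.',
                  'text'))
# This is an optional context sentence. </s> Text to be translated. </s> text
```

Dropping empty fields keeps the input well formed when no context or term is supplied, rather than leaving stray separator tokens in the prompt.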