
Model

This is a sample fine-tuned model produced as part of the LLMPot research project; it is described in more detail in the related research manuscript.

How to Use

This model is a fine-tuned version of google/byt5-small for Modbus protocol emulation.

Make sure you have transformers and torch installed:

pip install transformers torch

Load the model and run a single inference.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

# Load the fine-tuned ByT5 checkpoint and its byte-level tokenizer
tokenizer = AutoTokenizer.from_pretrained("cv43/llmpot")
model = AutoModelForSeq2SeqLM.from_pretrained("cv43/llmpot")

pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, framework="pt")

# A Modbus TCP request encoded as a hex string
request = "02b10000000b00100000000204ffffffff"
result = pipe(request)
print(f"Request: {request}, Response: {result[0]['generated_text']}")
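The request string above looks like a raw Modbus TCP frame in hexadecimal. As a sanity check on what the model is being asked to emulate, the sketch below decodes it with Python's standard library only; the field layout follows the standard Modbus TCP MBAP header and is not specific to this model:

```python
import struct

def parse_modbus_tcp(hex_frame: str) -> dict:
    """Decode a Modbus TCP frame (MBAP header + PDU) from a hex string."""
    raw = bytes.fromhex(hex_frame)
    # MBAP header: transaction id, protocol id, length (big-endian u16s), unit id (u8)
    tid, proto, length, unit = struct.unpack(">HHHB", raw[:7])
    function_code = raw[7]   # e.g. 0x10 = Write Multiple Registers
    data = raw[8:]
    return {
        "transaction_id": tid,
        "protocol_id": proto,
        "length": length,          # counts the unit id plus the PDU bytes
        "unit_id": unit,
        "function_code": function_code,
        "data": data.hex(),
    }

frame = parse_modbus_tcp("02b10000000b00100000000204ffffffff")
print(frame)
```

For the example request this yields function code 0x10 (Write Multiple Registers) with a length field of 11, consistent with the one-byte unit id plus a ten-byte PDU.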

Alternatively, you can use our Space application, where the model runs in the cloud.
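Going the other way, you can assemble request strings for the model from individual Modbus fields. A minimal sketch using only the standard library; the helper name is our own, and the layout mirrors the example request above (function 0x10, Write Multiple Registers):

```python
import struct

def build_write_registers_request(transaction_id: int, unit_id: int,
                                  start_addr: int, values: list[int]) -> str:
    """Build a Modbus TCP Write Multiple Registers (0x10) request as a hex string."""
    byte_count = 2 * len(values)
    # PDU: function code, starting address, register count, byte count, register values
    pdu = struct.pack(">BHHB", 0x10, start_addr, len(values), byte_count)
    pdu += b"".join(struct.pack(">H", v) for v in values)
    # MBAP header: the length field counts the unit id plus the PDU bytes
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, 1 + len(pdu), unit_id)
    return (mbap + pdu).hex()

request = build_write_registers_request(0x02B1, 0, 0x0000, [0xFFFF, 0xFFFF])
print(request)  # reproduces the example request string above
```

With these arguments the helper reproduces the example request string exactly, so the same function can be used to generate further test inputs for the pipeline.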

Downloads last month: 102
Model size: 300M parameters (Safetensors, F32 tensors)
