
Quantization made by Richard Erkhov.


Llama3-8b-alpaca-v2 - bnb 4bits

Original model description:

```yaml
library_name: transformers
tags: []
```

Model Card for lainshower/Llama3-8b-alpaca-v2

Model Details

Fully fine-tuned Llama3-8B on the Stanford Alpaca dataset for 3 epochs.

Trained with BF16 mixed precision for stability.

> For the checkpoint with the best validation loss, see Llama3-8B-Alpaca-1EPOCHS (trained for 1 epoch).

Refer to the training graph below for further details.

Direct Use

[Templates]

You can use the following standard templates for running inference with the Llama3 Alpaca model:


```python
PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    ),
}
```

[Code]

[Model Loading]


```python
from transformers import LlamaForCausalLM, AutoTokenizer

# We recommend using float32 when running inference on the models.
model = LlamaForCausalLM.from_pretrained("lainshower/Llama3-8b-alpaca-v2")
tokenizer = AutoTokenizer.from_pretrained("lainshower/Llama3-8b-alpaca-v2")
```
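Since this repository hosts a bitsandbytes 4-bit quantization of the original model, an alternative is to load the weights directly in 4-bit. This is a sketch, assuming a recent `transformers` with `bitsandbytes` installed; the quantization settings shown are illustrative choices, not values from the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit config (assumes bitsandbytes is installed; the
# quant_type and compute dtype are assumptions, not from the card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "lainshower/Llama3-8b-alpaca-v2",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("lainshower/Llama3-8b-alpaca-v2")
```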

[Template]


```python
PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    ),
}

ann = {}
ann['instruction'] = '''You are presented with the quiz "What causes weather changes on Earth? " But you don't know the answer, so you turn to your teacher to ask for hints. He says that "the Earth being tilted on its rotating axis causes seasons" and "weather changes from season to season". So, what's the best answer to the question? Choose your answer from: (a). the sun's energy (b). The tilt in its rotating axis. (c). high temperature (d). Weather in space (e). Vertical movement (f). Greenhouse gases (g). Spinning backwards (h). wind and erosion Answer:'''
prompt = PROMPT_DICT["prompt_no_input"].format_map(ann)
'''
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
You are presented with the quiz "What causes weather changes on Earth? " But you don't know the answer, so you turn to your teacher to ask for hints. He says that "the Earth being tilted on its rotating axis causes seasons" and "weather changes from season to season". So, what's the best answer to the question? Choose your answer from: (a). the sun's energy (b). The tilt in its rotating axis. (c). high temperature (d). Weather in space (e). Vertical movement (f). Greenhouse gases (g). Spinning backwards (h). wind and erosion Answer:

### Response:
'''
```
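The card demonstrates only `prompt_no_input`. As a sketch (the `build_prompt` helper and the example values below are not part of the original card), the same dictionary can dispatch on whether an example carries a non-empty `input` field:

```python
# PROMPT_DICT as defined above (repeated so this snippet runs standalone).
PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    ),
}

def build_prompt(example):
    """Pick the template based on whether the example has a non-empty input."""
    if example.get("input"):
        return PROMPT_DICT["prompt_input"].format_map(example)
    return PROMPT_DICT["prompt_no_input"].format_map(example)

# Hypothetical example with an input field:
ann = {
    "instruction": "Summarize the following passage in one sentence.",
    "input": "The Earth's axial tilt causes the seasons.",
}
prompt = build_prompt(ann)
```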

[Generation]


```python
input_ids = tokenizer.batch_encode_plus([prompt], return_tensors="pt", padding=False)
total_sequences = model.generate(
    input_ids=input_ids['input_ids'].cuda(),
    attention_mask=input_ids['attention_mask'].cuda(),
    max_length=490,
    do_sample=True,
    top_p=0.9,
)
print(tokenizer.decode(total_sequences[0], skip_special_tokens=True))
```
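The decoded text contains the prompt followed by the completion. A common post-processing step (not shown in the original card; the helper name and sample string below are illustrative) is to keep only the text after the final `### Response:` marker:

```python
def extract_response(decoded: str) -> str:
    """Return only the model's answer from the full decoded sequence."""
    marker = "### Response:"
    # Split on the last occurrence, in case the marker text also appears
    # inside the instruction body.
    return decoded.rsplit(marker, 1)[-1].strip()

# Hypothetical decoded output:
decoded = "### Instruction:\nPick an answer.\n\n### Response:\n(b). The tilt in its rotating axis."
answer = extract_response(decoded)
```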

Training Hyperparameters

Training Graph

![Training graph](training_graph.png)

Model size: 4.65B params (Safetensors)

Tensor types: F16, F32, U8