Update README.md

base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- legal
pipeline_tag: text-generation
---

This is a fine-tuned version of the **Llama3.1-8B-Instruct** model, adapted for answering questions about legislation in Latvia. The model was fine-tuned on a [dataset](http://hdl.handle.net/20.500.12574/130) of ~15 thousand question–answer pairs sourced from the [LVportals.lv](https://lvportals.lv/e-konsultacijas) archive.
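
For orientation, here is a minimal usage sketch with the Hugging Face `transformers` library. The repo id below is a placeholder, not the model's actual name, and the Latvian prompt is only an example:

```python
# Minimal usage sketch; the repo id is a placeholder, substitute the real one.
# Requires transformers and accelerate (for device_map="auto").
from transformers import pipeline

MODEL_ID = "your-org/Llama-3.1-8B-Instruct-lv-legal"  # hypothetical repo id

pipe = pipeline("text-generation", model=MODEL_ID, device_map="auto")

messages = [
    # "What is the minimum annual leave duration in Latvia?"
    {"role": "user", "content": "Kāds ir minimālais ikgadējā atvaļinājuma ilgums Latvijā?"},
]
# max_new_tokens bounds the response length (see the note on num_predict below).
result = pipe(messages, max_new_tokens=512)
print(result[0]["generated_text"][-1]["content"])
```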

Quantized versions of the model are available for use with Ollama and other local LLM runtime environments that support the GGUF format.
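
As a rough sketch of running a quantized build locally with Ollama (the GGUF filename and the model name `lv-legal` are placeholders for whichever quantized file you download):

```
# Modelfile (minimal sketch; the GGUF filename is a placeholder)
FROM ./Llama-3.1-8B-Instruct-lv-legal-Q4_K_M.gguf
```

```bash
ollama create lv-legal -f Modelfile
ollama run lv-legal "Kā tiek aprēķināts nekustamā īpašuma nodoklis?"
```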

The data preparation, fine-tuning process, and comprehensive evaluation are described in more detail in:

**Note**:

The model may occasionally generate overly long responses. To prevent this, it is recommended to set the `num_predict` parameter to limit the number of tokens generated, either in your Python code or in the `Modelfile`, depending on how the model is run.
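
For illustration, both ways of capping output with `num_predict` (the model name `lv-legal` matches the placeholder used above):

```
# Modelfile (sketch): add a default cap to the Modelfile shown earlier
PARAMETER num_predict 512
```

```python
# Python sketch using the official `ollama` client (pip install ollama).
# "lv-legal" is the placeholder model name from the Modelfile example above.
import ollama

response = ollama.chat(
    model="lv-legal",
    # "In what cases may an employer terminate an employment contract?"
    messages=[{"role": "user", "content": "Kādos gadījumos darba devējs drīkst uzteikt darba līgumu?"}],
    options={"num_predict": 512},  # hard cap on the number of generated tokens
)
print(response["message"]["content"])
```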