Update README.md
README.md (CHANGED)
@@ -11,7 +11,10 @@ pipeline_tag: text-generation
 base_model: tiiuae/falcon-180B
 ---
 
-
+
+
+
+# Falcon-180B-Instruct-v0.1
 
 This instruction model was built via parameter-efficient QLoRA finetuning of [falcon-180b](https://huggingface.co/tiiuae/falcon-180B) on the first 5k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the first 5k rows of [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Finetuning was executed on 4x A6000s (48 GB RTX) for roughly 32 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
 
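The finetuning script itself is not part of this diff. As a rough, illustrative sketch of what a parameter-efficient QLoRA setup for this base model could look like (assuming the `transformers`, `peft`, and `bitsandbytes` libraries; the LoRA rank, dropout, target modules, and other hyperparameters below are placeholders, not the values actually used for this card):

```python
# Illustrative QLoRA setup sketch -- hyperparameters are assumptions, not the card's values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "tiiuae/falcon-180B"

# Load the frozen base model in 4-bit NF4 precision, the quantization scheme QLoRA uses.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# Attach small trainable LoRA adapters on top of the quantized, frozen weights.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                                # adapter rank (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The appeal of QLoRA here is that the 180B base weights stay frozen in 4-bit and only the small adapter matrices are trained, which is what makes finetuning on 4x 48 GB GPUs practical at all.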