Upload folder using huggingface_hub
- README.md +9 -7
- model.safetensors +2 -2
README.md CHANGED
@@ -1,13 +1,13 @@
 ---
-
+base_model:
+- BlinkDL/rwkv-7-pile
 datasets:
 - EleutherAI/the_pile_deduplicated
 language:
 - en
+license: apache-2.0
 metrics:
 - accuracy
-base_model:
-- BlinkDL/rwkv-7-pile
 pipeline_tag: text-generation
 library_name: transformers
 ---
@@ -38,15 +38,17 @@ This is RWKV-7 model under flash-linear attention format.
 <!-- Provide the basic links for the model. -->
 
 - **Repository:** https://github.com/fla-org/flash-linear-attention ; https://github.com/BlinkDL/RWKV-LM
-- **Paper:** [RWKV
+- **Paper:** [RWKV: Parallelizable RNN with Transformer-level LLM Performance](https://huggingface.co/papers/2503.14456)
+- **Project Page:** [RWKV](https://huggingface.co/RWKV)
+
 
 ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-Install `flash-linear-attention`
+Install `flash-linear-attention` and the latest version of `transformers` before using this model:
 
 ```bash
-pip install
+pip install git+https://github.com/fla-org/flash-linear-attention
 pip install 'transformers>=4.48.0'
 ```
 
@@ -81,4 +83,4 @@ This model is trained on the Pile with a total of 332 billion tokens.
 ## FAQ
 Q: safetensors metadata is none.
 
-A: upgrade transformers to >=4.48.0: `pip install 'transformers>=4.48.0'`
+A: upgrade transformers to >=4.48.0: `pip install 'transformers>=4.48.0'`
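The updated README stops at the install commands, so as a minimal sketch (not part of this commit) the snippet below loads the model for text generation. The repo id is a placeholder, and the idea that importing `fla` registers the RWKV-7 classes with `transformers`' Auto loaders is our assumption, not something the README states.

```python
# Minimal usage sketch, assuming the two install commands above have been run.
import fla  # noqa: F401 -- assumption: importing fla registers RWKV-7 with transformers' Auto classes
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "fla-hub/rwkv7-pile"  # placeholder repo id; substitute this repository's actual id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The Pile is a large dataset", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```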
model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:7cf7acea0d2becc8b7c5f085b3fefb1e32bd0b3c6dee4e39fd2a3e419c9aee35
+size 842244712
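The Git LFS pointer stores only the blob's SHA-256 and byte size, so a local download can be checked against the new pointer with the standard library alone. A sketch, assuming the fully downloaded weights (not the pointer file itself) sit in the current directory:

```python
# Sketch: verify a downloaded model.safetensors against the LFS pointer above.
import hashlib
import os

expected_oid = "7cf7acea0d2becc8b7c5f085b3fefb1e32bd0b3c6dee4e39fd2a3e419c9aee35"
expected_size = 842244712

path = "model.safetensors"  # assumption: the real weights, not the LFS pointer text
assert os.path.getsize(path) == expected_size, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("pointer matches:", h.hexdigest())
```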