---
language:
- en
---

### RWKV EagleX 7B v2 Model

> **Important: This model is not meant to be used directly with the Hugging Face `transformers` library.**
> [Use the Hugging Face variant instead, found here (v5-EagleX-v2-7B-HF)](https://huggingface.co/RWKV/v5-EagleX-v2-7B-HF)
>
> The following is the raw representation of the EagleX 7B v2 model, for use with our own set of trainers.
> This is not an instruction-tuned model! (coming soon...)

## Using with the Hugging Face library

[See the Hugging Face version here (v5-EagleX-v2-7B-HF)](https://huggingface.co/RWKV/v5-EagleX-v2-7B-HF)

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the HF-format checkpoint linked above (requires trust_remote_code for the RWKV architecture).
model = AutoModelForCausalLM.from_pretrained("RWKV/v5-EagleX-v2-7B-HF", trust_remote_code=True).to(torch.float32)
tokenizer = AutoTokenizer.from_pretrained("RWKV/v5-EagleX-v2-7B-HF", trust_remote_code=True)
```

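Once loaded, the checkpoint behaves like any other `transformers` causal LM. A minimal generation sketch follows; the prompt and `max_new_tokens` value are illustrative choices, not part of the model card:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("RWKV/v5-EagleX-v2-7B-HF", trust_remote_code=True).to(torch.float32)
tokenizer = AutoTokenizer.from_pretrained("RWKV/v5-EagleX-v2-7B-HF", trust_remote_code=True)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding (the transformers default) of up to 32 new tokens.
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)

# Decode the full sequence (prompt + continuation) back to text.
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output)
```

Running in `float32` on CPU works but is slow for a 7B model; move the model to a GPU dtype it supports if one is available.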
## Evaluation

The following shows the progression of the model from 1.1T tokens trained to 2.25T tokens trained.