Xenova (HF Staff) and whitphx (HF Staff) committed
Commit b2f7405 · verified · 1 parent: fcc7cc4

Add/update the quantized ONNX model files and README.md for Transformers.js v3 (#1)


- Add/update the quantized ONNX model files and README.md for Transformers.js v3 (6e0797899ebde8411dca564bcc27960ab654337c)


Co-authored-by: Yuichiro Tachibana <whitphx@users.noreply.huggingface.co>

README.md CHANGED
@@ -6,4 +6,22 @@ pipeline_tag: zero-shot-classification
 
 https://huggingface.co/sileod/deberta-v3-base-tasksource-nli with ONNX weights to be compatible with Transformers.js.
 
+## Usage (Transformers.js)
+
+If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
+```bash
+npm i @huggingface/transformers
+```
+
+**Example:** Zero-shot text classification.
+
+```js
+import { pipeline } from '@huggingface/transformers';
+
+const classifier = await pipeline('zero-shot-classification', 'Xenova/deberta-v3-base-tasksource-nli');
+const output = await classifier('I love transformers!', {
+  candidate_labels: ['positive', 'negative']
+});
+```
+
 Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
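The files added below are quantized variants of the same model. As a minimal sketch of selecting one explicitly at load time, assuming Transformers.js v3's `dtype` loading option, whose values map onto the `model_<dtype>.onnx` suffixes added in this commit (e.g. `'q4'` loads `onnx/model_q4.onnx`):

```js
import { pipeline } from '@huggingface/transformers';

// Assumption: the `dtype` option picks the matching quantized file,
// here onnx/model_q4.onnx from the files added in this commit.
const classifier = await pipeline(
  'zero-shot-classification',
  'Xenova/deberta-v3-base-tasksource-nli',
  { dtype: 'q4' },
);

const output = await classifier('I love transformers!', {
  candidate_labels: ['positive', 'negative'],
});
console.log(output);
```

Smaller variants such as `q4f16` or `uint8` trade some accuracy for download size and memory, which matters most when running in the browser.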
onnx/model_bnb4.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:423406d0f578bda4fbfe2aa252ddff64aad6ac116c28dc785c2474a5ab080051
+size 482534997
onnx/model_int8.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1ace1387b96643b28019cc463ef79c12b9d7ab65633f37ea5b3cb8c3f6a5ae5
+size 222863108
onnx/model_q4.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f56f0c3ef1d867127070f7f4d46b49d734184441fea12f9f24bd361dbb4627c
+size 487842885
onnx/model_q4f16.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b480ace1550715e6c1f6caceef04463e292db097e6406fb02a2309cc73f4ae8
+size 265471135
onnx/model_uint8.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c2aa95d0b8b23916485bf97dce70a8c3f1377e37dabeb51febf115aaea01b254
+size 222863145
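Each `ADDED` entry above is a Git LFS pointer file rather than the model binary itself: it records the pointer spec version, the `sha256` digest of the real file, and its size in bytes. As a minimal sketch (Node.js, hypothetical `parseLfsPointer` helper), the three fields can be read back from a clone made with `GIT_LFS_SKIP_SMUDGE=1`, where these files are still the small pointers shown above:

```js
import { readFileSync } from 'node:fs';

// Hypothetical helper: split a Git LFS pointer file into its
// key/value fields ("version", "oid", "size" in the diffs above).
function parseLfsPointer(path) {
  const fields = {};
  for (const line of readFileSync(path, 'utf8').trim().split('\n')) {
    const space = line.indexOf(' ');
    fields[line.slice(0, space)] = line.slice(space + 1);
  }
  return fields;
}

// Prints e.g. { version: 'https://git-lfs.github.com/spec/v1',
//               oid: 'sha256:...', size: '222863145' }
console.log(parseLfsPointer('onnx/model_uint8.onnx'));
```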