Upload 22 files
- .gitattributes +4 -0
- wer/hubert-large-ls960-ft/README.md +68 -0
- wer/hubert-large-ls960-ft/config.json +70 -0
- wer/hubert-large-ls960-ft/preprocessor_config.json +9 -0
- wer/hubert-large-ls960-ft/pytorch_model.bin +3 -0
- wer/hubert-large-ls960-ft/special_tokens_map.json +1 -0
- wer/hubert-large-ls960-ft/tokenizer_config.json +1 -0
- wer/hubert-large-ls960-ft/vocab.json +1 -0
- wer/paraformer-zh/.mdl +0 -0
- wer/paraformer-zh/.msc +0 -0
- wer/paraformer-zh/.mv +1 -0
- wer/paraformer-zh/README.md +357 -0
- wer/paraformer-zh/am.mvn +8 -0
- wer/paraformer-zh/asr_example_hotword.wav +3 -0
- wer/paraformer-zh/config.yaml +160 -0
- wer/paraformer-zh/configuration.json +14 -0
- wer/paraformer-zh/example/asr_example.wav +3 -0
- wer/paraformer-zh/example/hotword.txt +1 -0
- wer/paraformer-zh/fig/res.png +3 -0
- wer/paraformer-zh/fig/seaco.png +3 -0
- wer/paraformer-zh/model.pt +3 -0
- wer/paraformer-zh/seg_dict +0 -0
- wer/paraformer-zh/tokens.json +0 -0
.gitattributes
CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+wer/paraformer-zh/asr_example_hotword.wav filter=lfs diff=lfs merge=lfs -text
+wer/paraformer-zh/example/asr_example.wav filter=lfs diff=lfs merge=lfs -text
+wer/paraformer-zh/fig/res.png filter=lfs diff=lfs merge=lfs -text
+wer/paraformer-zh/fig/seaco.png filter=lfs diff=lfs merge=lfs -text
wer/hubert-large-ls960-ft/README.md
ADDED
@@ -0,0 +1,68 @@
---
language: en
datasets:
- libri-light
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: hubert-large-ls960-ft
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 1.9
---

# Hubert-Large-Finetuned

[Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression)

The large model fine-tuned on 960h of Librispeech, on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.

The model is a fine-tuned version of [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k).

[Paper](https://arxiv.org/abs/2106.07447)

Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed

**Abstract**
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.

The original model can be found at https://github.com/pytorch/fairseq/tree/master/examples/hubert.

# Usage

The model can be used for automatic speech recognition as follows:

```python
import torch
from transformers import Wav2Vec2Processor, HubertForCTC
from datasets import load_dataset

processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

input_values = processor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt").input_values  # Batch size 1
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0])

# -> "A MAN SAID TO THE UNIVERSE SIR I EXIST"
```
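
Since this checkpoint is shipped here under a `wer/` evaluation directory, a minimal WER-scoring sketch may be useful. It reuses the objects from the snippet above and assumes the third-party `jiwer` package; it is not part of the original model card:

```python
# Hypothetical WER evaluation over the dummy split; assumes `pip install jiwer`.
from jiwer import wer

references, hypotheses = [], []
for sample in ds:
    inputs = processor(sample["audio"]["array"], sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    hypotheses.append(processor.decode(predicted_ids[0]))
    references.append(sample["text"])  # LibriSpeech references are upper-case

print(f"WER: {wer(references, hypotheses):.3f}")
```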
wer/hubert-large-ls960-ft/config.json
ADDED
@@ -0,0 +1,70 @@
{
  "_name_or_path": "facebook/hubert-large-ls960-ft",
  "activation_dropout": 0.1,
  "apply_spec_augment": true,
  "architectures": [
    "HubertForCTC"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 1,
  "conv_bias": true,
  "conv_dim": [
    512,
    512,
    512,
    512,
    512,
    512,
    512
  ],
  "conv_kernel": [
    10,
    3,
    3,
    3,
    3,
    2,
    2
  ],
  "conv_stride": [
    5,
    2,
    2,
    2,
    2,
    2,
    2
  ],
  "ctc_loss_reduction": "sum",
  "ctc_zero_infinity": false,
  "diversity_loss_weight": 0.1,
  "do_stable_layer_norm": true,
  "eos_token_id": 2,
  "feat_extract_activation": "gelu",
  "feat_extract_dropout": 0.0,
  "feat_extract_norm": "layer",
  "feat_proj_dropout": 0.1,
  "final_dropout": 0.1,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout": 0.1,
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-05,
  "layerdrop": 0.1,
  "mask_feature_length": 10,
  "mask_feature_prob": 0.0,
  "mask_time_length": 10,
  "mask_time_prob": 0.05,
  "model_type": "hubert",
  "num_attention_heads": 16,
  "num_conv_pos_embedding_groups": 16,
  "num_conv_pos_embeddings": 128,
  "num_feat_extract_layers": 7,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "transformers_version": "4.10.0.dev0",
  "vocab_size": 32
}
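
One property that can be read directly off `conv_kernel`/`conv_stride` above: the convolutional feature encoder downsamples the 16 kHz waveform by the product of the strides, so each CTC logit covers 20 ms of audio. A quick back-of-the-envelope check (illustrative, not part of the shipped files):

```python
# Product of conv_stride = 5 * 2**6 = 320 input samples per encoder frame,
# i.e. 320 / 16000 = 20 ms of audio per CTC logit.
strides = [5, 2, 2, 2, 2, 2, 2]
hop = 1
for s in strides:
    hop *= s
print(hop)                 # 320 samples
print(hop / 16000 * 1000)  # 20.0 ms per frame
```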
wer/hubert-large-ls960-ft/preprocessor_config.json
ADDED
@@ -0,0 +1,9 @@
{
  "do_normalize": true,
  "feature_extractor_type": "Wav2Vec2FeatureExtractor",
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0,
  "return_attention_mask": true,
  "sampling_rate": 16000
}
wer/hubert-large-ls960-ft/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9cf43abec3f0410ad6854afa4d376c69ccb364b48ddddfd25c4c5aa16398eab0
size 1262057559
wer/hubert-large-ls960-ft/special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
wer/hubert-large-ls960-ft/tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "pad_token": "<pad>", "do_lower_case": false, "word_delimiter_token": "|"}
wer/hubert-large-ls960-ft/vocab.json
ADDED
@@ -0,0 +1 @@
{"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "|": 4, "E": 5, "T": 6, "A": 7, "O": 8, "N": 9, "I": 10, "H": 11, "S": 12, "R": 13, "D": 14, "L": 15, "U": 16, "M": 17, "W": 18, "C": 19, "F": 20, "G": 21, "Y": 22, "P": 23, "B": 24, "V": 25, "K": 26, "'": 27, "X": 28, "J": 29, "Q": 30, "Z": 31}
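
For illustration, greedy CTC decoding over this vocabulary reduces to collapsing repeated ids, dropping the `<pad>` blank (id 0), and mapping the `"|"` word delimiter back to a space. A hand-rolled sketch (the tokenizer used in the README's usage example does this for you):

```python
# Hand-rolled greedy CTC collapse for this vocab: <pad> (id 0) is the blank,
# "|" (id 4) is the word delimiter. Equivalent to processor.decode above.
import json

vocab = json.load(open("vocab.json"))
id2char = {i: c for c, i in vocab.items()}

def ctc_collapse(ids, blank=0):
    out, prev = [], None
    for i in ids:
        if i != prev and i != blank:  # drop repeats and blanks
            out.append(id2char[i])
        prev = i
    return "".join(out).replace("|", " ").strip()

print(ctc_collapse([11, 11, 0, 5, 15, 15, 0, 15, 8, 4, 0]))  # -> "HELLO"
```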
wer/paraformer-zh/.mdl
ADDED
Binary file (99 Bytes)
wer/paraformer-zh/.msc
ADDED
Binary file (838 Bytes)
wer/paraformer-zh/.mv
ADDED
@@ -0,0 +1 @@
Revision:master,CreatedAt:1727670560
wer/paraformer-zh/README.md
ADDED
@@ -0,0 +1,357 @@
---
tasks:
- auto-speech-recognition
domain:
- audio
model-type:
- Non-autoregressive
frameworks:
- pytorch
backbone:
- transformer/conformer
metrics:
- CER
license: Apache License 2.0
language:
- cn
tags:
- FunASR
- Paraformer
- Alibaba
- ICASSP2024
- Hotword
datasets:
  train:
  - 50,000 hour industrial Mandarin task
  test:
  - AISHELL-1-hotword dev/test
indexing:
  results:
  - task:
      name: Automatic Speech Recognition
    dataset:
      name: 50,000 hour industrial Mandarin task
      type: audio # optional
      args: 16k sampling rate, 8404 characters # optional
    metrics:
    - type: CER
      value: 8.53% # float
      description: greedy search, without LM, avg.
      args: default
    - type: RTF
      value: 0.0251 # float
      description: GPU inference on V100
      args: batch_size=1
widgets:
- task: auto-speech-recognition
  inputs:
  - type: audio
    name: input
    title: Audio
  parameters:
  - name: hotword
    title: Hotword
    type: string
  examples:
  - name: 1
    title: Example 1
    inputs:
    - name: input
      data: git://example/asr_example.wav
    parameters:
    - name: hotword
      value: 魔搭
  model_revision: v2.0.4
  inferencespec:
    cpu: 8 # number of CPUs
    memory: 4096
---

# Paraformer-large Model Introduction

## Highlights
The hotword variant of Paraformer-large supports hotword customization: it boosts the entries of a user-provided hotword list during decoding, improving both recall and accuracy on those hotwords.


## <strong>[The FunASR Open-Source Project](https://github.com/alibaba-damo-academy/FunASR)</strong>
<strong>[FunASR](https://github.com/alibaba-damo-academy/FunASR)</strong> aims to build a bridge between academic research on speech recognition and its industrial application. By releasing the training and fine-tuning recipes of industrial-grade speech recognition models, it lets researchers and developers study and productionize ASR models more conveniently and helps the speech recognition ecosystem grow. Make speech recognition fun!

[**GitHub repository**](https://github.com/alibaba-damo-academy/FunASR)
| [**What's new**](https://github.com/alibaba-damo-academy/FunASR#whats-new)
| [**Installation**](https://github.com/alibaba-damo-academy/FunASR#installation)
| [**Service deployment**](https://www.funasr.com)
| [**Model zoo**](https://github.com/alibaba-damo-academy/FunASR/tree/main/model_zoo)
| [**Contact us**](https://github.com/alibaba-damo-academy/FunASR#contact)


## Model Description

SeACoParaformer is a new-generation non-autoregressive ASR model with hotword customization, proposed by Alibaba's speech lab. Compared with the previous CLAS-based hotword customization scheme, SeACoParaformer decouples the hotword module from the ASR model and performs hotword boosting by fusing posterior probabilities, which makes the boosting process observable and controllable and significantly improves hotword recall.

<p align="center">
<img src="fig/seaco.png" alt="SeACoParaformer model structure" width="380" />


The model structure and training pipeline of SeACoParaformer are shown above. A bias encoder extracts hotword embeddings and a bias decoder performs attention modeling; this lets SeACoParaformer capture the correlation between the Predictor and Decoder outputs and the hotwords, and predict hotword outputs synchronized with the ASR result. Hotword boosting is then realized by fusing the posterior probabilities. Compared with ContextualParaformer, SeACoParaformer achieves a clear improvement, as shown below:

<p align="center">
<img src="fig/res.png" alt="SeACoParaformer results" width="700" />

For more details, see:
- Paper: [SeACo-Paraformer: A Non-Autoregressive ASR System with Flexible and Effective Hotword Customization Ability](https://arxiv.org/abs/2308.03266)

## Reproducing the Results in the Paper
```python
from funasr import AutoModel

model = AutoModel(model="iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
                  model_revision="v2.0.4",
                  # vad_model="damo/speech_fsmn_vad_zh-cn-16k-common-pytorch",
                  # vad_model_revision="v2.0.4",
                  # punc_model="damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch",
                  # punc_model_revision="v2.0.4",
                  # spk_model="damo/speech_campplus_sv_zh-cn_16k-common",
                  # spk_model_revision="v2.0.2",
                  device="cuda:0"
                  )

res = model.generate(input="YOUR_PATH/aishell1_hotword_dev.scp",
                     hotword='./data/dev/hotword.txt',
                     batch_size_s=300,
                     )
fout1 = open("dev.output", 'w')
for resi in res:
    fout1.write("{}\t{}\n".format(resi['key'], resi['text']))

res = model.generate(input="YOUR_PATH/aishell1_hotword_test.scp",
                     hotword='./data/test/hotword.txt',
                     batch_size_s=300,
                     )
fout2 = open("test.output", 'w')
for resi in res:
    fout2.write("{}\t{}\n".format(resi['key'], resi['text']))
```
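
The dev.output and test.output transcripts written above can then be scored for CER against the reference transcripts. A minimal sketch, assuming the third-party jiwer package and a tab-separated reference file in the same `key\ttext` format (`ref.txt` below is an illustrative name, not shipped with the model):

```python
# Sketch: score test.output against a reference file in the same
# "key<TAB>text" format. "ref.txt" is a hypothetical file name.
from jiwer import cer

def load_tsv(path):
    with open(path, encoding="utf-8") as f:
        return dict(line.rstrip("\n").split("\t", 1) for line in f)

refs, hyps = load_tsv("ref.txt"), load_tsv("test.output")
keys = sorted(refs.keys() & hyps.keys())  # score only utterances present in both
print("CER: {:.2%}".format(cer([refs[k] for k in keys], [hyps[k] for k in keys])))
```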

## Inference with ModelScope

- Supported audio input formats:
  - wav file path, e.g.: data/test/audios/asr_example.wav
  - pcm file path, e.g.: data/test/audios/asr_example.pcm
  - wav file URL, e.g.: https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav
  - wav binary data of type bytes, e.g.: bytes read directly from a file or recorded by a microphone.
  - decoded audio, e.g.: audio, rate = soundfile.read("asr_example_zh.wav"), of type numpy.ndarray or torch.Tensor.
  - a wav.scp file, which must follow this format:

```sh
cat wav.scp
asr_example1 data/test/audios/asr_example1.wav
asr_example2 data/test/audios/asr_example2.wav
...
```

- For a wav file URL input, the API can be called as follows:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch', model_revision="v2.0.4")

rec_result = inference_pipeline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav', hotword='达摩院 魔搭')
print(rec_result)
```

- For pcm input, pass the audio sampling rate when calling the API, e.g.:

```python
rec_result = inference_pipeline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.pcm', fs=16000, hotword='达摩院 魔搭')
```

- For wav input, the API can be called as follows:

```python
rec_result = inference_pipeline('asr_example_zh.wav', hotword='达摩院 魔搭')
```

- For a wav.scp input (note: the file name must end with .scp), an output_dir parameter can be added to write the recognition results to files:

```python
inference_pipeline("wav.scp", output_dir='./output_dir', hotword='达摩院 魔搭')
```
The results are written with the following directory structure:

```sh
tree output_dir/
output_dir/
└── 1best_recog
    ├── score
    └── text

1 directory, 3 files
```

score: score of the recognized path

text: file with the speech recognition results


- For decoded audio input, the API can be called as follows:

```python
import soundfile

waveform, sample_rate = soundfile.read("asr_example_zh.wav")
rec_result = inference_pipeline(waveform, hotword='达摩院 魔搭')
```

- Free combination of ASR, VAD and PUNC models

The VAD and punctuation (PUNC) models can be combined freely as needed:
```python
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='iic/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch', model_revision="v2.0.4",
    vad_model='iic/speech_fsmn_vad_zh-cn-16k-common-pytorch', vad_model_revision="v2.0.4",
    punc_model='iic/punc_ct-transformer_zh-cn-common-vocab272727-pytorch', punc_model_revision="v2.0.3",
    # spk_model="iic/speech_campplus_sv_zh-cn_16k-common",
    # spk_model_revision="v2.0.2",
)
```
To run without the PUNC model, set punc_model=None or simply omit the punc_model argument. To add an LM, set lm_model='iic/speech_transformer_lm_zh-cn-common-vocab8404-pytorch' and configure the lm_weight and beam_size parameters, as sketched below.
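
A minimal sketch of such LM-fused decoding; the lm_weight and beam_size values here are illustrative placeholders, not tuned recommendations:

```python
# Sketch: ModelScope pipeline with an external LM for fused decoding.
# lm_weight / beam_size below are illustrative, not tuned settings.
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='iic/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    lm_model='iic/speech_transformer_lm_zh-cn-common-vocab8404-pytorch',
    lm_weight=0.15,  # interpolation weight of the LM score
    beam_size=10,    # beam width used when decoding with the LM
)
rec_result = inference_pipeline('asr_example_zh.wav')
print(rec_result)
```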

## Inference with FunASR

Below is a quick-start tutorial. Test audio: ([Chinese](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav), [English](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_en.wav))

### Command-line usage
Run in a terminal:

```shell
funasr +model=paraformer-zh +vad_model="fsmn-vad" +punc_model="ct-punc" +input=vad_example.wav
```

Note: both single audio files and file lists are supported; a list is a kaldi-style wav.scp: `wav_id wav_path`

### Python examples
#### Non-streaming speech recognition
```python
from funasr import AutoModel
# paraformer-zh is a multi-functional asr model
# use vad, punc, spk or not as you need
model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
                  vad_model="fsmn-vad", vad_model_revision="v2.0.4",
                  punc_model="ct-punc-c", punc_model_revision="v2.0.4",
                  # spk_model="cam++", spk_model_revision="v2.0.2",
                  )
res = model.generate(input=f"{model.model_path}/example/asr_example.wav",
                     batch_size_s=300,
                     hotword='魔搭')
print(res)
```
Note: `model_hub` selects the model repository; `ms` downloads from modelscope, `hf` downloads from huggingface.

#### Streaming speech recognition

```python
from funasr import AutoModel

chunk_size = [0, 10, 5]  # [0, 10, 5] 600ms, [0, 8, 4] 480ms
encoder_chunk_look_back = 4  # number of chunks to look back for encoder self-attention
decoder_chunk_look_back = 1  # number of encoder chunks to look back for decoder cross-attention

model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.4")

import soundfile
import os

wav_file = os.path.join(model.model_path, "example/asr_example.wav")
speech, sample_rate = soundfile.read(wav_file)
chunk_stride = chunk_size[1] * 960  # 600ms

cache = {}
total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
for i in range(total_chunk_num):
    speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
    print(res)
```

Note: `chunk_size` is the streaming latency configuration. `[0,10,5]` means the real-time display granularity is `10*60=600ms`, with `5*60=300ms` of lookahead. Each inference call takes `600ms` of input (`16000*0.6=9600` samples) and outputs the corresponding text; for the last speech chunk, set `is_final=True` to force the final word to be emitted.

#### Voice activity detection (non-streaming)
```python
from funasr import AutoModel

model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")

wav_file = f"{model.model_path}/example/asr_example.wav"
res = model.generate(input=wav_file)
print(res)
```

#### Voice activity detection (streaming)
```python
from funasr import AutoModel

chunk_size = 200  # ms
model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")

import soundfile

wav_file = f"{model.model_path}/example/vad_example.wav"
speech, sample_rate = soundfile.read(wav_file)
chunk_stride = int(chunk_size * sample_rate / 1000)

cache = {}
total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
for i in range(total_chunk_num):
    speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
    if len(res[0]["value"]):
        print(res)
```

#### Punctuation restoration
```python
from funasr import AutoModel

model = AutoModel(model="ct-punc", model_revision="v2.0.4")

res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
print(res)
```

#### Timestamp prediction
```python
from funasr import AutoModel

model = AutoModel(model="fa-zh", model_revision="v2.0.4")

wav_file = f"{model.model_path}/example/asr_example.wav"
text_file = f"{model.model_path}/example/text.txt"
res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
print(res)
```

More detailed usage ([examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))


## Fine-tuning

Detailed usage ([examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))


## Related Papers and Citation

```BibTeX
@article{shi2023seaco,
  title={SeACo-Paraformer: A Non-Autoregressive ASR System with Flexible and Effective Hotword Customization Ability},
  author={Shi, Xian and Yang, Yexin and Li, Zerui and Zhang, Shiliang},
  journal={arXiv preprint arXiv:2308.03266 (accepted by ICASSP2024)},
  year={2023}
}
```
wer/paraformer-zh/am.mvn
ADDED
@@ -0,0 +1,8 @@
<Nnet>
<Splice> 560 560
[ 0 ]
<AddShift> 560 560
<LearnRateCoef> 0 [ -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 
-13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 ]
<Rescale> 560 560
<LearnRateCoef> 0 [ 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 
0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 ]
</Nnet>
wer/paraformer-zh/asr_example_hotword.wav
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:51792bc95be33075c1a8abb9afb76ad9f72943e84cd723cc8825b2678799b004
size 253642
wer/paraformer-zh/config.yaml
ADDED
@@ -0,0 +1,160 @@
# This is an example that demonstrates how to configure a model file.
# You can modify the configuration according to your own requirements.

# to print the register_table:
# from funasr.utils.register import registry_tables
# registry_tables.print()

# network architecture
model: SeacoParaformer
model_conf:
    ctc_weight: 0.0
    lsm_weight: 0.1
    length_normalized_loss: true
    predictor_weight: 1.0
    predictor_bias: 1
    sampling_ratio: 0.75
    inner_dim: 512
    bias_encoder_type: lstm
    bias_encoder_bid: false
    seaco_lsm_weight: 0.1
    seaco_length_normal: true
    train_decoder: true
    NO_BIAS: 8377

# encoder
encoder: SANMEncoder
encoder_conf:
    output_size: 512
    attention_heads: 4
    linear_units: 2048
    num_blocks: 50
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    attention_dropout_rate: 0.1
    input_layer: pe
    pos_enc_class: SinusoidalPositionEncoder
    normalize_before: true
    kernel_size: 11
    sanm_shfit: 0
    selfattention_layer_type: sanm

# decoder
decoder: ParaformerSANMDecoder
decoder_conf:
    attention_heads: 4
    linear_units: 2048
    num_blocks: 16
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    self_attention_dropout_rate: 0.1
    src_attention_dropout_rate: 0.1
    att_layer_num: 16
    kernel_size: 11
    sanm_shfit: 0

# seaco decoder
seaco_decoder: ParaformerSANMDecoder
seaco_decoder_conf:
    attention_heads: 4
    linear_units: 1024
    num_blocks: 4
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    self_attention_dropout_rate: 0.1
    src_attention_dropout_rate: 0.1
    kernel_size: 21
    sanm_shfit: 0
    use_output_layer: false
    wo_input_layer: true

predictor: CifPredictorV3
predictor_conf:
    idim: 512
    threshold: 1.0
    l_order: 1
    r_order: 1
    tail_threshold: 0.45
    smooth_factor2: 0.25
    noise_threshold2: 0.01
    upsample_times: 3
    use_cif1_cnn: false
    upsample_type: cnn_blstm

# frontend related
frontend: WavFrontend
frontend_conf:
    fs: 16000
    window: hamming
    n_mels: 80
    frame_length: 25
    frame_shift: 10
    lfr_m: 7
    lfr_n: 6
    dither: 0.0

specaug: SpecAugLFR
specaug_conf:
    apply_time_warp: false
    time_warp_window: 5
    time_warp_mode: bicubic
    apply_freq_mask: true
    freq_mask_width_range:
    - 0
    - 30
    lfr_rate: 6
    num_freq_mask: 1
    apply_time_mask: true
    time_mask_width_range:
    - 0
    - 12
    num_time_mask: 1

train_conf:
    accum_grad: 1
    grad_clip: 5
    max_epoch: 150
    val_scheduler_criterion:
    - valid
    - acc
    best_model_criterion:
    - - valid
      - acc
      - max
    keep_nbest_models: 10
    log_interval: 50
    unused_parameters: true

optim: adam
optim_conf:
    lr: 0.0005
scheduler: warmuplr
scheduler_conf:
    warmup_steps: 30000

dataset: AudioDatasetHotword
dataset_conf:
    seaco_id: 8377
    index_ds: IndexDSJsonl
    batch_sampler: DynamicBatchLocalShuffleSampler
    batch_type: example # example or length
    batch_size: 1 # if batch_type is example, batch_size is the number of samples; if length, batch_size is source_token_len+target_token_len
    max_token_length: 2048 # filter out samples if source_token_len+target_token_len > max_token_length
    buffer_size: 500
    shuffle: True
    num_workers: 0

tokenizer: CharTokenizer
tokenizer_conf:
    unk_symbol: <unk>
    split_with_space: true


ctc_conf:
    dropout_rate: 0.0
    ctc_type: builtin
    reduce: true
    ignore_nan_grad: true

normalize: null
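
One detail worth noting from frontend_conf: with a 10 ms frame_shift and low-frame-rate stacking of lfr_n: 6, each encoder frame covers 60 ms, which is the unit behind the `10*60=600ms` chunk arithmetic in the README's streaming note. A quick check (illustrative, not part of the shipped config):

```python
# LFR arithmetic from frontend_conf: frame_shift=10 ms, lfr_n=6
# -> one encoder frame per 60 ms, so chunk_size=[0, 10, 5] means
#    10 frames * 60 ms = 600 ms per streaming chunk.
frame_shift_ms = 10
lfr_n = 6
frame_ms = frame_shift_ms * lfr_n
print(frame_ms)       # 60 ms per low-frame-rate frame
print(10 * frame_ms)  # 600 ms chunk, matching the streaming example
```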
wer/paraformer-zh/configuration.json
ADDED
@@ -0,0 +1,14 @@
{
  "framework": "pytorch",
  "task": "auto-speech-recognition",
  "model": {"type": "funasr"},
  "pipeline": {"type": "funasr-pipeline"},
  "model_name_in_hub": {
    "ms": "iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
    "hf": ""},
  "file_path_metas": {
    "init_param": "model.pt",
    "config": "config.yaml",
    "tokenizer_conf": {"token_list": "tokens.json", "seg_dict_file": "seg_dict"},
    "frontend_conf": {"cmvn_file": "am.mvn"}}
}
wer/paraformer-zh/example/asr_example.wav
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ffa478de2cd570dd54e8762008cd6bbde9871fd79757f1cdbbec7d6b7b49274
size 144770
wer/paraformer-zh/example/hotword.txt
ADDED
@@ -0,0 +1 @@
魔搭
wer/paraformer-zh/fig/res.png
ADDED
Git LFS Details
wer/paraformer-zh/fig/seaco.png
ADDED
Git LFS Details
wer/paraformer-zh/model.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d491689244ec5dfbf9170ef3827c358aa10f1f20e42a7c59e15e688647946d1
size 989763045
wer/paraformer-zh/seg_dict
ADDED
The diff for this file is too large to render.
wer/paraformer-zh/tokens.json
ADDED
The diff for this file is too large to render.