【Better Chinese Speech Recognition: Deploying SpeechBrain Locally on Win10/11, Based on Aishell】
Environment: Win11 x64 + VS Code + Python 3.7.2 x64 + PyTorch 1.9 (CPU or GPU)
This article assumes Win11; Win10 works just as well, since everything here is backward compatible!
First, get VS Code set up (with the Python extension installed) and have a Python environment ready. The default base environment is fine, or you can create a dedicated environment with conda.
Then...
We create a Python script:
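If SpeechBrain and its dependencies are not installed yet, they can be pulled from pip. As a rough sketch (exact versions are not pinned here, and you should pick the torch build that matches your CPU/GPU setup), run: pip install torch torchaudio, then pip install speechbrain, then pip install SoundFile.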
from speechbrain.pretrained import EncoderDecoderASR
import torch
import torchaudio
# /speechbrain/asr-transformer-aishell/tree/main
# le/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing#scrollTo=PPB0K9z3B43c
# PS: the CPU and GPU builds of PyTorch load the model with different arguments; see the Google Colab code for details
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-aishell", savedir="pretrained_models/asr-transformer-aishell")
# asr_model.transcribe_file("speechbrain/asr-transformer-aishell/example_mandarin.wav")
audio_1 ="F:/CSharpProject/KaldiDemo/KaldiDemo/bin/x64/Release/妹妹就是爱.flac"
# error: No audio IO backend is available
# Fix: install SoundFile with the command: pip install SoundFile
# or install SoX with the command: pip install sox
backends = torchaudio.list_audio_backends()  # list the audio backends torchaudio can find
print(backends)
snt_1, fs = torchaudio.load(audio_1)  # waveform tensor and its sampling rate
wav_lens = torch.tensor([1.0])  # relative length of each utterance in the batch (1.0 = full length)
print('snt_1:', snt_1, " wav_lens:", wav_lens)
res = asr_model.transcribe_batch(snt_1, wav_lens)
print('res:', res)
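A small usage note, based on my reading of the SpeechBrain API (treat it as a sketch): transcribe_batch returns a pair of (recognized text, predicted token ids), so you can unpack it directly if you only want the text. The names text/tokens below are mine:
text, tokens = asr_model.transcribe_batch(snt_1, wav_lens)  # text is a list with one transcript per utterance
print('transcript:', text[0])
Also note that the Aishell model is trained on 16 kHz audio; if fs is not 16000, you may want to resample with torchaudio.transforms.Resample before transcribing.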
# For anyone on the GPU build of PyTorch, loading a model can follow the code below
# Uncomment for using another pre-trained model
#asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-rnnlm-librispeech", savedir="pretrained_models/asr-crdnn-rnnlm-librispeech",  run_opts={"device":"cuda"})
#asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-transformerlm-librispeech", savedir="pretrained_models/asr-crdnn-transformerlm-librispeech",  run_opts={"device":"cuda"})
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-transformerlm-librispeech", savedir="pretrained_models/asr-transformer-transformerlm-librispeech",  run_opts={"device":"cuda"})
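By the same pattern, the Aishell model from this post should also load on the GPU via run_opts; a minimal sketch assuming a CUDA-enabled PyTorch build (not tested here):
# Load the Aishell transformer model onto the GPU
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-aishell", savedir="pretrained_models/asr-transformer-aishell", run_opts={"device":"cuda"})
# transcribe_file fetches the example file and returns the recognized text
print(asr_model.transcribe_file("speechbrain/asr-transformer-aishell/example_mandarin.wav"))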
Next, we will use this to train our own wake word and do some hands-on voice wake-up work. Stay tuned to my blog, and remember to like, favorite and share (just kidding)!
PS: I'm not a speech professional, just learning as I go. Everyone is welcome to join and discuss so we can pool our ideas. Group number: 558174476 (游戏与人工智能生命体)