Changelog

1.0.3(2026-01-21)

  • Support for configuring the number of execution threads

1.0.2(2026-01-21)

  • Add configuration for the paraformer model

1.0.1(2026-01-20)

  • Adapt to uni-app x
  • Expand the documentation

Platform compatibility

uni-app(4.87)

Android: 5.0, Android plugin version: 1.0.0; Vue2, Vue3, Chrome, Safari, app-vue, app-nvue, iOS, HarmonyOS: -
Mini programs (WeChat, Alipay, Douyin, Baidu, Kuaishou, JD, QQ, Feishu), HarmonyOS atomic service, quick apps (Huawei, alliance): -

uni-app x(4.87)

Android: 5.0, Android plugin version: 1.0.0; Chrome, Safari, iOS, HarmonyOS, WeChat mini program: -

xwq-sherpa-onnx

Development documentation

Plugin features (currently Android only; iOS support is in development, stay tuned...)

  • Real-time speech recognition
  • Offline speech recognition (transcribing WAV audio files to text)
  • iOS support in development...

The ASR models are fairly large; download only the ones you need. Download address 》》

  • Model initialization parameters
Property Type Default Required Description
model string - N Model path; used when the model is a single file
encoder string - N Encoder model path; used for multi-file models, e.g. transducer models
decoder string - N Decoder model path; used for multi-file models, e.g. transducer models
joiner string - N Joiner model path; used for multi-file models, e.g. transducer models
tokens string - N Path to the tokens file; usually present only in multi-file model packages, e.g. transducer models
success ()=>void - Y Callback invoked when model initialization succeeds
fail ()=>void - Y Callback invoked when model initialization fails
numThreads number 1 N Number of execution threads
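
For a single-file model (for example the paraformer model mentioned in the 1.0.2 changelog entry), the model field is used instead of encoder/decoder/joiner. The sketch below is only an illustration under assumptions: the /static/paraformerModel/ directory, the file names, the "paraformer" modelType value, and whether a tokens file is required all depend on the actual model package and are not taken from this document.

// Hypothetical single-model initialization; directory, file names, and modelType value are assumptions
import { uesSherpaOnnx } from "@/uni_modules/xwq-sherpa-onnx";

const staticPath = plus.io.convertLocalFileSystemURL('/static/paraformerModel/');
uesSherpaOnnx({
    mode: 'offline',
    modelType: "paraformer",               // assumed value; check the plugin's supported model types
    model: staticPath + "model.int8.onnx", // single-model case: only `model` is set
    tokens: staticPath + "tokens.txt",     // include only if the model package ships a tokens file
    numThreads: 2,                         // number of execution threads (defaults to 1)
    success: () => {
        console.log('Model initialized')
    },
    fail: (res) => {
        console.log(res)
    }
})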

Real-time speech recognition: usage steps

  • 1. Initialize the model recognizer
// Resolve the absolute path of the model directory bundled under /static
const staticPath = plus.io.convertLocalFileSystemURL('/static/onlineOnnxModel/');
uesSherpaOnnx({
    mode: 'online',
    modelType: "transducer",
    model: '',
    tokens: staticPath + "tokens.txt",
    encoder: staticPath + "encoder-epoch-99-avg-1.int8.onnx",
    decoder: staticPath + "decoder-epoch-99-avg-1.onnx",
    joiner: staticPath + "joiner-epoch-99-avg-1.int8.onnx",
    success: () => {
        console.log('Model initialized')
        uni.hideLoading()
    },
    fail: (res) => {
        console.log(res)
    }
})
  • 2. Register the recognition result callback listener (a sketch of handling the final flag follows these steps)
setListner() {
    setRecognizerResultListner((res, final) => {
        console.log('recognition result ====', res)
        console.log('final ====', final)
        this.content = res
    })
}
  • 3. Start speech recognition
start() {
    startRecognizer()
}
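
The listener registered in step 2 receives a final flag alongside the recognized text. Below is a minimal sketch of one way to use it, under the assumption that final marks the end of a recognized segment (the plugin documentation does not spell this out): finished segments are appended to a transcript and everything else is treated as interim text.

// Sketch: accumulate finished segments and log interim results separately.
// Assumption: final === true means the current segment is complete.
import { setRecognizerResultListner, startRecognizer, stopRecognizer } from "@/uni_modules/xwq-sherpa-onnx";

let transcript = ''
setRecognizerResultListner((res, final) => {
    if (final) {
        transcript += res                   // keep the finished segment
        console.log('segment done:', res)
    } else {
        console.log('partial result:', res) // interim text, may still change
    }
})
startRecognizer()
// stopRecognizer() when done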

Offline speech recognition (WAV audio files): usage steps

  • 1. Initialize the model recognizer
initOfflineOnnx() {
    uni.showLoading({
        title: 'Loading model...',
    })
    let path = '/static/offlineOnnxModel/';
    const staticPath = plus.io.convertLocalFileSystemURL(path);
    console.log('staticPath===', staticPath)
    uesSherpaOnnx({
        mode: 'offline',
        modelType: "zipformer2ctc",
        model: '',
        tokens: staticPath + "tokens.txt",
        encoder: staticPath + "encoder-epoch-34-avg-19.int8.onnx",
        decoder: staticPath + "decoder-epoch-34-avg-19.onnx",
        joiner: staticPath + "joiner-epoch-34-avg-19.int8.onnx",
        success: () => {
            console.log('Model initialized')
            uni.hideLoading()
        },
        fail: (res) => {
            console.log(res)
        }
    })
}
  • 2. Start recognizing a WAV audio file (must be mono, 16 kHz, 16-bit PCM)
recognizerFile() {
    let path = '/static/offlineOnnxModel/';
    const filePath = plus.io.convertLocalFileSystemURL(path) + '1.wav';
    startOfflineRecognizerFile(filePath, (result) => {
        console.log('recognition result ===', result)
        this.content = result
    })
}
  • When recognition is no longer needed, stop it (a lifecycle-cleanup sketch follows these calls)
stopRecognizer()  // stop real-time speech recognition
stopOfflineRecognizerFile() // stop WAV audio file recognition
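
The stop calls (plus the listener removal) can be tied to the page lifecycle so the recognizer is released when the page closes. A minimal sketch for a Vue2 options-API page; wiring this into onUnload is an assumption about your app structure, not a requirement of the plugin:

// Sketch: clean up when the page unloads
import {
    stopRecognizer,
    stopOfflineRecognizerFile,
    removeOnRecognizerResultListner
} from "@/uni_modules/xwq-sherpa-onnx";

export default {
    onUnload() {
        stopRecognizer()                  // stop real-time recognition if it is running
        stopOfflineRecognizerFile()       // stop any in-progress WAV file recognition
        removeOnRecognizerResultListner() // detach the result listener
    }
}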

Complete uni-app x page example

<template>
    <view>
        <button @click="initOnnx">初始化Onnx</button>
        <button @click="setListner">设置结果监听</button>
        <button @click="removeRecognizerResultListner">移除结果监听</button>
        <button @click="start">开始识别</button>
        <button @click="stop">停止识别</button>

        <button @click="initOfflineOnnx">初始化离线语音识别器</button>
        <button @click="recognizerFile">开始识别WAV File音频文件</button>
        <button @click="stopRecognizerFile">停止识别WAV File音频文件</button>
        <view style="padding:10px;">
            <view class="title">
                <text>Recognition result:</text>
            </view>
            <view class="content">
                <textarea :value="content" disabled style="padding:15px;background-color: #f2f2f2;width: 100%;"></textarea>
                <!-- <text v-for="(i,k) in resultArr" :key="k"></text> -->
            </view>
        </view>
    </view>
</template>

<script setup>
    import {
        startRecognizer,
        uesSherpaOnnx,
        stopRecognizer,
        setRecognizerResultListner,
        removeOnRecognizerResultListner,
        startOfflineRecognizerFile,
        stopOfflineRecognizerFile
    } from "@/uni_modules/xwq-sherpa-onnx";
    import { UseSherpaOnnxOptions } from "@/uni_modules/xwq-sherpa-onnx/utssdk/interface.uts";

    const content=ref('');

    // Online (real-time) recognition
    const initOnnx=()=> {
        uni.showLoading({
            title: 'Loading model...',
        })

        uesSherpaOnnx({
            mode: 'online',
            modelType: "transducer",
            model: '',
            tokens: "/static/onlineOnnxModel/tokens.txt",
            encoder: "/static/onlineOnnxModel/encoder-epoch-99-avg-1.int8.onnx",
            decoder: "/static/onlineOnnxModel/decoder-epoch-99-avg-1.onnx",
            joiner: "/static/onlineOnnxModel/joiner-epoch-99-avg-1.int8.onnx",
            success: () => {
                console.log('Model initialized')
                uni.hideLoading()
            },
            fail: (res) => {
                console.log(res)
            }
        } as UseSherpaOnnxOptions)
    };
    // Offline recognition
    const initOfflineOnnx=()=> {
        uni.showLoading({
            title: 'Loading model...',
        })
        uesSherpaOnnx({
            mode: 'offline',
            modelType: "zipformer2ctc",
            model: '',
            tokens: "/static/offlineOnnxModel/tokens.txt",
            encoder: "/static/offlineOnnxModel/encoder-epoch-34-avg-19.int8.onnx",
            decoder: "/static/offlineOnnxModel/decoder-epoch-34-avg-19.onnx",
            joiner: "/static/offlineOnnxModel/joiner-epoch-34-avg-19.int8.onnx",
            success: () => {
                console.log('Model initialized')
                uni.hideLoading()
            },
            fail: (res) => {
                console.log(res)
            }
        } as UseSherpaOnnxOptions)
    };
    // Start recognition
    const start=()=> {
        startRecognizer()
    };
    // Stop recognition
    const stop=()=> {
        stopRecognizer()
    };
    // Set listener
    const setListner=()=> {
        setRecognizerResultListner((res:string, final:boolean) => {
            console.log('recognition result ====', res)
            console.log('final ====', final)
            content.value = res
        })
    };
    // Remove listener
    const removeRecognizerResultListner=()=> {
        removeOnRecognizerResultListner()
    };

    // Offline recognition of a WAV file
    const recognizerFile=()=> {
        const filePath = '/static/offlineOnnxModel/1.wav';
        startOfflineRecognizerFile(filePath, (result) => {
            console.log('recognition result ===', result)
            content.value = result
        })

    };

    // Stop offline WAV file recognition
    const stopRecognizerFile=()=> {
        stopOfflineRecognizerFile()
    };

</script>

<style>

</style>

Complete uni-app page example

<template>
    <view>
        <button @click="initOnnx">初始化Onnx</button>
        <button @click="setListner">设置结果监听</button>
        <button @click="removeRecognizerResultListner">移除结果监听</button>
        <button @click="start">开始识别</button>
        <button @click="stop">停止识别</button>

        <button @click="initOfflineOnnx">初始化离线语音识别器</button>
        <button @click="recognizerFile">开始识别WAV File音频文件</button>
        <button @click="stopRecognizerFile">停止识别WAV File音频文件</button>
        <view style="padding:10px;">
            <view class="title">
                <text>Recognition result:</text>
            </view>
            <view class="content">
                <textarea :value="content" disabled style="padding:15px;"></textarea>
                <!-- <text v-for="(i,k) in resultArr" :key="k"></text> -->
            </view>
        </view>
    </view>
</template>

<script>
    import {
        startRecognizer,
        uesSherpaOnnx,
        stopRecognizer,
        setRecognizerResultListner,
        removeOnRecognizerResultListner,
        startOfflineRecognizerFile,
        stopOfflineRecognizerFile
    } from "@/uni_modules/xwq-sherpa-onnx";
    export default {
        data() {
            return {
                resultArr: [],
                content: ""
            }
        },

        mounted() {},
        methods: {
            // Online (real-time) recognition
            initOnnx() {
                uni.showLoading({
                    title: 'Loading model...',
                })
                let path = '/static/onlineOnnxModel/';
                const staticPath = plus.io.convertLocalFileSystemURL(path);
                console.log('staticPath===', staticPath)

                uesSherpaOnnx({
                    mode: 'online',
                    modelType: "transducer",
                    model: '',
                    tokens: staticPath + "tokens.txt",
                    encoder: staticPath + "encoder-epoch-99-avg-1.int8.onnx",
                    decoder: staticPath + "decoder-epoch-99-avg-1.onnx",
                    joiner: staticPath + "joiner-epoch-99-avg-1.int8.onnx",
                    success: () => {
                        console.log('Model initialized')
                        uni.hideLoading()
                    },
                    fail: (res) => {
                        console.log(res)
                    }
                })
            },
            // Offline recognition
            initOfflineOnnx() {
                uni.showLoading({
                    title: 'Loading model...',
                })
                let path = '/static/offlineOnnxModel/';
                const staticPath = plus.io.convertLocalFileSystemURL(path);
                console.log('staticPath===', staticPath)
                uesSherpaOnnx({
                    mode: 'offline',
                    modelType: "zipformer2ctc",
                    model: '',
                    tokens: staticPath + "tokens.txt",
                    encoder: staticPath + "encoder-epoch-34-avg-19.int8.onnx",
                    decoder: staticPath + "decoder-epoch-34-avg-19.onnx",
                    joiner: staticPath + "joiner-epoch-34-avg-19.int8.onnx",
                    success: () => {
                        console.log('Model initialized')
                        uni.hideLoading()
                    },
                    fail: (res) => {
                        console.log(res)
                    }
                })
            },
            // Start recognition
            start() {
                startRecognizer()
            },
            // Stop recognition
            stop() {
                stopRecognizer()
            },
            // Set listener
            setListner() {
                setRecognizerResultListner((res, final) => {
                    console.log('recognition result ====', res)
                    console.log('final ====', final)
                    this.content = res
                })
            },
            // Remove listener
            removeRecognizerResultListner() {
                removeOnRecognizerResultListner()
            },

            // Offline recognition of a WAV file
            recognizerFile() {
                let path = '/static/offlineOnnxModel/';
                const filePath = plus.io.convertLocalFileSystemURL(path) + '1.wav';
                startOfflineRecognizerFile(filePath, (result) => {
                    console.log('recognition result ===', result)
                    this.content = result
                })

            },

            // Stop offline WAV file recognition
            stopRecognizerFile() {
                stopOfflineRecognizerFile()
            }

        }
    }
</script>

<style>
    .title {
        margin: 20px 0 10px 0;
    }

    .content {
        height: 100%;
        border: 1px solid #ccc;
        background-color: #f2f2f2;
    }
</style>

Privacy and permission declaration

1. System permissions requested by this plugin:

Microphone permission

2. Data collected by this plugin, server addresses the data is sent to, and how the data is used:

3. Whether this plugin contains ads; if so, describe the ad format and display frequency in detail:
