skydum

Notes that serve as a personal work log and reminders to myself.

Using the open-source Whisper.cpp


Since I had some free time over the Obon holidays, I tried out speech recognition, something I'd been wanting to do personally for a while.
OpenAI's Whisper reportedly has high recognition accuracy, so I was going to use that, but I found there is also an open-source version, so I tried that one instead.

What I'm using is Whisper.cpp: https://github.com/ggerganov/whisper.cpp

Apparently it's a faster version of Whisper?

The sample audio used for recognition is taken from here:
https://pro-video.jp/voice/announce/

From what I've tried, the Large model's accuracy is good, but the CPU build takes a long time, so it might be better to use the GPU build, or the Apple Silicon-enabled build that whisper.cpp apparently also has.
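
For reference, the non-Docker route is simply building the binary yourself. A minimal sketch of the native build, assuming the Makefile options documented in the whisper.cpp README around this time; the exact option names change between versions, so check the README of the commit you actually clone.

```
# Native build sketch (the option names here are assumptions -- verify them
# against the README of the whisper.cpp version you check out).
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp

# Plain CPU build
make -j

# NVIDIA GPU build (older Makefiles documented WHISPER_CUBLAS=1; newer ones use GGML_CUDA=1)
# WHISPER_CUBLAS=1 make -j

# Apple Silicon: recent versions enable Metal by default on macOS builds;
# the Core ML encoder is an opt-in extra
# WHISPER_COREML=1 make -j
```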

Using Docker (CPU build)

Steps (overview)

  1. git clone whisper.cpp
  2. Run download-ggml-model.sh inside models to see the list of available models
  3. Pick the model you want and download it (base is used here)
  4. docker pull the container image
  5. Under the whisper.cpp directory, create a directory for the files you want to transcribe; the file 001-sibutomo.mp3 is used
  6. Start the container
  7. whisper.cpp can only recognize mono audio, so convert the file format
  8. Run the speech recognition

Steps (details)

  1. git clone https://github.com/ggerganov/whisper.cpp.git
  2. cd whisper.cpp/models
  3. ./download-ggml-model.sh

    ```
    $ ./download-ggml-model.sh
    Usage: ./download-ggml-model.sh [models_path]

    Available models: tiny tiny.en tiny-q5_1 tiny.en-q5_1 base base.en base-q5_1 base.en-q5_1 small small.en small.en-tdrz small-q5_1 small.en-q5_1 medium medium.en medium-q5_0 medium.en-q5_0 large-v1 large-v2 large-v2-q5_0 large-v3 large-v3-q5_0

    .en = english-only
    -q5_[01] = quantized
    -tdrz = tinydiarize
    ```

  4. ./download-ggml-model.sh base
  5. docker pull ghcr.io/ggerganov/whisper.cpp:main
  6. mkdir recognitions and copy the audio file into that directory
  7. cd back to the top of the whisper.cpp directory
  8. docker run -it --rm --name whisper -v ./recognitions:/recognitions -v ./models:/models ghcr.io/ggerganov/whisper.cpp:main bash
  9. ffmpeg -i /recognitions/001-sibutomo.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav (a batch version for multiple files is sketched after this list)
  10. ./main -m /models/ggml-base.bin -l ja -f output.wav

    ```
    root@47c9b8746119:/app# ./main -m /models/ggml-base.bin -l ja -f output.wav
    whisper_init_from_file_with_params_no_state: loading model from '/models/ggml-base.bin'
    whisper_init_with_params_no_state: use gpu    = 1
    whisper_init_with_params_no_state: flash attn = 0
    whisper_init_with_params_no_state: gpu_device = 0
    whisper_init_with_params_no_state: dtw        = 0
    whisper_model_load: loading model
    whisper_model_load: n_vocab       = 51865
    whisper_model_load: n_audio_ctx   = 1500
    whisper_model_load: n_audio_state = 512
    whisper_model_load: n_audio_head  = 8
    whisper_model_load: n_audio_layer = 6
    whisper_model_load: n_text_ctx    = 448
    whisper_model_load: n_text_state  = 512
    whisper_model_load: n_text_head   = 8
    whisper_model_load: n_text_layer  = 6
    whisper_model_load: n_mels        = 80
    whisper_model_load: ftype         = 1
    whisper_model_load: qntvr         = 0
    whisper_model_load: type          = 2 (base)
    whisper_model_load: adding 1608 extra tokens
    whisper_model_load: n_langs       = 99
    whisper_model_load:      CPU total size =   147.37 MB
    whisper_model_load: model size    =  147.37 MB
    whisper_init_state: kv self size  =   18.87 MB
    whisper_init_state: kv cross size =   18.87 MB
    whisper_init_state: kv pad  size  =    3.15 MB
    whisper_init_state: compute buffer (conv)   =   16.26 MB
    whisper_init_state: compute buffer (encode) =  131.94 MB
    whisper_init_state: compute buffer (cross)  =    4.65 MB
    whisper_init_state: compute buffer (decode) =   96.35 MB

    system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0 | CANN = 0

    main: processing 'output.wav' (384105 samples, 24.0 sec), 4 threads, 1 processors, 5 beams + best of 5, lang = ja, task = transcribe, timestamps = 1 ...

    [00:00:00.940 --> 00:00:04.820] 無天下のシャボンダマセッケンだら もう安心
    [00:00:04.820 --> 00:00:08.900] 天然の保湿成分が含まれるため 肌にウルを湯を与え
    [00:00:08.900 --> 00:00:11.280] すくやかに保ちます
    [00:00:11.280 --> 00:00:14.200] お肌のことでお悩みの方は ぜひ一度
    [00:00:14.200 --> 00:00:17.900] 無天下シャボンダマセッケンを お試しください
    [00:00:17.900 --> 00:00:24.500] おもとめは01,2,0,0,0,5,5,9,5まで

    whisper_print_timings:     load time =   106.56 ms
    whisper_print_timings:     fallbacks =   0 p /   1 h
    whisper_print_timings:      mel time =    30.96 ms
    whisper_print_timings:   sample time =   757.17 ms /   918 runs (    0.82 ms per run)
    whisper_print_timings:   encode time =  3500.39 ms /     2 runs ( 1750.19 ms per run)
    whisper_print_timings:   decode time =     8.68 ms /     1 runs (    8.68 ms per run)
    whisper_print_timings:   batchd time =  2812.74 ms /   912 runs (    3.08 ms per run)
    whisper_print_timings:   prompt time =   196.54 ms /   104 runs (    1.89 ms per run)
    whisper_print_timings:    total time =  7431.80 ms
    root@47c9b8746119:/app#
    ```
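
If there are several files in /recognitions, the conversion and recognition can be looped. A minimal sketch run inside the container's bash from step 8, reusing the ffmpeg and ./main invocations from steps 9 and 10 (the /recognitions and /models paths are the mounts from step 8, and the base model is the one used above; adjust as needed):

```
# Inside the container: convert every MP3 in /recognitions to 16 kHz mono WAV,
# then transcribe it with the base model.
for f in /recognitions/*.mp3; do
    wav="${f%.mp3}.wav"
    ffmpeg -y -i "$f" -ar 16000 -ac 1 -c:a pcm_s16le "$wav"
    ./main -m /models/ggml-base.bin -l ja -f "$wav"   # add -otxt to also write a .txt file
done
```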

Reference: ggml-large-v3.bin

Recognition accuracy is much higher, but it takes far longer: on this CPU, the 24-second clip took about 7.4 seconds in total with the base model versus about 113 seconds with large-v3.

```
# ./main -m /models/ggml-large-v3.bin -l ja -f output.wav
whisper_init_from_file_with_params_no_state: loading model from '/models/ggml-large-v3.bin'
whisper_init_with_params_no_state: use gpu    = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw        = 0
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51866
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head  = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 1280
whisper_model_load: n_text_head   = 20
whisper_model_load: n_text_layer  = 32
whisper_model_load: n_mels        = 128
whisper_model_load: ftype         = 1
whisper_model_load: qntvr         = 0
whisper_model_load: type          = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs       = 100
whisper_model_load:      CPU total size =  3094.36 MB
whisper_model_load: model size    = 3094.36 MB
whisper_init_state: kv self size  =  251.66 MB
whisper_init_state: kv cross size =  251.66 MB
whisper_init_state: kv pad  size  =    7.86 MB
whisper_init_state: compute buffer (conv)   =   36.13 MB
whisper_init_state: compute buffer (encode) =  926.53 MB
whisper_init_state: compute buffer (cross)  =    9.25 MB
whisper_init_state: compute buffer (decode) =  213.06 MB

system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0 | CANN = 0

main: processing 'output.wav' (384105 samples, 24.0 sec), 4 threads, 1 processors, 5 beams + best of 5, lang = ja, task = transcribe, timestamps = 1 ...


[00:00:00.880 --> 00:00:03.800]  無添加のシャボン玉石鹸なら、もう安心!
[00:00:03.800 --> 00:00:10.260]  天然の保湿成分が含まれるため、肌にうるおいを与え、健やかに保ちます。
[00:00:10.260 --> 00:00:16.680]  お肌のことでお悩みの方は、ぜひ一度、無添加シャボン玉石鹸をお試しください。
[00:00:16.680 --> 00:00:22.400]  お求めは、0120-0055-95まで。
[00:00:22.400 --> 00:00:24.000]  ご視聴ありがとうございました


whisper_print_timings:     load time =  2611.40 ms
whisper_print_timings:     fallbacks =   0 p /   0 h
whisper_print_timings:      mel time =    36.40 ms
whisper_print_timings:   sample time =   492.88 ms /   596 runs (    0.83 ms per run)
whisper_print_timings:   encode time = 82502.83 ms /     2 runs (41251.41 ms per run)
whisper_print_timings:   decode time =   250.92 ms /     2 runs (  125.46 ms per run)
whisper_print_timings:   batchd time = 26409.94 ms /   590 runs (   44.76 ms per run)
whisper_print_timings:   prompt time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:    total time = 112946.36 ms
root@47c9b8746119:/app#
```
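
If large-v3 is too heavy for CPU, the model list from step 3 also includes quantized variants such as large-v3-q5_0. A sketch of trying one, assuming the downloaded file follows the same ggml-<name>.bin naming as the models above (quantization shrinks the model and usually speeds it up at some cost in accuracy; I haven't measured it here):

```
# On the host, in whisper.cpp/models: download the quantized large-v3 model.
./download-ggml-model.sh large-v3-q5_0

# Inside the container (same mounts as step 8); the file name is assumed to be
# ggml-large-v3-q5_0.bin, following the pattern of the other models.
./main -m /models/ggml-large-v3-q5_0.bin -l ja -f output.wav
```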