Fixing low GPU utilization when running Whisper locally

Published: 2025-02-10

I was running the Whisper model locally in a Windows environment on an NVIDIA RTX 4070, and found that GPU utilization sat at only 2%. Running

import torch
print(torch.cuda.is_available())

returned True, so CUDA was available.
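A quick way to double-check that the card is actually being picked up (this is just a generic PyTorch sanity check, nothing Whisper-specific) is to print the device name and where a test tensor lands:

import torch

print(torch.cuda.is_available())         # True
print(torch.cuda.get_device_name(0))     # e.g. NVIDIA GeForce RTX 4070
print(torch.zeros(1).to("cuda").device)  # cuda:0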

I eventually found the cause on the following GitHub page:

Extremely low GPU utilization #140

The key points are:

1. Clear the GPU cache before running:

torch.cuda.empty_cache()

2. Use a smaller Whisper model. I use:

model = load_model("base").to("cuda")

3. Most important of all: set beam_size=5 in the model.transcribe arguments. GPU utilization immediately jumped to 20%, and with beam_size=8 it reaches around 30%:

model.transcribe(arr, language="en", prompt=prompt, fp16=False, beam_size=8, verbose=True, condition_on_previous_text=False)["text"]
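If you want to reproduce the comparison, here is a minimal timing sketch (it assumes the same daily.mp3 as the full script below; whisper.load_audio already returns 16 kHz mono float32, and wall-clock time per beam size is only a rough proxy for utilization):

import time
import torch
from whisper import load_audio, load_model

model = load_model("base").to("cuda")
audio = load_audio("daily.mp3")[: 16000 * 60]  # first minute is enough for a comparison

for bs in (1, 5, 8):
    torch.cuda.empty_cache()
    t0 = time.perf_counter()
    model.transcribe(audio, language="en", fp16=False, beam_size=bs, verbose=False)
    print("beam_size=%d: %.1f s" % (bs, time.perf_counter() - t0))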

Below is my complete test program:

import os
import sys
import os.path
import math
import openai
#from dotenv import load_dotenv
import torch
#import whisper
from whisper import load_model
import numpy as np
#from pyannote.audio import Pipeline
from pydub import AudioSegment
#os.environ['OPENAI_API_KEY'] = "sk-..."
#os.environ['OPENAI_BASE_URL'] = "https://api.chatanywhere.tech/v1"
print(torch.cuda.is_available())  # must print True, otherwise everything runs on the CPU
torch.cuda.empty_cache()          # clear stale GPU cache before loading the model
model = load_model("base").to("cuda")
audio = AudioSegment.from_mp3("daily.mp3")  # or sys.argv[1]

segment_length = 25 * 60  # seconds
duration = audio.duration_seconds
print('Segment length: %d seconds' % segment_length)
print('Duration: %d seconds' % duration)

segment_filename = os.path.basename("daily.mp3")  # or sys.argv[1]
segment_filename = os.path.splitext(segment_filename)[0]
number_of_segments = math.ceil(duration / segment_length)  # round up so the final partial segment is not dropped
segment_start = 0
segment_end = segment_length * 1000  # pydub slices in milliseconds
segment_number = 1  # 1-based segment counter
prompt = ""
os.makedirs('./tmp', exist_ok=True)          # output directories must exist
os.makedirs('./transcripts', exist_ok=True)

for _ in range(number_of_segments):
    audio_segment = audio[segment_start:segment_end]
    exported_file = './tmp/' + segment_filename + '-' + str(segment_number) + '.mp3'
    audio_segment.export(exported_file, format="mp3")
    print('Exported segment %d of %d' % (segment_number, number_of_segments))

    # Whisper expects 16 kHz, 16-bit, mono audio
    if audio_segment.frame_rate != 16000:
        audio_segment = audio_segment.set_frame_rate(16000)
    if audio_segment.sample_width != 2:   # int16
        audio_segment = audio_segment.set_sample_width(2)
    if audio_segment.channels != 1:       # mono
        audio_segment = audio_segment.set_channels(1)
    arr = np.array(audio_segment.get_array_of_samples())
    arr = arr.astype(np.float32) / 32768.0  # scale int16 samples to [-1.0, 1.0]
    # beam_size matters a lot: 5 already helps, 8 gives ~30% GPU utilization
    data = model.transcribe(arr, language="en", prompt=prompt, fp16=False,
                            beam_size=8, verbose=True,
                            condition_on_previous_text=False)["text"]

    print('Transcribed segment %d of %d' % (segment_number, number_of_segments))

    with open(os.path.join('./transcripts/', segment_filename + '.txt'), "a") as f:
        f.write(data)

    prompt += data
    segment_start += segment_length * 1000
    segment_end += segment_length * 1000
    segment_number += 1

I still haven't fully worked out what beam_size actually means.

Beam size (also called beam width) controls how many paths are explored at each step when generating output. But what does that actually mean?
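The idea is easier to see in a toy example. The sketch below is not Whisper's decoder, just the generic pruning mechanism with made-up log-probabilities:

import math

# Hypothetical per-token log-probabilities (context-free, purely illustrative).
log_probs = {"a": math.log(0.5), "b": math.log(0.3), "c": math.log(0.2)}

def beam_search(steps, beam_size):
    beams = [("", 0.0)]  # (partial sequence, cumulative log-probability)
    for _ in range(steps):
        # Extend every surviving sequence with every possible next token...
        candidates = [(seq + tok, score + lp)
                      for seq, score in beams
                      for tok, lp in log_probs.items()]
        # ...then prune back to the beam_size highest-scoring ones.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

print(beam_search(steps=3, beam_size=2))

So beam_size=8 means eight candidate transcriptions are extended and scored at every decoding step instead of just one, which is presumably why a larger beam keeps the GPU noticeably busier.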

