Faster Whisper float 16 issue


I wanted to use faster-whisper, so I used the example they provided:

from faster_whisper import WhisperModel

def transcribe(audio):
    model = WhisperModel("small")
    segments, info = model.transcribe(audio)
    language = info.language
    print("Transcription language:", language)
    segments = list(segments)  # transcription is lazy; this runs it

    for segment in segments:
        print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
    return language, segments

and ran into this warning:

[ctranslate2] [thread 28968] [warning] The compute type inferred from the saved model is float16, but the target device or backend do not support efficient float16 computation. The model weights have been automatically converted to use the float32 compute type instead.
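As an aside, the warning is informational rather than fatal: CTranslate2 inspects the compute type the model was saved with and, when the device can't run float16 efficiently, silently falls back to float32. The selection logic can be sketched roughly like this (`choose_compute_type` and its `supported_types` argument are illustrative names for this sketch, not the library's actual API):

```python
import warnings

def choose_compute_type(saved_type, supported_types):
    """Pick an effective compute type, mimicking CTranslate2's fallback.

    This helper is a sketch for illustration only; it is not part of
    the ctranslate2 or faster-whisper APIs.
    """
    if saved_type in supported_types:
        return saved_type
    if saved_type == "float16":
        # No efficient fp16 support (e.g. a GPU without Tensor Cores):
        # fall back to full precision, as the warning describes.
        warnings.warn(
            "float16 not efficiently supported on this device; "
            "using float32 instead."
        )
        return "float32"
    return "float32"

# A GPU without Tensor Cores typically lacks efficient float16:
print(choose_compute_type("float16", {"float32", "int8"}))    # float32
print(choose_compute_type("float16", {"float16", "float32"})) # float16
```

In practice you can make the choice explicit and avoid the warning by passing the documented `compute_type` parameter, e.g. `WhisperModel("small", device="cuda", compute_type="float32")` (or `compute_type="int8"` to save memory).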

I installed CUDA 11.8 and cuDNN 9, and CUDA is on the system path (`nvcc --version` works), but I still get the warning. I am using a conda env but have never had problems with paths or venvs before. Any ideas?

EDIT: Found the cause thanks to Abator Abetor and Jérôme Richard: the GPU doesn't have Tensor Cores, so it cannot do efficient float16 computation.
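For anyone hitting the same warning: FP16 Tensor Cores first appeared with NVIDIA's Volta architecture (compute capability 7.0), so older cards such as the Pascal GTX 10xx series (capability 6.x) trigger the float32 fallback. You can read the capability with `torch.cuda.get_device_capability()` if PyTorch is installed; the check itself is just a tuple comparison (the helper below is an illustrative sketch, not something faster-whisper provides):

```python
def supports_fast_float16(compute_capability):
    """Rough heuristic: Tensor-Core float16 needs compute capability >= 7.0.

    `compute_capability` is a (major, minor) tuple, e.g. as returned by
    torch.cuda.get_device_capability(). Illustrative helper only.
    """
    major, minor = compute_capability
    return (major, minor) >= (7, 0)

print(supports_fast_float16((6, 1)))  # e.g. GTX 1080 -> False
print(supports_fast_float16((7, 5)))  # e.g. RTX 2080 -> True
```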


There are 0 answers