About the "Model Inference" category

A question about model inference.

I have a 310B board and want to deploy an ASR engine on it.
I first tried FunASR: it runs functionally, but the NPU is never used. Zhang Yebin (张烨槟) said that models not on ModelZoo currently have nobody assigned to maintain them, so I went to ModelZoo and found s2t-small-librispeech-asr.

But this one does not run either. The details are as follows:

Test

Test code

import torch
from mindnlp.transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy",
    "clean",
    split="validation"
)

input_features = processor(
    ds[0]["audio"]["array"],
    sampling_rate=16000,
    return_tensors="ms"
).input_features  # Batch size 1
generated_ids = model.generate(input_features=input_features)

transcription = processor.batch_decode(generated_ids)
print(transcription)  # show the decoded text

Test run
python test.py

It fails with the following error:

Traceback (most recent call last):
  File "/opt/jacky/jacky/speech/LibriSpeech/test.py", line 5, in <module>
    model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
  File "/opt/Ascend/miniconda3/envs/ascend/lib/python3.10/site-packages/mindnlp/transformers/modeling_utils.py", line 2712, in from_pretrained
    pretrained_model_name_or_path = json.load(f)["base_model_name_or_path"]
  File "/opt/Ascend/miniconda3/envs/ascend/lib/python3.10/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/opt/Ascend/miniconda3/envs/ascend/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/opt/Ascend/miniconda3/envs/ascend/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/opt/Ascend/miniconda3/envs/ascend/lib/python3.10/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
[ERROR] 2025-06-18-07:28:37 (PID:2823, Device:-1, RankID:-1) ERR99999 UNKNOWN applicaiton exception

To see which file was being parsed, I looked at the mindnlp code where the error occurs and added a print for the file name. In
/opt/Ascend/miniconda3/envs/ascend/lib/python3.10/site-packages/mindnlp/transformers/modeling_utils.py
I added a print just before line 2712:

        if _adapter_model_path is not None and os.path.isfile(_adapter_model_path):
            print(f"adapter model path={_adapter_model_path}")
            with open(_adapter_model_path, "r", encoding="utf-8") as f:
                _adapter_model_path = pretrained_model_name_or_path
                pretrained_model_name_or_path = json.load(f)["base_model_name_or_path"]

Running again, the JSON file in question turns out to be:
adapter model path=/opt/jacky/jacky/speech/LibriSpeech/.mindnlp/model/facebook/s2t-small-librispeech-asr/adapter_config.json
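As a side note, a more defensive version of that block could fall back instead of crashing when the cached adapter config is not valid JSON. This is only a sketch of the idea (the function name and fallback behavior are mine, not mindnlp's actual code):

```python
import json


def resolve_base_model(adapter_config_path, fallback_name):
    """Read base_model_name_or_path from an adapter config file.

    If the file is not valid JSON (for example a cached HTML error page
    left behind by a failed download), return the name the caller already
    had instead of raising JSONDecodeError.
    """
    try:
        with open(adapter_config_path, "r", encoding="utf-8") as f:
            return json.load(f)["base_model_name_or_path"]
    except (json.JSONDecodeError, KeyError, OSError):
        return fallback_name
```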

Opening adapter_config.json under /opt/jacky/jacky/speech/LibriSpeech/.mindnlp/model/facebook/s2t-small-librispeech-asr shows that the download failed; the file actually contains:

<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
</body>
</html>
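This HTML body explains the traceback exactly: handing a non-JSON document to json.load fails on the very first character, which is what "Expecting value: line 1 column 1 (char 0)" means. A minimal reproduction:

```python
import json

# The cached "adapter_config.json" actually holds a 503 error page.
html_503 = (
    "<html>\n"
    "<head><title>503 Service Temporarily Unavailable</title></head>\n"
    "<body>\n"
    "<center><h1>503 Service Temporarily Unavailable</h1></center>\n"
    "</body>\n"
    "</html>\n"
)

try:
    json.loads(html_503)
except json.JSONDecodeError as e:
    # '<' cannot start any JSON value, so parsing fails immediately
    # at line 1, column 1.
    print(e)  # Expecting value: line 1 column 1 (char 0)
```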

Checking the file list of the original s2t-small-librispeech-asr repository on Hugging Face, this file does not exist there at all. Given the adapter_config name, my guess is that it is an adapter config file that mindnlp itself tries to fetch.
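Until the cause is confirmed, one possible workaround is to purge any cached files that are really HTML error pages, so the loader does not keep tripping over them on the next run. This is my own sketch; the cache layout is just what the path above shows, and whether mindnlp cleanly re-downloads or skips the missing file afterwards is an assumption:

```python
import os


def purge_html_error_pages(cache_root):
    """Delete cached files whose content is an HTML error page instead of
    the expected data (e.g. a 503 page saved as adapter_config.json).

    Returns the list of removed paths.
    """
    removed = []
    for dirpath, _dirnames, filenames in os.walk(cache_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    head = f.read(64).lstrip().lower()
            except OSError:
                continue  # unreadable file: leave it alone
            if head.startswith(b"<html") or head.startswith(b"<!doctype html"):
                os.remove(path)
                removed.append(path)
    return removed
```

For this case one could run it on .mindnlp/model/facebook/s2t-small-librispeech-asr (or simply delete the broken adapter_config.json by hand) and rerun the script.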

Could anyone help confirm what is going on here?