1. System Environment
Hardware (Ascend/GPU/CPU): Ascend
MindSpore version: any
Execution mode (PyNative/Graph): any
2. Problem Description
Llama inference fails with a parameter validation error:
(MindSpore) [ma-user transfer_checkpoint]$ python test_baichuan_78.py
Traceback (most recent call last):
  File "/home/ma-user/work/mindformers/transfer_checkpoint/test_baichuan_78.py", line 16, in <module>
    baichuan_model = LlamaForCausalLM(
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindformers/models/llama/llama.py", line 253, in __init__
    self.model = LlamaModel(config=config)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindformers/models/llama/llama.py", line 94, in __init__
    Validator.check_positive_int(config.batch_size)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/_checkparam.py", line 331, in check_positive_int
    return check_number(arg_value, 0, GT, int, arg_name, prim_name)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/_checkparam.py", line 220, in check_number
    check_param()
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/_checkparam.py", line 208, in check_param
    raise TypeError(f"{prim_info} must be {arg_type.__name__}, but got '{type(arg_value).__name__}'")
TypeError: The input value must be int, but got 'NoneType'.
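To see why `None` triggers a TypeError rather than a ValueError here, the validation can be sketched as a simplified stand-in for MindSpore's positive-int check (the function below is a minimal illustration, not the actual `_checkparam.py` implementation):

```python
def check_positive_int(arg_value, arg_name="batch_size"):
    # Simplified sketch of the validator's behavior: the value must be an
    # int strictly greater than 0. A None (i.e. an unset config field)
    # fails the type check first, producing a TypeError like the one above.
    if not isinstance(arg_value, int) or isinstance(arg_value, bool):
        raise TypeError(
            f"The input value '{arg_name}' must be int, "
            f"but got '{type(arg_value).__name__}'."
        )
    if arg_value <= 0:
        raise ValueError(f"'{arg_name}' must be positive, but got {arg_value}.")
    return arg_value

check_positive_int(8)         # passes: positive int
try:
    check_positive_int(None)  # reproduces the TypeError in the traceback
except TypeError as e:
    print(e)
```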
3. Solution
The error occurs during parameter validation. The code above validates `config.batch_size`, expecting it to be an `int`, but it is actually `NoneType`, meaning the value passed in is empty. Check whether `config.batch_size` is configured at all, whether it is placed at the correct location in the yaml file, and whether its value is correct.
Other parameter validation errors can be resolved with the same approach.
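A defensive check before building the model can surface the misconfiguration early with a clearer message. The sketch below assumes a config already loaded from yaml into a nested dict; the key layout (`model.model_config.batch_size`) is only an assumed example and should be matched to your actual yaml:

```python
# Hypothetical config dict as it might look after loading the model yaml;
# the nesting (model -> model_config -> batch_size) is assumed for
# illustration and may differ in your yaml.
config = {
    "model": {
        "model_config": {
            "batch_size": None,   # unset or misplaced -> triggers the TypeError
            "seq_length": 2048,
        }
    }
}

def validate_batch_size(cfg):
    # Fail fast with an actionable message instead of a generic TypeError
    # deep inside model construction.
    bs = cfg["model"]["model_config"].get("batch_size")
    if not isinstance(bs, int) or isinstance(bs, bool) or bs <= 0:
        raise ValueError(
            f"batch_size must be a positive int, got {bs!r}; "
            "check that it is set under model.model_config in the yaml."
        )
    return bs

try:
    validate_batch_size(config)
except ValueError as e:
    print(e)

config["model"]["model_config"]["batch_size"] = 1
print(validate_batch_size(config))  # prints 1
```

The same pattern applies to any other validated field (e.g. `seq_length`): verify the key exists, sits at the expected nesting level, and holds a value of the expected type before handing the config to the model constructor.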