List/dict comprehensions not supported when exporting a MindIR inference model

When exporting an inference model in MindIR format, I ran into the following error:


The error indicates that, when compiling the code into a computation graph, the MindSpore framework cannot handle the list and dict comprehensions in my code. Setting jit_syntax_level to LAX, as the error report suggests, does not resolve it either. Is there any way to export the inference model without unrolling the comprehensions? MindSpore version: 2.6.0
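For context, the usual workaround is to unroll each comprehension into an explicit loop inside `construct()`. Below is a minimal plain-Python sketch of that pattern, not the author's actual model; names such as `inputs` and `backbones` are hypothetical stand-ins for the branches in the DPFT network.

```python
# Sketch (hypothetical names): replacing a dict comprehension with an
# explicit loop, which graph compilers generally handle more readily.

class FeatureExtractor:
    def __init__(self, inputs, backbones):
        self.inputs = inputs        # fixed tuple of branch names
        self.backbones = backbones  # branch name -> callable

    def construct(self, batch):
        # Instead of:
        #   features = {k: self.backbones[k](batch[k]) for k in self.inputs}
        # build the dict key by key in a for-loop:
        features = {}
        for key in self.inputs:
            features[key] = self.backbones[key](batch[key])
        return features
```

Since the set of keys is a fixed tuple known at compile time, the loop can be fully unrolled by the graph compiler.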

Hello, and welcome to MindSpore. We have received your question and will analyze it and reply as soon as possible~

Hello, could you please provide a full screenshot of the error, as well as the analyze_fail.ir file under the rank0/om directory in your script directory?

The error is as follows:
[WARNING] CORE(1196275,7b46ffae3440,python):2025-09-02-18:56:47.031.197 [mindspore/core/utils/ms_context.cc:528] GetJitLevel] Pynative mode can not set jit_level to O2, use O0 instead.
[WARNING] UTILS(1196275,7b46ffae3440,python):2025-09-02-18:56:48.183.876 [mindspore/ccsrc/utils/comm_manager.cc:80] GetInstance] CommManager instance for CPU not found, return default instance.
Traceback (most recent call last):
File "/root/DPFT-mindspore/model.py", line 46, in
ms.export(
File "/root/miniconda3/envs/mindspore_env/lib/python3.9/site-packages/mindspore/train/serialization.py", line 2077, in export
_export(net, file_name, file_format, *inputs, **kwargs)
File "/root/miniconda3/envs/mindspore_env/lib/python3.9/site-packages/mindspore/train/serialization.py", line 2133, in _export
_save_mindir(net, file_name, *inputs, **kwargs)
File "/root/miniconda3/envs/mindspore_env/lib/python3.9/site-packages/mindspore/train/serialization.py", line 2373, in _save_mindir
mindir_stream, net_dict = _cell_info(net, incremental, *inputs)
File "/root/miniconda3/envs/mindspore_env/lib/python3.9/site-packages/mindspore/train/serialization.py", line 2354, in _cell_info
graph_id, _ = _executor.compile(net, *inputs, phase=phase_name, do_convert=False)
File "/root/miniconda3/envs/mindspore_env/lib/python3.9/site-packages/mindspore/common/api.py", line 1967, in compile
result = self._graph_executor.compile(obj, args, kwargs, phase)
ValueError: When handling script 'call_func_str(input_0,) in graph mode', the inputs should be constant, but found variable 'input_0' to be nonconstant. Try to set jit_syntax_level to LAX.


  • C++ Call Stack: (For framework developers)

mindspore/ccsrc/pipeline/jit/ps/static_analysis/prim.cc:3625 CheckInterpretInput


  • The Traceback of Net Construct Code:

0 In file /root/DPFT-mindspore/models/dpft.py:250, 18~60

        batch=[features[input] for input in self.inputs],
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1 In file /root/DPFT-mindspore/models/dpft.py:240, 19~92

    features = {input: self.embeddings[input](features[input]) for input in self.inputs}
               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

2 In file /root/DPFT-mindspore/models/dpft.py:236, 19~87

    features = {input: self.necks[input](features[input]) for input in self.inputs}
               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

3 In file /root/DPFT-mindspore/models/dpft.py:230~233, 19~9

    features = {

4 In file /root/DPFT-mindspore/models/dpft.py:224, 19~88

    features = {input: self.backbones[input](batch[input]) for input in self.inputs}
               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

5 In file /root/DPFT-mindspore/models/dpft.py:224, 27~62

    features = {input: self.backbones[input](batch[input]) for input in self.inputs}
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

6 In file /root/DPFT-mindspore/models/backbones/resnet.py:80~81, 8~46

    if self.channel_last:

7 In file /root/DPFT-mindspore/models/backbones/resnet.py:81, 12~46

        x = ops.transpose(x, (0, 3, 1, 2))  # (B, H, W, C) -> (B, C, H, W)
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

8 In file /root/DPFT-mindspore/models/dpft.py:224, 27~41

    features = {input: self.backbones[input](batch[input]) for input in self.inputs}
                       ^~~~~~~~~~~~~~

9 In file /root/DPFT-mindspore/models/backbones/resnet.py:88~90, 8~27

    for  i,layer in enumerate(self.body):
    ^

10 In file /root/DPFT-mindspore/models/dpft.py:224, 27~41

    features = {input: self.backbones[input](batch[input]) for input in self.inputs}
                       ^~~~~~~~~~~~~~

11 In file /root/DPFT-mindspore/models/backbones/resnet.py:88~90, 8~27

    for  i,layer in enumerate(self.body):
    ^

12 In file /root/DPFT-mindspore/models/dpft.py:224, 27~41

    features = {input: self.backbones[input](batch[input]) for input in self.inputs}
                       ^~~~~~~~~~~~~~

13 In file /root/DPFT-mindspore/models/backbones/resnet.py:88~90, 8~27

    for  i,layer in enumerate(self.body):
    ^

14 In file /root/DPFT-mindspore/models/dpft.py:224, 27~41

    features = {input: self.backbones[input](batch[input]) for input in self.inputs}
                       ^~~~~~~~~~~~~~

15 In file /root/DPFT-mindspore/models/backbones/resnet.py:88~90, 8~27

    for  i,layer in enumerate(self.body):
    ^

16 In file /root/DPFT-mindspore/models/dpft.py:224, 27~41

    features = {input: self.backbones[input](batch[input]) for input in self.inputs}
                       ^~~~~~~~~~~~~~

17 In file /root/DPFT-mindspore/models/backbones/resnet.py:88~90, 8~27

    for  i,layer in enumerate(self.body):
    ^

18 In file /root/DPFT-mindspore/models/dpft.py:224, 27~41

    features = {input: self.backbones[input](batch[input]) for input in self.inputs}
                       ^~~~~~~~~~~~~~

19 In file /root/DPFT-mindspore/models/backbones/resnet.py:88~90, 8~27

    for  i,layer in enumerate(self.body):
    ^

20 In file /root/DPFT-mindspore/models/backbones/resnet.py:93~94, 8~54

    if self.channel_last:

21 In file /root/DPFT-mindspore/models/backbones/resnet.py:94, 12~54

        features = self._to_channel_last(features)
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

22 In file /root/DPFT-mindspore/models/backbones/resnet.py:94, 23~54

        features = self._to_channel_last(features)
                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

23 In file /root/DPFT-mindspore/models/backbones/resnet.py:75, 15~92

    return OrderedDict({k: ops.transpose(v, (0, 2, 3, 1)) for k, v in features.items()})
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

(See file '/root/DPFT-mindspore/rank_0/om/analyze_fail.ir' for more details. Get instructions about analyze_fail.ir on the MindSpore community search page.)
The ir file is attached:
analyze_fail.zip (16.5 KB)

Hello, from the log and the IR graph we can see that you are calling self._to_channel_last here, but this attribute is actually a dict and cannot be called. Please check your code logic.
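The failure mode being described can be reproduced in plain Python with a hypothetical class: if an attribute holds a dict, it shadows any method of the same name and cannot be called.

```python
# Minimal plain-Python illustration (hypothetical names) of the failure
# mode: an attribute that holds a dict is not callable.

class Backbone:
    def __init__(self):
        # Assigning a dict to this name shadows any same-named method,
        # so self._to_channel_last(x) raises TypeError at call time.
        self._to_channel_last = {}

    def forward(self, x):
        return self._to_channel_last(x)  # 'dict' object is not callable
```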

Is that features of yours a dict? As I recall, static graph syntax does not support that kind of construct, and exporting MindIR requires the code to conform to static graph syntax. Do you have a small piece of code that directly reproduces the problem? Post it and we can look at it together.


features is a dict generated by a comprehension. In MindSpore I can load the weights into the network with load_param_into_net and run inference in graph mode, but it cannot be exported to MindIR: "When handling script 'call_func_str(input_0,) in graph mode', the inputs should be constant, but found variable 'input_0' to be nonconstant. Try to set jit_syntax_level to LAX." However, the shapes of the inputs are fixed, so the shape of features is fixed as well.


This is a method of a class, and the OrderedDict is its return value.

Hello, OrderedDict is treated as a third-party library type by the graph compiler; it cannot be expressed inside the graph, so it is not supported for MindIR export.
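A graph-friendly rewrite of the helper quoted in the traceback could build a plain dict in an explicit loop instead of returning an OrderedDict. This is a sketch only: `transpose` is passed in as a stand-in for `ops.transpose` so the example stays framework-agnostic, and since Python 3.7 plain dicts preserve insertion order anyway.

```python
# Sketch: avoid OrderedDict in graph-compiled code by returning a plain
# dict built key by key. `transpose` stands in for ops.transpose.

def to_channel_last(features, transpose):
    # Instead of:
    #   return OrderedDict({k: ops.transpose(v, (0, 2, 3, 1))
    #                       for k, v in features.items()})
    out = {}
    for key, value in features.items():
        out[key] = transpose(value, (0, 2, 3, 1))
    return out
```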

Hello, have you generated the analyze_fail.ir file mentioned earlier?

analyze_fail.zip (27.1 KB)

Hello, the shape cannot be expressed inside the graph here because static graph mode does not support creating a dict with dict(). The offending code is at /root/DPFT-mindspore/models/fusers/mpfusion.py:748, 19~64: layerDict = dict(zip(self.mpfusion.values(), self.heads))

If you want to export MindIR, we suggest setting jit_syntax_level to STRICT to get more accurate error messages.
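As a configuration sketch (per the MindSpore 2.x set_context API), the suggested setting would be applied before compiling or exporting the network:

```python
import mindspore as ms

# Request the strictest graph-syntax checking so the compiler reports
# unsupported constructs (dict(), set(), comprehensions, ...) at their
# exact source location instead of failing later during export.
ms.set_context(mode=ms.GRAPH_MODE, jit_syntax_level=ms.STRICT)
```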

Can zip() and set() not be used either?

Hello, zip is supported; set is not.
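Putting the two replies together, the flagged dict(zip(...)) line can be rewritten with a plain loop: zip() itself is supported, only the dict() constructor is not. The names below mirror the quoted mpfusion.py line and are otherwise hypothetical.

```python
# Sketch: replacing dict(zip(...)) with an explicit loop; zip() is
# supported in graph mode, the dict() constructor is not.

def build_layer_dict(names, heads):
    # Instead of: layerDict = dict(zip(names, heads))
    layer_dict = {}
    for name, head in zip(names, heads):
        layer_dict[name] = head
    return layer_dict
```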

Hello, the MindSpore support staff have analyzed the issue and given its cause. Since no answer has been accepted for some time, the moderator will mark an accepted answer and close this thread. If you have further questions, please open a new thread. Thank you for your support~

This topic was automatically closed 60 minutes after the last reply. New replies are no longer allowed.