Out-of-memory report when running Qwen2.5-72B inference with mindformers, and how to resolve it

1 System Environment

Hardware environment (Ascend/GPU/CPU): Ascend
MindSpore version: mindspore=2.2.10
Execution mode (PyNative/Graph): either
Python version: Python=3.8
OS platform: Linux

2 Error Information

2.1 Problem Description

Command:

bash ../../scripts/msrun_launcher.sh "run_qwen2.py \
 --config predict_qwen2_72b_instruct.yaml \
 --load_checkpoint /home/tzl/qwen2.5_72b/ckpt/qwen2.5_72b.ckpt \
 --vocab_file /home/tzl/qwen2.5_72b/vocab.json \
 --merges_file /home/tzl/qwen2.5_72b/merges.txt \
 --run_mode predict \
 --use_parallel True \
 --auto_trans_ckpt True \
 --predict_data 帮助我制定一份去上海的旅游攻略" 8

Inference reports an out-of-memory (insufficient device memory) error.

2.2 Error Message

[WARNING] DEVICE(34291,ffff89caec20,python):2024-12-16-10:12:52.891.209 [mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:145] Initialize] Reserved memory size for other components(2113929216) is less than recommend size(4068186112), It may lead to Out Of Memory in HCCL or other components, Please double check context key 'variable_memory_max_size'/'max_device_memory'  
[MS_ALLOC_CONF]Runtime config: enable_vmm: False  
[WARNING] DEVICE(34291,ffff89caec20,python):2024-12-16-10:12:56.763.772 [mindspore/ccsrc/plugin/device/ascend/hal/hardware/ascend_collective_comm/multi_ascend_collective_comm_lib.cc:76] Initialize] Loading LCCL because env MS_ENABLE_LCCL is set to on. Pay attention that LCCL only supports communication group within single node in KernelByKernel for now.

2.3 Log Information

/usr/local/lib/python3.10/dist-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.  
setattr(self, word, getattr(machar, word).flat[0])  
/usr/local/lib/python3.10/dist-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.  
return self.float_to_str(self.smallest_subnormal)  
/usr/local/lib/python3.10/dist-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.  
setattr(self, word, getattr(machar, word).flat[0])  
/usr/local/lib/python3.10/dist-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.  
return self.float_to_str(self.smallest_subnormal)  
Namespace(task='text_generation', config='predict_qwen2_72b_instruct.yaml', run_mode='predict', load_checkpoint='/home/tzl/qwen2.5_72b/ckpt/qwen2.5_72b.ckpt', auto_trans_ckpt=True, vocab_file='/home/tzl/qwen2.5_72b/vocab.json', merges_file='/home/tzl/qwen2.5_72b/merges.txt', predict_data='帮助我制定一份去上海的旅游攻略', seq_length=None, predict_length=8192, use_parallel=True, device_id=-1, use_past=None, do_sample=None, top_k=None, top_p=None, train_dataset='', remote_save_url=None, batch_size=1)  
2024-12-16 10:12:37,702 - mindformers[mindformers/core/context/build_context.py:188] - INFO - Predict context config, jit_level: O0, infer_boost: on  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:12:37.705.272 [mindspore/ccsrc/distributed/rpc/tcp/tcp_comm.cc:481] Connect] Connection 210 source: 127.0.0.1:33806, destination: 127.0.0.1:8118  
[WARNING] DISTRIBUTED(34291,fffd4ffff120,python):2024-12-16-10:12:37.705.297 [mindspore/ccsrc/distributed/rpc/tcp/tcp_comm.cc:78] ConnectedEventHandler] Connection from 127.0.0.1:33806 to 127.0.0.1:8118 is successfully created. System errno: Success  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:12:37.705.311 [mindspore/ccsrc/distributed/rpc/tcp/tcp_comm.cc:490] Connect] Waiting for the state of the connection to 127.0.0.1:8118 to be connected...Retry number:1  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:12:38.705.539 [mindspore/ccsrc/distributed/rpc/tcp/tcp_comm.cc:481] Connect] Connection 211 source: 127.0.0.1:33862, destination: 127.0.0.1:8118  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:12:38.705.564 [mindspore/ccsrc/distributed/rpc/tcp/tcp_comm.cc:490] Connect] Waiting for the state of the connection to 127.0.0.1:8118 to be connected...Retry number:2  
[WARNING] DISTRIBUTED(34291,fffd5507f120,python):2024-12-16-10:12:38.705.582 [mindspore/ccsrc/distributed/rpc/tcp/tcp_comm.cc:78] ConnectedEventHandler] Connection from 127.0.0.1:33862 to 127.0.0.1:8118 is successfully created. System errno: Success  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:12:39.705.997 [mindspore/ccsrc/distributed/cluster/cluster_context.cc:194] BuildCluster] Topology build timed out., retry(1/2400).  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:12:42.706.110 [mindspore/ccsrc/distributed/cluster/cluster_context.cc:194] BuildCluster] Topology build timed out., retry(2/2400).  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:12:45.706.231 [mindspore/ccsrc/distributed/cluster/cluster_context.cc:196] BuildCluster] Cluster is successfully initialized.  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:12:45.706.399 [mindspore/ccsrc/distributed/cluster/cluster_context.cc:260] PostProcess] This node 0 rank id: 0  
[WARNING] DEVICE(34291,ffff89caec20,python):2024-12-16-10:12:52.891.209 [mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:145] Initialize] Reserved memory size for other components(2113929216) is less than recommend size(4068186112), It may lead to Out Of Memory in HCCL or other components, Please double check context key 'variable_memory_max_size'/'max_device_memory'  
[MS_ALLOC_CONF]Runtime config: enable_vmm:False  
[WARNING] DEVICE(34291,ffff89caec20,python):2024-12-16-10:12:56.763.772 [mindspore/ccsrc/plugin/device/ascend/hal/hardware/ascend_collective_comm/multi_ascend_collective_comm_lib.cc:76] Initialize] Loading LCCL because env MS_ENABLE_LCCL is set to on. Pay attention that LCCL only supports communication group within single node in KernelByKernel for now.  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:12:56.774.255 [mindspore/ccsrc/distributed/collective/collective_manager.cc:309] CreateCommunicationGroup] Start to create communication group: hccl_world_group [const vector]{0, 1, 2, 3, 4, 5, 6, 7}  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:12:56.788.319 [mindspore/ccsrc/distributed/collective/collective_manager.cc:374] CreateCommunicationGroup] Begin initialize communication group on the device side: hccl_world_group  
[WARNING] DISTRIBUTED(34291,ffff89caec20,python):2024-12-16-10:13:05.961.917 [mindspore/ccsrc/distributed/collective/collective_manager.cc:384] CreateCommunicationGroup] End initialize communication group on the device side: hccl_world_group  
2024-12-16 10:13:05,965 - mindformers[mindformers/tools/utils.py:182] - INFO - set strategy path to './output/strategy/ckpt_strategy_rank_0.ckpt'  
2024-12-16 10:13:05,990 - mindformers[mindformers/trainer/trainer.py:985] - INFO - Load configs in /home/tzl/mindformers/configs/gpt2/run_gpt2.yaml to build trainer.  
2024-12-16 10:13:05,990 - mindformers[mindformers/trainer/trainer.py:1068] - INFO - ..........Init Config..........  
2024-12-16 10:13:05,990 - mindformers[mindformers/core/parallel_config.py:52] - INFO - initial parallel_config from dict: {'data_parallel': 1, 'model_parallel': 8, 'pipeline_stage': 1, 'micro_batch_num': 1, 'vocab_emb_dp': False, 'gradient_aggregation_group': 4}  
2024-12-16 10:13:05,992 - mindformers[mindformers/tools/utils.py:167] - INFO - set output path to '/home/tzl/mindformers/research/qwen2/output'  
2024-12-16 10:13:06,008 - mindformers[mindformers/trainer/base_trainer.py:92] - INFO - host_name: huawei, host_ip: 172.104.223.8  
2024-12-16 10:13:06,009 - mindformers[mindformers/trainer/base_trainer.py:98] - INFO - Now Running Task is: text_generation, Model is: qwen2_72b  
2024-12-16 10:13:06,009 - mindformers[mindformers/trainer/base_trainer.py:124] - WARNING - Input model name is not in the supported list or unspecified.  
2024-12-16 10:13:06,009 - mindformers[mindformers/trainer/base_trainer.py:125] - WARNING - See the list of supported task and model name: ['baichuan2_13b', 'baichuan2_7b', 'baichuan_7b', 'bloom_176b', 'bloom_560m', 'bloom_65b', 'bloom_7.1b', 'codegeex2_6b', 'codellama_34b', 'common', 'deepseek1_5_7b', 'deepseek_33b', 'glm2_6b', 'glm2_6b_lora', 'glm3_6b', 'glm4_9b', 'glm_6b', 'glm_6b_chat', 'glm_6b_lora', 'glm_6b_lora_chat', 'gpt2', 'gpt2_13b', 'gpt2_52b', 'gpt2_lora', 'gpt2_xl', 'gpt2_xl_lora', 'internlm_7b', 'internlm_7b_lora', 'llama2_13b', 'llama2_70b', 'llama2_7b', 'llama_13b', 'llama_65b', 'llama_7b', 'llama_7b_lora', 'llama_7b_slora', 'pangualpha_13b', 'pangualpha_2_6b', 'qwen_7b', 'qwen_7b_lora', 'skywork_13b', 'yi_34b', 'yi_6b', 'ziya_13b']  
2024-12-16 10:13:06,010 - mindformers[mindformers/trainer/base_trainer.py:126] - WARNING - The default model config: /home/tzl/mindformers/configs/gpt2/run_gpt2.yaml will now be used for the text_generation task  
2024-12-16 10:13:06,010 - mindformers[mindformers/trainer/trainer.py:1139] - INFO - ..........Init Model..........  
2024-12-16 10:13:06,010 - mindformers[mindformers/trainer/trainer.py:318] - INFO - ==========Trainer Init Success!==========  
2024-12-16 10:13:06,011 - mindformers[mindformers/tools/utils.py:570] - INFO - Remake ./output/strategy...  
2024-12-16 10:13:06,012 - mindformers[mindformers/tools/utils.py:587] - INFO - Folder ./output/strategy is remaked.  
2024-12-16 10:13:06,012 - mindformers[mindformers/tools/utils.py:570] - INFO - Remake ./output/transformed_checkpoint...  
2024-12-16 10:13:06,012 - mindformers[mindformers/tools/utils.py:587] - INFO - Folder ./output/transformed_checkpoint is remaked.  
2024-12-16 10:13:06,012 - mindformers[mindformers/trainer/trainer.py:1139] - INFO - ..........Init Model..........  
2024-12-16 10:13:06,013 - mindformers[mindformers/trainer/base_trainer.py:190] - INFO - The current parallel mode is semi_auto_parallel, full batch is True,so global batch size will be changed: global_batch_size = batch_size * data_parallel * micro_batch_interleave_num * gradient_accumulation_steps = 1 = 1 * 1 * 1 * 1  
2024-12-16 10:13:06,013 - mindformers[mindformers/trainer/base_trainer.py:416] - INFO - .........Build Network From Config..........  
2024-12-16 10:13:06,015 - mindformers[mindformers/version_control.py:103] - INFO - The Lazy Inline compilation acceleration feature only works in pipeline parallel mode (pipeline_stage > 1). Current pipeline stage=1, the feature is disabled by default. You can also enable lazy inline without pipeline parallel, by setting environment variable export ENABLE_LAZY_INLINE_NO_PIPELINE=1.  
2024-12-16 10:13:06,024 - mindformers[mindformers/models/llama/llama.py:95] - INFO - Open prefill flatten and disable custom flash attention op:False  
2024-12-16 10:13:06,024 - mindformers[mindformers/models/llama/llama.py:102] - INFO - MoE config is None, use normal FFN  
2024-12-16 10:13:06,547 - mindformers[mindformers/models/utils.py:123] - INFO - num_layers per stage: [[80]]  
2024-12-16 10:13:06,547 - mindformers[mindformers/models/utils.py:124] - INFO - Accumulated num_layers per stage: [[80]]  
2024-12-16 10:13:06,547 - mindformers[mindformers/models/utils.py:125] - INFO - Pipeline id list: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  
2024-12-16 10:13:06,547 - mindformers[mindformers/models/utils.py:126] - INFO - Interleave id list: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  
2024-12-16 10:13:06,548 - mindformers[mindformers/models/utils.py:136] - INFO - Formative layer_recompute: [[0]]  
2024-12-16 10:13:06,548 - mindformers[mindformers/models/utils.py:137] - INFO - Formative select_recompute: {'feed_forward\.mul': [[0]], 'feed_forward\.w1\.activation\.silu': [[0]]}  
2024-12-16 10:13:06,548 - mindformers[mindformers/models/utils.py:138] - INFO - Formative select_comm_recompute: {'.*\.norm': [[0]]}  
2024-12-16 10:13:06,576 - mindformers[mindformers/version_control.py:63] - INFO - Predict enable lazy inline.  
[WARNING] ME(34291:281472993520672,MainProcess):2024-12-16-10:13:06.576.000 [mindspore/common/parameter.py:837] This interface may be deleted in the future.  
2024-12-16 10:13:06,601 - mindformers[mindformers/version_control.py:63] - INFO - Predict enable lazy inline.  
2024-12-16 10:13:06,628 - mindformers[mindformers/version_control.py:63] - INFO - Predict enable lazy inline.  
2024-12-16 10:13:06,655 - mindformers[mindformers/version_control.py:63] - INFO - Predict enable lazy inline.  
2024-12-16 10:13:06,679 - mindformers[mindformers/version_control.py:63] - INFO - Predict enable lazy inline.  
2024-12-16 10:13:06,709 - mindformers[mindformers/version_control.py:63] - INFO - Predict enable lazy inline.  
2024-12-16 10:13:06,735 - mindformers[mindformers/version_control.py:63] - INFO - Predict enable lazy inline.  
2024-12-16 10:13:06,759 - mindformers[mindformers/version_control.py:63] - INFO - Predict enable lazy inline.  
2024-12-16 10:13:06,783 - mindformers[mindformers/version_control.py:63] - INFO - Predict enable lazy inline.  
2024-12-16 10:13:06,805 - mindformers[mindformers/version_control.py:63] - INFO - Predict enable lazy inline.  
2024-12-16 10:13:08,949 - mindformers[mindformers/models/modeling_utils.py:1502] - INFO - model built, but weights is unloaded, since the config has no checkpoint_name_or_path attribute or checkpoint_name_or_path is None.  
2024-12-16 10:13:08,949 - mindformers[mindformers/models/llama/llama.py:407] - INFO - Predict run mode:True  
2024-12-16 10:13:08,990 - mindformers[mindformers/trainer/base_trainer.py:579] - INFO - Network Parameters: 72706 M.  
2024-12-16 10:13:09,319 - mindformers[mindformers/trainer/utils.py:367] - INFO - .........Building model.........  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.407.797 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 1 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.407.879 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 2 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.407.894 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 3 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.407.911 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 4 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.407.925 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 5 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.407.936 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 6 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.407.949 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 7 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.407.962 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 8 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.407.976 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 9 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.407.987 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 10 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:09.408.006 [mindspore/ccsrc/frontend/parallel/step_parallel_utils.cc:2156] ExtendInputArgsAbstractShape] The input 12 is not a tensor.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:37.410.646 [mindspore/ccsrc/frontend/parallel/pass/dataset_reader_optimizer.cc:305] BroadcastDataset] For now on, only dataset sink mode support dataset reader optimizer.  
[WARNING] PARALLEL(34291,ffff89caec20,python):2024-12-16-10:13:46.538.237 [mindspore/ccsrc/frontend/parallel/pass/overlap_recompute_allgather_and_flashattention_grad.cc:194] OverlapRecomputeAllGatherAndFlashAttentionGrad] Currently, duplicated allgather overlap with flashattention grad only support in lazy_line mode.  
2024-12-16 10:14:55,193 - mindformers[mindformers/trainer/utils.py:380] - INFO - /home/tzl/mindformers/research/qwen2/output is_share_disk: False  
2024-12-16 10:14:55,194 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:163] - INFO - rank_id: 0  
2024-12-16 10:14:55,195 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:164] - INFO - world_size: 8  
2024-12-16 10:14:55,195 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:165] - INFO - transform_process_num: 1  
2024-12-16 10:14:55,195 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:166] - INFO - transform_rank_id_list: [0]  
2024-12-16 10:14:55,195 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:225] - INFO - The strategy files under ./output/strategy will be used as the dst_strategy.  
2024-12-16 10:14:55,196 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:485] - INFO - .........Collecting strategy.........  
2024-12-16 10:14:55,196 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:490] - INFO - pipeline_stage = 1, strategy using ./output/strategy/ckpt_strategy_rank_0.ckpt  
2024-12-16 10:14:55,197 - mindformers[mindformers/tools/ckpt_transform/utils.py:41] - INFO - Soft link of checkpoint file from /home/tzl/qwen2.5_72b/ckpt/qwen2.5_72b.ckpt to /tmp/tmpottchla/qwen2/rank_0/qwen2.5_72b.ckpt created.  
2024-12-16 10:14:55,197 - mindformers[mindformers/tools/utils.py:570] - INFO - Remake ./output/transformed_checkpoint/qwen2...  
2024-12-16 10:14:55,198 - mindformers[mindformers/tools/utils.py:587] - INFO - Folder ./output/transformed_checkpoint/qwen2 is remaked.  
2024-12-16 10:14:55,198 - mindformers[mindformers/tools/utils.py:690] - INFO - Wait Remake ./output/transformed_checkpoint/qwen2 by main rank.  
2024-12-16 10:14:57,111 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:275] - INFO - The transformed checkpoint will be saved under ./output/transformed_checkpoint/qwen2.  
2024-12-16 10:14:57,112 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:349] - INFO - .........Transforming ckpt.........  
2024-12-16 10:14:57,112 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:350] - INFO - src_checkpoint: /tmp/tmpottchla/qwen2  
2024-12-16 10:14:57,112 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:351] - INFO - src_strategy: None  
2024-12-16 10:14:57,113 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:352] - INFO - dst_checkpoint: ./output/transformed_checkpoint/qwen2  
2024-12-16 10:14:57,113 - mindformers[mindformers/tools/ckpt_transform/transform_checkpoint.py:353] - INFO - dst_strategy: ./output/strategy/ckpt_strategy_rank_0.ckpt

3 Root Cause Analysis

These messages are WARNINGs emitted during normal operation. They are not errors, they do not affect inference, and they are informational only.
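For reference, the "Reserved memory size for other components ... is less than recommend size" warning points at the max_device_memory context key: roughly speaking, the device memory that MindSpore does not claim for itself stays reserved for HCCL and other components. In mindformers this key normally sits in the context section of the inference YAML (here predict_qwen2_72b_instruct.yaml). The snippet below is only a hedged sketch; the value shown is an example and must be adapted to the actual device:

context:
  device_target: "Ascend"
  # Example value only; lowering it slightly leaves more memory reserved for HCCL and silences the warning.
  max_device_memory: "58GB"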

4 Solution

If a large model actually ran out of device memory during fine-tuning, the failure would surface as a real error, typically something like a "C++ Stack Error", rather than the WARNINGs shown above.
The "False" seen above is the value of the runtime configuration item enable_vmm, reported under MS_ALLOC_CONF; keeping it at False speeds up large-model training and inference.

If training and inference speed is not a hard requirement, enable_vmm can be changed to True, which turns on virtual memory management and trades some speed for better device-memory utilization.
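A minimal sketch of how this could be toggled, assuming enable_vmm is controlled through the MS_ALLOC_CONF environment variable as the "[MS_ALLOC_CONF]Runtime config: enable_vmm: False" log line suggests (verify the exact value syntax against the MindSpore documentation for your version):

# Hedged sketch: turn on virtual memory management for the next launch.
# Leave this unset (default enable_vmm: False) if inference speed matters more than memory headroom.
export MS_ALLOC_CONF="enable_vmm:true"
bash ../../scripts/msrun_launcher.sh "run_qwen2.py --config predict_qwen2_72b_instruct.yaml ..." 8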