MindFormers LoRA fine-tuning crashes with a Bus error

When fine-tuning the Qwen3-8B model with LoRA using MindFormers, the log shows multiple warnings and the program eventually crashes with Bus error (core dumped). The warnings need to be analyzed and the training crash resolved.

Problem analysis

Judging from the log, the main causes of the Bus error (core dumped) are the following:

1. Insufficient memory (primary cause)

  • The log explicitly warns: Reserved memory size for other components(3175088128) is less than recommend size(4069086208), i.e. the memory reserved on the Ascend device is below the recommended size, which can easily lead to out-of-memory (OOM) failures

  • A large number of BFloat16 model parameters are being converted to Float32 (The type of Parameter#xxx:BFloat16 in 'parameter_dict' is different from the type of it in 'net':Float32). Float32 occupies twice the memory of BF16, significantly increasing memory consumption

  • Even with LoRA, fine-tuning Qwen3-8B on a single card puts heavy pressure on device memory; running out of memory can directly trigger the bus error
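The extra memory cost of the BF16 to FP32 cast can be estimated with back-of-the-envelope arithmetic (a rough sketch; the 8-billion parameter count is inferred from the model name, and activations, gradients, and optimizer state are ignored):

```python
# Rough estimate of parameter memory for an ~8B model when BF16
# checkpoint weights are cast to FP32 at load time.
PARAMS = 8_000_000_000          # ~8B parameters (assumed from the model name)
BF16_BYTES, FP32_BYTES = 2, 4   # bytes per element

bf16_gib = PARAMS * BF16_BYTES / 1024**3
fp32_gib = PARAMS * FP32_BYTES / 1024**3

print(f"BF16 weights:  {bf16_gib:.1f} GiB")                      # ~14.9 GiB
print(f"FP32 weights:  {fp32_gib:.1f} GiB")                      # ~29.8 GiB
print(f"Extra memory from the cast: {fp32_gib - bf16_gib:.1f} GiB")  # ~14.9 GiB
```

So the FP32 copy of the weights alone costs roughly 15 GiB more than BF16; on a 32 GB card this already consumes most of the device memory before activations are counted, which is consistent with the reserved-memory warning. Keeping the network in BF16 (in MindFormers configs this is typically controlled by the model's compute_dtype / params_dtype settings) avoids the doubling.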

2. Incompatible environment versions

  • The version used to compile the custom operator (8.3) does not match the installed Ascend software package version (8.2): The version 8.3 used for compiling the custom operator does not match Ascend AI software package version 8.2. This mismatch can cause invalid memory accesses when the operator executes

3. Other minor warnings (not fatal, but affecting stability)

  • Incompatible HuggingFace config parameters (e.g. torch_dtype, use_cache): warnings only; they do not affect core execution

  • The checkpoint file lacks metadata: loading still works, but parameter matching problems are possible
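Whether the checkpoint really stores BF16 weights, and whether it carries the metadata block the warning complains about, can be confirmed without loading the model: a .safetensors file begins with an 8-byte little-endian header length followed by a JSON table that records each tensor's dtype. A minimal stdlib-only sketch (the shard filename in the usage comment is a placeholder):

```python
import json
import struct
from collections import Counter

def safetensors_dtypes(path):
    """Read the JSON header of a .safetensors file and count tensor dtypes."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # 8-byte LE header size
        header = json.loads(f.read(header_len))
    # "__metadata__" is the optional metadata block the warning refers to
    meta = header.pop("__metadata__", None)
    dtypes = Counter(entry["dtype"] for entry in header.values())
    return dtypes, meta

# Usage (placeholder path for one shard of the Qwen3-8B checkpoint):
# dtypes, meta = safetensors_dtypes("model-00001-of-00005.safetensors")
# print(dtypes)  # e.g. Counter({'BF16': ...}) confirms BF16 weights
# print(meta)    # None would explain the "no metadata info in file" warning
```

If the dtypes come back as BF16 while the network is built in Float32, every parameter will be up-cast at load time, exactly as the serialization warnings report.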

The run output is as follows:

(lora1) root@aide-20260106-31bb61b-36700793-58b476cdcc-96vl8:~/mindformers# python run_mindformer.py --config /root/lora1_qwen3_8b/finetune_qwen3_8b_lora.yaml --use_parallel False --run_mode finetune

/root/.conda/envs/lora1/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
setattr(self, word, getattr(machar, word).flat[0])
/root/.conda/envs/lora1/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
return self._float_to_str(self.smallest_subnormal)
/root/.conda/envs/lora1/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
setattr(self, word, getattr(machar, word).flat[0])
/root/.conda/envs/lora1/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
return self._float_to_str(self.smallest_subnormal)
2026-01-22 00:26:01,707 - mindformers/root/mindformers/output/log[/root/.conda/envs/lora1/lib/python3.9/warnings.py:109] - WARNING - UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2026-01-22 00:26:15,928 - mindformers/root/mindformers/output/log[mindformers/tools/register/template.py:84] - WARNING - The input config swap_config is empty.
2026-01-22 00:26:15,928 - mindformers/root/mindformers/output/log[mindformers/tools/register/template.py:84] - WARNING - The input config metric is empty.
2026-01-22 00:26:15,929 - mindformers/root/mindformers/output/log[mindformers/tools/register/template.py:84] - WARNING - The input config monitor_config is empty.
2026-01-22 00:26:15,930 - mindformers/root/mindformers/output/log[mindformers/utils/file_utils.py:50] - INFO - set output path to '/root/mindformers/output'
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:26:18.920.040 [mindspore/context.py:1334] For 'context.set_context', the parameter 'device_target' will be deprecated and removed in a future version. Please use the api mindspore.set_device() instead.
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:26:18.951.621 [mindspore/context.py:1334] For 'context.set_context', the parameter 'device_id' will be deprecated and removed in a future version. Please use the api mindspore.set_device() instead.
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:26:18.951.810 [mindspore/context.py:1334] For 'context.set_context', the parameter 'max_device_memory' will be deprecated and removed in a future version. Please use the api mindspore.runtime.set_memory() instead.
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:26:18.952.386 [mindspore/context.py:1334] For 'context.set_context', the parameter 'max_call_depth' will be deprecated and removed in a future version. Please use the api mindspore.set_recursion_limit() instead.
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:26:18.952.493 [mindspore/context.py:1334] For 'context.set_context', the parameter 'memory_optimize_level' will be deprecated and removed in a future version. Please use the api mindspore.runtime.set_memory() instead.
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:26:18.952.625 [mindspore/context.py:1334] For 'context.set_context', the parameter 'ascend_config' will be deprecated and removed in a future version. Please use the api mindspore.device_context.ascend.op_precision.precision_mode(),
mindspore.device_context.ascend.op_precision.op_precision_mode(),
mindspore.device_context.ascend.op_precision.matmul_allow_hf32(),
mindspore.device_context.ascend.op_precision.conv_allow_hf32(),
mindspore.device_context.ascend.op_tuning.op_compile() instead.
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:26:19.112.038 [mindspore/context.py:903] For 'context.set_context', 'dataset_broadcast_opt_level' parameter is deprecated, and will be removed in the next version, Please use 'dataset_broadcast_opt_level' instead.
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:26:19.284.999 [mindspore/runtime/thread_bind_core.py:499] Failed to acquire available cpu info, from Failed to parse the result of executing 'cat /sys/fs/cgroup/cpuset/cpuset.cpus'. Will not enable bind core feature.
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:26:19.285.352 [mindspore/runtime/executor.py:155] set_cpu_affinity is not enabled because the environment does not meet the basic conditions for binding core.
2026-01-22 00:26:19,315 - mindformers/root/mindformers/output/log[mindformers/tools/register/template.py:84] - WARNING - The input config swap_config is empty.

2026-01-22 00:26:19,359 - mindformers/root/mindformers/output/log[mindformers/trainer/base_trainer.py:273] - INFO - The current parallel mode is stand_alone, batch size per card will not be changed: batch_size_per_card = 1
2026-01-22 00:26:19,360 - mindformers/root/mindformers/output/log[mindformers/trainer/base_trainer.py:277] - INFO - global_batch_size = batch_size_per_card * device_num * gradient_accumulation_steps = 1 = 1 * 1 * 1
2026-01-22 00:26:19,360 - mindformers/root/mindformers/output/log[mindformers/trainer/base_trainer.py:289] - INFO - parallel_config will be change to default config: [ParallelConfig]
swap:[ParallelConfig]
backward_prefetch:backward_prefetch
layers:layers
_swap:False
_default_prefetch:1
_layer_swap:
_op_swap:{'.*\.flash_attention': [{'backward_prefetch': 1, 'layers': True}]}

_recompute:[ParallelConfig]
_recompute:True
_select_recompute:False
_select_comm_recompute:False
_parallel_optimizer_comm_recompute:True
_mp_comm_recompute:True
_recompute_slice_activation:False
_select_recompute_exclude:False
_select_comm_recompute_exclude:False

select_recompute:False
use_seq_parallel:False
context_parallel_algo:ContextParallelAlgo.COLOSSALAI_CP
ulysses_degree_in_cp:1
mem_coeff:0.1
_optimizer_shard:None
_gradient_aggregation_group:1
_embed_dp_mp_config:[ParallelConfig]
_dp_mp_config:[ParallelConfig]
_data_parallel:1
_model_parallel:1
_context_parallel:1
use_seq_parallel:True
select_recompute:False
context_parallel_algo:ContextParallelAlgo.colossalai_cp

_vocab_emb_dp:True
use_seq_parallel:True
select_recompute:False

_pp_config:[ParallelConfig]
_pipeline_stage:1
_micro_batch_num:1
_seq_split_num:1

_moe_config:[ParallelConfig]
_dpmp:[ParallelConfig]
_data_parallel:1
_model_parallel:1
_context_parallel:1
use_seq_parallel:True
select_recompute:False
context_parallel_algo:ContextParallelAlgo.colossalai_cp

_expert_parallel:1
use_seq_parallel:True
select_recompute:False
enable_deredudency:False
npu_nums_per_device:1

.
2026-01-22 00:26:19,362 - mindformers/root/mindformers/output/log[mindformers/trainer/base_trainer.py:1120] - INFO - ...Build Dataset For Train...
2026-01-22 00:26:19,362 - mindformers/root/mindformers/output/log[mindformers/trainer/base_trainer.py:435] - INFO - ...Build Dataset From Config...
2026-01-22 00:26:19,363 - mindformers/root/mindformers/output/log[mindformers/dataset/causal_language_model_dataset.py:309] - INFO - Now Create Causal Language Model Dataset.
2026-01-22 00:26:19,364 - mindformers/root/mindformers/output/log[mindformers/dataset/base_dataset.py:84] - INFO - Now dataset_strategy is [[1, 1], [1, 1], [1, 1], [1, 1]], shard_id: 0, num_shards: 1
2026-01-22 00:26:19,365 - mindformers/root/mindformers/output/log[mindformers/dataset/dataloader/hf_dataloader.py:297] - INFO - > using datasets.load_dataset to load dataset. Pass 'load_func' in config.load to change this behavior.
2026-01-22 00:26:21,324 - mindformers/root/mindformers/output/log[mindformers/dataset/dataloader/hf_dataloader.py:331] - INFO - > processing AlpacaInstructDataHandler in handler module …
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Map: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5306/5306 [00:06<00:00, 829.61 examples/s]
2026-01-22 00:26:38,009 - mindformers/root/mindformers/output/log[mindformers/dataset/dataloader/hf_dataloader.py:331] - INFO - > processing PackingHandler in handler module …
2026-01-22 00:26:38,009 - mindformers/root/mindformers/output/log[mindformers/dataset/handler/base_handler.py:41] - INFO - tokenizer not set or it have no pad_token_id, set 0 as pad_token_id.
Packing: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5306/5306 [00:01<00:00, 2782.80it/s]
2026-01-22 00:26:40,441 - mindformers/root/mindformers/output/log[mindformers/trainer/base_trainer.py:1124] - INFO - Create train dataset finish, dataset size:216
2026-01-22 00:26:40,441 - mindformers/root/mindformers/output/log[mindformers/trainer/utils.py:221] - INFO - Will be Training epochs:1, sink_size:1

2026-01-22 00:27:11,884 - mindformers/root/mindformers/output/log[mindformers/utils/load_checkpoint_utils.py:668] - WARNING - no metadata info in file.Please make sure remove_redundancy is consistent in file and config.
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:27:11.892.308 [mindspore/train/serialization.py:331] The type of Parameter#1142:BFloat16 in 'parameter_dict' is different from the type of it in 'net':Float32, then the type convert from BFloat16 to Float32 in the network. May consume additional memory and time
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:27:24.998.470 [mindspore/train/serialization.py:331] The type of Parameter#1085:BFloat16 in 'parameter_dict' is different from the type of it in 'net':Float32, then the type convert from BFloat16 to Float32 in the network. May consume additional memory and time
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:27:24.999.336 [mindspore/train/serialization.py:331] The type of Parameter#1090:BFloat16 in 'parameter_dict' is different from the type of it in 'net':Float32, then the type convert from BFloat16 to Float32 in the network. May consume additional memory and time
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:27:25.378.676 [mindspore/train/serialization.py:331] The type of Parameter#1091:BFloat16 in 'parameter_dict' is different from the type of it in 'net':Float32, then the type convert from BFloat16 to Float32 in the network. May consume additional memory and time
[WARNING] ME(19780:281473461764128,MainProcess):2026-01-22-00:27:25.918.438 [mindspore/train/serialization.py:331] The type of Parameter#1092:BFloat16 in 'parameter_dict' is different from the type of it in 'net':Float32, then the type convert from BFloat16 to Float32 in the network. May consume additional memory and time

[... the same BFloat16 to Float32 conversion warning repeats for dozens more parameters; omitted here ...]
Bus error (core dumped)

Hello, we have received the issue above and will analyze it and reply as soon as possible.

Hello, could you please provide your training configuration file?

Could you also share the hardware configuration? Is the device memory 64 GB or 32 GB?

finetune_qwen3_8b_lora.yaml

seed: 42
output_dir: './output'
load_checkpoint: ''
load_ckpt_format: 'safetensors'
src_strategy_path_or_dir: ''
auto_trans_ckpt: True  # If true, automatically transforms the loaded checkpoint for distributed model compatibility
only_save_strategy: False
resume_training: False
use_parallel: True
run_mode: 'finetune'
use_legacy: False

# **************************************************************************************

pretrained_model_dir: "/share/new_models/qwen3/Qwen3-8B"

# Trainer configuration

trainer:
  type: CausalLanguageModelingTrainer
  model_name: 'Qwen3'

# Runner configuration

runner_config:
  epochs: 1
  batch_size: 1
  gradient_accumulation_steps: 1

# Optimizer configuration

optimizer:
  type: AdamW
  betas: [0.9, 0.95]
  eps: 1.e-8
  weight_decay: 0.0

# Learning rate scheduler configuration

lr_schedule:
  type: ConstantWarmUpLR
  learning_rate: 1.e-6
  warmup_ratio: 0
  total_steps: -1  # -1 indicates using the total steps from the dataset

# Dataset configuration

train_dataset: &train_dataset
  input_columns: ["input_ids", "labels", "loss_mask", "position_ids"]
  construct_args_key: ["input_ids", "labels", "loss_mask", "position_ids"]

  data_loader:
    type: HFDataLoader

    # datasets load arguments
    load_func: 'load_dataset'
    ## ************************************************************************************************
    # path: "llm-wizard/alpaca-gpt4-data-zh"
    path: "json"  # To load the dataset offline from a local JSON file, use these two lines; to load from the hub, comment them out and uncomment the line above
    data_files: '/root/lora1_qwen3_8b/dog_prevention.json'
    split: "train"

    # MindFormers dataset arguments
    create_attention_mask: False
    create_compressed_eod_mask: False
    compressed_eod_mask_length: 128
    use_broadcast_data: True
    shuffle: False

    # dataset process arguments
    handler:
      # - type: take
      #   n: 2000
      - type: AlpacaInstructDataHandler
        seq_length: 4096
        padding: False
        tokenizer:
          trust_remote_code: True
          padding_side: 'right'
      - type: PackingHandler
        seq_length: 4096
        pack_strategy: 'pack'

  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  numa_enable: False
  prefetch_size: 1
  seed: 1234

train_dataset_task:
  type: CausalLanguageModelDataset
  dataset_config: *train_dataset

# MindSpore context initialization configuration, reference: mindspore.set_context (MindSpore 2.6.0 documentation)
context:
  mode: 1 # 0: Graph Mode; 1: PyNative Mode (changed from 0 to 1)
  device_target: "Ascend" # Target device to run (only supports "Ascend")
  max_device_memory: "58GB" # Maximum memory available for the device
  memory_optimize_level: "O0" # Memory optimization level
  jit_config: # Global JIT configuration for compilation
    jit_level: "O0" # Compilation optimization level
  ascend_config: # Parameters specific to the Ascend hardware platform
    precision_mode: "must_keep_origin_dtype" # Mixed precision mode setting
    parallel_speed_up_json_path: "./configs/qwen3/parallel_speed_up.json" # Path to the parallel speedup JSON file

# Parallel configuration
parallel_config:
  data_parallel: &dp 1 # Number of data parallel replicas (unchanged: 1)
  model_parallel: 1 # Number of model parallel shards (changed from 4 to 1)
  pipeline_stage: 1 # Number of pipeline parallel stages (changed from 4 to 1)
  micro_batch_num: 1 # Pipeline parallel microbatch size (changed from 4 to 1)
  use_seq_parallel: True # Whether to enable sequence parallelism
  gradient_aggregation_group: 1 # Size of the gradient communication operator fusion group

# When model_parallel > 1, setting micro_batch_interleave_num to 2 may accelerate the training process.
micro_batch_interleave_num: 1

# Parallel context configuration
parallel:
  parallel_mode: 1 # 0: data parallel; 1: semi-auto parallel; 2: auto parallel; 3: hybrid parallel
  enable_alltoall: True # Enables AllToAll communication operator during parallel communication
  full_batch: False # Whether to load the full batch of data in parallel mode
  dataset_strategy: [
    [1, 1],
    [1, 1],
    [1, 1],
    [1, 1]
  ] # Must match the length of train_dataset.input_columns
  search_mode: "sharding_propagation" # Fully-automatic parallel strategy search mode
  strategy_ckpt_config:
    save_file: "./ckpt_strategy.ckpt" # Path for saving the parallel slicing strategy file
    only_trainable_params: False # Whether to save/load slicing strategy for trainable parameters only
  enable_parallel_optimizer: False # Whether to enable optimizer parallelism

# Recomputation configuration
recompute_config:
  recompute: True
  select_recompute: False
  parallel_optimizer_comm_recompute: True
  mp_comm_recompute: True

# Model configuration
model:
  model_config:
    # Configurations from MindFormers
    qkv_concat: True
    hidden_dropout: 0.0
    input_sliced_sig: True
    untie_embeddings_and_output_weights: True
    position_embedding_type: "rope"
    use_contiguous_weight_layout_attention: False
    offset: 0
    params_dtype: "float32"
    compute_dtype: "bfloat16"
    layernorm_compute_dtype: "float32"
    softmax_compute_dtype: "float32"
    rotary_dtype: "float32"
    fp32_residual_connection: True

    # Add pet_config under the model_config level
    pet_config:
      pet_type: lora
      lora_rank: 8
      lora_alpha: 16
      lora_dropout: 0.1
      lora_a_init: 'normal'
      lora_b_init: 'zeros'
      target_modules: '.*word_embeddings|.*linear_qkv|.*linear_proj|.*linear_fc1|.*linear_fc2'
      freeze_include: ['*']
      freeze_exclude: ['*lora*']

# Callbacks configuration, reference: Configuration File Descriptions (MindSpore Transformers 1.5.0 documentation)
callbacks:
  - type: MFLossMonitor # Prints training progress information
  - type: CheckpointMonitor # Saves model weights during training
    prefix: "qwen3" # Prefix for saved file names
    save_checkpoint_steps: 5000 # Interval steps for saving model weights
    keep_checkpoint_max: 1 # Maximum number of saved model weight files
    integrated_save: False # Whether to aggregate weights for saving
    async_save: False # Whether to save model weights asynchronously
    checkpoint_format: "safetensors" # Format for saving checkpoints

# Wrapper cell configuration
runner_wrapper:
  type: MFTrainOneStepCell
  scale_sense: 1.0
  use_clip_grad: True

profile: False
profile_start_step: 1
profile_stop_step: 10
init_start_profile: False
profile_communication: False
profile_memory: True
layer_scale: False
layer_decay: 0.65
lr_scale_factor: 256
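For context on how little the LoRA adapters themselves contribute to memory pressure, here is a rough count of the trainable parameters one rank-8 adapter adds to a linear layer (the 4096 hidden size is illustrative, not taken from the config):

```python
def lora_trainable_params(in_features: int, out_features: int, rank: int) -> int:
    """Trainable parameters added by one LoRA adapter:
    matrix A (in_features x rank) plus matrix B (rank x out_features)."""
    return rank * (in_features + out_features)

# Illustrative hidden size for a Qwen3-8B-class projection layer (assumption).
hidden = 4096
print(lora_trainable_params(hidden, hidden, rank=8))  # 65536 per square projection
```

Even summed over all target modules, the adapters stay in the tens of millions of parameters, so the memory pressure comes from the frozen base weights held in Float32, not from LoRA itself.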

Could you also provide your MindSpore Transformers version and your MindSpore version?

Please confirm whether your MindSpore Transformers version is matched with your MindSpore and CANN versions:

Installation Guide | MindSpore Transformers 1.8.0 documentation | MindSpore community
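You can print the installed versions with a short snippet to compare against the compatibility matrix (the package names below are the usual pip distribution names; adjust if your install differs, and check the CANN version separately via the Ascend toolkit):

```python
# Print installed MindSpore / MindSpore Transformers versions for the
# compatibility check. Package names are assumed pip distribution names.
import importlib.metadata as md

for pkg in ("mindspore", "mindformers"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```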
