qwen-7b full-parameter fine-tuning fails with RuntimeError: Preprocess failed before run graph 1.

1. System Environment

Hardware environment (Ascend/GPU/CPU): Ascend 910
MindSpore version: mindspore=2.2.0
Execution mode (PyNative/Graph): either
Python version: Python=3.9
OS platform: any

2. Error Information

2.1 Problem Description

Full-parameter fine-tuning of qwen-7b fails with RuntimeError: Preprocess failed before run graph 1.

2.2 Configuration

seed: 0
output_dir: './output'
load_checkpoint: '/home/ma-user/work/infer/qwen_7b_base.ckpt'
src_strategy_path_or_dir: ''
auto_trans_ckpt: True  # If true, auto transform load_checkpoint to load in distributed model
only_save_strategy: False
resume_training: False
use_parallel: True
run_mode: 'finetune'

# trainer config
trainer:
  type: CausalLanguageModelingTrainer
  model_name: 'qwen_7b'

# dataset
train_dataset: &train_dataset
  data_loader:
    type: MindDataset
    dataset_dir: "/home/ma-user/work/infer/alpaca.mindrecord"
    shuffle: True
  input_columns: ["input_ids", "labels", "attention_mask"]
  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  batch_size: 1
  repeat: 1
  numa_enable: False
  prefetch_size: 1
train_dataset_task:
  type: CausalLanguageModelDataset
  dataset_config: *train_dataset

# runner config
runner_config:
  epochs: 5
  batch_size: 1
  sink_mode: True
  sink_size: 2
runner_wrapper:
  type: MFTrainOneStepCell
  scale_sense:
    type: DynamicLossScaleUpdateCell
    loss_scale_value: 65536
    scale_factor: 2
    scale_window: 1000
  use_clip_grad: True

# optimizer
optimizer:
  type: FP32StateAdamWeightDecay
  beta1: 0.9
  beta2: 0.95
  eps: 1.e-6
  weight_decay: 0.1

# lr schedule
lr_schedule:
  type: CosineWithWarmUpLR
  learning_rate: 1.e-5
  warmup_ratio: 0.01
  total_steps: -1 # -1 means it will load the total steps of the dataset

# callbacks
callbacks:
  - type: MFLossMonitor
  - type: CheckpointMointor
    prefix: "qwen"
    save_checkpoint_steps: 10000
    keep_checkpoint_max: 3
    integrated_save: False
    async_save: False
  - type: ObsMonitor

# default parallel config for device num = 8 (Ascend 910)
parallel_config:
  data_parallel: 8
  model_parallel: 1
  pipeline_stage: 1
  micro_batch_num: 1
  vocab_emb_dp: False
  gradient_aggregation_group: 4
# when model parallel is greater than 1, we can set micro_batch_interleave_num=2, that may accelerate the train process.
micro_batch_interleave_num: 1

model:
  model_config:
    type: QwenConfig
    batch_size: 1
    seq_length: 1024
    hidden_size: 4096
    num_hidden_layers: 32
    num_attention_heads: 32
    vocab_size: 151936
    intermediate_size: 11008
    rms_norm_eps: 1.0e-6
    emb_dropout_prob: 0.0
    eos_token_id: 151643
    pad_token_id: 151643
    compute_dtype: "float16"
    layernorm_compute_type: "float32"
    softmax_compute_type: "float16"
    rotary_dtype: "float16"
    param_init_type: "float16"
    use_past: True
    use_flash_attention: False
    use_past_shard: False
    offset: 0
    checkpoint_name_or_path: "/home/ma-user/work/infer/qwen_7b_base.ckpt"
    repetition_penalty: 1
    max_decode_length: 512
    top_k: 0
    top_p: 0.8
    do_sample: False

    # configuration items copied from Qwen
    rotary_pct: 1.0
    rotary_emb_base: 10000
    kv_channels: 128

  arch:
    type: QwenForCausalLM

processor:
  return_tensors: ms
  tokenizer:
    model_max_length: 8192
    vocab_file: "/path/qwen.tiktoken"
    pad_token: "<|endoftext|>"
    type: QwenTokenizer
  type: QwenProcessor

# mindspore context init config
context:
  mode: 0 #0--Graph Mode; 1--Pynative Mode
  device_target: "Ascend"
  enable_graph_kernel: False
  graph_kernel_flags: "--disable_expand_ops=Softmax,Dropout --enable_parallel_fusion=true --reduce_fuse_depth=8 --enable_auto_tensor_inplace=true"
  ascend_config:
    precision_mode: "must_keep_origin_dtype"
  max_call_depth: 10000
  max_device_memory: "58GB"
  save_graphs: False
  save_graphs_path: "./graph"
  device_id: 0

# parallel context config
parallel:
  parallel_mode: 1 # 0-data parallel, 1-semi-auto parallel, 2-auto parallel, 3-hybrid parallel
  gradients_mean: False
  enable_alltoall: False
  full_batch: True
  search_mode: "sharding_propagation"
  enable_parallel_optimizer: True
  strategy_ckpt_config:
    save_file: "./ckpt_strategy.ckpt"
    only_trainable_params: False
  parallel_optimizer_config:
    gradient_accumulation_shard: False
    parallel_optimizer_threshold: 64

infer:
    prefill_model_path: "/path/qwen_7b_prefill.mindir"
    increment_model_path: "path/qwen_7b_inc.mindir"
    infer_seq_length: 1024

2.3 Error Message

Traceback (most recent call last):
  File "/home/ma-user/work/mindformers-dev/research/qwen/run_qwen.py", line 165, in <module>
    main(task=args.task,
  File "/home/ma-user/work/mindformers-dev/research/qwen/run_qwen.py", line 118, in main
    trainer.finetune(finetune_checkpoint=ckpt, auto_trans_ckpt=auto_trans_ckpt)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/_checkparam.py", line 1313, in wrapper
    return func(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindformers/trainer/trainer.py", line 498, in finetune
    self.trainer.train(
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 97, in train
    self.training_process(
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindformers/trainer/base_trainer.py", line 696, in training_process
    transform_and_load_checkpoint(config, model, network, dataset)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindformers/trainer/utils.py", line 311, in transform_and_load_checkpoint
    build_model(config, model, dataset, do_eval=do_eval, do_predict=do_predict)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindformers/trainer/utils.py", line 436, in build_model
    model.build(train_dataset=dataset, epoch=config.runner_config.epochs,
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py", line 1274, in build
    self._init(train_dataset, valid_dataset, sink_size, epoch)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py", line 529, in _init
    train_network.compile(*inputs)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py", line 997, in compile
    _cell_graph_executor.compile(self, phase=self.phase,
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/common/api.py", line 1547, in compile
    result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
RuntimeError: Preprocess failed before run graph 1.

----------------------------------------------------
- Framework Error Message:
----------------------------------------------------
Out of Memory!!! Request memory size: 35051914240B, Memory Statistic:
Device HBM memory size: 32768M
MindSpore Used memory size: 30678M
MindSpore memory base address: 0x124180000000
Total Static Memory size: 12040M
Total Dynamic memory size: 0M
Dynamic memory size of this graph: 0M

Please try to reduce 'batch_size' or check whether exists extra large shape. For more details, please refer to 'Out of Memory' at https://www.mindspore.cn .

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/hal/hardware/ascend_kernel_executor.cc:286 PreprocessBeforeRunGraph
mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:185 MallocDynamicDevMem

[WARNING] MD(123569,ffffa9748010,python):2024-01-04-14:24:06.999.824 [mindspore/ccsrc/minddata/dataset/engine/datasetops/data_queue_op.cc:163] ~DataQueueOp]
preprocess_batch: 100;
batch_queue: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1;
            push_start_time -> push_end_time
2024-01-04-14:18:18.185.953 -> 2024-01-04-14:18:18.186.203
2024-01-04-14:18:18.205.302 -> 2024-01-04-14:18:18.220.553
2024-01-04-14:18:18.220.798 -> 2024-01-04-14:18:18.221.154
2024-01-04-14:18:18.221.300 -> 2024-01-04-14:18:18.221.999
2024-01-04-14:18:18.222.250 -> 2024-01-04-14:18:18.222.646
2024-01-04-14:18:18.223.158 -> 2024-01-04-14:18:18.223.419
2024-01-04-14:18:18.223.665 -> 2024-01-04-14:18:18.223.945
2024-01-04-14:18:18.224.214 -> 2024-01-04-14:18:18.224.818
2024-01-04-14:18:18.225.096 -> 2024-01-04-14:18:18.225.442
2024-01-04-14:18:18.225.901 -> 2024-01-04-14:18:18.226.317
For more details, please refer to the FAQ at https://www.mindspore.cn/docs/en/master/faq/data_processing.html.

3. Root Cause Analysis

Judging from the error message alone, this is an out-of-memory error on the device:

Out of Memory!!! Request memory size: 35051914240B, Memory Statistic: Device HBM memory size: 32768M MindSpore Used memory size: 30678M

However, looking at the yaml configuration file:
batch_size: 1
seq_length: 1024
These settings are not demanding; in principle the 32 GB of a 910 should be enough to support them.
The configuration file, however, also contains another parameter:
max_device_memory: "58GB"
This is clearly wrong: a value like "58GB" is meant for devices with 64 GB of memory. The 910 used here has only 32 GB, so the setting exceeds the physical limit and allows MindSpore to request more device memory than actually exists. Indeed, the requested size in the error, 35051914240 B (about 32.6 GB), is already larger than the 32768 MB of HBM on a single card.
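
For clarity, the conflicting settings, excerpted from the configuration in section 2.2, are:

model:
  model_config:
    batch_size: 1             # small
    seq_length: 1024          # small
context:
  max_device_memory: "58GB"   # exceeds the 32 GB HBM of the Ascend 910 used here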

4. Solution

Modify max_device_memory.
The 910 provides at most 32 GB; after subtracting the memory reserved by default, it can be set to "29GB", as shown in the corrected snippet below.
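
A minimal sketch of the corrected context section (abridged; only max_device_memory changes, the remaining fields stay as in section 2.2):

# mindspore context init config
context:
  mode: 0 #0--Graph Mode; 1--Pynative Mode
  device_target: "Ascend"
  max_device_memory: "29GB"  # changed from "58GB"; leave headroom below the 32 GB HBM of a single 910
  device_id: 0

If an out-of-memory error still occurs after this change, the hint in the framework error message also applies: reduce batch_size or seq_length further, although the values in this configuration (batch_size: 1, seq_length: 1024) are expected to fit in 32 GB once max_device_memory is set to a valid value.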