C++ model loading reports "corrupted double-linked list (not small)"

Loading the converted MINDIR model in Python works fine, and loading and inference in C++ also work. The problem is that after the program finishes running, it reports:

corrupted double-linked list (not small)
Aborted (core dumped)

Hello, we have received the question above and will analyze it and reply as soon as possible~

Did you release the memory at the end?

I didn't do anything like that.

#include <cstdlib>
#include <iostream>
#include <memory>
#include <string>

#include "include/api/context.h"
#include "include/api/model.h"

int main(int argc, const char** argv) {
	if (argc < 5) {
		std::cerr << "Usage: " << argv[0] << " model_path config_file device_id batch_size" << std::endl;
		return -1;
	}
	std::string model_path = argv[1];
	std::string config_file = argv[2];
	int device_id = atoi(argv[3]);
	/*InferenceEngineForAscend inference_interface(model_path, config_file, device_id);*/


	int batch_size = atoi(argv[4]);

	int flow_num = 8;
	int packet_length = 128;

	int seq_length = 64;
	int seq_feature = 5;

	// Create and init context, add CPU device info
	auto context = std::make_shared<mindspore::Context>();
	if (context == nullptr) {
		std::cerr << "New context failed." << std::endl;
		return -1;
	}
	auto& device_list = context->MutableDeviceInfo();
	auto device_info = std::make_shared<mindspore::AscendDeviceInfo>();
	if (device_info == nullptr) {
		std::cerr << "New AscendDeviceInfo failed." << std::endl;
		return -1;
	}
	
	device_info->SetDeviceID(device_id);  // use the device id passed on the command line
	device_list.push_back(device_info);

	
	std::shared_ptr<mindspore::Model> model = std::make_shared<mindspore::Model>();

	auto build_ret = model->Build(model_path, mindspore::kMindIR, context);
	if (build_ret != mindspore::kSuccess) {
		std::cerr << "Build model error: " << build_ret.ToString() << std::endl;
		return -1;
	}


	std::cout << "batch_size: " << batch_size << std::endl;

	return 0;
}

I did nothing else, just loaded a model. I also ran an experiment: a model exported directly with mindspore export has no problem; the issue only appears after converting with the mindspore-lite converter tool, regardless of whether the input was ONNX or MINDIR.

AscendDeviceInfo: so you are running on the Ascend hardware backend?

If it is Ascend hardware, you need to call the Finalize interface.
