Problem description
TypeError: to() received an invalid combination of arguments - got (dtype=mindspore._c_expression.typing.Bool, device=torch.device, ), but expected one of:
- (torch.device device = None, torch.dtype dtype = None, bool non_blocking = False, bool copy = False, *, torch.memory_format memory_format = None)
- (torch.dtype dtype, bool non_blocking = False, bool copy = False, *, torch.memory_format memory_format = None)
- (Tensor tensor, bool non_blocking = False, bool copy = False, *, torch.memory_format memory_format = None)
Hardware environment:
/device ascend
Software environment:
- MindSpore version (e.g., 1.7.0.Bxxx): 2.7.0
- Python version (e.g., Python 3.7.5): 3.10
- OS platform and distribution (e.g., Linux Ubuntu 16.04):
- GCC/Compiler version (if compiled from source):
Execution mode:
/mode pynative
Steps to reproduce:
pip install mindnlp==0.5.1
python newchat.py xiyouji.txt
Log:
Traceback (most recent call last):
  File "/home/ma-user/work/statphys_assistant/newchat.py", line 225, in <module>
    main()
  File "/home/ma-user/work/statphys_assistant/newchat.py", line 158, in main
    faiss_db = load_knowledge_base(args.filename)
  File "/home/ma-user/work/statphys_assistant/newchat.py", line 38, in load_knowledge_base
    faiss = FAISS.from_texts(split_docs, embeddings)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/langchain_community/vectorstores/faiss.py", line 1043, in from_texts
    embeddings = embedding.embed_documents(texts)
  File "/home/ma-user/work/statphys_assistant/embedding.py", line 39, in embed_documents
    embeddings = self.encode_texts(texts)
  File "/home/ma-user/work/statphys_assistant/embedding.py", line 33, in encode_texts
    embeddings = self.embedding_model.encode(texts)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 1094, in encode
    out_features = self.forward(features, **kwargs)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 1175, in forward
    input = module(input, **module_kwargs)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 262, in forward
    outputs = self.auto_model(**trans_features, **kwargs, return_dict=True)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/transformers/utils/generic.py", line 1083, in wrapper
    outputs = func(self, *args, **kwargs)
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/transformers/models/qwen3/modeling_qwen3.py", line 393, in forward
    "full_attention": create_causal_mask(**mask_kwargs),
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/mindnlp/transformers/masking_utils.py", line 681, in create_causal_mask
    early_exit, attention_mask, packed_sequence_mask, kv_length, kv_offset = _preprocess_mask_arguments(
  File "/usr/local/python3.10.14/lib/python3.10/site-packages/mindnlp/transformers/masking_utils.py", line 616, in _preprocess_mask_arguments
    attention_mask = attention_mask.to(device=cache_position.device, dtype=mindtorch.bool)
TypeError: to() received an invalid combination of arguments - got (dtype=mindspore._c_expression.typing.Bool, device=torch.device, ), but expected one of:
- (torch.device device = None, torch.dtype dtype = None, bool non_blocking = False, bool copy = False, *, torch.memory_format memory_format = None)
- (torch.dtype dtype, bool non_blocking = False, bool copy = False, *, torch.memory_format memory_format = None)
- (Tensor tensor, bool non_blocking = False, bool copy = False, *, torch.memory_format memory_format = None)
[MS_ALLOC_CONF]Runtime config: enable_vmm:True vmm_align_size:2MB
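For context: the failing call in mindnlp's `_preprocess_mask_arguments` passes `mindtorch.bool` (a MindSpore dtype object, `mindspore._c_expression.typing.Bool`) as the `dtype` argument of a genuine `torch.Tensor.to()`, which accepts only `torch.dtype` values. A minimal sketch of the same mismatch, with `FakeMindSporeBool` as a hypothetical stand-in for the MindSpore dtype so no MindSpore install is needed:

```python
import torch

x = torch.ones(2, 3)

# A real torch.dtype is accepted by Tensor.to():
assert x.to(dtype=torch.bool).dtype == torch.bool

# FakeMindSporeBool is a hypothetical stand-in for
# mindspore._c_expression.typing.Bool; it is not a torch.dtype,
# so Tensor.to()'s overload resolution rejects it, just as in
# the traceback above.
class FakeMindSporeBool:
    pass

try:
    x.to(device=x.device, dtype=FakeMindSporeBool())
except TypeError as e:
    print(type(e).__name__)  # prints "TypeError"
```

The fix presumably belongs in mindnlp itself (passing `torch.bool` rather than the MindSpore dtype when the mask is a real torch tensor); the snippet only demonstrates why the current call cannot succeed.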