paddlenlp + ChatGLM2 + langchain throws a GPU error every time certain questions are asked

      Project overview: ChatGLM2 is loaded through paddlenlp and combined with langchain. The contents of a few local files are vectorized, and questions are then answered against those files. Questions related to the document contents are generally answered correctly, with the relevant source documents returned. Some questions unrelated to the documents (e.g. "Where is the Badaling Great Wall?") can also be answered from the model's own knowledge, but certain questions (e.g. "What are the main provisions of the Trademark Law?") trigger an error every single time. It may be a bug in the framework.
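
For context, the indexing step described above follows the usual langchain pattern. The sketch below is only an illustration under assumptions: the file names, splitter settings, and the embeddings model are not shown in the original post and are placeholders, so the real project almost certainly differs in those details.

from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.embeddings import HuggingFaceEmbeddings

# Load the local files (file names are placeholders).
docs = []
for path in ["a.txt", "b.txt", "c.txt"]:
    docs.extend(TextLoader(path, encoding="utf-8").load())

# Split into chunks so each retrieved passage fits in the prompt.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)

# Any langchain Embeddings implementation works here; the post does not say
# which one the project uses, so this particular choice is only a placeholder.
embeddings = HuggingFaceEmbeddings(model_name="shibing624/text2vec-base-chinese")
vectorstore = FAISS.from_documents(chunks, embeddings)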

      The error output first repeats the following message 64 times:

Error: ../paddle/phi/kernels/funcs/gather.cu.h:67 Assertion `index_value >= 0 && index_value < input_dims[j]` failed. The index is out of bounds, please check whether the dimensions of index and input meet the requirements. It should be less than [2048] and greater than or equal to 0, but received [0]

      I looked at the corresponding source file and the relevant code: /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/include/paddle/phi/kernels/funcs/gather.cu.h

// Excerpt from GatherNdCUDAKernel, inside the CUDA_KERNEL_LOOP_TYPE loop:
auto index_value = indices[indices_i * end_size + j];
PADDLE_ENFORCE(
    index_value >= 0 && index_value < input_dims[j],
    "The index is out of bounds, "
    "please check whether the dimensions of index and "
    "input meet the requirements. It should "
    "be less than [%d] and greater than or equal to 0, but received [%d]",
    input_dims[j],
    index_value);

      Putting the error message and the code together:

input_dims[j] = 2048
index_value = 0

These reported values satisfy the condition index_value >= 0 && index_value < input_dims[j], so the assertion should not have fired.
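
For reference, the assertion in gather.cu.h is the device-side bounds check used by gather-style kernels. The hypothetical snippet below only illustrates what normally trips it (an index outside the valid range); it is not the code path taken inside the ChatGLM2 model, and the shapes and values are made up.

import paddle

x = paddle.rand([2048, 8])            # first dimension is 2048, like input_dims[j] above
bad_idx = paddle.to_tensor([[2048]])  # out of range: valid row indices are 0..2047
y = paddle.gather_nd(x, bad_idx)      # on GPU this trips the same assertion in gather.cu.h
print(y)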

     The code that triggers the error, followed by the full exception trace:

from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
from langchain.vectorstores import FAISS

QAchain = RetrievalQA.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
    return_source_documents=True,
)
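
The chain is then queried in a simple input loop. The loop below is reconstructed from the cell shown at the top of the traceback; the prompt string and the printing of source_documents are guesses, since those lines are not fully visible in the trace.

while True:
    question = input("请输入问题:")
    if question == '':
        break
    result = QAchain({"query": question})
    print("答案:%s\n参考了以下资料:\n" % result['result'])
    for doc in result['source_documents']:
        print(doc.metadata.get("source"), doc.page_content[:100])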

OSError Traceback (most recent call last)
Cell In[7], line 6
4 if question == '':
5 break
----> 6 result = QAchain({"query":question})
7 #result = QAchain({"question":input()})
8 #print(result)
9 print("答案:%s\n参考了以下资料:\n" % result['result'])

File ~/external-libraries/langchain/chains/base.py:258, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
256 except (KeyboardInterrupt, Exception) as e:
257 run_manager.on_chain_error(e)
--> 258 raise e
259 run_manager.on_chain_end(outputs)
260 final_outputs: Dict[str, Any] = self.prep_outputs(
261 inputs, outputs, return_only_outputs
262 )

File ~/external-libraries/langchain/chains/base.py:252, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
246 run_manager = callback_manager.on_chain_start(
247 dumpd(self),
248 inputs,
249 )
250 try:
251 outputs = (
--> 252 self._call(inputs, run_manager=run_manager)
253 if new_arg_supported
254 else self._call(inputs)
255 )
256 except (KeyboardInterrupt, Exception) as e:
257 run_manager.on_chain_error(e)

File ~/external-libraries/langchain/chains/retrieval_qa/base.py:140, in BaseRetrievalQA._call(self, inputs, run_manager)
138 else:
139 docs = self._get_docs(question) # type: ignore[call-arg]
--> 140 answer = self.combine_documents_chain.run(
141 input_documents=docs, question=question, callbacks=_run_manager.get_child()
142 )
144 if self.return_source_documents:
145 return {self.output_key: answer, "source_documents": docs}

File ~/external-libraries/langchain/chains/base.py:456, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
451 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
452 _output_key
453 ]
455 if kwargs and not args:
--> 456 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
457 _output_key
458 ]
460 if not kwargs and not args:
461 raise ValueError(
462 "`run` supported with either positional arguments or keyword arguments,"
463 " but none were provided."
464 )

File ~/external-libraries/langchain/chains/base.py:258, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
256 except (KeyboardInterrupt, Exception) as e:
257 run_manager.on_chain_error(e)
--> 258 raise e
259 run_manager.on_chain_end(outputs)
260 final_outputs: Dict[str, Any] = self.prep_outputs(
261 inputs, outputs, return_only_outputs
262 )

File ~/external-libraries/langchain/chains/base.py:252, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
246 run_manager = callback_manager.on_chain_start(
247 dumpd(self),
248 inputs,
249 )
250 try:
251 outputs = (
--> 252 self._call(inputs, run_manager=run_manager)
253 if new_arg_supported
254 else self._call(inputs)
255 )
256 except (KeyboardInterrupt, Exception) as e:
257 run_manager.on_chain_error(e)

File ~/external-libraries/langchain/chains/combine_documents/base.py:106, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
104 # Other keys are assumed to be needed for LLM prediction
105 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
--> 106 output, extra_return_dict = self.combine_docs(
107 docs, callbacks=_run_manager.get_child(), **other_keys
108 )
109 extra_return_dict[self.output_key] = output
110 return extra_return_dict

File ~/external-libraries/langchain/chains/combine_documents/stuff.py:165, in StuffDocumentsChain.combine_docs(self, docs, callbacks, **kwargs)
163 inputs = self._get_inputs(docs, **kwargs)
164 # Call predict on the LLM.
--> 165 return self.llm_chain.predict(callbacks=callbacks, **inputs), {}

File ~/external-libraries/langchain/chains/llm.py:252, in LLMChain.predict(self, callbacks, **kwargs)
237 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
238 """Format prompt with kwargs and pass to LLM.
239
240 Args:
(...)
250 completion = llm.predict(adjective="funny")
251 """
--> 252 return self(kwargs, callbacks=callbacks)[self.output_key]

File ~/external-libraries/langchain/chains/base.py:258, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
256 except (KeyboardInterrupt, Exception) as e:
257 run_manager.on_chain_error(e)
--> 258 raise e
259 run_manager.on_chain_end(outputs)
260 final_outputs: Dict[str, Any] = self.prep_outputs(
261 inputs, outputs, return_only_outputs
262 )

File ~/external-libraries/langchain/chains/base.py:252, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
246 run_manager = callback_manager.on_chain_start(
247 dumpd(self),
248 inputs,
249 )
250 try:
251 outputs = (
--> 252 self._call(inputs, run_manager=run_manager)
253 if new_arg_supported
254 else self._call(inputs)
255 )
256 except (KeyboardInterrupt, Exception) as e:
257 run_manager.on_chain_error(e)

File ~/external-libraries/langchain/chains/llm.py:92, in LLMChain._call(self, inputs, run_manager)
87 def _call(
88 self,
89 inputs: Dict[str, Any],
90 run_manager: Optional[CallbackManagerForChainRun] = None,
91 ) -> Dict[str, str]:
---> 92 response = self.generate([inputs], run_manager=run_manager)
93 return self.create_outputs(response)[0]

File ~/external-libraries/langchain/chains/llm.py:102, in LLMChain.generate(self, input_list, run_manager)
100 """Generate LLM result from inputs."""
101 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
--> 102 return self.llm.generate_prompt(
103 prompts,
104 stop,
105 callbacks=run_manager.get_child() if run_manager else None,
106 **self.llm_kwargs,
107 )

File ~/external-libraries/langchain/llms/base.py:455, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
447 def generate_prompt(
448 self,
449 prompts: List[PromptValue],
(...)
452 **kwargs: Any,
453 ) -> LLMResult:
454 prompt_strings = [p.to_string() for p in prompts]
--> 455 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)

File ~/external-libraries/langchain/llms/base.py:586, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, **kwargs)
577 raise ValueError(
578 "Asked to cache, but no cache found at `langchain.cache`."
579 )
580 run_managers = [
581 callback_manager.on_llm_start(
582 dumpd(self), [prompt], invocation_params=params, options=options
583 )[0]
584 for callback_manager, prompt in zip(callback_managers, prompts)
585 ]
--> 586 output = self._generate_helper(
587 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
588 )
589 return output
590 if len(missing_prompts) > 0:

File ~/external-libraries/langchain/llms/base.py:492, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
490 for run_manager in run_managers:
491 run_manager.on_llm_error(e)
--> 492 raise e
493 flattened_outputs = output.flatten()
494 for manager, flattened_output in zip(run_managers, flattened_outputs):

File ~/external-libraries/langchain/llms/base.py:479, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
469 def _generate_helper(
470 self,
471 prompts: List[str],
(...)
475 **kwargs: Any,
476 ) -> LLMResult:
477 try:
478 output = (
--> 479 self._generate(
480 prompts,
481 stop=stop,
482 # TODO: support multiple run managers
483 run_manager=run_managers[0] if run_managers else None,
484 **kwargs,
485 )
486 if new_arg_supported
487 else self._generate(prompts, stop=stop)
488 )
489 except (KeyboardInterrupt, Exception) as e:
490 for run_manager in run_managers:

File ~/external-libraries/langchain/llms/base.py:965, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
962 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
963 for prompt in prompts:
964 text = (
--> 965 self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
966 if new_arg_supported
967 else self._call(prompt, stop=stop, **kwargs)
968 )
969 generations.append([Generation(text=text)])
970 return LLMResult(generations=generations)

File ~/work/llm.py:189, in ChatGLM2._call(self, prompt, stop, run_manager, **kwargs)
182 def _call(
183 self,
184 prompt: str,
(...)
187 **kwargs: Any,
188 ) -> str:
--> 189 return self.predict(prompt)

File ~/work/llm.py:245, in ChatGLM2.predict(self, texts)
243 src_length=self.parameter.get("src_length",LLM_MAX_TOKEN)
244 input_map = self.preprocess(texts,src_length)
--> 245 infer_result = self.infer(input_map)
246 output = self.postprocess(infer_result)
247 return output

File ~/work/llm.py:219, in ChatGLM2.infer(self, inputs)
208 kwargs = {
209 "decode_strategy": self.parameter.get("decode_strategy","sampling"),
210 "top_k": self.parameter.get("top_k",1),
(...)
215 "use_cache": self.parameter.get("use_cache",True),
216 }
217 # print("参数设置完成,开始生成回答,")
218 # print(kwargs)
--> 219 result = self.model.generate(
220 **inputs,
221 **kwargs,
222 )
223 result = result[0]
224 return result

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/decorator.py:232, in decorate.<locals>.fun(*args, **kw)
230 if not kwsyntax:
231 args, kw = fix(args, kw, sig)
--> 232 return caller(func, *(extras + args), **kw)

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/fluid/dygraph/base.py:347, in _DecoratorContextManager.__call__.<locals>._decorate_function(func, *args, **kwargs)
344 @decorator.decorator
345 def _decorate_function(func, *args, **kwargs):
346 with self:
--> 347 return func(*args, **kwargs)

File ~/paddlenlp/transformers/generation_utils.py:960, in GenerationMixin.generate(self, input_ids, attention_mask, position_ids, max_length, min_length, decode_strategy, temperature, top_k, top_p, repetition_penalty, num_beams, num_beam_groups, length_penalty, early_stopping, bos_token_id, eos_token_id, pad_token_id, decoder_start_token_id, forced_bos_token_id, forced_eos_token_id, no_repeat_ngram_size, num_return_sequences, diversity_rate, use_cache, use_fast, use_fp16_decoding, **model_kwargs)
948 return self.sample_d2s(
949 input_ids,
950 logits_processors,
(...)
957 **model_kwargs,
958 )
959 else:
--> 960 return self.sample(
961 input_ids,
962 logits_processors,
963 max_len,
964 pad_token_id,
965 eos_token_id,
966 top_k,
967 top_p,
968 temperature,
969 **model_kwargs,
970 )
972 elif decode_strategy == "beam_search":
973 batch_size = input_ids.shape[0]

File ~/paddlenlp/transformers/generation_utils.py:1116, in GenerationMixin.sample(self, input_ids, logits_processors, max_length, pad_token_id, eos_token_id, top_k, top_p, temperature, min_tokens_to_keep, **model_kwargs)
1113 while cur_len < max_length:
1114 # prepare model inputs & get model output
1115 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
-> 1116 outputs = self(**model_inputs)
1118 if isinstance(outputs, tuple):
1119 logits = outputs[0]

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/layers.py:1254, in Layer.__call__(self, *inputs, **kwargs)
1245 if (
1246 (not in_declarative_mode())
1247 and (not self._forward_pre_hooks)
(...)
1251 and (not in_profiler_mode())
1252 ):
1253 self._build_once(*inputs, **kwargs)
-> 1254 return self.forward(*inputs, **kwargs)
1255 else:
1256 return self._dygraph_call_func(*inputs, **kwargs)

File ~/paddlenlp/transformers/chatglm_v2/modeling.py:825, in ChatGLMv2ForCausalLM.forward(self, input_ids, position_ids, attention_mask, past_key_values, inputs_embeds, labels, use_cache, output_hidden_states, return_dict, return_last_logit)
822 use_cache = use_cache if use_cache is not None else self.config.use_cache
823 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
--> 825 transformer_outputs = self.chatglm_v2(
826 input_ids=input_ids,
827 position_ids=position_ids,
828 attention_mask=attention_mask,
829 past_key_values=past_key_values,
830 inputs_embeds=inputs_embeds,
831 use_cache=use_cache,
832 output_hidden_states=output_hidden_states,
833 return_dict=return_dict,
834 )
836 hidden_states = transformer_outputs[0]
838 if return_last_logit:

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/layers.py:1254, in Layer.__call__(self, *inputs, **kwargs)
1245 if (
1246 (not in_declarative_mode())
1247 and (not self._forward_pre_hooks)
(...)
1251 and (not in_profiler_mode())
1252 ):
1253 self._build_once(*inputs, **kwargs)
-> 1254 return self.forward(*inputs, **kwargs)
1255 else:
1256 return self._dygraph_call_func(*inputs, **kwargs)

File ~/paddlenlp/transformers/chatglm_v2/modeling.py:735, in ChatGLMv2Model.forward(self, input_ids, position_ids, attention_mask, full_attention_mask, past_key_values, inputs_embeds, use_cache, output_hidden_states, return_dict)
732 rotary_pos_emb = rotary_pos_emb.transpose([1, 0, 2, 3])
734 # Run encoder.
--> 735 hidden_states, presents, all_hidden_states, all_self_attentions = self.encoder(
736 inputs_embeds,
737 full_attention_mask,
738 rotary_pos_emb=rotary_pos_emb,
739 kv_caches=past_key_values,
740 use_cache=use_cache,
741 output_hidden_states=output_hidden_states,
742 )
744 if not return_dict:
745 return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None)

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/layers.py:1254, in Layer.__call__(self, *inputs, **kwargs)
1245 if (
1246 (not in_declarative_mode())
1247 and (not self._forward_pre_hooks)
(...)
1251 and (not in_profiler_mode())
1252 ):
1253 self._build_once(*inputs, **kwargs)
-> 1254 return self.forward(*inputs, **kwargs)
1255 else:
1256 return self._dygraph_call_func(*inputs, **kwargs)

File ~/paddlenlp/transformers/chatglm_v2/modeling.py:555, in GLMTransformer.forward(self, hidden_states, attention_mask, rotary_pos_emb, kv_caches, use_cache, output_hidden_states)
546 hidden_states, kv_cache = self.recompute_training(
547 layer,
548 hidden_states,
(...)
552 use_cache=use_cache,
553 )
554 else:
--> 555 hidden_states, kv_cache = layer(
556 hidden_states, attention_mask, rotary_pos_emb, kv_cache=kv_caches[index], use_cache=use_cache
557 )
559 if use_cache:
560 presents = presents + (kv_cache,)

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/layers.py:1254, in Layer.__call__(self, *inputs, **kwargs)
1245 if (
1246 (not in_declarative_mode())
1247 and (not self._forward_pre_hooks)
(...)
1251 and (not in_profiler_mode())
1252 ):
1253 self._build_once(*inputs, **kwargs)
-> 1254 return self.forward(*inputs, **kwargs)
1255 else:
1256 return self._dygraph_call_func(*inputs, **kwargs)

File ~/paddlenlp/transformers/chatglm_v2/modeling.py:459, in GLMBlock.forward(self, hidden_states, attention_mask, rotary_pos_emb, kv_cache, use_cache)
456 layernorm_output = self.post_attention_layernorm(layernorm_input)
458 # MLP.
--> 459 mlp_output = self.mlp(layernorm_output)
461 # Second residual connection.
462 if self.apply_residual_connection_post_layernorm:

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/layers.py:1254, in Layer.__call__(self, *inputs, **kwargs)
1245 if (
1246 (not in_declarative_mode())
1247 and (not self._forward_pre_hooks)
(...)
1251 and (not in_profiler_mode())
1252 ):
1253 self._build_once(*inputs, **kwargs)
-> 1254 return self.forward(*inputs, **kwargs)
1255 else:
1256 return self._dygraph_call_func(*inputs, **kwargs)

File ~/paddlenlp/transformers/chatglm_v2/modeling.py:389, in MLP.forward(self, hidden_states)
387 def forward(self, hidden_states):
388 # [s, b, 4hp]
--> 389 intermediate_parallel = self.dense_h_to_4h(hidden_states)
390 # Special Slicing to accomodate Tensor Parallel
391 # Even channels is ffc_fc, odd channels is gate
392 ffn_fc = intermediate_parallel[..., 0::2]

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/layers.py:1254, in Layer.__call__(self, *inputs, **kwargs)
1245 if (
1246 (not in_declarative_mode())
1247 and (not self._forward_pre_hooks)
(...)
1251 and (not in_profiler_mode())
1252 ):
1253 self._build_once(*inputs, **kwargs)
-> 1254 return self.forward(*inputs, **kwargs)
1255 else:
1256 return self._dygraph_call_func(*inputs, **kwargs)

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/common.py:174, in Linear.forward(self, input)
173 def forward(self, input):
--> 174 out = F.linear(
175 x=input, weight=self.weight, bias=self.bias, name=self.name
176 )
177 return out

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/functional/common.py:1842, in linear(x, weight, bias, name)
1790 r"""
1791
1792 Fully-connected linear transformation operator. For each input :math:`X` ,
(...)
1838 # [2.1077576 2.1077576 2.1077576 2.1077576 ]]
1839 """
1840 if in_dynamic_mode():
1841 # TODO(jiabin): using addmm for fast forward route
-> 1842 return _C_ops.linear(x, weight, bias)
1843 else:
1844 helper = LayerHelper('linear', **locals())

OSError: (External) CUBLAS error(13).
[Hint: 'CUBLAS_STATUS_EXECUTION_FAILED'. The GPU program failed to execute. This is often caused by a launch failure of the kernel on the GPU, which can be caused by multiple reasons. To correct: check that the hardware, an appropriate version of the driver, and the cuBLAS library are correctly installed. ] (at ../paddle/phi/kernels/funcs/blas/blas_impl.cu.h:445)
[operator < linear > error]
