
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`: how to fix it

2022-07-07 07:05:00 · Go crazy first.

1. Problem description

I was training a BERT model with the transformers package on top of the PyTorch framework. The same code runs fine with the plain bert-base-cased checkpoint on other datasets, but with RoBERTa it always fails with: RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

I struggled with this for several days without finding the cause. Answers online keep suggesting that the batch_size is too large, but reducing it from 16 to 8 to 4 to 2 did not help.

By comparing with the other datasets, I noticed that I had added new special tokens to the tokenizer for this one, and that turned out to be the source of the error!
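
The mismatch is easy to check. Below is a minimal sketch (the printed sizes depend on the checkpoint) showing that after add_special_tokens the tokenizer has one more entry than the model's embedding table, so any input containing the new token produces an out-of-range id, which surfaces on the GPU as this opaque cuBLAS error:

from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
tokenizer.add_special_tokens({'additional_special_tokens': ["<S>"]})
model = BertModel.from_pretrained("bert-base-cased")

# The two sizes now disagree by one: the tokenizer knows "<S>",
# but the model's embedding matrix has no row for it.
print(len(tokenizer))                               # tokenizer vocabulary size after adding "<S>"
print(model.get_input_embeddings().num_embeddings)  # number of rows in the model's embedding table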

2. Solution

When adding special_tokens to the original tokenizer, I forgot to update the model's embedding table to match the tokenizer's enlarged vocabulary, and that is what caused the error!

The complete update procedure is:

from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
# Add the new special token to the tokenizer
tokenizer.add_special_tokens({'additional_special_tokens': ["<S>"]})

model = BertModel.from_pretrained("bert-base-cased")
# Resize the model's token embeddings to match the enlarged vocabulary
# Important!
model.resize_token_embeddings(len(tokenizer))
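
As a quick sanity check (a sketch, assuming the standard transformers embedding accessors), the two sizes should now agree, and inputs containing the new token should run without triggering the CUDA error:

assert model.get_input_embeddings().num_embeddings == len(tokenizer)

inputs = tokenizer("hello <S> world", return_tensors="pt")
outputs = model(**inputs)  # no CUBLAS_STATUS_ALLOC_FAILED any more

If the error ever reappears, running the model on CPU usually replaces the vague cuBLAS message with a much clearer "index out of range" error from the embedding layer, which is an easy way to confirm this diagnosis.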

3. Problem solved

The code now runs, and training can start!

 
