
Loading a local RoBERTa model with PyTorch


When loading a local RoBERTa model with PyTorch via the transformers library, the following OSError keeps being raised:

OSError: Model name './chinese_roberta_wwm_ext_pytorch' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed './chinese_roberta_wwm_ext_pytorch' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.

Solution: load the model with BertTokenizer and BertModel; do not use RobertaTokenizer/RobertaModel. The chinese_roberta_wwm_ext checkpoint ships a BERT-style vocab.txt rather than the vocab.json/merges.txt files a RoBERTa BPE tokenizer expects, which is exactly what the error above is complaining about.
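
A minimal sketch of that recommended path, assuming the directory contains the usual release files (config.json, vocab.txt, pytorch_model.bin); the test sentence is made up purely for illustration:

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("./chinese_roberta_wwm_ext_pytorch")
model = BertModel.from_pretrained("./chinese_roberta_wwm_ext_pytorch", return_dict=True)

# Encode a sample sentence and run a forward pass to confirm the checkpoint loads.
inputs = tokenizer("这是一个测试句子。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, seq_len, hidden_size)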

If you use RobertaForQuestionAnswering, the approach below also works, but it produces a warning.

import torch
from transformers import BertTokenizer, BertModel, RobertaTokenizer, RobertaForQuestionAnswering
from transformers import logging

# Optionally lower the log level to silence the loading warnings:
# logging.set_verbosity_warning()
# logging.set_verbosity_error()

# The tokenizer must be BertTokenizer: the checkpoint only provides vocab.txt,
# which RobertaTokenizer cannot read.
tokenizer = BertTokenizer.from_pretrained("./chinese_roberta_wwm_ext_pytorch")
roberta = RobertaForQuestionAnswering.from_pretrained("./chinese_roberta_wwm_ext_pytorch", return_dict=True)
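
For reference, a minimal inference sketch using the tokenizer and roberta objects loaded above; the question/context strings are invented for illustration, and since the checkpoint was saved from a BERT-architecture model, the warning likely means some weights (at least the QA head) are freshly initialized, so the predicted span is only meaningful after fine-tuning.

question, context = "示例问题?", "这是一段用于演示的示例文本。"
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = roberta(**inputs)

# start_logits / end_logits score each token as the start / end of the answer span.
start = torch.argmax(outputs.start_logits, dim=-1).item()
end = torch.argmax(outputs.end_logits, dim=-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1].tolist()))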

Original post: https://blog.csdn.net/guyu1003/article/details/109024791