Loading a local RoBERTa model with PyTorch
When loading the local roberta model with the Torch module, it always raises an OSError, as follows:
OSError: Model name './chinese_roberta_wwm_ext_pytorch' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed './chinese_roberta_wwm_ext_pytorch' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
Solution: load the model with BertTokenizer and BertModel; do not use RobertaTokenizer/RobertaModel.
Using RobertaForQuestionAnswering as shown below also works, but it produces a warning.
import torch
from transformers import BertTokenizer, RobertaForQuestionAnswering
from transformers import logging

# Optionally lower the log level to suppress the warning mentioned above:
# logging.set_verbosity_warning()
# logging.set_verbosity_error()

# The tokenizer must be the Bert one; the model class can be a Roberta head.
tokenizer = BertTokenizer.from_pretrained("./chinese_roberta_wwm_ext_pytorch")
roberta = RobertaForQuestionAnswering.from_pretrained("./chinese_roberta_wwm_ext_pytorch", return_dict=True)
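The root cause is the vocabulary format: RobertaTokenizer expects English-RoBERTa BPE files (vocab.json plus merges.txt), while chinese_roberta_wwm_ext ships a BERT-style WordPiece vocab.txt, which is exactly what the error message is complaining about. A minimal sketch (the helper name is hypothetical) of checking a local model directory to decide which tokenizer class matches it:

```python
import os

def pick_tokenizer_class(model_dir):
    """Heuristic: inspect which vocabulary files a local model
    directory contains and suggest a matching tokenizer class."""
    files = set(os.listdir(model_dir)) if os.path.isdir(model_dir) else set()
    if {"vocab.json", "merges.txt"} <= files:
        # English-RoBERTa style byte-level BPE vocabulary
        return "RobertaTokenizer"
    if "vocab.txt" in files:
        # BERT-style WordPiece vocabulary (chinese_roberta_wwm_ext ships this)
        return "BertTokenizer"
    return "unknown"
```

Running this on the chinese_roberta_wwm_ext_pytorch directory would report "BertTokenizer", consistent with the fix above.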
Original post: https://blog.csdn.net/guyu1003/article/details/109024791