CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation

Overview

This repository contains code and checkpoints for CPT.

CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation

Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu

Introduction

Aiming to unify both NLU and NLG tasks, we propose a novel Chinese Pre-trained Unbalanced Transformer (CPT), an unbalanced Transformer encoder-decoder pre-trained jointly with masked language modeling (MLM) and denoising auto-encoding (DAE).



The architecture of CPT is a variant of the full Transformer and consists of three parts (a conceptual sketch follows the list):

  1. Shared Encoder (S-Enc): a Transformer encoder with fully-connected self-attention, which is designed to capture the common semantic representation for both language understanding and generation.
  2. Understanding Decoder (U-Dec): a shallow Transformer encoder with fully-connected self-attention, which is designed for NLU tasks. The input of U-Dec is the output of S-Enc.
  3. Generation Decoder (G-Dec): a Transformer decoder with masked self-attention, which is designed for generation tasks in an auto-regressive fashion. G-Dec attends to the output of S-Enc with cross-attention.
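
To make the unbalanced layout concrete, below is a minimal conceptual sketch in plain PyTorch. It is not the released modeling_cpt.py; the class name ToyCPT, the default vocabulary size, and the layer counts (chosen to follow the CPT-base configuration listed under Pre-Trained Models) are illustrative assumptions only.

import torch
import torch.nn as nn

class ToyCPT(nn.Module):
    """Conceptual sketch of CPT's unbalanced layout, NOT the official implementation."""

    def __init__(self, vocab_size=21128, d_model=768, n_heads=12,
                 n_senc=10, n_udec=2, n_gdec=2):  # defaults are illustrative (CPT-base-like)
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # S-Enc: deep shared encoder with fully-connected self-attention.
        self.s_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads), num_layers=n_senc)
        # U-Dec: shallow encoder stack for NLU, fed with the output of S-Enc.
        self.u_dec = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads), num_layers=n_udec)
        # G-Dec: shallow decoder with masked self-attention and cross-attention to S-Enc.
        self.g_dec = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads), num_layers=n_gdec)

    def forward(self, src_ids, tgt_ids):
        # src_ids, tgt_ids: LongTensors of shape (seq_len, batch_size).
        memory = self.s_enc(self.embed(src_ids))      # shared representation from S-Enc
        nlu_states = self.u_dec(memory)               # understanding branch (U-Dec)
        seq_len = tgt_ids.size(0)
        causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        nlg_states = self.g_dec(self.embed(tgt_ids), memory, tgt_mask=causal_mask)  # generation branch (G-Dec)
        return nlu_states, nlg_states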

Pre-Trained Models

We provide the pre-trained weights of CPT and Chinese BART together with the source code; they can be used directly with Huggingface Transformers.

  • Chinese BART-base: 6-layer encoder, 6-layer decoder, 12 attention heads, model dimension 768.
  • Chinese BART-large: 12-layer encoder, 12-layer decoder, 16 attention heads, model dimension 1024.
  • CPT-base: 10-layer S-Enc, 2-layer U-Dec/G-Dec, 12 attention heads, model dimension 768.
  • CPT-large: 20-layer S-Enc, 4-layer U-Dec/G-Dec, 16 attention heads, model dimension 1024.

The pre-trained weights can be downloaded here.

Model                MODEL_NAME
Chinese BART-base    fnlp/bart-base-chinese
Chinese BART-large   fnlp/bart-large-chinese
CPT-base             fnlp/cpt-base
CPT-large            fnlp/cpt-large

Requirements:

  • pytorch==1.8.1
  • transformers==4.4.1
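
A quick sanity check that your environment matches these pinned versions (a small editorial snippet, not part of the original repository):

import torch
import transformers

# The repository pins torch 1.8.1 and transformers 4.4.1; other versions are untested here.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)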

To use CPT, import the file modeling_cpt.py (download here), which defines the architecture of CPT, into your project. Then load the pre-trained models as in the following examples, where MODEL_NAME is the corresponding identifier from the table above.

For CPT:

from modeling_cpt import CPTForConditionalGeneration
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("MODEL_NAME")
model = CPTForConditionalGeneration.from_pretrained("MODEL_NAME")
print(model)

For Chinese BART:

from transformers import BertTokenizer, BartForConditionalGeneration
tokenizer = BertTokenizer.from_pretrained("MODEL_NAME")
model = BartForConditionalGeneration.from_pretrained("MODEL_NAME")
print(model)

After initializing the model, you can use the following lines to generate text.

>>> input_ids = tokenizer.encode("北京是[MASK]的首都", return_tensors='pt')
>>> pred_ids = model.generate(input_ids, num_beams=4, max_length=20)
>>> print(tokenizer.convert_ids_to_tokens(pred_ids[0]))
    ['[SEP]', '[CLS]', '北', '京', '是', '中', '国', '的', '首', '都', '[SEP]']
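
For the understanding side, the sketch below assumes that modeling_cpt.py also exposes a CPTForSequenceClassification head (mirroring BartForSequenceClassification); the class name and the num_labels argument are assumptions, so check them against the file you downloaded before use.

# Hedged sketch: CPTForSequenceClassification is assumed to exist in modeling_cpt.py;
# verify the class name against the downloaded file.
import torch
from modeling_cpt import CPTForSequenceClassification
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base")
model = CPTForSequenceClassification.from_pretrained("fnlp/cpt-base", num_labels=2)

inputs = tokenizer("北京是中国的首都", return_tensors="pt")
with torch.no_grad():
    logits = model(input_ids=inputs["input_ids"],
                   attention_mask=inputs["attention_mask"]).logits
print(logits.argmax(dim=-1))  # predicted class id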

Pre-Training

Pre-training code and examples can be found here.

Fine-Tuning

Fine-tuning code and examples can be found here.

Contact

If you have any problems, raise an issue or contact [email protected].

Citation

@article{shao2021cpt,
  title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation}, 
  author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu},
  journal={arXiv preprint arXiv:2109.05729},
  year={2021}
}
Comments
  • generation: results on the LCSTS dataset do not reach the reported numbers

    Hello, I recently ran your code on the LCSTS dataset and only got ROUGE-L 31, while the paper reports around 38, which is a large gap.

    The dataset was downloaded directly from the web and processed into the following format:

    {"summarization": "可穿戴技术十大设计原则", "article": "本文总结了十个可穿戴产品的设计原则,而这些原则,同样也是笔者认为是这个行业最吸引人的地方:1.为人们解决重复性问题;2.从人开始,而不是从机器开始;3.要引起注意,但不要刻意;4.提升用户能力,而不是取代人"}
    

    The only change to the code was the file paths; nothing else was modified. Where might the problem lie? Are the default hyperparameters in run_gen.py the optimal ones?

    opened by zhoucz97 17
  • Fine-tuning BART-large directly with the Huggingface code produces traditional Chinese characters

    Below is a sample from the training set. After training for 1000 epochs, not only does 预算 (simplified) become 預算 (traditional), but A=SM becomes a=sm (upper case to lower case). In other words, the model does not even fit the training set, even though the training loss is close to 0.

    Generated: 题目:《sm公司全面预算管理问题研究》,句式:a,其中a=sm公园公司的全面預算管辖问题探究  Label: 题目:《SM 公司全面预算管理问题研究》,句式:A,其中A=SM 公司全面预算管理问题研究

    I would like to ask what the possible causes are.

    opened by yht4work 7
  • Problem with the NER model

    Running the command you provided:
    python -m torch.distributed.launch --nproc_per_node 1 --nnodes 1 train_msra.py --ptm_name fnlp/cpt-base --dataset '' --use_decoder 0 --batch_size 16 --update_every 1
    the following error is reported: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss.
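
    As the error message itself suggests, a generic workaround is to enable unused-parameter detection when wrapping the model in DistributedDataParallel. The helper below is an editorial sketch; whether it is the intended fix for train_msra.py specifically is not confirmed here.

    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def wrap_for_ddp(model: torch.nn.Module, local_rank: int) -> DDP:
        """Wrap a model for multi-GPU training while tolerating parameters that receive no gradient."""
        # find_unused_parameters=True is the flag named in the RuntimeError above.
        assert dist.is_initialized(), "the training script must call dist.init_process_group(...) first"
        model = model.cuda(local_rank)
        return DDP(model, device_ids=[local_rank], find_unused_parameters=True)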

    opened by suhejian 6
  • Error when fine-tuning CPT on multiple GPUs

    After setting the model parameters, running python -m torch.distributed.launch --nproc_per_node 4 run_gen.py raises an error saying that local_rank needs to be passed as an argument; if I add a --local_rank argument via parser.add_argument, multi-GPU training still fails. How should CPT be fine-tuned on multiple GPUs? An official usage guide would be greatly appreciated. Thanks!

    opened by aidejieceng 4
  • CPTForConditionalGeneration errors out on multiple GPUs

    As the title says, when running a generation task on multiple GPUs, calling this interface raises the following error:

    RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. Since `find_unused_parameters=True` is enabled, this likely means that not all `forward` outputs participate in computing loss. You can fix this by making sure all `forward` function outputs participate in calculating loss.
    If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
    Parameter indices which did not receive grad for rank 2: 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388
    

    Note that with exactly the same code, replacing CPTForConditionalGeneration with BartForConditionalGeneration (and the corresponding model) works without any problem. Please look into this.

    opened by Biaocsu 4
  • Could you provide a BART version of run_gen.py?

    CPT/blob/master/finetune/generation/run_gen.py is the CPT version. I adapted it into a BART version myself, but many layers are reported as not used or not initialized: "Some weights of the model checkpoint at model/bart-base-chinese were not used when initializing BartForConditionalGeneration" and "Some weights of BartForConditionalGeneration were not initialized". Do these warnings matter, or could you provide a BART version of run_gen.py?

    The details are shown below:

    loading weights file model/bart-base-chinese/pytorch_model.bin
    Some weights of the model checkpoint at model/bart-base-chinese were not used when initializing BartForConditionalGeneration: ['encoder.layers.4.fc1.bias',
     'encoder.layers.0.self_attn.k_proj.bias',
     'encoder.layers.3.fc1.bias',
     'encoder.layers.4.fc1.weight',
     'encoder.layers.1.final_layer_norm.bias',
     'encoder.layers.0.fc2.weight',
     'encoder.layers.0.self_attn.out_proj.bias',
     'encoder.layers.1.self_attn.out_proj.weight',
     'encoder.layers.3.self_attn.k_proj.bias',
     'encoder.layernorm_embedding.weight',
     'encoder.layers.1.fc2.weight',
     'encoder.layers.5.self_attn.q_proj.weight',
     'encoder.layers.5.self_attn.q_proj.bias',
     'encoder.layers.0.final_layer_norm.weight',
     'encoder.layers.1.self_attn.v_proj.weight',
     'encoder.layers.4.self_attn.out_proj.weight',
     'encoder.layers.5.self_attn_layer_norm.bias',
     'encoder.layers.0.self_attn_layer_norm.bias',
     'encoder.layers.3.self_attn.k_proj.weight',
     'encoder.embed_tokens.weight',
     'encoder.layers.1.self_attn.v_proj.bias',
     'encoder.layers.5.final_layer_norm.bias',
     'encoder.layers.1.fc1.weight',
     'encoder.layers.5.self_attn_layer_norm.weight',
     'encoder.layers.2.fc1.weight',
     'encoder.layers.0.final_layer_norm.bias',
     'encoder.layers.1.fc2.bias',
     'encoder.layers.3.self_attn.v_proj.weight',
     'encoder.layers.3.final_layer_norm.bias',
     'encoder.layers.2.fc1.bias',
     'encoder.layers.3.self_attn.q_proj.weight',
     'encoder.layers.1.final_layer_norm.weight',
     'encoder.layers.4.fc2.bias',
     'encoder.layers.4.self_attn.out_proj.bias',
     'encoder.layers.2.self_attn.q_proj.weight',
     'encoder.layers.2.final_layer_norm.weight',
     'encoder.embed_positions.weight',
     'encoder.layers.3.self_attn.out_proj.bias',
     'encoder.layers.3.fc1.weight',
     'encoder.layers.1.fc1.bias',
     'encoder.layers.0.self_attn.k_proj.weight',
     'encoder.layers.1.self_attn.k_proj.bias',
     'encoder.layers.0.fc2.bias',
     'encoder.layers.1.self_attn.k_proj.weight',
     'encoder.layers.5.self_attn.v_proj.bias',
     'encoder.layers.1.self_attn.q_proj.weight',
     'encoder.layers.2.final_layer_norm.bias',
     'encoder.layers.4.self_attn_layer_norm.weight',
     'encoder.layers.4.self_attn.v_proj.bias',
     'encoder.layers.2.self_attn_layer_norm.weight',
     'encoder.layers.0.fc1.weight',
     'encoder.layers.4.self_attn.k_proj.bias',
     'encoder.layers.0.self_attn.q_proj.bias',
     'encoder.layers.4.final_layer_norm.bias',
     'encoder.layers.0.self_attn.v_proj.weight',
     'encoder.layers.3.final_layer_norm.weight',
     'encoder.layers.5.self_attn.out_proj.weight',
     'encoder.layers.4.self_attn.q_proj.weight',
     'encoder.layers.0.self_attn_layer_norm.weight',
     'encoder.layers.5.self_attn.v_proj.weight',
     'encoder.layers.2.self_attn.v_proj.weight',
     'encoder.layers.1.self_attn.out_proj.bias',
     'encoder.layers.2.self_attn.k_proj.bias',
     'encoder.layers.2.self_attn.out_proj.weight',
     'encoder.layers.3.self_attn.v_proj.bias',
     'encoder.layers.2.self_attn.q_proj.bias',
     'encoder.layers.2.self_attn.out_proj.bias',
     'encoder.layers.3.fc2.bias',
     'encoder.layers.5.fc1.weight',
     'encoder.layernorm_embedding.bias',
     'encoder.layers.0.fc1.bias',
     'encoder.layers.3.self_attn_layer_norm.bias',
     'encoder.layers.5.self_attn.k_proj.weight',
     'encoder.layers.5.fc1.bias',
     'encoder.layers.3.fc2.weight',
     'encoder.layers.4.fc2.weight',
     'encoder.layers.0.self_attn.v_proj.bias',
     'encoder.layers.0.self_attn.q_proj.weight',
     'encoder.layers.1.self_attn.q_proj.bias',
     'encoder.layers.3.self_attn_layer_norm.weight',
     'encoder.layers.2.self_attn.k_proj.weight',
     'encoder.layers.2.self_attn.v_proj.bias',
     'encoder.layers.5.final_layer_norm.weight',
     'encoder.layers.5.self_attn.out_proj.bias',
     'encoder.layers.0.self_attn.out_proj.weight',
     'encoder.layers.5.fc2.weight',
     'encoder.layers.5.fc2.bias',
     'encoder.layers.1.self_attn_layer_norm.bias',
     'encoder.layers.4.self_attn.k_proj.weight',
     'encoder.layers.5.self_attn.k_proj.bias',
     'encoder.layers.3.self_attn.q_proj.bias',
     'encoder.layers.4.self_attn.q_proj.bias',
     'encoder.layers.1.self_attn_layer_norm.weight',
     'encoder.layers.2.self_attn_layer_norm.bias',
     'encoder.layers.4.final_layer_norm.weight',
     'encoder.layers.4.self_attn.v_proj.weight',
     'encoder.layers.2.fc2.weight',
     'encoder.layers.2.fc2.bias',
     'encoder.layers.4.self_attn_layer_norm.bias',
     'encoder.layers.3.self_attn.out_proj.weight']
    - This IS expected if you are initializing BartForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
    - This IS NOT expected if you are initializing BartForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
    Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at model/bart-base-chinese and are newly initialized: 
    ['encoder.encoder.layer.1.output.dense.bias',
     'encoder.encoder.layer.3.attention.self.key.bias',
     'encoder.encoder.layer.3.attention.output.LayerNorm.weight',
     'encoder.encoder.layer.4.attention.self.value.bias',
     'encoder.encoder.layer.2.attention.output.dense.bias',
     'encoder.encoder.layer.4.output.LayerNorm.bias',
     'encoder.encoder.layer.4.output.LayerNorm.weight',
     'encoder.encoder.layer.4.attention.output.LayerNorm.weight',
     'encoder.encoder.layer.0.intermediate.dense.bias',
     'encoder.encoder.layer.5.attention.output.LayerNorm.weight',
     'encoder.encoder.layer.0.output.LayerNorm.bias',
     'encoder.encoder.layer.5.attention.output.LayerNorm.bias',
     'encoder.encoder.layer.2.attention.output.LayerNorm.weight',
     'encoder.encoder.layer.2.attention.self.key.weight',
     'encoder.embeddings.LayerNorm.weight',
     'encoder.encoder.layer.0.attention.output.LayerNorm.weight',
     'encoder.encoder.layer.1.attention.self.key.bias',
     'encoder.encoder.layer.3.intermediate.dense.weight',
     'encoder.encoder.layer.5.intermediate.dense.weight',
     'encoder.encoder.layer.0.output.dense.weight',
     'encoder.encoder.layer.5.output.LayerNorm.bias',
     'encoder.encoder.layer.1.output.dense.weight',
     'encoder.encoder.layer.5.attention.self.query.weight',
     'encoder.encoder.layer.1.output.LayerNorm.weight',
     'encoder.encoder.layer.4.attention.self.key.bias',
     'encoder.encoder.layer.3.output.LayerNorm.bias',
     'encoder.encoder.layer.5.output.dense.bias',
     'encoder.encoder.layer.4.attention.self.key.weight',
     'encoder.encoder.layer.0.attention.self.key.bias',
     'encoder.encoder.layer.0.attention.self.query.weight',
     'encoder.encoder.layer.0.intermediate.dense.weight',
     'encoder.encoder.layer.3.output.LayerNorm.weight',
     'encoder.encoder.layer.3.attention.output.dense.bias',
     'encoder.encoder.layer.5.output.dense.weight',
     'encoder.embeddings.LayerNorm.bias',
     'encoder.encoder.layer.1.attention.self.value.weight',
     'encoder.encoder.layer.2.output.dense.weight',
     'encoder.encoder.layer.4.intermediate.dense.weight',
     'encoder.encoder.layer.2.attention.self.value.weight',
     'encoder.encoder.layer.0.attention.self.value.weight',
     'encoder.encoder.layer.0.attention.output.dense.bias',
     'encoder.encoder.layer.2.attention.output.LayerNorm.bias',
     'encoder.encoder.layer.3.output.dense.bias',
     'encoder.encoder.layer.5.output.LayerNorm.weight',
     'encoder.encoder.layer.5.attention.output.dense.bias',
     'encoder.encoder.layer.4.attention.self.value.weight',
     'encoder.encoder.layer.3.attention.self.query.bias',
     'encoder.encoder.layer.3.attention.self.value.weight',
     'encoder.encoder.layer.3.attention.self.key.weight',
     'encoder.encoder.layer.0.output.dense.bias',
     'encoder.encoder.layer.1.intermediate.dense.bias',
     'encoder.encoder.layer.0.attention.self.query.bias',
     'encoder.encoder.layer.1.intermediate.dense.weight',
     'encoder.encoder.layer.0.attention.output.dense.weight',
     'encoder.encoder.layer.5.attention.self.value.bias',
     'encoder.embeddings.token_type_embeddings.weight',
     'encoder.encoder.layer.1.attention.output.dense.weight',
     'encoder.encoder.layer.2.attention.self.query.bias',
     'encoder.encoder.layer.2.attention.self.query.weight',
     'encoder.encoder.layer.2.attention.output.dense.weight',
     'encoder.encoder.layer.5.attention.self.query.bias',
     'encoder.embeddings.position_ids',
     'encoder.embeddings.position_embeddings.weight',
     'encoder.encoder.layer.3.attention.self.query.weight',
     'encoder.embeddings.word_embeddings.weight',
     'encoder.encoder.layer.4.output.dense.bias',
     'encoder.encoder.layer.1.attention.output.LayerNorm.weight',
     'encoder.encoder.layer.4.attention.self.query.bias',
     'encoder.encoder.layer.3.attention.self.value.bias',
     'encoder.encoder.layer.5.intermediate.dense.bias',
     'encoder.encoder.layer.1.output.LayerNorm.bias',
     'encoder.encoder.layer.3.attention.output.dense.weight',
     'encoder.encoder.layer.3.attention.output.LayerNorm.bias',
     'encoder.encoder.layer.2.output.LayerNorm.weight',
     'encoder.encoder.layer.4.attention.output.dense.weight',
     'encoder.encoder.layer.4.intermediate.dense.bias',
     'encoder.encoder.layer.2.attention.self.value.bias',
     'encoder.encoder.layer.0.attention.self.key.weight',
     'encoder.encoder.layer.1.attention.self.query.weight',
     'encoder.encoder.layer.2.intermediate.dense.bias',
     'encoder.encoder.layer.2.intermediate.dense.weight',
     'encoder.encoder.layer.5.attention.self.key.bias',
     'encoder.encoder.layer.2.attention.self.key.bias',
     'encoder.encoder.layer.2.output.LayerNorm.bias',
     'encoder.encoder.layer.5.attention.self.key.weight',
     'encoder.encoder.layer.0.attention.output.LayerNorm.bias',
     'encoder.encoder.layer.5.attention.self.value.weight',
     'encoder.encoder.layer.4.attention.output.dense.bias',
     'encoder.encoder.layer.1.attention.output.LayerNorm.bias',
     'encoder.encoder.layer.1.attention.output.dense.bias',
     'encoder.encoder.layer.5.attention.output.dense.weight',
     'encoder.encoder.layer.4.output.dense.weight',
     'encoder.encoder.layer.0.attention.self.value.bias',
     'encoder.encoder.layer.1.attention.self.value.bias',
     'encoder.encoder.layer.0.output.LayerNorm.weight',
     'encoder.encoder.layer.1.attention.self.key.weight',
     'encoder.encoder.layer.3.intermediate.dense.bias',
     'encoder.encoder.layer.1.attention.self.query.bias',
     'encoder.encoder.layer.4.attention.self.query.weight',
     'encoder.encoder.layer.3.output.dense.weight',
     'encoder.encoder.layer.2.output.dense.bias',
     'encoder.encoder.layer.4.attention.output.LayerNorm.bias']
    
    opened by 6666ev 3
  • Problem loading the fnlp/bart-base-chinese model

    Hello: Following the paper A Unified Generative Framework for Aspect-Based Sentiment, I want to use this model for Chinese ABSA, so I replaced the original facebook/bart-base with fnlp/bart-base-chinese, but I have the following questions. 1: With transformers 4.4.1, loading the model raises: RuntimeError: Error(s) in loading state_dict for BartModel: size mismatch for encoder.embed_positions.weight: copying a param with shape torch.Size([514, 768]) from checkpoint, the shape in current model is torch.Size([512, 768]). size mismatch for encoder.embed_positions.weight: copying a param with shape torch.Size([514, 768]) from checkpoint, the shape in current model is torch.Size([512, 768]). This happens mainly at: model = BartSeq2SeqModel.build_model(bart_name, tokenizer, label_ids=label_ids, decoder_type=decoder_type, copy_gate=False, use_encoder_mlp=use_encoder_mlp, use_recur_pos=False). 2: The bart-base provided by facebook contains merges.txt and a JSON vocab, which differ from the files you provide on huggingface. When I use the bart-base-chinese files you provide on huggingface with tokenizer.from_pretrained("bart-base-chinese"), pytorch raises: OSError: Can't load tokenizer for 'bart-base-chinese'. Make sure that: - 'bart-base-chinese' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bart-base-chinese' is the correct path to a directory containing relevant tokenizer files. How can this be resolved?
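
    Editorial note: the OSError above is consistent with passing the truncated identifier "bart-base-chinese" instead of the fully qualified hub name. Loading with the full identifier and BertTokenizer, as in the README example, is a reasonable first check; whether it resolves the rest of this setup is not confirmed.

    from transformers import BertTokenizer, BartForConditionalGeneration

    # Use the fully qualified hub id "fnlp/bart-base-chinese"; a bare "bart-base-chinese"
    # is neither a valid hub identifier nor a local directory, which triggers the OSError.
    tokenizer = BertTokenizer.from_pretrained("fnlp/bart-base-chinese")
    model = BartForConditionalGeneration.from_pretrained("fnlp/bart-base-chinese")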

    opened by yedongyu1996 2
  • Continuing pre-training from bart-base-chinese on a custom dataset

    I want to continue pre-training the open-sourced bart-base-chinese from Huggingface on my own dataset, but I ran into a problem at the load_checkpoint step in training.py. The load_checkpoint function expects a tracker file; if the file does not exist, there is a warning that it "will not load any checkpoints and will start from random". However, I want to continue pre-training from bart-base-chinese. How should this tracker file be set up? Also, should the subsequent torch.load load pytorch_model.bin directly? It does not seem to be the model_optim_rng.pt mentioned in the code.

    opened by Aureole-1210 2
  • How do I configure run_gen.py to run on the GPU?

    Hello! I ran whj_code1/projects/CPT/finetune/generation/run_gen.py directly and found that it runs on the CPU. I see that the log prints the training_args.local_rank, training_args.device, and training_args.n_gpu parameters, but the code does not expose a place to set them, and I cannot pass them directly through args. How should I modify the code so that it runs on the GPU?

    opened by PolarisRisingWar 2
  • Is max_position_embeddings really 1024?

    The config.json of fnlp/cpt-base says max_position_embeddings is 1024, but in practice 1024 raises an error while 512 works fine. The code uses BertModel as the encoder but does not set the corresponding max_position_embeddings, and manually changing it to 1024 prevents the pre-trained parameters from loading. So my understanding is that config.json is wrong and only 512 is actually supported. It would be great to provide a version of the model with max_position_embeddings=1024 to align with BART.
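
    Editorial note: until the checkpoint and config are reconciled, a defensive workaround consistent with the report above is to cap inputs at 512 tokens when tokenizing. This is a sketch, not an official statement of the supported length.

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base")
    # Truncate to 512 tokens: the report above indicates the encoder's position table
    # holds 512 positions even though config.json advertises max_position_embeddings=1024.
    inputs = tokenizer("一段可能超过五百一十二个token的长文本", truncation=True, max_length=512, return_tensors="pt")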

    opened by awdrgyjilplij 2
  • Why is BertTokenizer used instead of BartTokenizer?

    Thank you for your nice work!

    When preprocessing data, I followed your code and used BertTokenizer to load the cpt-base tokenizer. The tokenizer loads successfully, but I get the following warning message:

    """ The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is 'BartTokenizer'. The class this function is called from is 'BertTokenizer'. """

    Then I tried to use BartTokenizer to load it, but I failed.

    The question is: should I ignore the warning and keep using BertTokenizer? Thank you.

    opened by Chen-Wang-CUHK 2
Releases: v2.0
Owner: fastNLP, a Chinese open-source natural language processing project initiated by the NLP group at Fudan University.