YACLC - Yet Another Chinese Learner Corpus

Overview

汉语学习者文本多维标注数据集YACLC V1.0

中文 | English

汉语学习者文本多维标注数据集(Yet Another Chinese Learner Corpus,YACLC)由北京语言大学、清华大学、北京师范大学、云南师范大学、东北大学、上海财经大学等高校组成的团队共同发布。主要项目负责人有杨麟儿、杨尔弘、孙茂松、张宝林、胡韧奋、何姗、岳岩、饶高琦、刘正皓、陈云等。

简介

汉语学习者文本多维标注数据集(Yet Another Chinese Learner Corpus,YACLC)是一个大规模的、提供偏误多维标注的汉语学习者文本数据集。我们招募了百余位汉语国际教育、语言学及应用语言学等专业背景的研究生组成标注团队,并采用众包策略分组标注。每个句子由10位标注员进行标注,每位标注员需要给出0或1的句子可接受度评分,以及纠偏标注(Grammatical Error Correction)和流利标注(Fluency based Correction)两个维度的标注结果。纠偏标注是从语法层面对偏误句进行修改,遵循忠实原意、最小改动的原则,将偏误句修改为符合汉语语法规范的句子;流利标注是将句子修改得更为流利和地道,符合母语者的表达习惯。在标注时,若句子可接受度评分为0,则标注员至少需要完成一条纠偏标注,同时可以进行流利标注。若句子可接受度评分为1,则标注员只需给出流利标注。本数据集可用于语法纠错、文本校对等自然语言处理任务,也可为汉语二语教学与习得、语料库语言学等研究领域提供数据支持。

数据规模

训练集规模为8,000条,每条数据包括原始句子及其多种纠偏标注与流利标注。验证集和测试集规模都为1,000条,每条数据包括原始句子及其全部纠偏标注与流利标注。

数据格式

每条数据中包含汉语学习者所写的待标注句子及其id、所属篇章id、所属篇章标题、标注员数量以及多维标注信息。其中,多维度标注信息包括:

  • 标注维度,"1"表示纠偏标注,"0"表示流利标注;
  • 标注后的正确文本;
  • 标注中的修改操作数量;
  • 提供该标注的标注员数量。

注意:测试集数据无标注者和多维标注信息。

数据样例如下:

{
  "sentence_id": 4308, // 句子id
  "sentence_text": "我只可以是坐飞机去的,因为巴西离英国到远极了。", // 学习者原句文本
  "article_id": 7267, // 该句所属的篇章id
  "article_name": "我放假的打算", // 篇章标题
  "total_annotators": 10, // 共多少个标注者参与了该句的标注
  "sentence_annos": [ // 多维标注信息
    {
      "is_grammatical": 1, // 标注维度:1表示纠偏标注,0表示流利标注
      "correction": "我只能坐飞机去,因为巴西离英国远极了。", // 修改后的正确文本
      "edits_count": 3, // 共有几处修改操作
      "annotator_count": 6 // 共有几个标注者修改为了这一结果
    },
    { // 下同
      "is_grammatical": 1,
      "correction": "我只能是坐飞机去的,因为巴西离英国远极了。",
      "edits_count": 2,
      "annotator_count": 1
    },
    {
      "is_grammatical": 1,
      "correction": "我只可以坐飞机去,因为巴西离英国远极了。",
      "edits_count": 3,
      "annotator_count": 2
    },
    {
      "is_grammatical": 0,
      "correction": "我只能坐飞机去,因为巴西离英国太远了。",
      "edits_count": 6,
      "annotator_count": 2
    }
  ]
}
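下面的 Python 片段仅为示意,演示如何按标注维度拆分一条数据中的纠偏标注与流利标注。其中函数名为自拟,字段名以上文样例为准;数据文件格式(每行一个 JSON 对象)为假设,请以实际发布格式为准:

```python
import json

def split_annotations(instance):
    """按标注维度拆分一条数据:返回 (纠偏标注列表, 流利标注列表)。"""
    grammatical = [a["correction"] for a in instance["sentence_annos"]
                   if a["is_grammatical"] == 1]
    fluency = [a["correction"] for a in instance["sentence_annos"]
               if a["is_grammatical"] == 0]
    return grammatical, fluency

# 假设数据文件为 JSON Lines 格式(每行一条数据):
# with open("yaclc_train.jsonl", encoding="utf-8") as f:
#     for line in f:
#         grammatical, fluency = split_annotations(json.loads(line))
```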

评测代码使用

提交结果为文本文件,每行为一个修改后的句子,并与测试集中的数据逐条对应。

  • 每条测试集中的数据仅需给出一条修改结果;
  • 修改结果需使用THULAC工具包分词,请提交分词后的结果。

对于提交的结果文件output_file,后台将调用eval.py将其同标准答案文件test_gold_m2进行比较:

python eval.py output_file test_gold_m2

评测指标为F_0.5,输出结果示例:

{
  "Precision": 70.63,	
  "Recall": 37.04,
  "F_0.5": 59.79
}

引用

如果您使用了本数据集,请引用以下技术报告:

@article{wang-etal-2021-yaclc,
  title={YACLC: A Chinese Learner Corpus with Multidimensional Annotation},
  author={Yingying Wang and Cunliang Kong and Liner Yang and Yijun Wang and Xiaorong Lu and Renfen Hu and Shan He and Zhenghao Liu and Yun Chen and Erhong Yang and Maosong Sun},
  journal={arXiv preprint arXiv:2112.15043},
  year={2021}
}

相关资源

“文心·写作”演示系统:https://writer.wenmind.net/

语法改错论文列表: https://github.com/blcuicall/GEC-Reading-List


Introduction

YACLC (Yet Another Chinese Learner Corpus) is a large-scale Chinese learner text dataset with multi-dimensional annotations, jointly released by a team from Beijing Language and Culture University, Tsinghua University, Beijing Normal University, Yunnan Normal University, Northeastern University, and Shanghai University of Finance and Economics.

We recruited more than 100 graduate students majoring in Teaching Chinese as a Foreign Language, Linguistics, and Applied Linguistics as annotators and adopted a crowdsourcing strategy. Each sentence is annotated by 10 annotators. Each annotator gives the sentence an acceptability score of 0 or 1 and provides annotations along two dimensions: grammatical error correction and fluency-based correction. Grammatical error correction revises the sentence at the grammatical level, following the principles of staying faithful to the original meaning and making minimal modifications, so that it conforms to Chinese grammatical norms. Fluency-based correction rewrites the sentence to be more fluent and idiomatic, in line with the expression habits of native speakers. If the acceptability score is 0, the annotator must provide at least one grammatical error correction and may additionally provide a fluency-based correction; if the score is 1, only a fluency-based correction is required. This dataset can be used for NLP tasks such as grammatical error correction and text proofreading, and it can also provide data support for research fields such as Chinese second-language teaching and acquisition and corpus linguistics.

Size

YACLC V1.0 contains a training set (8,000 instances), a validation set (1,000 instances), and a test set (1,000 instances). Each training instance includes one sentence written by a Chinese learner together with several grammatical error corrections and fluency-based corrections of that sentence, while each instance in the validation and test sets includes all the corrections provided by the annotators.

Format

Each instance contains the learner-written sentence, information about the annotators, and the multi-dimensional annotations. Each annotation includes:

  • the annotation dimension, where "1" indicates grammatical error correction and "0" indicates fluency-based correction;
  • the corrected sentence;
  • the number of edit operations in the annotation;
  • the number of annotators who gave this annotation.

Note that test-set instances contain no annotator or multi-dimensional annotation information.

Here is an example:

{
  "sentence_id": 4308, // 
  "sentence_text": "我只可以是坐飞机去的,因为巴西离英国到远极了。",
  "article_id": 7267, // the article id that this sentence belongs to
  "article_name": "我放假的打算", // the title of this sentence
  "total_annotators": 10, // the number of annotators for this sentence
  "sentence_annos": [ // multi-dimensional annotations
    {
      "is_grammatical": 1, // 1: grammatical, 0: fluent
      "correction": "我只能坐飞机去,因为巴西离英国远极了。", // corrected sentence 
      "edits_count": 3, // the number of edits of this annotation
      "annotator_count": 6 // the number of annotators for this annotation
    },
    { 
      "is_grammatical": 1,
      "correction": "我只能是坐飞机去的,因为巴西离英国远极了。",
      "edits_count": 2,
      "annotator_count": 1
    },
    {
      "is_grammatical": 1,
      "correction": "我只可以坐飞机去,因为巴西离英国远极了。",
      "edits_count": 3,
      "annotator_count": 2
    },
    {
      "is_grammatical": 0,
      "correction": "我只能坐飞机去,因为巴西离英国太远了。",
      "edits_count": 6,
      "annotator_count": 2
    }
  ]
}
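Since several annotators may agree on the same correction, a natural baseline reference is the correction with the highest annotator agreement. The helper below is a minimal sketch (the function name is illustrative; field names follow the example above):

```python
def best_correction(instance, dimension=1):
    """Return the correction, within the given annotation dimension,
    that the most annotators agreed on.
    dimension 1 = grammatical error correction, 0 = fluency-based correction."""
    candidates = [a for a in instance["sentence_annos"]
                  if a["is_grammatical"] == dimension]
    return max(candidates, key=lambda a: a["annotator_count"])["correction"]
```

For the sample instance above, `best_correction` returns the grammatical correction supported by 6 of the 10 annotators.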

Usage of the Evaluation Code

The submission is a text file in which each line is a corrected sentence, corresponding line by line to the instances in the test set.

  • Only one correction needs to be given for each instance in the test set.
  • Please submit results after word segmentation using the THULAC toolkit.

For a submitted result file output_file, the script eval.py will be called to compare it with the gold-standard file test_gold_m2:

python eval.py output_file test_gold_m2
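The required segmentation step can be wrapped as a small helper. The helper itself is plain Python; the THULAC call shown in the comment follows the documented THULAC-Python API (`thulac.thulac(seg_only=True)` and `cut(..., text=True)`), which you should verify against your installed version:

```python
def write_segmented(sentences, segment):
    """Build the submission text: one segmented sentence per line."""
    return "\n".join(segment(s) for s in sentences) + "\n"

# Sketch of usage with THULAC (pip install thulac):
#   import thulac
#   seg = thulac.thulac(seg_only=True)  # segmentation only, no POS tags
#   text = write_segmented(corrections, lambda s: seg.cut(s, text=True))
#   with open("output_file", "w", encoding="utf-8") as f:
#       f.write(text)
```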

The evaluation metric is F_0.5. Example output:

{
  "Precision": 70.63,
  "Recall": 37.04,
  "F_0.5": 59.79
}
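F_0.5 is the F-beta score with beta = 0.5, which weights precision more heavily than recall. The reported score can be reproduced from the precision and recall above with the standard formula (the function below is an illustrative sketch, not part of eval.py):

```python
def f_beta(precision, recall, beta=0.5):
    """General F-beta score; beta < 1 favours precision over recall."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# f_beta(70.63, 37.04) ≈ 59.79, matching the sample output above
```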

Citation

Please cite our technical report if you use this dataset:

@article{wang-etal-2021-yaclc,
  title={YACLC: A Chinese Learner Corpus with Multidimensional Annotation},
  author={Yingying Wang and Cunliang Kong and Liner Yang and Yijun Wang and Xiaorong Lu and Renfen Hu and Shan He and Zhenghao Liu and Yun Chen and Erhong Yang and Maosong Sun},
  journal={arXiv preprint arXiv:2112.15043},
  year={2021}
}
Owner
BLCU-ICALL
ICALL Research Group at Beijing Language and Culture University