jiant is an NLP toolkit

Overview

🚨 Update 🚨 : As of 2021/10/17, the jiant project is no longer being actively maintained. This means there are no plans to add new models, tasks, or features, or to update support for new libraries.


The multitask and transfer learning toolkit for natural language processing research


Why should I use jiant?

A few things you might want to know about jiant:

  • jiant is configuration-file driven
  • jiant is built with PyTorch
  • jiant integrates with Hugging Face datasets to manage task data
  • jiant integrates with Hugging Face transformers to manage models and tokenizers
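For illustration only, here is what those integrations look like at the Hugging Face level (this is the underlying datasets/transformers API, not jiant's own):

from datasets import load_dataset
from transformers import AutoTokenizer

# Task data via the datasets library (MRPC from GLUE)
mrpc = load_dataset("glue", "mrpc", split="train")
# Tokenizers/models via the transformers library
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
print(tokenizer.tokenize(mrpc[0]["sentence1"])[:8])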

Getting Started

Installation

To import jiant from source (recommended for researchers):

git clone https://github.com/nyu-mll/jiant.git
cd jiant
pip install -r requirements.txt

# Add the following to your .bashrc or .bash_profile 
export PYTHONPATH=/path/to/jiant:$PYTHONPATH

If you plan to contribute to jiant, install additional dependencies with pip install -r requirements-dev.txt.

To install jiant from source (alternative for researchers):

git clone https://github.com/nyu-mll/jiant.git
cd jiant
pip install -e .

To install jiant from pip (recommended if you just want to train/use a model):

pip install jiant

We recommend that you install jiant in a virtual environment or a conda environment.

To check that jiant was installed correctly, run a simple example.
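For a quick sanity check that the package is importable (e.g. that your PYTHONPATH or pip install took effect), the following should print the package location:

python -c "import jiant; print(jiant.__file__)"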

Quick Introduction

The following example fine-tunes a RoBERTa model on the MRPC dataset.

Python version:

from jiant.proj.simple import runscript as run
import jiant.scripts.download_data.runscript as downloader

EXP_DIR = "/path/to/exp"

# Download the Data
downloader.download_data(["mrpc"], f"{EXP_DIR}/tasks")

# Set up the arguments for the Simple API
args = run.RunConfiguration(
   run_name="simple",
   exp_dir=EXP_DIR,
   data_dir=f"{EXP_DIR}/tasks",
   hf_pretrained_model_name_or_path="roberta-base",
   tasks="mrpc",
   train_batch_size=16,
   num_train_epochs=3
)

# Run!
run.run_simple(args)
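After the run finishes, you can inspect the validation metrics it wrote. A minimal sketch, assuming the simple API's usual layout of EXP_DIR/runs/<run_name>/val_metrics.json (the exact file layout may differ across jiant versions):

import json
import os

# Assumed output location for the "simple" run above; adjust if your
# jiant version writes run artifacts elsewhere.
metrics_path = os.path.join(EXP_DIR, "runs", "simple", "val_metrics.json")
with open(metrics_path) as f:
    print(json.load(f))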

Bash version:

EXP_DIR=/path/to/exp

python jiant/scripts/download_data/runscript.py \
    download \
    --tasks mrpc \
    --output_path ${EXP_DIR}/tasks
python jiant/proj/simple/runscript.py \
    run \
    --run_name simple \
    --exp_dir ${EXP_DIR}/ \
    --data_dir ${EXP_DIR}/tasks \
    --hf_pretrained_model_name_or_path roberta-base \
    --tasks mrpc \
    --train_batch_size 16 \
    --num_train_epochs 3

Examples of more complex training workflows can be found here.

Contributing

The jiant project's contributing guidelines can be found here.

Looking for jiant v1.3.2?

jiant v1.3.2 has been moved to jiant-v1-legacy to support ongoing research with the library. jiant v2.x.x is more modular and scalable than jiant v1.3.2 and has been designed to reflect the needs of the current NLP research community. We strongly recommend that any new projects use jiant v2.x.x.

jiant 1.x has been used in several papers. For instructions on how to reproduce papers by jiant authors that refer readers to this site for documentation (including Tenney et al., Wang et al., Bowman et al., Kim et al., Warstadt et al.), refer to the jiant-v1-legacy README.

Citation

If you use jiant ≥ v2.0.0 in academic work, please cite it directly:

@misc{phang2020jiant,
    author = {Jason Phang and Phil Yeres and Jesse Swanson and Haokun Liu and Ian F. Tenney and Phu Mon Htut and Clara Vania and Alex Wang and Samuel R. Bowman},
    title = {\texttt{jiant} 2.0: A software toolkit for research on general-purpose text understanding models},
    howpublished = {\url{http://jiant.info/}},
    year = {2020}
}

If you use jiant ≤ v1.3.2 in academic work, please use the citation found here.

Acknowledgments

  • This work was made possible in part by a donation to NYU from Eric and Wendy Schmidt made by recommendation of the Schmidt Futures program, and by support from Intuit Inc.
  • We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan V GPU used at NYU in this work.
  • Developer Jesse Swanson is supported by the Moore-Sloan Data Science Environment as part of the NYU Data Science Services initiative.

License

jiant is released under the MIT License.

Comments
  • Weird Wiki classification results


    On Wiki103 classification, I only get two modes of results:

    1. solve the task, getting ~99+% train and dev accuracy
    2. overfit to train, getting ~92% train, 58% dev accuracy

    Over 5-6 runs, I always get a result in one of these two modes. Are we doing anything weird in sampling that would make this too easy (mode 1) or too hard (mode 2)?
    jiant-v1-legacy 
    opened by W4ngatang 35
  •  Implementing Data Parallel


    https://github.com/nyu-mll/jiant/issues/446

This is tested on BERT models and also GloVe embeddings, with all the major types of tasks.

    Below is the table for BERT models.

    | Task | Batch size with 1 GPU | Batch size with 2 GPUs | Time per 100 steps in seconds (1 GPU) | Time per 100 steps in seconds (2 GPUs) |
    | ------------- | ------------- | ------------- | ------------- | ------------- |
    | ReCoRD | 11 | 22 | 167.55 | 187.90 |
    | BoolQ | 14 | 28 | 122.2 | 146.6 |

    These results were on a machine with 2 P40s.

    0.x.0 release on fix jiant-v1-legacy 
    opened by pruksmhc 29
  • Adding Sentence Order Prediction


    Adding the Sentence Order Prediction task for ALBERT. What this version of MLM supports: the ALBERT embedder. Below are the runs for MLM + SOP + intermediate task (which is how we intend to use SOP). The results (on ALBERT) are:

    | Experiment description | Intermediate task performance | SOP performance |
    | ------------- |:-------------:| :-------------:|
    | SST + SOP | 0.85 | 0.95 |
    | QQP + SOP | 0.95 | 0.94 |

    jiant-v1-legacy 
    opened by pruksmhc 28
  • Using jiant to make BioBERT and SciBERT to run superGLUE tasks


    Hi, As always, thank you for this amazing contribution.

I am taking Prof. Bowman's class and attempting to run BioBERT https://github.com/dmis-lab/biobert-pytorch and SciBERT https://github.com/allenai/scibert on jiant. One of our objectives for the course project is to run BioBERT and SciBERT on common NLU tasks.

    As far as I understand, it should be possible to add any Transformers encoder model to jiant, but both of those models will probably require a bit of code?

I am sketching out what I'd have to do in jiant vs. plain transformers. Will using jiant create more overhead (by trying to support those models) than just following the standard fine-tuning process in transformers?

    Any suggestions and pointers will be helpful. Thank you!

    opened by shi-kejian 23
  • Make jiant pip installable


    This should be considered a work-in-progress, not a complete PR. It passes tests, but might break some untested things, and there are a few things I'll flag for review.

    Because I'm still getting to know jiant, I chose running the tutorial experiment using a pip-installed package as my goal. This changeset allows that (without requiring conda or manual dependency installation). It doesn't directly address setting up an environment or downloading data using the pip package. To try it yourself, you can:

    # create a new python virtual environment (conda not required)
    # set up environment variables per tutorial, including pointing to an existing data directory, e.g.
    export JIANT_PROJECT_PREFIX=~/jiant/tutorial
    export JIANT_DATA_DIR=~/jiant/data
    
    # copy default.conf from repo
    
    # copy tutorial config file content from https://github.com/nyu-mll/jiant/blob/master/tutorials/setup_tutorial.md to tutorial.conf
    
    # install package
    pip install --upgrade --index-url https://test.pypi.org/simple/ jiant==1.1.0a0
    
    # run experiment
    python -m jiant --config_file tutorial.conf
    

    Some items worth reviewing:

    • A maintainer should check the metadata in setup.py - I took a shot at it...
    • Config was moved to inside the package, per setuptools recommendation, but this could break things. I only tried the tutorial and the test suite.
    • Assuming a distributed package should include only what is required to run typical tasks, I only added package dependencies necessary to run the tutorial. Are there other tasks/tests that should be considered?
    • This doesn't expose any shell or python scripts as CLI commands, but we might want some for downloading data or generating a local copy of default.conf, for example.
    • I moved main.main() into jiant.__main__ for convenience of running python -m jiant, and to keep necessary code inside the jiant package. I kept main.py for backward compatibility, and there's some duplicated code, but not too much.
• tests and other root-level packages were not included in the package. Does this satisfy the need for a pip package without the other packages? If not, it might make sense to reorganize some directories.
    jiant-v1-legacy 
    opened by davidbenton 22
  • QA-SRL


    Re: #571

    Starting with a basic implementation of QA-SRL, seeking feedback.

    • There are quite a lot of repeated/branching input sets in QA-SRL. Each sentence has multiple "verbs", each verb corresponds to multiple questions, each question has multiple answers from different annotators, and each answer contains potentially more than one span. Currently I'm doing a naive implementation that enumerates all of these. For a sense of scale, there are about 44k sentences in the training set, but 600k+ total training examples when expanded.
    • The QA-SRL / Large-Scale QA-SRL papers did not say much about scoring, and where they did, I'm not sure it works best for pretraining. I'm currently implementing the following:
    • Training loss: I am currently following the BERT formulation for SQuAD of straightforwardly predicting the span start and end via a softmax over the number of tokens in the input, with a cross-entropy loss. No correction for ensuring start < end. (See the sketch after this list.)
    • Evaluation: I am currently using the SQuAD-style macro-F1, which treats the predicted/gold tokens as individual observations, computes the F1 for each example, and then averages across examples.
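    A minimal sketch of the loss described above (not jiant's actual code): independent softmaxes over token positions for span start and end, averaged cross-entropy, with no start < end constraint.

    import torch
    import torch.nn.functional as F

    batch_size, seq_len = 4, 128
    start_logits = torch.randn(batch_size, seq_len)        # [batch, seq_len]
    end_logits = torch.randn(batch_size, seq_len)          # [batch, seq_len]
    start_gold = torch.randint(0, seq_len, (batch_size,))  # gold start indices
    end_gold = torch.randint(0, seq_len, (batch_size,))    # gold end indices

    # Cross-entropy over token positions, averaged over the start and end heads.
    loss = (F.cross_entropy(start_logits, start_gold)
            + F.cross_entropy(end_logits, end_gold)) / 2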
    new-task jiant-v1-legacy 
    opened by zphang 20
  • CCG can load incorrectly, get loss of zero almost instantly


    07/27 03:48:08 PM: Update 4001: task ccg, batch 1 (4001): accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:48:18 PM: Update 4115: task ccg, batch 115 (4115): accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:48:32 PM: Update 4197: task ccg, batch 197 (4197): accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:48:42 PM: Update 4310: task ccg, batch 310 (4310): accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:48:52 PM: Update 4428: task ccg, batch 428 (4428): accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:49:05 PM: Update 4510: task ccg, batch 510 (4510): accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:49:15 PM: Update 4624: task ccg, batch 624 (4624): accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:49:25 PM: Update 4743: task ccg, batch 743 (4743): accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:49:35 PM: Update 4785: task ccg, batch 785 (4785): accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:49:46 PM: Update 4904: task ccg, batch 904 (4904): accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:49:54 PM: ***** Pass 5000 / Epoch 5 *****
    07/27 03:49:54 PM: ccg: trained on 1000 batches, 0.842 epochs
    07/27 03:49:54 PM: Validating...
    07/27 03:49:56 PM: Batch 19/157: accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:50:06 PM: Batch 147/157: accuracy: 0.0000, ccg_loss: 0.0000 ||
    07/27 03:50:06 PM: Best model found for ccg.
    07/27 03:50:06 PM: Best model found for micro.
    07/27 03:50:06 PM: Best model found for macro.
    07/27 03:50:06 PM: Advancing scheduler.
    07/27 03:50:06 PM: Best macro_avg: 0.000
    07/27 03:50:06 PM: # bad epochs: 4
    07/27 03:50:06 PM: Statistic: ccg_loss
    07/27 03:50:06 PM: training: 0.000000
    07/27 03:50:06 PM: validation: 0.000000
    07/27 03:50:06 PM: Statistic: macro_avg
    07/27 03:50:06 PM: validation: 0.000000
    07/27 03:50:06 PM: Statistic: micro_avg
    07/27 03:50:06 PM: validation: 0.000000
    07/27 03:50:06 PM: Statistic: ccg_accuracy
    07/27 03:50:06 PM: training: 0.000000
    07/27 03:50:06 PM: validation: 0.000000
    07/27 03:50:06 PM: global_lr: 0.000100
    07/27 03:50:17 PM: Saved files to /misc/vlgscratch4/BowmanGroup/sbowman/exp//final/ccg-noelmo

    jiant-v1-legacy 
    opened by sleepinyourhat 20
  • Clean up the write_preds code.


    Relevant to @dipanjand, @iftenney.

We still need to be able to write legal GLUE test set predictions, which means preserving the current behavior. We also need to make it easier to write dev set predictions and to write prediction files that include the input sentences.

    jiant-v1-legacy 
    opened by sleepinyourhat 20
  • QQP test evaluation is extremely slow


    One run has been on QQP Test for 2.5 hours without signs of progress. GPU usage is non-zero but low. This seems to have changed since #121.

    @Jan21 @iftenney, any guesses? Did you verify that QQP test works?

    Dev also appears to be quite slow, but I don't have numbers yet.

    wontfix jiant-v1-legacy 
    opened by sleepinyourhat 19
  • Complete pytorch transformers interface, deprecate old GPT implementation


    • Replace the old GPT implementation with the one from Hugging Face pytorch-transformers
    • Add GPT-2, Transformer-XL, and XLM to the pytorch-transformers interface
    • Refactor the pytorch-transformers interface a little to reduce duplicated/outdated code and comments
    0.x.0 release on fix jiant-v1-legacy 
    opened by HaokunLiu 18
  • GLUE MTL issue: low cola score, big variation in restarting


    Config: defaults + train_for_eval = 0, val_interval = 9000, scaling = none, train_tasks = glue, eval_tasks = glue. Restarts with weighting: proportional * 2, power_0.75 * 3.

    Issue 1: the CoLA score is low (around 5) in four out of five experiments.
    Issue 2: the GLUE score has large variation (1~2 points) between identical experiments.

Results and logs can be found in: GLUE Dev Results -> Experiment: MTL Mixing -> Rows 273~277

    Logs: /nfs/jsalt/exp/shuning-worker34/sampling_master/proportional_glue/log.log /nfs/jsalt/exp/shuning-worker31/sampling_master3/proportional_glue/log.log /nfs/jsalt/exp/shuning-worker38/sampling_master2/power_0.75_glue/log.log /nfs/jsalt/exp/shuning-worker32/sampling_master3/power_0.75_glue/log.log /nfs/jsalt/exp/shuning-worker32/sampling_master4/power_0.75_glue/log.log

    jiant-v1-legacy 
    opened by shuningjin 17
  • testing subsample of the test set


Hi, I am doing experiments on different portions of the test dataset. Right now it is possible to test on the whole test set using your functions. I wanted to know whether it is possible to test on blocks of the test set?

    opened by LeonardRanaldi 0
  • Issue in `call_predict`: error while getting predictions from trained models


Describe the bug: We are trying to reproduce the experiments for the paper Comparing Test Sets with Item Response Theory, and we were following the workflow mentioned in this README for the preprocessing of datasets, training models, generating responses, etc. We have successfully trained most of the models at our end, but are now facing an issue while getting predictions from those models.

    To Reproduce

    1. jiant version: 2.1.4
    2. Environment: A100 40GB

    We ran the call_predict script. We did not find any sb_predict_results.sbatch file, and changed L21 to call the run_predict_results.sbatch file instead. For example, the exact command we ran for copa and for bert-base-cased model is:

    python jiant/irt_scripts/call_predict.py bert-base-cased copa $(pwd)
    

    This produced the following error:

    pre args =  Namespace(ZZoverrides=['model_path'], ZZsrc=None)
    Traceback (most recent call last):
      File "/dccstor/irlirteval/irt/jiant/jiant/proj/main/runscript.py", line 256, in <module>
        main()
      File "/dccstor/irlirteval/irt/jiant/jiant/proj/main/runscript.py", line 246, in main
        run_loop(RunConfiguration.default_run_cli(cl_args=cl_args))
      File "/dccstor/irlirteval/irt/jiant/jiant/utils/zconf/core.py", line 237, in default_run_cli
        return cls.run_cli_json_prepend(cl_args=cl_args, prog=prog, description=description)
      File "/dccstor/irlirteval/irt/jiant/jiant/utils/zconf/core.py", line 174, in run_cli_json_prepend
        result = cls.run_from_parser_json_prepend(parser=parser, cl_args=cl_args)
      File "/dccstor/irlirteval/irt/jiant/jiant/utils/zconf/core.py", line 217, in run_from_parser_json_prepend
        assert pre_args.ZZoverrides is None
    AssertionError

An attempt to fix: We tried to fix the ZZoverrides error by modifying run_predict_results.sbatch as follows:

    python $JIANT_PATH/proj/main/runscript.py run   \
        --hf_pretrained_model_name_or_path ${model} \
        --model_path ${MODELS_DIR}/${SHORT_MODEL_NAME}/model/model.p \
        --model_config_path ${MODELS_DIR}/${SHORT_MODEL_NAME}/model/config.json \
        --jiant_task_container_config_path ${BASE_PATH}/experiments/run_config_dir/taskmaster/${model}/${task}_${config_no}/${task}.json  \
        --model_load_mode all --model_path ${BASE_PATH}/experiments/output_dir/taskmaster_${model}/${task}/config_${config_no}/${model_path}  \
        --output_dir  ${BASE_PATH}/experiments/predict_files/${model}/${task}_config_${config_no}_${model_path} --ZZoverrides model_path model_config_path hf_pretrained_model_name_or_path --write_val_preds \
        --ZZsrc ${MODELS_DIR}/${SHORT_MODEL_NAME}/config.json --do_val --do_save --force_overwrite
    

    instead of this code block. This time it produced empty predictions.

Expected behavior: Prediction files (.p) in the experiments/predict_files/ folder, so that we could run the post-processing command python jiant/irt_scripts/postprocess_predictions.py $(pwd).

    opened by swag2198 0
  • Unable to execute run_simple() with different models of the same type


    Describe the bug

When one uses run_simple() with different models of the same type, e.g. roberta-base and roberta-large, the run crashes because the code assumes they are the same model: weights are saved under hf_config.model_type instead of args.hf_pretrained_model_name_or_path. As such, the code tries to load incompatible weights and crashes.

    To Reproduce

    1. Install jiant
    2. Run the simple example in README
    3. Change the model in the sample from `roberta-base` to `roberta-large`

    Expected behavior One should be able to run run_simple() with different models of the same type.


Additional context: A suggested fix is to use hf_config.model_type for caching the tokenizer/tasks, and args.hf_pretrained_model_name_or_path for the weights.
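    A hypothetical workaround sketch (untested, and not the proper fix): give each model its own exp_dir so that weights cached under hf_config.model_type cannot collide across models.

    from jiant.proj.simple import runscript as run

    # Separate experiment dir per model to avoid the shared cache collision.
    for model in ["roberta-base", "roberta-large"]:
        args = run.RunConfiguration(
            run_name=f"simple_{model}",
            exp_dir=f"/path/to/exp_{model}",
            data_dir="/path/to/exp/tasks",
            hf_pretrained_model_name_or_path=model,
            tasks="mrpc",
            train_batch_size=16,
            num_train_epochs=3,
        )
        run.run_simple(args)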

    opened by TimDettmers 0
  • Swapped labels in CommitmentBank


Describe the bug: The downloaded files for the CB task have swapped labels. This introduces a nasty silent bug, because all the metrics calculated using these data seem correct but the fine-tuned model is actually predicting nonsense.

contradiction is mapped to entailment, entailment to neutral, and neutral to contradiction.

To Reproduce: I used jiant 2.2.0, available via pip3 install jiant.

    import json
    from collections import Counter
    import jiant.scripts.download_data.runscript as downloader
    
    # Download Jiant CB
    downloader.download_data(["cb"], "tasks")
    with open("tasks/data/cb/train.jsonl") as f:
        jiant_freqs = Counter([json.loads(line)["label"] for line in f.readlines()])
    print(jiant_freqs.most_common())
    
    >>> [('entailment', 119), ('neutral', 115), ('contradiction', 16)]
    
    # Download official CB from SuperGLUE
    !wget "https://dl.fbaipublicfiles.com/glue/superglue/data/v2/CB.zip"
    !unzip CB.zip
    with open("CB/train.jsonl") as f:
        official_freqs = Counter([json.loads(line)["label"] for line in f.readlines()])
    print(official_freqs.most_common())
    
    >>> [('contradiction', 119), ('entailment', 115), ('neutral', 16)]
    
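    Until the downloader is fixed, one possible workaround (a sketch inferred from the label counts above; verify against the official files before relying on it) is to remap the downloaded labels in place:

    import json

    # Inverse of the swap observed above: jiant 'entailment' is official
    # 'contradiction', 'neutral' is 'entailment', 'contradiction' is 'neutral'.
    fix = {"entailment": "contradiction", "neutral": "entailment", "contradiction": "neutral"}
    with open("tasks/data/cb/train.jsonl") as f:
        examples = [json.loads(line) for line in f]
    for ex in examples:
        ex["label"] = fix[ex["label"]]
    with open("tasks/data/cb/train.jsonl", "w") as f:
        f.writelines(json.dumps(ex) + "\n" for ex in examples)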
    opened by davda54 1
  • Can't instantiate abstract class JiantBartModel with abstract methods normalize_tokenizations


    This error is reported with BART, but not with BERT


    opened by Chang-Hongyang 0
Releases (v2.2.0)
  • v2.2.0(May 10, 2021)

    Making it easy to add a Hugging Face-style Transformers model

    We refactored jiant to make it easier to add a Transformers-style model to the library. Please see the guide to add a model for more details. We added DeBERTa V2 as part of these changes.

    Breaking Changes

    The simple API now uses hf_pretrained_model_name_or_path instead of model_type as an argument. hf_pretrained_model_name_or_path is used as an input to Hugging Face's Auto Classes.
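For illustration, the Auto Classes resolution looks roughly like this (microsoft/deberta-v2-xlarge is just an example Hugging Face model id):

from transformers import AutoModel, AutoTokenizer

name = "microsoft/deberta-v2-xlarge"  # any HF model id or local path
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)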

    Features

    de5437a Merge easy_add_model feature branch (#1309)
    56ceae5 Updating notebooks, removing model_type (#1270)
    723786a Switch export_model to use AutoModel and AutoTokenizer (#1260)
    84f2f5a Adding Acceptability judgment and SentEval tasks (#1271)
    f796e5a improve robustness of the simple runscript (#1307)

    Tests

    4d0f6a9 Add test matrix (#1308)

    Bugfixes

    ee65662 Update README.md
    65888b4 Benchmark script fixes (#1301)
    b4b5de0 Assert spans <= max_seq_len (#1302)
    5ba72f7 axg->axb fix (#1300)
    235f646 MNLI diagnostic example bug (#1294)

    Maintenance

    4ab0c08 Bump lxml from 4.6.2 to 4.6.3 (#1304)
    741ab09 Documentation + cleanup (#1282)
    dbbb4e6 export_model tweak (#1277)

  • v2.1.4(Jan 8, 2021)

    Tasks

    e9d6c68 Adding download code for ARCT, MCTest, MCTACO, MuTual, and QuAIL (#1258)

    Examples

    a88956a Add edge probing notebook (#1261)

    Bugfixes

    d5e3b2e Prevents computing loss in multigpu when not available (#1257)
    9cfe644 Truncate MCQ inputs from the start and not end (#1256)
    33c4a26 Bump lxml from 4.5.1 to 4.6.2 (#1263)

    Tests

    9a45712 Add test for export_model (#1259)

  • v2.1.3(Jan 1, 2021)

    Tasks

    1ab34a4 Adding ROPES, RACE tasks (#1234)

    Bugfixes

    ce62495 Add simple save model test + fix (#1227)
    e5fbea4 fix create examples wsc (#1247)
    605d794 committing quail fix (#1249)
    192d6b5 Namespace default cache dir by model_type. (#1246)

  • v2.1.2(Dec 3, 2020)

    Bugfixes

    838cdd2 SQuAD tokenization update (#1232)
    5329c7e Winogrande Task Property (#1229)
    50b0116 Further fix for encoder_only (#1242)
    1f66050 Allow force_overwrite to override "done" condition (#1241)
    59438ed change checkpoint save default (#1233)
    e2e85c9 guids_fix (#1225)
    74c6ba0 fix notebooks path (#1231)

    Features

    18c41fc Load only encoder weights (#1240)

    Documentation

    711c6c8 Introduction filename correction (#1239)
    f340d04 minor typo fix (#1238)

  • v2.1.1(Nov 8, 2020)

    Added Tasks

    e4f1c4b Winogrande (#1203)
    e7eefc6 Fever NLI task and data downloader (#1215)
    cb601cf Quail (#1208)
    c00360f MCTest and MCTACO (#1197)
    76e2826 Mcscript Task Property (#1219)

    Documentation

    9892766 Add docs for adding tasks to data downloader (#1221)

    Bugfixes

    cb7ee4a Fix save-last behavior (#1220)

    Cleanup

    0cc8cbb use task_name instead of task.name (#1224)

  • v2.1.0(Oct 27, 2020)

    Added Tasks

    442a2b0 - piqa (#1216) (William Huang)
    c535e78 - Natural Questions (MRQA), NewsQA, Quoref (#1207) (Jason Phang)
    d1b14c1 - mcscript (#1152) (William Huang)
    da7550d - Adding arc_easy, arc_challenge, mutual, mutual_plus (#1206) (yzpang)
    f4bca4e - add arct task doc documentation (#1154) (jeswan)
    b23c0f7 - arct (#1151) (William Huang)
    58beb8f - anli download (#1147) (Jason Phang)

    Features

    0b3dff5 - Adding ability to resume run in Simple API (#1205) (Jason Phang)

    Notebooks

    b81254b - Fix git clone in example notebooks (#1155) (jeswan)

    Bugfixes

    aa4d111 - Bugfix for single-task configurator (#1143) (Jason Phang)
    14fac1c - Fix colab link in README (#1142) (Jonathan Chang)
    02bb070 - setup.py fix (#1141) (Jason Phang)
    7d1cc29 - Adding SingleTaskConfigurator, some cleanup (#1135) (Jason Phang)

    Maintenance

    bump torch>=1.5.0, bump transformers==3.1.0, notebook installation switched to local pip install (#1218) (jeswan)
    b20f30a - resolve_is_lower_case fix (#1204) (Jason Phang)
    5724fee - Adjust case for span prediction (#1201) (Jason Phang)
    c3387a3 - nlp to datasets (#1137) (Jason Phang)
    04bbb39 - update issue numbers from jiant-dev to jiant transfer (#1196) (jeswan)
    392976c - Task tweaks (#1149) (Jason Phang)
    82ed396 - use hidden_size (#1148) (Jason Phang)

  • v2.0.0(Oct 8, 2020)

  • v1.3.2(Apr 24, 2020)

    Highlighted changes:

    New Tasks

    • Masked Language Modeling (for RoBERTa and ALBERT) (#1030) (@pruksmhc and @phu-pmh)
    • Sentence Order Prediction (for ALBERT) (#1061) (@pruksmhc and @phu-pmh)

    Minor changes and fixes

    • Fixed target training data fraction bug where target training data fraction was not reflected in logging and scheduler (#1071) (@HaokunLiu)
    • Fixed target train data fraction overwriting pretrain data fraction bug (#1070) (@pyeres)
    • Added CONTRIBUTING.md (#1036, #1038) (@pyeres)

    Dependency changes

    • transformers 2.3.0 → transformers 2.6.0 (#1059) (@zphang)
  • v1.3.1(Mar 10, 2020)

    Minor changes and fixes

    • Fixed QAMR and QASRL tasks (#1019) (@pyeres)
    • Changed tasks names using underscores to use hyphens (#1016) (@HaokunLiu)
    • Fixed cola inference script (#1023) (@zphang)
    • Re-ordered GPT-style inputs for consistency with GPT paper (#1031) (@HaokunLiu)
    • Fixed edge probing and Senteval tasks (#1025) (@pruksmhc)
  • v1.3.0(Feb 26, 2020)

    Highlighted changes:

    New Tasks

    • QA-SRL (#716) (@zphang)
    • QAMR (#932) (@zphang)
    • Abductive NLI (aNLI) (#922) (@zphang)
    • SocialIQA (#924) (@pruksmhc)
    • SentEval Probing (#926) (@pruksmhc)
    • SciTail (#943) (@phu-pmh)
    • CommonsenseQA (#942) (@HaokunLiu)
    • HellaSwag (#942) (@HaokunLiu)
    • Acceptability probing (#949) (@HaokunLiu)
    • Adversarial NLI (#966) (@pyeres)
    • Two-class MNLI variant (#976) (@sleepinyourhat)
    • WinoGrande (#996) (@HaokunLiu)

    New Models

    • ALBERT (#990) (@HaokunLiu)

    New Features

    • Faster retokenization (#935) (@pruksmhc)
    • Gradient accumulation option (#980) (@pyeres)
    • Full/stable data-parallel multi-GPU support (#992) (@pruksmhc)

    Minor changes and fixes

    • Fixed bug in restoring checkpoints in multi-GPU mode (#928) (@pruksmhc)
    • Fixed bugs in RoBERTa retokenization (#982) (@HaokunLiu) and ids (#959) (@njjiang)
    • Fixed load_target_train_checkpoint with mixing setting (#960) (@pruksmhc)
    • Fixed bug in CCG loss function that artificially reduced accuracy (#948) (@HaokunLiu)
    • Fixed label parsing for QQP (#956) (@zphang)
    • Updated CoLA inference script (#931) (@zphang)

    Dependency changes

    • PyTorch 1.0.0 → 1.1.0 (#965) (@pyeres)
    • Numpy 1.14.5 → 1.15.0 (#965) (@pyeres)
    • pytorch-transformers 1.2.0 → transformers 2.3.0 (#990) (@HaokunLiu)
  • v1.2.1(Sep 23, 2019)

  • v1.2.0(Sep 16, 2019)

    Highlighted changes:

    • Add support for RoBERTa, XLM, and GPT-2 via pytorch_transformers 1.2.
• Add support for pip installation (and moved the body of main.py and the config directory to accommodate that change).
    • Fix a bug that produced invalid micro/macro average scores during validation.

    Minor changes:

    • Refactor old GPT (v1) implementation to use pytorch_transformers.
    • Make the code that adds git status information to logs more robust.
    • Minor cleanup to data loading and to MNLI data handling logic.
    • Fix a short-lived bug invalidating hypothesis-only MNLI results.
    • Restore (partial) support for sequence-to-sequence tasks, though with no fully supported demonstration tasks in place yet.

    Dependency changes:

    • Updated requirement pytorch_transformers to 1.2.0.
    • Updated requirement to NLTK 3.4.5 to avoid a potential security issue.
  • v1.1.0(Aug 15, 2019)

    We expect another release within a week or two that will add support for RoBERTa (see #890), but this is a quick intermediate release now that XLNet support is stable/working.

    Highlighted changes:

    • Full support for XLNet and the whole-word-masking variants of BERT.
    • Many small improvements to Google Cloud Platform/Kubernetes/Docker support.
    • Add small but handy option to automatically delete checkpoints when a job finishes.
    • max_vals is now used when computing warmup time with optimizers that use warmup.
    • New auto option for tokenizer chooses an appropriate tokenizer for any given input module.
    • Some internal changes to how <SOS>/<EOS>/[SEP]/[CLS] tokens are handled during task preprocessing. This will require small changes to custom task code along the lines of what is seen in #845.

    Dependency changes:

    • AllenNLP 0.8.4 now required
    • pytorch_transformers 1.0 now required when using BERT or XLNet.

    Warnings:

    • Upgrading to 1.1 will break existing checkpoints for BERT-based models.
  • v1.0.1(Jul 12, 2019)

  • v1.0.0(Jul 10, 2019)

    The first stable release of jiant.

    Highlighted changes:

    • Support for the SuperGLUE v2.0 set of tasks, including all the baselines discussed in the SuperGLUE paper.
    • A simpler and more standard code structure.
    • Cleaner, more-readable logs.
    • Simplified logic for checkpointing and evaluation, with fewer differences between pretraining and target task training.
    • Fewer deprecated/unsupported modules.
    • Many small bug fixes and improvements to errors and warnings.

    Dependency changes:

    • Upgrade to AllenNLP 0.8.4, which adds the option to use the GitHub development version of pytorch-pretrained-bert, and with it, the whole-word-masking variants of BERT.

    Warnings:

    • Upgrading from 0.9 to 1.0 will break most older model checkpoints and cached preprocessed data.
  • r2(Jun 10, 2019)

  • v0.9.1(May 23, 2019)

  • v0.9.0(May 6, 2019)

    The initial work-in-progress release coinciding with the launch of SuperGLUE.

    Highlights:

    We currently support two-phase training (pretraining and target task training) using various shared encoders, including:

    • BERT
    • OpenAI GPT
    • Plain Transformer
    • Ordered Neurons (ON-LSTM) Grammar Induction Model
    • PRPN Grammar Induction Model

    We also have support for SuperGLUE baselines, sentence encoder probing experiments, and STILTS-style training.

    Examples

They can be found at https://github.com/nyu-mll/jiant/tree/master/config/examples

Owner

ML² AT CILVR
The Machine Learning for Language Group at NYU CILVR