A simple tool to update bib entries with their official information (e.g., from DBLP or the ACL Anthology).

Overview

Rebiber: A tool for normalizing bibtex with official info.

We often cite papers using their arXiv versions without noting that they have already been PUBLISHED at a conference. These unofficial bib entries may violate submission or camera-ready rules of some conferences. We introduce Rebiber, a simple Python tool that fixes them automatically. It is based on the official conference information from DBLP and the ACL Anthology (for NLP conferences)! You can check the list of supported conferences here. Apart from updating outdated arXiv citations, Rebiber also normalizes citations in a unified (DBLP-style) way, supporting booktitle abbreviation and field selection.

You can use this Google Colab notebook as a simple web demo.

Changelogs

  • 2021.02.08 We now support several useful features: 1) removing certain fields from the output, e.g., "-r url,pages,address"; 2) using abbreviations to shorten booktitle values, e.g., Proceedings of the .* Annual Meeting of the Association for Computational Linguistics --> Proc. of ACL. More examples are here.
  • 2021.01.30 We built a Colab notebook as a simple web demo. link

Installation

pip install rebiber -U
rebiber --update  # update the bib data and the abbr. info 

OR

git clone https://github.com/yuchenlin/rebiber.git
cd rebiber/
pip install -e .

If you would like to use the latest GitHub version with more bug fixes, please use the second installation method.

Usage (v1.1.1)

Normalize your bibtex file with the official conference information:

rebiber -i /path/to/input.bib -o /path/to/output.bib

You can find a pair of example input and output files in rebiber/example_input.bib and rebiber/example_output.bib.

argument usage (a combined example follows this list)
-i or --input_bib. The path to the input bib file that you want to update.
-o or --output_bib. The path to the output bib file to save. If you don't specify -o, it defaults to the same path as -i (the input file is updated in place).
-r or --remove. A comma-separated list of field names that you want to remove, such as "-r pages,editor,volume,month,url,biburl,address,publisher,bibsource,timestamp,doi". Empty by default.
-s or --shorten. A bool argument that is "False" by default, used for replacing booktitle values with the abbreviations in -a. Used as -s True.
-d or --deduplicate. A bool argument that is "True" by default, used for removing duplicate bib entries that share the same key. Used as -d True.
-l or --bib_list. The path to the list of bib json files to be loaded. Check rebiber/bib_list.txt for the default file. Usually you don't need to set this argument.
-a or --abbr_tsv. The list of conference abbreviation data. Check rebiber/abbr.tsv for the default file. Usually you don't need to set this argument.
-u or --update. Update the local bib-related data with the latest GitHub version.
-v or --version. Print the version of the current Rebiber.
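
For example, a command that combines several of these options (the paths are placeholders) could look like this:

rebiber -i /path/to/input.bib -o /path/to/output.bib -r pages,address,url -s True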

Example Input and Output

An example input entry with the arXiv information (from Google Scholar or somewhere):

@article{lin2020birds,
	title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},
	author={Lin, Bill Yuchen and Lee, Seyeon and Khanna, Rahul and Ren, Xiang},
	journal={arXiv preprint arXiv:2005.00683},
	year={2020}
}

An example normalized output entry with the official information:

@inproceedings{lin2020birds,
    title = "{B}irds have four legs?! {N}umer{S}ense: {P}robing {N}umerical {C}ommonsense {K}nowledge of {P}re-{T}rained {L}anguage {M}odels",
    author = "Lin, Bill Yuchen  and
      Lee, Seyeon  and
      Khanna, Rahul  and
      Ren, Xiang",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-main.557",
    doi = "10.18653/v1/2020.emnlp-main.557",
    pages = "6862--6868",
}

Supported Conferences

The bib_list.txt file contains a list of converted json files of the official bib data. In this repo, we now support the full ACL Anthology, i.e., all papers published at *CL conferences (ACL, EMNLP, NAACL, etc.) as well as their workshops. We also support any conference proceedings that can be downloaded from DBLP, for example, ICLR2020.

The following conferences are supported, and their bib/json files are in our data folder. You can turn each item on/off in bib_list.txt. Please feel free to create a PR to add new conferences following this!

Name Years
ACL Anthology (until 2021-01)
AAAI 2010 -- 2020
AISTATS 2013 -- 2020
ALENEX 2010 -- 2020
ASONAM 2010 -- 2019
BigDataConf 2013 -- 2019
BMVC 2010 -- 2020
CHI 2010 -- 2020
CIDR 2009 -- 2020
CIKM 2010 -- 2020
COLT 2000 -- 2020
CVPR 2000 -- 2020
ICASSP 2015 -- 2020
ICCV 2003 -- 2019
ICLR 2013 -- 2020
ICML 2000 -- 2020
IJCAI 2011 -- 2020
KDD 2010 -- 2020
MLSys 2019 -- 2020
MM 2016 -- 2020
NeurIPS 2000 -- 2020
RECSYS 2010 -- 2020
SDM 2010 -- 2020
SIGIR 2010 -- 2020
SIGMOD 2010 -- 2020
SODA 2010 -- 2020
STOC 2010 -- 2020
UAI 2010 -- 2020
WSDM 2008 -- 2020
WWW (The Web Conf) 2001 -- 2020

Thanks to Anton Tsitsulin for his great work on collecting such a complete set of bib files!

Adding a new conference

You can manually add any conference from DBLP by downloading its bib files into our raw_data folder and running the prepared script add_conf.sh.

Take ICLR2020 and ICLR2019 as an example:

  • Step 1: Go to DBLP
  • Step 2: Download the bib files and put them here as raw_data/iclr2020.bib and raw_data/iclr2019.bib (names should follow the format {conf_name}{year}.bib)
  • Step 3: Run script
bash add_conf.sh iclr 2019 2020

Contact

Please email [email protected] or create a GitHub issue here if you have any questions or suggestions.

Comments
  • Some references are filtered by `load_bib_file`

    Some references are filtered by `load_bib_file`

    It's a great tool, but when I try to convert my .bib file, which is generated by the application BibDesk, some references are filtered out. Here is a minimal example of my bib file.

    @inproceedings{zhang2019heterogeneous,
            author = {Zhang, Chuxu and Song, Dongjin and Huang, Chao and Swami, Ananthram and Chawla, Nitesh V},
            booktitle = {Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
            date-added = {2021-04-03 01:39:20 +0800},
            date-modified = {2021-04-03 01:44:13 +0800},
            keywords = {Recommender system, Graph Neural Network},
            pages = {793--803},
            title = {Heterogeneous graph neural network},
            year = {2019},
            Bdsk-Url-1 = {https://doi.org/10.1145/3292500.3330961}}
    

    I think this is due to load_bib_file. The last line of this reference contains {, so load_bib_file skipped this reference.

    However, BibtexParser can recognize this kind of bib file.
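
    As a quick sanity check (a minimal sketch, assuming the entry above is stored in a Python string named bib_str), bibtexparser parses it without dropping the entry:

    import bibtexparser

    # bib_str holds the BibDesk entry shown above, closing braces and all.
    db = bibtexparser.loads(bib_str)
    print(len(db.entries))  # 1 -- the entry is parsed rather than filtered out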

    opened by AyanoClarke 8
  • Deleted entry after using rebiber

    Deleted entry after using rebiber

    Hi,

    Thanks for the great tool! I faced an issue where an entry was deleted after using rebiber though. It's this one:

    @article{loon,
      title={Autonomous navigation of stratospheric balloons using reinforcement learning.},
      author={Marc G. Bellemare and Salvatore Candido and P. S. Castro and J. Gong and Marlos C. Machado and Subhodeep Moitra and Sameera S. Ponda and Ziyu Wang},
      journal={Nature},
      year={2020},
      volume={588 7836},
      pages={
              77-82
            }
    }
    

    Do you have any idea what could be wrong?

    opened by RaghuSpaceRajan 4
  • Comments in bib file are transformed into `@comments`

    Comments in bib file are transformed into `@comments`

    I come across two issues here:

    • [ ] Somehow the tool transforms my comments (lines starting with %) in the bib file into @comment{} and places them at the head of the file;
    • [ ] All the bibs are by default organized alphabetically; is there a way (an option) to retain the original order of the bibs (and thus keep the comments where they are)?

    Great tool by the way :)

    opened by boredtylin 4
  • Adding arxiv URL when available

    Adding arxiv URL when available

    When a bib entry is not found, the script now checks whether it is an arXiv entry. If so, the new script adds the field url = {https://arxiv.org/abs/<ID>} to the bibtex entry.
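
    A minimal sketch of the described behavior (assuming the entry is a parsed dict and the ID is taken from the journal field; the names here are illustrative, not the actual PR code):

    import re

    entry = {"journal": "arXiv preprint arXiv:2005.00683"}
    match = re.search(r"arXiv:(\d{4}\.\d{4,5})", entry.get("journal", ""))
    if match:
        # Add the arXiv URL field when no official version was found.
        entry["url"] = "https://arxiv.org/abs/" + match.group(1)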

    help wanted 
    opened by nicola-decao 3
  • Handle @string

    Handle @string

    Nice tool! It seems that currently it doesn't handle @string in BibTeX. Any plan to add this feature?

    Example:

    @string{emnlp = "Empirical Methods in Natural Language Processing (EMNLP)"}
    
    @inproceedings{li2020efficient,
     title={Efficient One-Pass End-to-End Entity Linking for Questions},
     author={Li, Belinda Z. and Min, Sewon and Iyer, Srinivasan and Mehdad, Yashar and Yih, Wen-tau},
     booktitle=emnlp,
     year={2020}
    }
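
    Until this is supported, a hypothetical pre-processing sketch (not part of Rebiber) could inline @string macros before running the tool:

    import re

    def expand_strings(bib_text):
        # Collect @string macros of the form @string{name = "value"}.
        macros = dict(re.findall(r'@string\{(\w+)\s*=\s*"([^"]*)"\}', bib_text, flags=re.IGNORECASE))
        # Drop the @string definitions themselves.
        bib_text = re.sub(r'@string\{[^}]*\}\s*', '', bib_text, flags=re.IGNORECASE)
        # Replace bare uses such as booktitle=emnlp with the quoted value.
        for name, value in macros.items():
            bib_text = re.sub(r'=\s*' + re.escape(name) + r'\b', '= {' + value + '}', bib_text)
        return bib_text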
    
    enhancement normalization 
    opened by scottyih 3
  • Incomplete bib entry for conference

    Incomplete bib entry for conference

    Hello, I find that some papers accepted at certain conferences (e.g., AAAI 2020) cannot be indexed. The reason might be that only the first 1,000 entries can be downloaded from DBLP when a conference accepts more than 1,000 papers. Is there any way to address this problem? Thanks very much!

    opened by xiaosen-wang 2
  • Whether to consider providing  Python API ?

    Whether to consider providing Python API ?

    Although a scripting approach is provided, would you consider providing a Python API?

    for example

    import rebiber

    bib_str = """@article{lin2020birds,
    	title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},
    	author={Lin, Bill Yuchen and Lee, Seyeon and Khanna, Rahul and Ren, Xiang},
    	journal={arXiv preprint arXiv:2005.00683},
    	year={2020}
    }"""
    res = rebiber.trans(bib_str)
    print(res)
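
    Until such an API exists, a minimal workaround sketch (using only the documented CLI; the helper name and file handling are placeholders) could call Rebiber through a temporary file:

    import os
    import subprocess
    import tempfile

    def rebiber_string(bib_str):
        # Write the string to a temp .bib file, run the documented CLI, read the result back.
        with tempfile.NamedTemporaryFile("w", suffix=".bib", delete=False) as f:
            f.write(bib_str)
            in_path = f.name
        out_path = in_path.replace(".bib", "_out.bib")
        subprocess.run(["rebiber", "-i", in_path, "-o", out_path], check=True)
        with open(out_path) as f:
            result = f.read()
        os.remove(in_path)
        os.remove(out_path)
        return result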
    
    opened by SinclairCoder 2
  • Modified the demo Colab for copy and paste bib

    Modified the demo Colab for copy and paste bib

    This notebook implements a few modifications to the original Colab demo: (1) it takes pasted BibTeX as a string input (e.g., BibTeX copied from Google Scholar); (2) it prints the processed BibTeX on screen, so it can be copied and pasted into reference management software; (3) it modifies the upload cell; (4) it adds a download cell for downloading the processed outputs.

    opened by Herais 1
  • Update data from recent ICML/ICLR/NeurIPS/AAAI

    Update data from recent ICML/ICLR/NeurIPS/AAAI

    I've added data from NeurIPS 2020/2021, ICML 2021, and ICLR 2021, as well as the missing entries from ICML 2020 and AAAI 2019/2020, since these each have more than 1,000 papers but the current bib files only contain the first 1,000 entries.

    opened by shizhouxing 1
  • Fix bug with parsing arXiv entries with multiline fields

    Fix bug with parsing arXiv entries with multiline fields

    The bug is as follows. When the input is something like:

    @article{sharma19_paral_restar_spider,
      author = {Sharma, Pranay and Kafle, Swatantra and Khanduri, Prashant and Bulusu, Saikiran and Rajawat, Ketan and Varshney, Pramod K.},
      title = {Parallel Restarted Spider -- Communication Efficient Distributed Nonconvex Optimization With Optimal Computation Complexity},
      journal = {arXiv preprint arXiv:1912.06036},
      year = 2019,
      url = {http://arxiv.org/abs/1912.06036v2},
      archivePrefix = {arXiv},
      primaryClass = {math.OC},
    }

    It is changed to:

    @article{sharma19_paral_restar_spider,
      author = {Sharma, Pranay and Kafle, Swatantra and Khanduri, Prashan},
      journal = {ArXiv preprint},
      title = {Parallel Restarted Spider -- Communication Efficien},
      url = {https://arxiv.org/abs/1912.06036},
      volume = {abs/1912.06036},
      year = {2019}
    }

    Author information is lost and the title is shortened to just the first line. This happens because, if the entry is an unmatched arXiv article, the normalizer re-parses the bibtex entry manually, looking for the title, author, and arXiv id. This manual parser fails when any of these fields (title/author/etc.) spans multiple lines. Meanwhile, the external bibtex parsing library used elsewhere already handles multiline fields just fine. This commit simply uses the external parser instead. With this change, we get the correct output:

    @article{sharma19_paral_restar_spider,
      author = {Sharma, Pranay and Kafle, Swatantra and Khanduri, Prashant and Bulusu, Saikiran and Rajawat, Ketan and Varshney, Pramod K.},
      journal = {ArXiv preprint},
      title = {Parallel Restarted Spider -- Communication Efficient Distributed Nonconvex Optimization With Optimal Computation Complexity},
      url = {https://arxiv.org/abs/1912.06036},
      volume = {abs/1912.06036},
      year = {2019}
    }

    opened by rka97 1
  • The way `abbr.tsv` is loaded removes entries from the file

    The way `abbr.tsv` is loaded removes entries from the file

    abbr.tsv has two entries for ICML:

    Proc. of ICML | Proceedings of the .* International Conference on Machine Learning
    Proc. of ICML | Machine Learning, Proceedings of the .* International Conference
    

    but they are not both loaded, because in load_abbr_tsv() a dictionary is used such that the second entry overwrites the first one:

    ls = line.split("|")
    if len(ls) == 2:
        abbr_dict[ls[0].strip()] = ls[1].strip()
    

    I see two solutions here: either don't load the file into a dictionary (use a list of tuples instead), or allow 'or' regex patterns (i.e., (pattern1|pattern2)), which would require using a different character than | to separate the left- and right-hand sides in abbr.tsv.
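
    A minimal sketch of the first option (a list of tuples instead of a dict; this is a suggestion, not the current implementation):

    abbr_pairs = []
    with open("abbr.tsv") as f:
        for line in f:
            ls = line.split("|")
            if len(ls) == 2:
                # Keep every (abbreviation, pattern) pair, even when the same
                # abbreviation appears more than once (as with Proc. of ICML).
                abbr_pairs.append((ls[0].strip(), ls[1].strip()))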

    bug 
    opened by thomkeh 1
  • Question about month

    Question about month

    Hi Yuchen, it seems you try to ignore the month field of a bib entry in is_contain_var() and build_json(). Can you please explain why that is necessary? You also ignore @string entries. Why not just let bibtexparser parse the entire bib file? Thank you!

    opened by christophe-gu 0
  • Add new conference files

    Add new conference files

    Add entries for the following conferences (mostly machine learning conferences):

    ICML 2022, AISTATS 2022, COLT 2021/2022, ICLR 2022, MLSys 2021/2022, NeurIPS 2021, UAI 2021

    opened by rka97 0
  • Hope add a command for batch files execution

    Hope add a command for batch files execution

    I have multiple bib files for several research fields and would like to convert them all in one click. I've written a bat file that automatically processes every bib file in the working directory:

    @echo off
    for %%i in (*.bib) do echo "%%i"
    for %%i in (*.bib) do rebiber -i %%i -o Pub%%i
    pause
    exit
    

    But a built-in command would be easier to use. Would you like to add this?

    opened by Saltsmart 0
  • The booktitle contains too much information

    The booktitle contains too much information

    I found that the booktitle of many papers in DBLP contains too much extra information.

    For example:

    @inproceedings{seo-etal-2016-bidirectional,
     author = {Min Joon Seo and
    Aniruddha Kembhavi and
    Ali Farhadi and
    Hannaneh Hajishirzi},
     bibsource = {dblp computer science bibliography, https://dblp.org},
     biburl = {https://dblp.org/rec/conf/iclr/SeoKFH17.bib},
     booktitle = {5th International Conference on Learning Representations, {ICLR} 2017,
    Toulon, France, April 24-26, 2017, Conference Track Proceedings},
     publisher = {OpenReview.net},
     timestamp = {Thu, 25 Jul 2019 01:00:00 +0200},
     title = {Bidirectional Attention Flow for Machine Comprehension},
     url = {https://openreview.net/forum?id=HJ0UKP9ge},
     year = {2017}
    }
    

    The booktitle here contains the full name and the abbreviation of ICLR, as well as the location and dates. Can you keep only the first part of this information?

    For example: booktitle = "5th International Conference on Learning Representations"
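
    Until there is such an option, a hypothetical post-processing snippet (not a Rebiber feature) could keep only the part before the first comma:

    booktitle = ("5th International Conference on Learning Representations, {ICLR} 2017, "
                 "Toulon, France, April 24-26, 2017, Conference Track Proceedings")
    short = booktitle.split(",")[0]
    print(short)  # 5th International Conference on Learning Representations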

    opened by AliceNEET 1
  • Confusing behavior with some author names

    Confusing behavior with some author names

    The ImageNet paper has its last author listed as Li Fei-Fei, which is how she publishes in general, both on the paper and in the IEEE metadata; their .bib has her as Li Fei-Fei in the author field.

    The DBLP record lists her as Li Fei{-}Fei.

    And yet rebiber/data/cvpr2009.bib.json has her as Fei{-}Fei Li, and so running either through rebiber incorrectly changes it to that ordering.

    The same is true for most (but not all) of her papers in the database. No idea why this would be, since DBLP consistently has her as Li Fei{-}Fei.

    cc @pranav-ust

    opened by djsutherland 1
  • Add LREC + Automatically sync

    Add LREC + Automatically sync

    The LREC Sign Language workshop has this website - https://www.sign-lang.uni-hamburg.de/lrec/index.html

    Which links to two bib files: without abstracts: https://www.sign-lang.uni-hamburg.de/lrec/sign-lang_lrec.bib with abstracts: https://www.sign-lang.uni-hamburg.de/lrec/sign-lang_lrec_a.bib

    While one can add them manually to this repo, I was wondering if there is a setting somewhere to just put this link, so that whenever someone runs an "update" script it re-fetches the bib files and processes them?

    opened by AmitMY 0