Tool for visualizing attention in the Transformer model (BERT, GPT-2, Albert, XLNet, RoBERTa, CTRL, etc.)

BertViz

BertViz is a tool for visualizing attention in the Transformer model, supporting all models from the transformers library (BERT, GPT-2, XLNet, RoBERTa, XLM, CTRL, etc.). It extends the Tensor2Tensor visualization tool by Llion Jones and the transformers library from HuggingFace.

Resources

🕹️ Colab tutorial

✍️ Blog post

📖 Paper

Overview

Head View

The head view visualizes the attention patterns produced by one or more attention heads in a given transformer layer. It is based on the excellent Tensor2Tensor visualization tool by Llion Jones.

🕹 Try out this interactive Colab Notebook with the head view pre-loaded.

head view

The head view supports all models from the Transformers library, including:
BERT: [Notebook] [Colab]
GPT-2: [Notebook] [Colab]
XLNet: [Notebook]
RoBERTa: [Notebook]
XLM: [Notebook]
ALBERT: [Notebook]
DistilBERT: [Notebook] (and others)

Model View

The model view provides a bird's-eye view of attention across all of the model's layers and heads.

🕹 Try out this interactive Colab Notebook with the model view pre-loaded.

model view

The model view supports all models from the Transformers library, including:
BERT: [Notebook] [Colab]
GPT-2: [Notebook] [Colab]
XLNet: [Notebook]
RoBERTa: [Notebook]
XLM: [Notebook]
ALBERT: [Notebook]
DistilBERT: [Notebook] (and others)

Neuron View

The neuron view visualizes the individual neurons in the query and key vectors and shows how they are used to compute attention.

🕹 Try out this interactive Colab Notebook with the neuron view pre-loaded (requires Chrome).

neuron view

The neuron view supports the following three models:
BERT: [Notebook] [Colab]
GPT-2: [Notebook] [Colab]
RoBERTa: [Notebook]

Installation

pip install bertviz

You must also have Jupyter Notebook installed.

Execution

First start Jupyter Notebook:

jupyter notebook

Click New to start a Jupyter notebook, then follow the instructions below.

Head view / model view

First load a Huggingface model, either a pre-trained model as shown below or your own fine-tuned model. Be sure to set output_attentions=True.

from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
inputs = tokenizer.encode("The cat sat on the mat", return_tensors='pt')
outputs = model(inputs)
attention = outputs[-1]  # Output includes attention weights when output_attentions=True
tokens = tokenizer.convert_ids_to_tokens(inputs[0]) 

Then display the returned attention weights using the BertViz head_view or model_view function:

from bertviz import head_view
head_view(attention, tokens)

For more advanced use cases, e.g., specifying a two-sentence input to the model, please refer to the sample notebooks.

Neuron view

The neuron view is invoked differently from the head view and model view because it requires access to the model's query/key vectors, which are not returned through the Huggingface API. It is currently limited to BERT, GPT-2, and RoBERTa.

# Import specialized versions of models (that return query/key vectors)
from bertviz.transformers_neuron_view import BertModel, BertTokenizer

from bertviz.neuron_view import show

model_version = 'bert-base-uncased'
do_lower_case = True
model = BertModel.from_pretrained(model_version, output_attentions=True)
tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
model_type = 'bert'
sentence_a = "The cat sat on the mat"
sentence_b = "The cat lay on the rug"
show(model, model_type, tokenizer, sentence_a, sentence_b, layer=2, head=0)

Running a sample notebook

git clone https://github.com/jessevig/bertviz.git
cd bertviz
jupyter notebook

Click on any of the sample notebooks. You can view a notebook's cached output visualizations by selecting File > Trust Notebook (and confirming in the dialog), or you can run the notebook yourself. Note that the sample notebooks do not cover all Huggingface models, but the code should be similar for those not included.

Advanced options

Pre-selecting layer/head(s)

For the head view, you may pre-select a specific layer and collection of heads, e.g.:

head_view(attention, tokens, layer=2, heads=[3,5])

You may also pre-select a specific layer and single head for the neuron view.

Dark/light mode

The model view and neuron view support dark (default) and light modes. You may turn off dark mode in these views using the display_mode parameter:

model_view(attention, tokens, display_mode="light")

Non-huggingface models

The head_view and model_view functions may technically be used to visualize self-attention for any Transformer model, as long as the attention weights are available and follow the format specified in model_view and head_view (which is the format returned by Huggingface models). In some cases, Tensorflow checkpoints may be loaded as Huggingface models as described in the Huggingface docs.

Limitations

Tool

  • The visualizations work best with shorter inputs (e.g., a single sentence) and may run slowly if the input text is very long, especially for the model view.
  • When running on Colab, some of the visualizations will fail (runtime disconnection) when the input text is long.
  • The neuron view only supports BERT, GPT-2, and RoBERTa. This view requires access to the query and key vectors, which required modifying the model code (see the transformers_neuron_view directory); this has only been done for these three models. Also, only one neuron view may be included per notebook.

Attention as "explanation"

Visualizing attention weights illuminates a particular mechanism within the model architecture but does not necessarily provide a direct explanation for model predictions. See [1], [2], [3].

Authors

Jesse Vig

Citation

When referencing BertViz, please cite this paper:

@inproceedings{vig-2019-multiscale,
    title = "A Multiscale Visualization of Attention in the Transformer Model",
    author = "Vig, Jesse",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P19-3007",
    doi = "10.18653/v1/P19-3007",
    pages = "37--42",
}

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

Acknowledgments

We are grateful to the authors of the following projects, which are incorporated into this repo:

  • Tensor2Tensor visualization tool, by Llion Jones
  • HuggingFace Transformers

Comments
  • How can use bertviz for Bert Questioning Answering??


    Is there any way to see the attention visualization for a BERT Question Answering model? I couldn't see a BertForQuestionAnswering class in bertviz.pytorch_transformers_attn. I have fine-tuned on a QA dataset using hugging-face transformers and wanted to see the visualization for it. Can you suggest a way of doing it?

    opened by bvy007 25
  • encode_plus is not in GPT2 Tokenizer


    It seems you removed encode_plus; what is the successor? All the notebooks include inputs = tokenizer.encode_plus(text, return_tensors='pt', add_special_tokens=True), which is wrong and raises an error.

    opened by mojivalipour 18
  • BertForSequenceClassification.from_pretrained


    Hi, thank you for this great work. Can I use this code to plot my model? (I am using BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2).)

    model_type = 'bert'
    model_version = 'bert-base-uncased'
    do_lower_case = True
    model = model  # (this is my model)
    # tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
    tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
    sentence_a = sentences[0]
    sentence_b = sentences[1]
    call_html()
    show(model, model_type, tokenizer, sentence_a, sentence_b)

    I changed only the model (with my own model) and the sentences, and I got this error. Please help or share any blog that explains how to plot my model:

    AttributeError: 'BertTokenizer' object has no attribute 'cls_token'

    Thank you in advance

    opened by alshahrani2030 17
  • Classification words importance


    Is there any way to use bertviz to visualize the importance of the different words with respect to a given prediction of a classification task (BertClassifier)? Similar to this: https://docs.fast.ai/text.interpret.html#interpret

    Thank you

    enhancement 
    opened by lspataro 14
  • layer and attention are empty.


    I'm using colab but it doesn't work. Help.

    %%javascript
    require.config({
      paths: {
          d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min',
          jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
      }
    });

    def show_head_view(model, tokenizer, sentence_a, sentence_b=None):
        inputs = tokenizer.encode_plus(sentence_a, sentence_b, return_tensors='pt', add_special_tokens=True)
        input_ids = inputs['input_ids']
        if sentence_b:
            token_type_ids = inputs['token_type_ids']
            attention = model(input_ids, token_type_ids=token_type_ids)[-1]
            sentence_b_start = token_type_ids[0].tolist().index(1)
        else:
            attention = model(input_ids)[-1]
            sentence_b_start = None
        input_id_list = input_ids[0].tolist()  # Batch index 0
        tokens = tokenizer.convert_ids_to_tokens(input_id_list)
        head_view(attention, tokens, sentence_b_start)

    model_version = 'bert-base-uncased'
    do_lower_case = True
    model = BertModel.from_pretrained(model_version, output_attentions=True)
    tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)

    sentence_a = "the cat sat on the mat"
    sentence_b = "the cat lay on the rug"

    show_head_view(model, tokenizer, sentence_a, sentence_b)

    opened by gogokre 13
  • Cannot visualize enough input length on T5


    Hi,

    Thank you for this fascinating work.

    I tried to visualize T5 attentions on a high-RAM Colab Notebook with TPU. It runs perfectly when the input is short. However, when the input length is more than a few sentences, the Colab notebook seems to crash. My research project requires that up to several paragraphs be visualized. Do you know if there is a way to make this work?

    Thank you! Yifu (Charles)

    opened by chen-yifu 11
  • Neuron_view Asafaya pretrained model


    Hello,

    We appreciate your assistance with this helpful visualization for Bert. This issue occurs when I use the Asafaya pretrained model for the Arabic language, but not when I use the bert-base-multilingual-cased model.


    Any suggestions!

    best,

    opened by hinnaweali 9
  • Visualise attention for translation


    Hi, first of all, thank you for a great tool.

    My question is: how do I visualize attention for translation? I would like to see how much a particular input word attended to the choice of an output word. I am using a Seq2Seq model, and its output has three types of attention values: encoder, decoder, and cross attentions. https://huggingface.co/transformers/main_classes/output.html#seq2seqlmoutput

    So to plot it, it would mean that words on the right side of the head view come from the translated sequence.

    Is it how seq2seq attention can be visualised?

    Thank you even for directing me in advance. Below is a code I used to get Seq2SeqLMOutput

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
    from bertviz import model_view
    
    tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-pl-en")
    model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-pl-en")
    
    text = "Kot nie przekroczył drogi bo była za szeroka"
    
    inputs = tokenizer.encode(text, return_tensors="pt")
    outputs = model.generate(inputs)
    
    tokens = tokenizer.convert_ids_to_tokens(inputs[0])
    output = model(inputs, decoder_input_ids=inputs, output_attentions=True)
    
    opened by RadoslawB 9
  • Issues in visualizing a fine tuned model


    BertModel fine-tuned for a sequence classification task does not give expected results on visualization. Ideally, the pretrained model should be loaded into BertForSequenceClassification, but that model does not return attention scores for visualization. When loaded into BertModel (layers 0 to 11), I assume the 11th layer (right before the classification layer in BertForSequenceClassification) is the right layer to check the attention distribution. But every word is equally attentive to every other word. I am wondering what the possible reasons can be and how I can fix it. Thanks.

    opened by chikubee 7
  • Showing nothing while no exception caught


    Hi,

    I'm trying to visualize the attention on my fine-tuned model. The demo case works fine for us, but when I feed in the attention output by my fine-tuned model, it just shows blank and no exception is raised. I've tried slicing the attention to the same shape as in the demo case ([1, 12, 8, 8]), but it's still not working.

    opened by Anbrose 6
  • How do I use this tool for my own model?


    Hi, I have trained an XLM model that translates from English to Spanish. A model for this language pair is not available on huggingface's repo. Is there any way to load my saved model?

    opened by akshaysadanand 6
  • There is no result or figure output when running model_view of BART.


    I didn't see any figure when running the code below. Is there something that I missed? Help me, pls.

    from transformers import AutoTokenizer, AutoModel, utils
    from bertviz import model_view
    
    utils.logging.set_verbosity_error()  # Remove line to see warnings
    
    # Initialize tokenizer and model. Be sure to set output_attentions=True.
    # Load BART fine-tuned for summarization on CNN/Daily Mail dataset
    model_name = "facebook/bart-large-cnn"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_attentions=True)
    
    # get encoded input vectors
    encoder_input_ids = tokenizer(utterances, return_tensors="pt", add_special_tokens=True).input_ids
    
    # create ids of encoded input vectors
    decoder_input_ids = tokenizer("Jane made a 9 PM reservation for 6 people tonight at Vegano Resto .", return_tensors="pt", add_special_tokens=True).input_ids
    
    outputs = model(input_ids=encoder_input_ids, decoder_input_ids=decoder_input_ids)
    
    encoder_text = tokenizer.convert_ids_to_tokens(encoder_input_ids[0])
    decoder_text = tokenizer.convert_ids_to_tokens(decoder_input_ids[0])
    
    model_view(
        encoder_attention=outputs.encoder_attentions,
        decoder_attention=outputs.decoder_attentions,
        cross_attention=outputs.cross_attentions,
        encoder_tokens= encoder_text,
        decoder_tokens=decoder_text
    )
    
    opened by Junpliu 3
  • Missing head_view_bart.ipynb


    Hi! While looking through the issues I found that a head_view_bart.ipynb example previously existed in this repo, but now it can only be found through the history of a deleted branch: https://github.com/jessevig/bertviz/blob/b088f44dd169957dbe89019b81243ef5cf5e9dcb/notebooks/head_view_bart.ipynb

    Could you tell us why this example (along with many others) was removed? It works perfectly fine now.

    opened by Serbernari 2
  • Horizontal head view feature


    Hi, thanks for the great visualization tool!

    I'm just wondering whether we could have a feature that renders the head view in the horizontal direction? The reason is that it's more suitable to show the sequence of tokens horizontally for languages like Chinese, Japanese, or Korean.


    In the above example, typical sentences in Chinese run about 60-70 characters, but it already uses a lot of space showing 10 of them in the current head view.

    Thanks again for the great tool!

    enhancement 
    opened by leemengtaiwan 1
  • Saving visualizations


    Thanks for the great tool!

    It would be nice to be able to save the visualizations for specific layers/heads as images. I have not been able to find a spot in the model/head/neuron_view.js file to add a saving function.

    Do you maybe have a suggestion on how to save the visualizations as images?

    Thanks!

    enhancement 
    opened by e-tornike 7
Releases (v1.4.0)
  • v1.4.0(Apr 2, 2022)

  • v1.3.0(Feb 5, 2022)

    • Add axis labels to Model View
    • Make design more consistent between collapsed and expanded Neuron View
    • When filtering layers in Model View, show color of original index
    • Other minor design changes
    • README updates
  • v1.2.0(Jul 31, 2021)

    • Support displaying a subset of layers/heads through the include_layers parameter, to improve performance
    • Fix bug in model view where the thumbnail didn't render properly if taller than the detail view
    • Fix bug in neuron view where not all layers were displayed in some cases
  • v1.1.0(May 8, 2021)
