Ecco is a Python library for exploring and explaining Natural Language Processing models using interactive visualizations.

Overview

Ecco provides multiple interfaces that aid in explaining and building intuition for Transformer-based language models. Read: Interfaces for Explaining Transformer Language Models.

Ecco runs inside Jupyter notebooks. It is built on top of PyTorch and Hugging Face's transformers library.

Ecco is not concerned with training or fine-tuning models, only with exploring and understanding existing pre-trained models. The library is currently an alpha release of a research project. You're welcome to contribute to make it better!

Documentation: ecco.readthedocs.io
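
A minimal usage sketch (the model name is illustrative, and lm.generate with these arguments appears in an example further down this page; consult the documentation for the exact API of your Ecco version):

    import ecco

    # Load a pre-trained Hugging Face model, wrapped with Ecco's tooling
    lm = ecco.from_pretrained('distilgpt2')

    # Generate text while capturing the data the visualizations need
    text = "The countries of the European Union are:\n1. Austria\n2. Belgium\n3. Bulgaria\n4."
    output = lm.generate(text, generate=20, do_sample=True)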

Features

  • Support for a wide variety of language models (GPT2, BERT, RoBERTa, T5, T0, and others).
  • Ability to add your own local models (if they're based on Hugging Face pytorch models).
  • Feature attribution (IntegratedGradients, Saliency, InputXGradient, DeepLift, DeepLiftShap, GuidedBackprop, GuidedGradCam, Deconvolution, and LRP via Captum)
  • Capture neuron activations in the FFNN layer in the Transformer block
  • Identify and visualize neuron activation patterns (via Non-negative Matrix Factorization)
  • Examine neuron activations via comparisons of activation spaces using SVCCA, PWCCA, and CKA
  • Visualizations for:
    • Evolution of processing a token through the layers of the model (Logit lens)
    • Candidate output tokens and their probabilities (at each layer in the model)

Examples:

What is the sentiment of this film review?

Use a large language model (T5 in this case) to detect text sentiment. In addition to the sentiment, see the tokens the model broke the text into (which can help debug some edge cases).
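
A sketch of what this can look like (the checkpoint name and prompt format here are assumptions for illustration, not the exact notebook code):

    import ecco

    # T5 casts classification as text-to-text: the sentiment label is generated as text
    lm = ecco.from_pretrained('t5-small')
    review = "sst2 sentence: This movie was a complete waste of time."
    output = lm.generate(review, generate=2)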

Which words in this review led the model to classify its sentiment as "negative"?

Feature attribution using Integrated Gradients helps you explore model decisions. In this case, switching "weakness" to "inclination" allows the model to correctly switch the prediction to positive.
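
A hedged sketch of requesting Integrated Gradients attributions (the keyword names follow recent Ecco releases and are assumptions here; older versions exposed output.saliency() instead):

    # Compute Integrated Gradients attributions (via Captum) during generation
    output = lm.generate(review, generate=2, attribution=['ig'])

    # Visualize how much each input token contributed to the prediction
    output.primary_attributions(attr_method='ig')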

Explore the world knowledge of GPT models by posing fill-in-the blank questions.

Asking GPT2 where Heathrow Airport is

Does GPT2 know where Heathrow Airport is? Yes. It does.
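
A sketch of such a probe (the prompt mirrors the screenshot above; generate=1 completes a single token):

    import ecco

    lm = ecco.from_pretrained('gpt2')
    output = lm.generate("Heathrow airport is located in the city of", generate=1)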

What other cities/words did the model consider in addition to London?

The model also considered Birmingham and Manchester

Visualize the candidate output tokens and their probability scores.
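
A sketch of this view (layer_predictions follows Ecco's notebooks; the position value is illustrative):

    # Top candidate tokens and their probabilities at each layer,
    # at the position where "London" was generated
    output.layer_predictions(position=8, topk=10)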

Which input words led it to think of London?

Asking GPT2 where Heathrow Airport is

At which layers did the model gather confidence that London is the right answer?

The ranking of the token at each layer; layer 11 ranks it #1

The model chose London by giving it the highest probability (ranking it #1) after the last layer in the model. How much did each layer contribute to increasing the ranking of London? This is a logit lens visualization that helps explore the activity of the model's different layers.
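
A sketch of the two ranking views (rankings_watch is assumed to expect token ids, and lm.tokenizer to expose the underlying Hugging Face tokenizer; the position value is illustrative):

    # How each layer ranked the token that was ultimately chosen
    output.rankings()

    # Watch how specific candidate tokens rank across the layers
    candidates = [lm.tokenizer.encode(' ' + city)[0]
                  for city in ['London', 'Birmingham', 'Manchester']]
    output.rankings_watch(watch=candidates, position=8)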

What are the patterns in BERT neuron activation when it processes a piece of text?

Colored line graphs on the left, a piece of text on the right. The line graphs indicate the activation of BERT neuron groups in response to the text

A group of neurons in BERT tend to fire in response to commas and other punctuation. Other groups of neurons tend to fire in response to pronouns. Use this visualization to factorize neuron activity in individual FFNN layers or in the entire model.
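
A sketch of producing this view (assuming activation capture is enabled at load time; the method names follow Ecco's notebooks):

    import ecco

    # Capture FFNN activations while the model processes the text
    lm = ecco.from_pretrained('bert-base-uncased', activations=True)

    text = "A group of neurons in BERT tend to fire in response to commas."
    inputs = lm.tokenizer([text], return_tensors='pt')
    output = lm(inputs)

    # Factorize the neuron activations into a small number of components
    nmf = output.run_nmf(n_components=8)
    nmf.explore()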

Read the paper:

Ecco: An Open Source Library for the Explainability of Transformer Language Models. Association for Computational Linguistics (ACL) System Demonstrations, 2021.

Tutorials

How-to Guides

API Reference

The API reference and the architecture page explain Ecco's components and how they work together.

Gallery & Examples

Predicted Tokens: View the model's prediction for the next token (with probability scores). See how the predictions evolved through the model's layers. [Notebook] [Colab]


Rankings across layers: After the model picks an output token, look back at how each layer ranked that token. [Notebook] [Colab]


Layer Predictions: Compare the rankings of multiple tokens as candidates for a certain position in the sequence. [Notebook] [Colab]


Primary Attributions: How much did each input token contribute to producing the output token? [Notebook] [Colab]


Detailed Primary Attributions: See more precise input attribution values using the detailed view. [Notebook] [Colab]


Neuron Activation Analysis: Examine underlying patterns in neuron activations using non-negative matrix factorization. [Notebook] [Colab]

Getting Help

Having trouble?

  • The Discussion board might have some relevant information. If not, you can post your questions there.
  • Report bugs at Ecco's issue tracker

BibTeX for citations:

@inproceedings{alammar-2021-ecco,
    title = "Ecco: An Open Source Library for the Explainability of Transformer Language Models",
    author = "Alammar, J",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations",
    year = "2021",
    publisher = "Association for Computational Linguistics",
}
Comments
  • Support for T5-like Seq2SeqLM

    Hello, I was wondering if there are any plans to explicitly support encoder-decoder models like T5. Although T5 was not pre-trained with an auto-regressive LM objective, it is a pretty good candidate for ecco's generate method. I tried running T5 as it was listed in model-config.yaml but soon ran into issues, because the current implementation is very much suited to GPT-like models.

    I made some changes on a fork to get attribution working, but not sure if I did it correctly https://colab.research.google.com/drive/1zahIWgOCySoQXQkAaEAORZ5DID11qpkH?usp=sharing https://github.com/chiragjn/ecco/tree/t5_exp

    I would love to contribute to add support with some help, especially on the overall implementation design

    opened by chiragjn 8
  • Adds a model config field use_causal_lm and config entries for gpt-neo

    Adding gpt-neo models to model-config.yaml failed because the model needs to be loaded using AutoModelForCausalLM, but init identified such models by looking for gpt2 in the name. A TODO comment in init mentioned using config instead. I refactored config loading slightly to enable this - not sure if that is the direction you intended or not.

    opened by stprior 8
  • Add a `conda` install option for `ecco`

    A conda install option for ecco could be helpful for two reasons:

    1. Easy installation and version management with conda.
    2. Any other library that depends on ecco can only be published on the conda-forge channel if ecco is available on conda-forge as well.

    :bulb: I have already started work on this. PR: https://github.com/conda-forge/staged-recipes/pull/17388

    Once the PR gets merged, you will be able to install ecco as:

    conda install -c conda-forge ecco
    

    I will send a PR to update your documentation once the PR gets merged.

    opened by sugatoray 7
  • Add support for PEGASUS model

    I would like to add the support of PEGASUS in model-config.yaml.

    The PEGASUS model is an encoder-decoder type and its implementation is completely inherited from BartForConditionalGeneration, so the config is similar to the BART model's.

    Notes: This is my first time making a pull request on an open-source project, but I hope this helps!

    opened by thomas-chong 6
  • Add support for Integrated Gradients explainability method

    In this PR, @SSamDav and I add support for the IG algorithm, reusing the same visualization plots used for input saliency. We also fix a saliency visualization bug for encoder-decoder models that was not addressed in the previous PR.

    Notes:

    • The generate method became even slower with the IG method. We added an option to choose which attribution method to calculate, but it can be further improved. Maybe the visualization could be coupled with the generation itself.
    • The IG score has a convergence delta error that could be shown in the plot or, for example, be used to change the IG default parameters when a minimum error is not met.
    opened by JoaoLages 5
  • attention head

    Hi @jalammar, I tested some examples with Ecco, and I wanted to know: is it possible to select the attention head, to view the activations for each head and for each layer?

    opened by afcarvallo 5
  • Add support for more attribution methods

    Hi, currently the project seems to rely on grad-norm and grad-x-input to obtain attributions. However, there are other, arguably better (as discussed in recent work) methods to obtain saliency maps. Integrating them into this project would also provide a good way to compare them on the same input examples.

    Some of these methods, off the top of my head, are integrated gradients, gradient SHAP, and LIME. Perhaps support for visualizing the attention map from the model being interpreted itself could also be added. Methods based on feature ablation are also possible, but they might need more work to integrate.

    There is support for these aforementioned methods in Captum, but it takes effort to get them working for NLP tasks, especially those based on language modeling. Thus, I feel this would be a useful addition here.

    enhancement help wanted 
    opened by RachitBansal 5
  • token prefix in roberta model?

    Trying to use a custom-trained RoBERTa model by loading the config file, but getting an error that the token prefix is not present in the config. Any idea how to fix it?

    opened by sarthusarth 4
  • output.saliency() displays nothing

    I am trying to visualize saliency maps from a custom GPT model. Since I am concerned only with saliency maps, I just do the following:

    out = OutputSeq(token_ids = input_ids, n_input_tokens = n_input_tokens, tokens = tokens, attribution = attr)
    out.saliency()
    

    I get no errors and nothing is displayed in the Jupyter notebook, but when I open Chrome's JavaScript console, I see the following:

    
    (unknown) Ecco initialize.

        l                 @ storage.googleapis.c…ust=1610606118793:1
        (anonymous)       @ storage.googleapis.c…ust=1610606118793:1
        autoTextColor     @ storage.googleapis.c…ust=1610606118793:1
        (anonymous)       @ storage.googleapis.c…ust=1610606118793:1
        (anonymous)       @ d3js.org/d3.v5.min.j…ust=1610606118793:2
        each              @ d3js.org/d3.v5.min.j…ust=1610606118793:2
        style             @ d3js.org/d3.v5.min.j…ust=1610606118793:2
        enter             @ storage.googleapis.c…ust=1610606118793:1
        (anonymous)       @ storage.googleapis.c…ust=1610606118793:1
        join              @ d3js.org/d3.v5.min.j…ust=1610606118793:2
        setupTokenBoxes   @ storage.googleapis.c…ust=1610606118793:1
        init              @ storage.googleapis.c…ust=1610606118793:1
        eval
        execCb            @ require.js:1693
        check             @ require.js:881
        enable            @ require.js:1173
        init              @ require.js:786
        (anonymous)       @ require.js:1457

    DevTools failed to load SourceMap: Could not load content for http://localhost:8888/static/notebook/js/main.min.js.map: HTTP error: status code 404, net::ERR_HTTP_RESPONSE_CODE_FAILURE
    DevTools failed to load SourceMap: Could not load content for https://storage.googleapis.com/wandb-cdn/production/d4e2434e6/raven.min.js.map: HTTP error: status code 404, net::ERR_HTTP_RESPONSE_CODE_FAILURE
    

    How do I resolve this issue? By the way, I am running this notebook by SSHing into my institute's remote machine.

    opened by VirajBagal 4
  • Tell pip to install from setup.py

    Forces pip install -r requirements.txt to install the same package versions specified in setup.py.

    For details, see this comment.

    Confirmed that tests pass locally after merging this and #13 . (Since #13 fixes tests, they won't pass until it is merged.)

    opened by nostalgebraist 4
  • Memory management and tweaks

    Hello Jay, thanks for all your work on GPT interpretation!

    This PR contains changes I made in a personal fork while attempting to use ecco with a 1.5B-parameter GPT-2 model. There are 3 kinds of changes:

    1. Attempts to plug memory leaks / otherwise reduce memory footprint
    2. Bug fixes
    3. Usability tweaks and new features

    In retrospect, I wish I had made distinct branches for these 3 types of change, as together they now make up a pretty large PR. I can still go back and do that, if (say) you want to merge the bug fixes without the other ones.


    Context: I am using ecco on a 1.5B-parameter GPT-2 model, using a Tesla T4 GPU (~15GB memory) on Colab.

    I am using version 3.4.0 of transformers, which is the max version consistent with ecco's setup.py and hence the one I got on installation.

    1. Memory management

    Running lm.generate with this large model, I ran out of GPU memory. This surprised me, because memory has not been an issue for me using the same model in tensorflow.

    After looking into it, I found a few places where use of GPU memory could be lowered:

    • past, which we don't use here, was still being computed on each step.
      • More importantly, Python garbage collection was not (as far as I could tell) freeing the values of past produced on previous steps, so generating N tokens required enough memory to store the N pasts emitted from steps 1, 2, ..., N.
      • Mitigation: pass use_cache=False to the model's forward pass, so it doesn't return pasts
    • Saliency calculations all used retain_graph=True, so the backward graphs were never cleared.
      • Mitigation: when we do several gradient calculations per step, pass retain_graph=False to the last one
    • hidden_states were stored on the GPU during generation.
      • They don't need to be on the GPU at that time (because they aren't used in generation).
      • And, since we have a low CPU memory footprint otherwise, we have plenty of CPU memory to store them in.
      • Mitigation: call .cpu() on hidden states emitted from each step. If we want to calculate with them later on, move them back to self.device.
    • (Minor) Memory allocated for logit matrices from each step was not freed after sampling
      • Mitigation: output['logits']=None after rolling a sample

    With these changes, I can run lm.generate for many hundreds of steps, where previously I could only manage a small number, maybe ~10.
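
    For illustration, the use_cache and hidden-state mitigations look roughly like this against a plain Hugging Face model (a sketch under the assumptions above, not the actual patch):

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        device = "cuda" if torch.cuda.is_available() else "cpu"
        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

        input_ids = tokenizer("Hello", return_tensors="pt").input_ids.to(device)

        # use_cache=False: skip computing and storing `past`, which we never read back
        outputs = model(input_ids, use_cache=False, output_hidden_states=True)

        # Park hidden states in CPU memory; move them back to `device` only when needed
        hidden_states = [h.detach().cpu() for h in outputs.hidden_states]

        # For the retain_graph mitigation, when running several backward passes per
        # step, retain the graph on all but the last one so it can finally be freed:
        #   loss_a.backward(retain_graph=True)
        #   loss_b.backward()  # retain_graph defaults to False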


    2. Bug fixes

    • activations_dict_to_array would fail in the edge case where we only have a single token in the prompt.
      • Issue: np.squeeze would wrongly eliminate the position axis (because its size was 1).
      • Mitigation: use np.concatenate, which doesn't add an unwanted singleton dimension, so we don't have to squeeze
    • top-p sampling did not work
      • Issue: top_k_top_p_filtering apparently expects a position axis in its input, even if that axis only has length 1
      • Mitigation: replace [-1, :] with [-1:, :] and then squeeze after rolling a sample
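
    An illustrative reproduction of the squeeze pitfall (not the library's actual code; the shapes are hypothetical):

        import numpy as np

        # With a single-token prompt, the position axis has length 1
        acts = np.zeros((768, 1))  # (neurons, positions)

        print(np.squeeze(acts).shape)                 # (768,): position axis wrongly dropped
        print(np.concatenate([acts], axis=-1).shape)  # (768, 1): position axis preserved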

    3. Usability tweaks and new features

    • Added an option to not track hidden_states. This feels consistent with the way you can choose whether or not to track other things (activations, attn).
      • To help this work properly, switched from position-based indexing into the CausalLMOutputWithPast objects to key lookup, so we're robust to changes in the length/order of these objects.
    • Added the option to only track hidden states for a user-defined subset of layers, through the new kwarg collect_activations_layer_nums.
      • This is valuable with a large model where you may be only interested in a specific layer, and storing activations from all layers has high memory cost.
      • NMF now takes this kwarg and (if not None) uses it to map between row indices in activations and actual layer numbers. For example, if we are tracking layers 7 and 23, we will have an activation matrix with 2 rows. If passed from_layer=7, to_layer=8, we should retrieve the row slice [:1, :], not [7:8, :].
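
    A hypothetical sketch of that row mapping:

        # Tracked layers map to consecutive rows of the activations matrix
        tracked_layers = [7, 23]

        def layer_rows(from_layer, to_layer):
            # Select rows whose layer number falls in [from_layer, to_layer)
            return [i for i, n in enumerate(tracked_layers) if from_layer <= n < to_layer]

        print(layer_rows(7, 8))  # [0], i.e. the row slice [:1, :]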

    I realize this PR is unwieldy -- I just wanted to get my changes up in some form, since at least some of them seemed unambiguously helpful (bug fixes).

    Let me know if you want me to break it down into smaller pieces, or if it needs other work, or if it is generally unhelpful for your goals, or whatever.

    Did not run tox tests because I could not get them to run properly on my machine, even after downloading the tox.ini from one of the CI-related branches.

    opened by nostalgebraist 4
  • AttributeError: 'OutputSeq' object has no attribute 'saliency'

    captum 0.5.0, torch 1.13.0+cu117

    Language_Models_and_Ecco_PyData_Khobar.ipynb

    text= "The countries of the European Union are:\n1. Austria\n2. Belgium\n3. Bulgaria\n4."
    output_3 = lm.generate(text, generate=20, do_sample=True)
    output_3.saliency()
    

    AttributeError                 Traceback (most recent call last)
    Cell In [13], line 1
    ----> 1 output_3.saliency()

    AttributeError: 'OutputSeq' object has no attribute 'saliency'

    opened by Claus1 1
  • Rankings_watch displaying wrong sequence

    Hello, I have a problem with the rankings_watch() function. I used a predefined GPT2 model and gave it the input "Today, the weather is". However, in the visualization, only the first token is shown, although the model creates the output correctly.

    Thank you for your help :D

    bug 
    opened by MiriUll 1
  • Running Eccomap for Pre Trained BertForMaskedLM

    Hi, I was trying to run my pretrained model, for which I had used the BertForMaskedLM model class from Hugging Face, but it's giving me this error. Please help me in resolving it. Thanks in advance.

    opened by iamakshay1 1
  • Remove `tokenizer_config` usage from the library

    This config parameter was made to easily package config to send to the JavaScript components. Ecco now handles all tokenization on the Python side to separate the concerns between the Python and JS components. Consequently, this parameter needs to be removed.

    opened by jalammar 0
  • Tokenizer has partial token suffix instead of prefix

    Following your guide for identifying model configuration:

    MODEL_ID = "vinai/bertweet-base"
    
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, normalization=True, use_fast=False)
    
    ids= tokenizer('tokenization')
    ids
    

    returns:

    {'input_ids': [0, 969, 6186, 6680, 2], 'token_type_ids': [0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1]}
    

    Then

    tokenizer.convert_ids_to_tokens(ids['input_ids'])
    

    returns:

    ['<s>', 'to@@', 'ken@@', 'ization', '</s>']
    

    Here I noticed that the tokenizer marks partial tokens with a suffix ('@@') instead of a prefix. Having a suffix instead of a prefix is not configurable in the config.

    opened by guustfranssensEY 1