Lucid library adapted for PyTorch

Overview

Lucent


PyTorch + Lucid = Lucent

The wonderful Lucid library adapted for the wonderful PyTorch!

Lucent is not affiliated with Lucid or OpenAI's Clarity team, although we would love to be! Credit is due to the original Lucid authors; we merely adapted the code for PyTorch, and we take the blame for all issues and bugs found here.

Usage

Lucent is still in a pre-alpha phase and can be installed with the following command:

pip install torch-lucent

In the spirit of Lucid, get up and running with Lucent immediately, thanks to Google's Colab!

You can also clone this repository and run the notebooks locally with Jupyter.

Quickstart

import torch

from lucent.optvis import render
from lucent.modelzoo import inceptionv1

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = inceptionv1(pretrained=True)
model.to(device).eval()

render.render_vis(model, "mixed4a:476")
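The "mixed4a:476" objective string is shorthand for channel 476 of the mixed4a layer. As a rough sketch (using the objectives and param modules that also appear in the issues below; the 224 px image size is an assumption, not a documented default), the same call can be written more explicitly:

from lucent.optvis import render, param, objectives

obj = objectives.channel("mixed4a", 476)   # same target as the "mixed4a:476" string
param_f = lambda: param.image(224)         # image parameterization; 224 px is assumed here
render.render_vis(model, obj, param_f=param_f, show_inline=True)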

Tutorials

Other Notebooks

Here, we have tried to recreate some of the Lucid notebooks! You can also check out the lucent-notebooks repo to clone all the notebooks.

Recommended Readings

Related Talks

Slack

Check out #proj-lucid and #circuits on the Distill slack!

Additional Information

License and Disclaimer

You may use this software under the Apache 2.0 License. See LICENSE.

Comments
  • use custom model?


    Hi, I see it's possible to use models from the modelzoo; is it possible to use a custom trained model? Any documentation or direction would be appreciated.
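
    (Not an official answer, but a minimal sketch: render_vis just takes a torch.nn.Module in eval mode, and the valid layer-name strings can be listed with lucent.modelzoo.util.get_model_layers, which is also referenced in an assertion message further down this page. MyModel and the checkpoint path below are hypothetical placeholders.)

        import torch
        from lucent.optvis import render
        from lucent.modelzoo.util import get_model_layers

        model = MyModel()                                    # hypothetical custom architecture
        model.load_state_dict(torch.load("my_weights.pth"))  # hypothetical checkpoint path
        device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
        model.to(device).eval()                              # eval mode is required for the hooks

        print(get_model_layers(model))            # list the layer names Lucent will accept
        render.render_vis(model, "layer_name:0")  # replace with a real "layer:channel" string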

    opened by dvschultz 5
  • Add activation grids notebook


    Issue #3, reproducing activation grids (https://github.com/tensorflow/lucid/blob/master/notebooks/building-blocks/ActivationGrid.ipynb)

    It's possible to try it here: https://colab.research.google.com/drive/1pEe-KmXeDJcWQYLOHwcMubS69wVVCHLe#scrollTo=xidm-QrXvL2X

    Here are the results so far with inceptionv1 and layer mixed4d:

    • reproduced: https://imgur.com/twmizR4
    • original: https://imgur.com/eaYwEWR

    Some remarks and a question:

    • I added channel_reducer as is from the original repo
    • The default transforms in transform.py produce a different size (due to random scaling) each time they are called; resampling to 224 is then done to get a fixed size. Is that the same in Lucid? I need to debug the original repo a little to be sure, but please let me know if you already know the answer. The reason I ask is that in this specific notebook the cells in the grid are much smaller than 224, so I added an argument "fixed_image_size" to handle the case where we want a fixed image size (after resampling) that is not 224.
    • Since all layers are computed and this commit accepts smaller images, it is possible to hit an exception in higher layers because the image is not big enough. It should be fine as long as the layer we are interested in is computed; I handled this exception.
    opened by mehdidc 5
  • ValueError in Render


    Hi there,

    I am trying to run the tutorial and am running into the following error:

    >>> import torch
    >>> from lucent.optvis import render, param, transform, objectives
    >>> from lucent.modelzoo import inceptionv1
    >>> device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    >>> model = inceptionv1(pretrained=True)
    >>> _ = model.to(device).eval()
    >>> _ = render.render_vis(model, "mixed4a:476", show_inline=True)
    
      0%|                                                                                       | 0/512 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 113, in render_vis
        optimizer.step(closure)
      File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
        return func(*args, **kwargs)
      File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/optim/adam.py", line 66, in step
        loss = closure()
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 97, in closure
        model(transform_f(image_f()))
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 85, in inner
        x = transform(x)
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 75, in inner
        M = kornia.get_rotation_matrix2d(center, angle, scale).to(device)
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/kornia/geometry/transform/imgwarp.py", line 347, in get_rotation_matrix2d
        raise ValueError("Input scale must be a B tensor. Got {}"
    ValueError: Input scale must be a B tensor. Got torch.Size([1, 2])
    

    I am using a conda environment with Python 3.8.5 and pytorch=1.7.0.

    Any help regarding this error would be much appreciated!

    opened by tatkeller 4
  • Utils to show modulename with its repr(); Add Linear weighted activations as objective; Add pretrained GAN as parametrization


    Dear author,

    Thanks so much for implementing Lucid in PyTorch! I really enjoyed using it in my projects that leverage deep neural networks as a way to understand real neurons in visual cortices. In my usage, I want to activate multiple channels together to match the selectivity of a biological neuron or of units in other networks. We can achieve this by adding up the original channel or neuron objectives, but that becomes very inefficient in backprop.

    So here are my 2 cents; in this commit I:

    • Add linearly weighted activations of a channel, neuron, or neuron group as an objective, using tensor operations.
    • Add a function in util.py that outputs module names, to ease usage with custom networks.
    opened by Animadversio 4
  • Generating a batch of optimal stimuli, one for each unit in a layer


    Hi, I was trying to use Lucent to generate optimal stimuli for several units/neurons of a layer in parallel, so I figured I would leverage batch processing. As illustrated in the neuron interaction tutorial notebook, I was passing a sum of objectives to the render.render_vis() function. Here is a toy example of what I want and my approach, with units [10, 20, 30] of layer 'readout_fc':

        tot_objective = (objectives.channel("readout_fc", 10, batch=0)
                         + objectives.channel("readout_fc", 20, batch=1)
                         + objectives.channel("readout_fc", 30, batch=2))
        param_f = lambda: param.image(135, batch=3)
        imgs = render.render_vis(model, tot_objective, param_f=param_f,
                                 preprocess=False, fixed_input_image_size=135)

    The parameter settings work beautifully when I try one unit. 😄 However, I wasn't sure if this is the correct way to approach multiple units in parallel (it does give me separate images for each unit). Also, when the number of units is larger, I was hoping to avoid writing the objectives out individually or running an explicit for loop to build the objective. I tried using reduce as below:

        neurons = [10, 20, 30]
        tot_objective = reduce(
            lambda x, y: x + objectives.channel("readout_fc", y[0], batch=y[1]),
            list(zip(neurons, np.arange(len(neurons)))),
            0)

    Doing so gives me the same image 3 times, so I was wondering if there is something wrong in how I am using the objective function to generate optimal stimuli for multiple units in parallel. Thanks in advance.
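
    (A minimal sketch of one way to build the summed objective programmatically, assuming the objectives.channel(layer, unit, batch=...) and param.image(size, batch=...) signatures used above; whether it also avoids the duplicated-image behaviour would need to be verified.)

        from lucent.optvis import render, param, objectives

        neurons = [10, 20, 30]

        # Accumulate one channel objective per unit, each with its own batch index.
        tot_objective = objectives.channel("readout_fc", neurons[0], batch=0)
        for i, unit in enumerate(neurons[1:], start=1):
            tot_objective += objectives.channel("readout_fc", unit, batch=i)

        # One image slot per unit in the batch.
        param_f = lambda: param.image(135, batch=len(neurons))
        imgs = render.render_vis(model, tot_objective, param_f=param_f,
                                 preprocess=False, fixed_input_image_size=135)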

    opened by arnaghosh 3
  • Q: Do you use the same architecture and weights as Clarity does?


    Hi,

    I am looking for a trainable InceptionV1 model that shares the same weights as the one the Clarity team uses. Reading your code, I've found these lines:

    model_urls = {
        # InceptionV1 model used in Lucid examples, converted by ProGamerGov
        'inceptionv1': 'https://github.com/ProGamerGov/pytorch-old-tensorflow-models/raw/master/inception5h.pth',
    }
    

    Does it mean you're using exactly the same architecture and weights, so your render_vis function can reproduce the same pictures that Clarity has published?

    Thanks!

    opened by gergopool 3
  • Add direction and direction_neuron objectives


    Hi @greentfrapp,

    Thanks so much for making this repo! It has been a great help for me. For my use-case I needed the direction and direction_neuron objectives so I added them into lucent. I also included two demo files but let me know if they should be rolled into a docstring instead. This PR also lays the groundwork to reproduce the activation atlas notebooks from lucid. Would love to hear your thoughts :)
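
    (A minimal usage sketch, assuming the added objective mirrors Lucid's objectives.direction(layer, vector) signature; the exact Lucent signature and the 508-channel dimensionality of mixed4a are assumptions, so see the demo files in this PR for the authoritative usage.)

        import numpy as np
        from lucent.optvis import render, objectives

        # Optimize toward an arbitrary direction in mixed4a's channel space.
        direction_vec = np.random.randn(508).astype(np.float32)  # 508 channels assumed
        obj = objectives.direction("mixed4a", direction_vec)
        render.render_vis(model, obj)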

    opened by ndey96 3
  • Activation Grid Notebook


    Reproduce Lucid's Activation Grid Notebook with PyTorch and Lucent.

    The only new function required seems to be ChannelReducer, which doesn't rely on Tensorflow so it should be relatively simple to port over.

    Help wanted for this!

    help wanted good first issue 
    opened by greentfrapp 3
  • get raw_activations


    Hi, thanks for this great library! I'm trying to reproduce the Activation Atlas notebook using lucent, creating grid cells in the end.

    In the notebook, the "raw activation" is available as a numpy.ndarray via "model.layers[7].activations", to be used in the subsequent dimensionality-reduction section. How can I get this raw activation using Lucent?

    I did create visualized images with Lucent's render_vis first and then flattened them for UMAP's fit method, but I'm not sure this is correct. Any suggestion would be appreciated.
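
    (Not Lucent-specific, but one generic way to grab raw activations in plain PyTorch is a forward hook on the module of interest; the "mixed4d" name and the `images` batch below are just illustrative.)

        import torch

        activations = {}

        def save_activation(name):
            def hook(module, inputs, output):
                # Detach and move to CPU so the array can go straight into UMAP / numpy.
                activations[name] = output.detach().cpu().numpy()
            return hook

        layer = dict(model.named_modules())["mixed4d"]   # pick any module from model.named_modules()
        handle = layer.register_forward_hook(save_activation("mixed4d"))
        with torch.no_grad():
            model(images)        # `images` is a preprocessed input batch (illustrative)
        handle.remove()

        raw_acts = activations["mixed4d"]   # shape [N, C, H, W]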

    opened by 2nayk 2
  • Temporarily freeze kornia at 0.4.0 to prevent breaking change


    kornia 0.4.1 was released recently, and it includes a breaking change to get_rotation_matrix2d.

    I've opened an issue to hopefully address this soon: https://github.com/kornia/kornia/issues/742. In the meantime, I suggest freezing kornia at 0.4.0 so that the random_rotate transform continues working as before.
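
    (For reference, the pin itself is a one-liner; where it lives depends on how you install Lucent.)

        pip install kornia==0.4.0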

    opened by ivanzvonkov 2
  • Suggestion for `lucent.optvis.render.hook_model`


    First, thanks for making this. Lifesaver. Two thoughts (Fwiw, the nested functions, higher-order functions and decorators make things a biiiiit hard to follow when debugging):

    1. I initially dun goofed and didn't eval the model (even though the very example notebook I'm using from lucent does, lol). Maybe the hook_model function could check for None and tell the user to call eval() if no saved feature maps are found?
    2. PyTorch module names usually use dot notation. Maybe use dots instead of underscores? Or just tell the user which feature map names are available and they'll figure it out quickly enough.

    Suggested replacement for this function: https://github.com/greentfrapp/lucent/blob/a2b015ce95f29460a329f750428077bcde5e4e94/lucent/optvis/render.py#L194

    def hook(layer):
        if layer == "input":
            out = image_f()
        elif layer == "labels":
            out = list(features.values())[-1].features
        else:
            assert layer in features, f"Invalid layer {layer}. Pick from one of {features.keys()}"  # suggestion 2 ish
            out = features[layer].features
        assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See Lucent notebook for example."  # suggestion 1, tell user to eval
        return out
    

    *I ran it on resnet18. Gorgeous and worked out of the box btw.


    opened by alvinwan 2
  • Code Breaks as GPU Index > 0


    When using a GPU, this codebase only works with torch.device('cuda:0'); the GPU index has to be 0.

    For example, if you choose torch.device('cuda:1'), then when you run the code demo

    import torch
    
    from lucent.optvis import render
    from lucent.modelzoo import inceptionv1
    
    # Let's use cuda:1
    device = torch.device("cuda:1")
    model = inceptionv1(pretrained=True)
    model.to(device).eval()
    
    render.render_vis(model, "mixed4a:476")
    

    you will see an error like

    ..........
    File .....lucent/optvis/render.py:206, in hook_model.<locals>.hook(layer)
        204     assert layer in features, f"Invalid layer {layer}. Retrieve the list of layers with `lucent.modelzoo.util.get_model_layers(model)`."
        205     out = features[layer].features
    --> 206 assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example."
        207 return out
    
    AssertionError: There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example.
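
    (Until the device handling is generalized, one common workaround, not specific to Lucent, is to remap the desired GPU to index 0 with CUDA_VISIBLE_DEVICES before CUDA is initialized, e.g. before importing torch, so that cuda:0 inside the process refers to the physical GPU you want.)

        import os
        os.environ["CUDA_VISIBLE_DEVICES"] = "1"   # physical GPU 1 becomes cuda:0 in this process

        import torch
        from lucent.optvis import render
        from lucent.modelzoo import inceptionv1

        device = torch.device("cuda:0")            # now points at physical GPU 1
        model = inceptionv1(pretrained=True).to(device).eval()
        render.render_vis(model, "mixed4a:476")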
    
    opened by Haoxiang-Wang 0
  • Lucent handles greyscale images in function view incorrectly


    When rendering and visualizing greyscale images not inline, i.e., with show_inline=False, PIL throws the following error: TypeError: Cannot handle this data type: (1, 1, 1), |u1. The problem is that Lucent passes a tensor of shape [H, W, C] with C=1 and values in the range 0-255 to PIL, but PIL can only handle two-dimensional arrays of integer values here. This Stackoverflow answer provides more information.

    Solution: Lucent should check if the shape is [H, W, C=1] and reduce it to [H, W]. Alternatively, introduce a param, e.g., greyscale=True, in the view function.
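
    (A sketch of the first suggestion: squeezing a trailing singleton channel before handing the array to PIL. The function name here is illustrative, not Lucent's actual internals.)

        import numpy as np
        from PIL import Image

        def to_pil(img):
            """img: uint8 array of shape [H, W, C]; C may be 1 for greyscale."""
            arr = np.asarray(img, dtype=np.uint8)
            if arr.ndim == 3 and arr.shape[-1] == 1:
                arr = arr[..., 0]   # [H, W, 1] -> [H, W], which PIL accepts as mode "L"
            return Image.fromarray(arr)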

    opened by neoglez 0
  • activation grid for hierarchical custom model


    Hi, is there a way to visualize the activation grid for a custom model with nested modules, which are not explicitly named as attributes of the model? E.g., when I call get_model_layers(), I see the following output for this custom model (screenshot omitted):

    I followed your notebook on the activation grid (https://colab.research.google.com/github/greentfrapp/lucent-notebooks/blob/master/notebooks/activation_grids.ipynb#scrollTo=BDH9cXnSuu5Q). For example, I chose layer = "net_down1_maxpool_conv" (is there some kind of syntax for specifying the layers?). I also rewrote the get_layer() helper function to parse the network's layer from the string, because that layer is not a direct attribute of the network class. But when I then try to use the rendering function, there is an error in the first line of the objective function: in lines 203-206 of render.py, one of the two assertions is raised, depending on how I choose the layer string. Can you help me with this problem? Many thanks!
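
    (Not a confirmed answer, but one way to check which layer strings Lucent expects for a nested model is to print them with the helper referenced elsewhere on this page; the layer argument passed to the objective/render call should then match one of these strings exactly.)

        from lucent.modelzoo.util import get_model_layers

        # Print the name Lucent derives for every (possibly nested) submodule.
        for name in get_model_layers(model):
            print(name)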

    opened by An-nay-marks 1
  • Support batches for CPPN image representation


    When representing the optimized image with a CPPN network, the current implementation only allows optimizing a single image per run. This limitation prevents using, e.g., "diversity" objectives during optimization. This PR adds batching support for the CPPN image representation by creating a batch of networks.

    Here's an example of generating a diverse batch of 2 images for the objective "mixed4d_3x3_bottleneck_pre_relu_conv:139" of the Inception network.


    opened by shaibagon 0
  • Low GPU utilization


    I am trying to use Lucent to visualize deep neurons, but whatever I do, the GPU seems under-utilized: examining utilization via nvidia-smi, I see low utilization (~10%) with occasional peaks at ~50%, but never above that. This happens both for the CPPN prior and for the Fourier image representation.

    Any suggestions?

    opened by shaibagon 0
Releases: v0.1.8

Owner: Lim Swee Kiat