Multilingual-CLIP

OpenAI CLIP text encoders for any language

Colab Notebook · Pre-trained Models · Report Bug

Overview

OpenAI recently released the paper Learning Transferable Visual Models From Natural Language Supervision, in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a visual encoder and a text encoder, which were trained on a whopping 400 million images and corresponding captions. OpenAI has since released a set of their smaller CLIP models, which can be found on the official CLIP GitHub.

We propose a fine-tuning method to replace the original English text encoder with a pre-trained text model in any language. This method makes it possible to adapt the powerful CLIP model to any language in roughly 24 GPU hours.
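
Concretely, the fine-tuning is done with teacher learning: the new text encoder is trained to reproduce the embeddings that the original (frozen) CLIP text encoder assigns to the English side of each translation pair. The repository's actual training code is written in Tensorflow; the PyTorch step below is only a sketch of the objective, assuming pre-computed teacher embeddings and an MSE loss.

import torch

def teacher_learning_step(student, teacher_embeddings, translated_batch, optimizer):
    # student: the new multilingual text encoder being trained.
    # teacher_embeddings: pre-computed CLIP text embeddings of the English captions.
    # translated_batch: the same captions, translated into the target language(s).
    prediction = student(translated_batch)
    loss = torch.nn.functional.mse_loss(prediction, teacher_embeddings)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()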

This repository contains

  • Pytorch inference code
  • Tensorflow training code
  • Pre-trained CLIP-Text encoders for multiple languages
  • Training data and pre-computed CLIP text encodings for a large portion of the image captions of GCC + MSCOCO + VizWiz

Requirements

While other versions may work equally well, we have used the following:

  • Python = 3.6.9
  • Transformers = 4.1.1
  • Model Weights

Usage

Download CLIP Model
$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
$ pip install ftfy regex tqdm
$ pip install git+https://github.com/openai/CLIP.git

Replace cudatoolkit=11.0 above with the appropriate CUDA version for your machine, or cpuonly when installing on a machine without a GPU. For more information, please see the official CLIP repository.
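
To verify the install, a minimal check is to load one of the released CLIP models and encode a caption. The choice of the RN50x4 variant here is only an assumption (it is the vision model that the Distil/Base text encoders below were aligned with), not a requirement:

import clip
import torch

# Minimal sanity check of the CLIP install (illustrative; any released variant works).
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50x4", device=device)
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize(["a photo of a dog"]).to(device))
print(text_features.shape)  # RN50x4 produces 640-dimensional embeddings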

Download Linear Weights
# Linear Model Weights
$ bash get-weights.sh

Inference

from src import multilingual_clip

print(multilingual_clip.AVAILABLE_MODELS.keys())

model = multilingual_clip.load_model('M-BERT-Distil-40')

embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?'])
print(embeddings.shape)
# Yields: torch.Size([3, 640])

For a more elaborate example, comparing the textual embeddings to the CLIP image embeddings, see this Colab notebook.
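
As a rough illustration of what that notebook does, the multilingual text embeddings can be scored against CLIP image embeddings with cosine similarity. The sketch below assumes the M-BERT-Distil-40 encoder (aligned with the RN50x4 vision model) and a hypothetical local image file dog.jpg; the linked notebook remains the authoritative example.

import clip
import torch
from PIL import Image
from src import multilingual_clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text side: multilingual encoder aligned with CLIP RN50x4 (640-dim output).
text_model = multilingual_clip.load_model('M-BERT-Distil-40')

# Image side: the matching CLIP vision encoder.
clip_model, preprocess = clip.load('RN50x4', device=device)
image = preprocess(Image.open('dog.jpg')).unsqueeze(0).to(device)  # hypothetical image path

with torch.no_grad():
    text_emb = text_model(['En hund som leker i snön'])  # "A dog playing in the snow"
    image_emb = clip_model.encode_image(image).float().cpu()

# Cosine similarity between the text and image embeddings.
print(torch.nn.functional.cosine_similarity(text_emb, image_emb))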

Pre-trained Models

Every text encoder is a transformer available on Hugging Face, with an additional linear layer on top. None of the models has been extensively tested, but for more information and qualitative test results for a specific model, click the model name to see its model card.

*** Make sure to update to the most recent version of the repository when downloading a new model, and re-run the shell script to download the Linear Weights. ***

Name                 Model Base      Vision Model   Pre-trained Languages   Target Languages   #Parameters

Multilingual
M-BERT Distil 40     M-BERT Distil   RN50x4         101 Languages           40 Languages       66 M
M-BERT Base 69       M-BERT Base     RN50x4         101 Languages           68 Languages       110 M
M-BERT Base ViT-B    M-BERT Base     ViT-B/32       101 Languages           68 Languages       110 M

Monolingual
Swe-CLIP 500k        KB-BERT         RN50x4         Swedish                 Swedish            110 M
Swe-CLIP 2M          KB-BERT         RN50x4         Swedish                 Swedish            110 M

Training a new model

This folder contains the code used for training the above models. If you wish to train your own model, you must do the following:

  • Prepare a set of translated sentence pairs from English -> Your Language(s)
  • Compute regular CLIP-Text embeddings for the English sentences (a sketch of this step is shown after this list).
  • Edit Training.py to load your data.
  • Train a new CLIP-Text encoder via Teacher Learning
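
A minimal sketch of the second step, pre-computing the CLIP text embeddings that later serve as the teacher targets. The output filename and the use of the RN50x4 teacher are illustrative assumptions (match the vision model you intend to pair with), and Training.py expects whatever format you choose to load there.

import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
# Use the CLIP variant whose embedding space you want to target (assumption: RN50x4).
teacher, _ = clip.load('RN50x4', device=device)

# english_sentences: the English side of your translated sentence pairs.
english_sentences = ['A dog playing in the snow', 'Two people riding bicycles']

targets = []
with torch.no_grad():
    for start in range(0, len(english_sentences), 256):
        batch = english_sentences[start:start + 256]
        tokens = clip.tokenize(batch).to(device)
        targets.append(teacher.encode_text(tokens).float().cpu())
targets = torch.cat(targets)  # shape: (num_sentences, 640) for RN50x4

# These vectors become the teacher targets that the new text encoder is trained
# to reproduce from the corresponding translated sentences.
torch.save(targets, 'english_clip_text_embeddings.pt')  # hypothetical filename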

Pre-computed CLIP Embeddings & Translation Data

This Google Drive folder contains pre-computed CLIP-Text embeddings for a large portion of the image captions of GCC + MSCOCO + VizWiz.

The Google Drive folder also contains the translation data used to train the currently available models. Good luck!

Contribution

If you have trained a CLIP text encoder specific to your language, or another model covering a language not supported here, please feel free to contact us and we will either upload your model and credit you, or simply link to your already uploaded model.

Contact

If you have questions regarding the code or otherwise related to this Github page, please open an issue.

For other purposes, feel free to contact me directly at: [email protected]

Acknowledgements

License

Distributed under the MIT License. See LICENSE for more information.

Comments
  • 1024 dim embedding model needed

    Dear authors, thanks for your great work! I'm working with AudioCLIP, whose embeddings are 1024-dimensional, but the models you've released have at most 768 dimensions. Could you please release a model that produces 1024-dimensional embeddings? Here is AudioCLIP: https://github.com/AndreyGuzhov/AudioCLIP. With my best wishes, looking forward to hearing from you!

    opened by ithanwu 4
  • Release 1.0.0

    Merge this only after doing this:

    When you create a "Release x.y.z", it will release.

    You need to add a secret called PYPI_PASSWORD in the GitHub repo and put inside it a token you create at https://pypi.org/manage/account/token/

    https://github.com/FreddeFrallan/Multilingual-CLIP/settings/secrets/actions/new

    Choose the option "Squash and merge" on github when merging to create a single commit

    opened by rom1504 4
  • Bibtex Citation

    Amazing repo! I'd love to cite it. Do you have a desired bibtex by chance?

    Perhaps:

    @misc{multilingual-clip,
      author = {Carlsson, Fredrik},
      title = {Multilingual CLIP},
      year = 2021,
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/SajjjadAyobi/CLIPfa}},
    }

    opened by Zasder3 1
  • Training a model for ViT-L/14 image embeddings

    Hey, thanks for providing this awesome multilingual CLIP-aligned text encoder. We used it to filter the 3 billion (image, text) pairs of laion5B https://laion.ai/laion-5b-a-new-era-of-open-large-scale-multi-modal-datasets/ and it worked well. I'm also using this model to provide multilingual search in https://rom1504.github.io/clip-retrieval/. For laion400m we used the ViT-B/32 model of openai to produce the index, but for laion5B we went with ViT-L/14, which is much more powerful. To provide the same multilingual search feature, it would be really helpful if I had a CLIP ViT-L/14-aligned multilingual text encoder.

    Would you advise running https://github.com/FreddeFrallan/Multilingual-CLIP#training-a-new-model (and now that I'm writing it, I guess I could use a subset of the multilingual set of laion5B for this) to align such a text encoder?

    opened by rom1504 1
  • XLM-Roberta Feature Request

    Hi,

    Great repo! Are you planning to release a model with XLM-R (with ViT-B) anytime soon? It was better for low-resource languages than multilingual BERT.

    opened by mezig351 1
  • Redo packaging

    When you create a "Release x.y.z", it will release.

    You need to add a secret called PYPI_PASSWORD in the GitHub repo and put inside it a token you create at https://pypi.org/manage/account/token/

    https://github.com/FreddeFrallan/Multilingual-CLIP/settings/secrets/actions/new

    opened by rom1504 0
  • Data leak

    Hello! According to the XTD-10 repo, the test set contains 800 images from the MSCOCO train set. During training you also use the MSCOCO train set – it seems you have a data leak. Or maybe I don't understand something.

    opened by kimihailv 1
  • model_type 'M-CLIP' is not in CONFIG_MAPPING

    from transformers import AutoConfig
    
    kwargs = {'_from_auto': True}
    pretrained_model_name_or_path = 'M-CLIP/XLM-Roberta-Large-Vit-L-14'
    config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
    

    Hi, I installed the required transformers==4.8.1 and ran the above code, getting the following error.

        config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
      File "/anaconda3/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 448, in from_pretrained
        config_class = CONFIG_MAPPING[config_dict["model_type"]]
    KeyError: 'M-CLIP'
    

    It seems model_type 'M-CLIP' is not in CONFIG_MAPPING; can anyone help figure it out?

    opened by wxywb 1
  • Issue in M-Bert-Base-ViT-B clip head linear layer size

    I tried the following piece of code present in the repo at location https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/src/multilingual_clip.py

    The only change I made is that I added print statements in between.


    import pickle

    import torch
    import transformers

    AVAILABLE_MODELS = {
        'M-BERT-Distil-40': {
            'model_name': 'M-CLIP/M-BERT-Distil-40',
            'tokenizer_name': 'M-CLIP/M-BERT-Distil-40',
            'head_name': 'M-BERT Distil 40 Linear Weights.pkl'
        },

        'M-BERT-Base-69': {
            'model_name': 'M-CLIP/M-BERT-Base-69',
            'tokenizer_name': 'M-CLIP/M-BERT-Base-69',
            'head_name': 'M-BERT-Base-69 Linear Weights.pkl'
        },

        'Swe-CLIP-500k': {
            'model_name': 'M-CLIP/Swedish-500k',
            'tokenizer_name': 'M-CLIP/Swedish-500k',
            'head_name': 'Swedish-500k Linear Weights.pkl'
        },

        'Swe-CLIP-2M': {
            'model_name': 'M-CLIP/Swedish-2M',
            'tokenizer_name': 'M-CLIP/Swedish-2M',
            'head_name': 'Swedish-2M Linear Weights.pkl'
        },

        'M-BERT-Base-ViT-B': {
            'model_name': 'M-CLIP/M-BERT-Base-ViT-B',
            'tokenizer_name': 'M-CLIP/M-BERT-Base-ViT-B',
            'head_name': 'M-BERT-Base-69-ViT Linear Weights.pkl'
        },
    }

    class MultilingualClip2(torch.nn.Module):
        def __init__(self, model_name, tokenizer_name, head_name, weights_dir='data/weights/'):
            super().__init__()
            self.model_name = model_name
            self.tokenizer_name = tokenizer_name
            self.head_path = weights_dir + head_name

            self.tokenizer = transformers.AutoTokenizer.from_pretrained(tokenizer_name)
            self.transformer = transformers.AutoModel.from_pretrained(model_name)
            self.clip_head = torch.nn.Linear(in_features=768, out_features=640)
            self._load_head()

        def forward(self, txt):
            txt_tok = self.tokenizer(txt, padding=True, return_tensors='pt').to(device)
            embs = self.transformer(**txt_tok)[0]
            print('embs_text')
            print(embs.size())
            att = txt_tok['attention_mask']
            print('att_text')
            print(att.size())
            embs = (embs * att.unsqueeze(2)).sum(dim=1) / att.sum(dim=1)[:, None]
            print('embs_text')
            print(embs.size())
            p = self.clip_head(embs)
            print('clip head obj')
            print(self.clip_head)
            print('cliphed_text')
            print(p.size())
            return p

        def _load_head(self):
            with open(self.head_path, 'rb') as f:
                lin_weights = pickle.loads(f.read())
            self.clip_head.weight = torch.nn.Parameter(torch.tensor(lin_weights[0]).float().t())
            self.clip_head.bias = torch.nn.Parameter(torch.tensor(lin_weights[1]).float())
            print('ok')
            print(self.clip_head.weight.size())
            print(self.clip_head.bias.size())

    def load_model2(name):
        config = AVAILABLE_MODELS[name]
        return MultilingualClip2(**config)

    mod = load_model2('M-BERT-Base-ViT-B')
    z = mod(Query[0])

    Output for this code:

    ok
    torch.Size([512, 768])
    torch.Size([512])
    embs_text
    torch.Size([1, 6, 768])
    att_text
    torch.Size([1, 6])
    embs_text
    torch.Size([1, 768])
    clip head obj
    Linear(in_features=768, out_features=640, bias=True)
    cliphed_text
    torch.Size([1, 512])


    This output suggests that the file 'M-BERT-Base-69-ViT Linear Weights.pkl' does not have size 640 x 768 but rather 512 x 768.

    Is there any issue with the config, then?

    opened by shreyajain4 2
  • Some questions about fine-tuning

    I have fine-tuned the text encoder using 300,000 texts and their embeddings, but I find the result is quite bad. Could you give me some advice on how to improve it?

    opened by Soulscb 1
  • Some confusion about "Pre-trained CLIP-Text encoders for multiple languages"

    If I have <other language text, image, label> pair data, can I directly use 'distilbert-base-multilingual-cased' to pre-train a CLIP model? Why re-train a model from English to other languages?

    opened by moluchase 1
Owner
Fredrik Carlsson