💬 Python scripts to parse Messenger, Hangouts, WhatsApp and Telegram chat logs into DataFrames.

Overview

Chatistics

Python 3 scripts to convert chat logs from various messaging platforms into Pandas DataFrames. Can also generate histograms and word clouds from the chat logs.

Changelog

10 Jan 2020: UPDATED ALL THE THINGS! Thanks to mar-muel and manueth, pretty much everything has been updated and improved, and WhatsApp is now supported!

21 Oct 2018: Updated Facebook Messenger and Google Hangouts parsers to make them work with the new exported file formats.

9 Feb 2018: Telegram support added thanks to bmwant.

24 Oct 2016: Initial release supporting Facebook Messenger and Google Hangouts.

Support Matrix

Platform Direct Chat Group Chat
Facebook Messenger ✔ ✘
Google Hangouts ✔ ✘
Telegram ✔ ✘
WhatsApp ✔ ✔

Exported data

Data exported for each message regardless of the platform:

Column Content
timestamp UNIX timestamp (in seconds)
conversationId A conversation ID, unique by platform
conversationWithName Name of the other person in a direct conversation, or the name of the group conversation
senderName Name of the sender
outgoing Boolean, True if the message was sent by the archive owner
text Text of the message
language Language of the conversation as inferred by langdetect
platform Platform (see support matrix above)
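The schema above maps directly onto a pandas DataFrame. The snippet below is an illustrative sketch (the values are made up, not real parser output) showing the columns and how the UNIX-seconds timestamp converts to a datetime:

```python
import pandas as pd

# Illustrative row following the schema above (values are made up)
df = pd.DataFrame([
    {
        "timestamp": 1578614400,            # UNIX seconds
        "conversationId": "tg-12345",       # unique per platform
        "conversationWithName": "Jane Doe",
        "senderName": "Jane Doe",
        "outgoing": False,                  # True if sent by the archive owner
        "text": "Hello!",
        "language": "en",                   # inferred by langdetect
        "platform": "telegram",
    },
])

# UNIX seconds convert directly to datetimes
df["datetime"] = pd.to_datetime(df["timestamp"], unit="s")
print(df["datetime"].iloc[0])  # 2020-01-10 00:00:00
```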

Exporting your chat logs

1. Download your chat logs

Google Hangouts

Warning: Google Hangouts archives can take a long time to be ready for download - up to one hour in our experience.

  1. Go to Google Takeout: https://takeout.google.com/settings/takeout
  2. Request an archive containing your Hangouts chat logs
  3. Download the archive, then extract the file called Hangouts.json
  4. Move it to ./raw_data/hangouts/

Facebook Messenger

Warning: Facebook archives can take a very long time to be ready for download - up to 12 hours! They can also weigh several gigabytes. If you want to get started quickly, request an archive covering just a few months of data; that shouldn't take more than a few minutes to generate.

  1. Go to the page "Your Facebook Information": https://www.facebook.com/settings?tab=your_facebook_information
  2. Click on "Download Your Information"
  3. Select the date range you want. The format must be JSON. Media won't be used, so you can set the quality to "Low" to speed things up.
  4. Click on "Deselect All", then scroll down to select "Messages" only
  5. Click on "Create File" at the top of the list. It will take Facebook a while to generate your archive.
  6. Once the archive is ready, download and extract it, then move the content of the messages folder into ./raw_data/messenger/

WhatsApp

Unfortunately, WhatsApp only lets you export your conversations from your phone, and only one conversation at a time.

  1. On your phone, open the chat conversation you want to export
  2. On Android, tap on ⋮ > More > Export chat. On iOS, tap on the interlocutor's name > Export chat
  3. Choose "Without Media"
  4. Send the chat to yourself, e.g. via email
  5. Unpack the archive and add the individual .txt files to the folder ./raw_data/whatsapp/

Telegram

The Telegram API works differently: you will first need to set up Chatistics, then query your chat logs programmatically. This process is documented below. Exporting Telegram chat logs is very fast.

2. Setup Chatistics

First, install the required Python packages using conda:

conda env create -f environment.yml
conda activate chatistics

You can now parse your messages using the python parse.py command.

By default the parsers will try to infer your own name (i.e. your username) from the data. If this fails, you can pass it explicitly with the --own-name argument. The name must match your name exactly as it appears on that chat platform.

# Google Hangouts
python parse.py hangouts

# Facebook Messenger
python parse.py messenger

# WhatsApp
python parse.py whatsapp

Telegram

  1. Create your Telegram application to access chat logs (instructions). You will need api_id and api_hash which we will now set as environment variables.
  2. Run cp secrets.sh.example secrets.sh and fill in the values for the environment variables TELEGRAM_API_ID, TELEGRAM_API_HASH and TELEGRAM_PHONE (your phone number including country code).
  3. Run source secrets.sh
  4. Execute the parser script using python parse.py telegram

The pickle files will now be ready for analysis in the data folder!

For more options use the -h argument on the parsers (e.g. python parse.py telegram --help).
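Once parsing completes, the resulting pickle can be loaded back into pandas for ad-hoc analysis. A minimal sketch of that roundtrip (the filename and columns here are illustrative; the parsers choose their own filenames in the data folder):

```python
import os
import tempfile

import pandas as pd

# Stand-in for a parsed chat log (schematic, not real parser output)
df = pd.DataFrame({
    "senderName": ["Jane Doe", "Me"],
    "text": ["Hi", "Hello"],
    "outgoing": [False, True],
})

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "telegram.pkl")
    df.to_pickle(path)             # what parse.py does, schematically
    loaded = pd.read_pickle(path)  # what the analysis scripts do

print(loaded.equals(df))  # True
```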

3. All done! Play with your data

Chatistics can print the chat logs as raw text. It can also create histograms, showing how many messages each interlocutor sent, or generate word clouds based on word density and a base image.

Export

You can view the data in stdout (default) or export it to csv, json, or a pandas DataFrame pickle.

python export.py

You can use the same filter options as described above in combination with an output format option:

  -f {stdout,json,csv,pkl}, --format {stdout,json,csv,pkl}
                        Output format (default: stdout)
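A CSV export can be read straight back into pandas; the timestamp column holds UNIX seconds and converts cleanly. The rows below are a mock of exported content, not real data:

```python
import io

import pandas as pd

# Mock of two rows from a CSV export (python export.py -f csv)
csv_export = io.StringIO(
    "timestamp,senderName,text,platform\n"
    "1578614400,Jane Doe,Hello!,telegram\n"
    "1578614460,Me,Hi Jane,telegram\n"
)
df = pd.read_csv(csv_export)

# Timestamps are UNIX seconds
df["datetime"] = pd.to_datetime(df["timestamp"], unit="s")
print(df["datetime"].iloc[0])  # 2020-01-10 00:00:00
```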

Histograms

Plot all messages with:

python visualize.py breakdown

Among other options, you can filter messages as needed (also see python visualize.py breakdown --help):

  --platforms {telegram,whatsapp,messenger,hangouts}
                        Use data only from certain platforms (default: ['telegram', 'whatsapp', 'messenger', 'hangouts'])
  --filter-conversation
                        Limit by conversations with this person/group (default: [])
  --filter-sender
                        Limit to messages sent by this person/group (default: [])
  --remove-conversation
                        Remove messages from these conversations (default: [])
  --remove-sender
                        Remove all messages by this sender (default: [])
  --contains-keyword
                        Filter by messages which contain certain keywords (default: [])
  --outgoing-only       
                        Limit by outgoing messages (default: False)
  --incoming-only       
                        Limit by incoming messages (default: False)

For example, to see all the messages exchanged between you and Jane Doe:

python visualize.py breakdown --filter-conversation "Jane Doe"

To see the messages sent to you by the top 10 people with whom you talk the most:

python visualize.py breakdown -n 10 --incoming-only

You can also plot the conversation densities using the --as-density flag.
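If you prefer working in pandas directly, the filter flags above correspond to simple boolean masks. A sketch using the column names from the table earlier (the DataFrame content is made up for illustration):

```python
import pandas as pd

# Made-up chat data with the documented columns
df = pd.DataFrame({
    "conversationWithName": ["Jane Doe", "Jane Doe", "Work Group"],
    "senderName": ["Jane Doe", "Me", "Bob"],
    "outgoing": [False, True, False],
    "text": ["see you tomorrow", "ok!", "meeting at 10"],
})

# --filter-conversation "Jane Doe"
jane = df[df["conversationWithName"] == "Jane Doe"]

# --incoming-only
incoming = df[~df["outgoing"]]

# --contains-keyword "meeting"
keyword = df[df["text"].str.contains("meeting", case=False)]

print(len(jane), len(incoming), len(keyword))  # 2 2 1
```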

Word Cloud

You will need a mask file to render the word cloud. The white areas of the image are left empty; the rest is filled with words, coloured like the image. See the WordCloud library documentation for more information.

python visualize.py cloud -m raw_outlines/users.jpg

You can filter the messages used with the same flags as for histograms.
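As a rough sketch of the mask convention (pure-white pixels, RGB 255/255/255, are left empty; everything else receives words), the array below stands in for a loaded image such as np.array(Image.open("raw_outlines/users.jpg")):

```python
import numpy as np

# Stand-in for a loaded RGB image: a 2x2 picture,
# top row white, bottom row dark.
mask = np.array([
    [[255, 255, 255], [255, 255, 255]],
    [[30, 30, 30],    [30, 30, 30]],
], dtype=np.uint8)

# Pixels where all channels are 255 are treated as "do not draw here";
# the remaining pixels are where words get placed.
drawable = ~np.all(mask == 255, axis=-1)
print(drawable.tolist())  # [[False, False], [True, True]]
```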

Development

Install the dev environment with:

conda env create -f environment_dev.yml

Run the tests from the project root with:

python -m pytest

Improvement ideas

  • Parsers for more chat platforms: Discord? Signal? Pidgin? ...
  • Handle group chats on more platforms.
  • See open issues for more ideas.

Pull requests are welcome!

Projects using Chatistics

Meet your Artificial Self: Generate text that sounds like you workshop

Credits

Owner: Florian