Code for running various machine learning benchmarks on Apple M1 chips with TensorFlow.

Overview

M1, M1 Pro, M1 Max Machine Learning Speed Test Comparison

This repo contains some sample code to benchmark the new M1 MacBooks (M1 Pro and M1 Max) against various other pieces of hardware.

It also has steps below to set up your M1, M1 Pro or M1 Max Mac to run the code (the steps should also work for Intel Macs).

Who is this repo for?

You: have a new M1, M1 Pro, M1 Max machine and would like to get started doing machine learning and data science on it.

This repo: teaches you how to install the most common machine learning and data science packages (software) on your machine and make sure they run using sample code.

Machine Learning Experiments Conducted

All experiments were run with the same code. For Apple devices, TensorFlow environments were created with the steps below.

Notebook Number Experiment
00 TinyVGG model trained on CIFAR10 dataset with TensorFlow code.
01 EfficientNetB0 Feature Extractor on Food101 dataset with TensorFlow code.
02 RandomForestClassifier from Scikit-Learn trained with random search cross-validation on California Housing dataset.
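
To give a sense of what the benchmark notebooks do, below is a minimal TinyVGG-style training sketch on CIFAR10. This is an illustrative example only (it assumes a working TensorFlow install as per the setup steps below), not the exact code from notebook 00.

# Illustrative sketch, not the exact notebook 00 code.
import tensorflow as tf
from tensorflow.keras import layers

# Load CIFAR10 (50,000 32x32 training images across 10 classes) and scale to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# TinyVGG-style architecture: two small convolutional blocks plus a dense classifier.
model = tf.keras.Sequential([
    layers.Conv2D(10, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.Conv2D(10, 3, activation="relu"),
    layers.MaxPool2D(),
    layers.Conv2D(10, 3, activation="relu"),
    layers.Conv2D(10, 3, activation="relu"),
    layers.MaxPool2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

# Time how long a handful of epochs takes to compare different hardware.
model.fit(x_train, y_train, epochs=5, batch_size=32,
          validation_data=(x_test, y_test))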

Results

See the results directory.

Steps (how to test your M1 machine)

  1. Create an environment and install dependencies (see below)
  2. Clone this repo
  3. Run various notebooks (results come at the end of the notebooks)

How to set up a TensorFlow environment on M1, M1 Pro, M1 Max using Miniforge (shorter version)

If you're experienced with making environments and using the command line, follow this version. If not, see the longer version below.

  1. Download and install Homebrew from https://brew.sh. Follow the steps it prompts you to go through after installation.
  2. Download Miniforge3 (Conda installer) for macOS arm64 chips (M1, M1 Pro, M1 Max).
  3. Install Miniforge3 into home directory.
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
  4. Restart terminal.
  5. Create a directory to set up the TensorFlow environment.
mkdir tensorflow-test
cd tensorflow-test
  6. Make and activate Conda environment. Note: Python 3.8 is the most stable for using the following setup.
conda create --prefix ./env python=3.8
conda activate ./env
  7. Install TensorFlow dependencies from the Apple Conda channel.
conda install -c apple tensorflow-deps
  8. Install base TensorFlow (Apple's fork of TensorFlow is called tensorflow-macos).
python -m pip install tensorflow-macos
  9. Install Apple's tensorflow-metal to leverage Apple Metal (Apple's GPU framework) for M1, M1 Pro, M1 Max GPU acceleration.
python -m pip install tensorflow-metal
  10. (Optional) Install TensorFlow Datasets to run benchmarks included in this repo.
python -m pip install tensorflow-datasets
  11. Install common data science packages.
conda install jupyter pandas numpy matplotlib scikit-learn
  12. Start Jupyter Notebook.
jupyter notebook
  13. Import dependencies and check TensorFlow version/GPU access.
import numpy as np
import pandas as pd
import sklearn
import tensorflow as tf
import matplotlib.pyplot as plt

# Check for TensorFlow GPU access
print(f"TensorFlow has access to the following devices:\n{tf.config.list_physical_devices()}")

# See TensorFlow version
print(f"TensorFlow version: {tf.__version__}")

If it all worked, you should see something like:

TensorFlow has access to the following devices:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
TensorFlow version: 2.8.0

How to set up a TensorFlow environment on M1, M1 Pro, M1 Max using Miniforge (longer version)

If you're new to creating environments, are using a new M1, M1 Pro or M1 Max machine and would like to get started running TensorFlow and other data science libraries, follow the steps below.

Note: You're going to see the term "package manager" a lot below. Think of it like this: a package manager is a piece of software that helps you install other pieces (packages) of software.

Installing package managers (Homebrew and Miniforge)

  1. Download and install Homebrew from https://brew.sh. Homebrew is a package manager that sets up a lot of useful things on your machine, including the Command Line Tools for Xcode, which you'll need to run things like git. The command to install Homebrew will look something like:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

It will explain what it's doing and what you need to do as you go.

  2. Download the most compatible version of Miniforge (a minimal installer for Conda specific to conda-forge; Conda is another package manager and conda-forge is a Conda channel) from GitHub.

If you're using an M1 variant Mac, the file you want is Miniforge3-MacOSX-arm64.sh, available from the Miniforge releases page on GitHub.

Downloading it will save a shell file called Miniforge3-MacOSX-arm64.sh to your Downloads folder (unless you specify otherwise).

  3. Open Terminal.

  4. We've now got a shell file capable of installing Miniforge, but to do so we'll have to modify its permissions to make it executable.

To do so, we'll run the command chmod +x FILE_NAME, which stands for "change mode of FILE_NAME to executable".

We'll then execute (run) the program using sh.

chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
  5. This should install Miniforge3 into your home directory (~/ stands for "Home" on Mac).

To check this, we can try to activate the (base) environment using the source command.

source ~/miniforge3/bin/activate

If it worked, you should see something like the following in your terminal window.

(base) daniel@Daniels-MacBook-Pro ~ %
  6. We've just installed some new software and for it to fully work, we'll need to restart terminal.

Creating a TensorFlow environment

Now we've got the package managers we need, it's time to install TensorFlow.

Let's set up a folder called tensorflow-test (you can call this anything you want) and install everything in there to make sure it's working.

Note: An environment is like a virtual room on your computer. For example, you use the kitchen in your house for cooking because it's got all the tools you need; it would be strange to have an oven in your bedroom. The same goes for your computer: if you're going to be working on specific software, you'll want it all in one place and not scattered everywhere else.

  1. Make a directory called tensorflow-test. This is the directory we're going to store our environment in, and inside the environment will be the software tools we need to run TensorFlow.

We can do so with the mkdir command which stands for "make directory".

mkdir tensorflow-test
  2. Change into tensorflow-test. We'll be running the rest of the commands inside the tensorflow-test directory, so we need to change into it.

We can do this with the cd command which stands for "change directory".

cd tensorflow-test
  3. Now we're inside the tensorflow-test directory, let's create a new Conda environment using the conda command (this command was installed when we installed Miniforge above).

We do so using conda create --prefix ./env, which stands for "conda, create an environment at the filepath ./env". The . stands for the current directory.

For example, running the command from /Users/daniel/tensorflow-test creates the environment at: /Users/daniel/tensorflow-test/env

conda create --prefix ./env
  4. Activate the environment. If conda created the environment correctly, you should be able to activate it using conda activate path/to/environment.

Short version:

conda activate ./env

Long version:

conda activate /Users/daniel/tensorflow-test/env

Note: It's important to activate your environment every time you'd like to work on projects that use the software you install into that environment. For example, you might have one environment for every different project you work on. And all of the different tools for that specific project are stored in its specific environment.

If activating your environment went correctly, your terminal window prompt should look something like:

(/Users/daniel/tensorflow-test/env) daniel@Daniels-MacBook-Pro tensorflow-test %
  5. Now we've got a Conda environment set up, it's time to install the software we need.

Let's start by installing various TensorFlow dependencies (TensorFlow is a large piece of software and depends on many other pieces of software).

Rather than list them all out, Apple have set up a quick command so you can install almost everything TensorFlow needs in one line.

conda install -c apple tensorflow-deps

The above stands for "hey conda install all of the TensorFlow dependencies from the Apple Conda channel" (-c stands for channel).

If it worked, you should see a bunch of stuff being downloaded and installed for you.

  6. Now all of the TensorFlow dependencies have been installed, it's time to install base TensorFlow.

Apple have created a fork (copy) of TensorFlow specifically for Apple Macs. It has all the features of TensorFlow with some extra functionality to make it work on Apple hardware.

This Apple fork of TensorFlow is called tensorflow-macos and is the version we'll be installing:

python -m pip install tensorflow-macos

Depending on your internet connection the above may take a few minutes since TensorFlow is quite a large piece of software.

  7. Now we've got base TensorFlow installed, it's time to install tensorflow-metal.

Why?

Machine learning models often benefit from GPU acceleration. And the M1, M1 Pro and M1 Max chips have quite powerful GPUs.

TensorFlow allows for automatic GPU acceleration if the right software is installed.

And Metal is Apple's framework for GPU computing.

So Apple have created a plugin for TensorFlow (also referred to as a TensorFlow PluggableDevice) called tensorflow-metal to run TensorFlow on Mac GPUs.

We can install it using:

python -m pip install tensorflow-metal

If the above works, we should now be able to leverage our Mac's GPU cores to speed up model training with TensorFlow.
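
As a quick sanity check, the short sketch below (an illustrative example, assuming the installs above succeeded) lists the devices TensorFlow can see and runs a small matrix multiplication. If tensorflow-metal is working, a GPU device should appear in the list.

import tensorflow as tf

# The GPU should show up alongside the CPU if tensorflow-metal installed correctly.
print(tf.config.list_physical_devices())

# Run a small computation. With tensorflow-metal installed, TensorFlow places
# ops like this matrix multiply on the Metal GPU automatically.
a = tf.random.normal((1000, 1000))
b = tf.random.normal((1000, 1000))
print(tf.reduce_sum(tf.matmul(a, b)))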

  8. (Optional) Install TensorFlow Datasets. Doing the above is enough to run TensorFlow on your machine. But if you'd like to run the benchmarks included in this repo, you'll need TensorFlow Datasets.

TensorFlow Datasets provides a collection of common machine learning datasets to test out various machine learning code.

python -m pip install tensorflow-datasets
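
As a quick check that the install worked, here's a minimal sketch (using CIFAR10, since that's the dataset in notebook 00) that loads a dataset in a couple of lines:

import tensorflow_datasets as tfds

# Downloads CIFAR10 on first run and returns tf.data.Dataset objects plus metadata.
(train_ds, test_ds), ds_info = tfds.load("cifar10",
                                         split=["train", "test"],
                                         as_supervised=True,
                                         with_info=True)
print(ds_info.features["label"].num_classes)  # 10
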
  9. Install common data science packages. If you'd like to run the benchmarks above or work on other various data science and machine learning projects, you're likely going to need Jupyter Notebooks, pandas for data manipulation, NumPy for numeric computing, matplotlib for plotting and Scikit-Learn for traditional machine learning algorithms and processing functions.

To install those in the current environment run:

conda install jupyter pandas numpy matplotlib scikit-learn
  10. Test it out. To see if everything worked, try starting a Jupyter Notebook and importing the installed packages.
# Start a Jupyter notebook
jupyter notebook

Once the notebook is started, in the first cell:

import numpy as np
import pandas as pd
import sklearn
import tensorflow as tf
import matplotlib.pyplot as plt

# Check for TensorFlow GPU access
print(f"TensorFlow has access to the following devices:\n{tf.config.list_physical_devices()}")

# See TensorFlow version
print(f"TensorFlow version: {tf.__version__}")

If it all worked, you should see something like the following (your exact TensorFlow version may differ):

TensorFlow has access to the following devices:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
TensorFlow version: 2.5.0
  11. To see if it really worked, try running one of the notebooks above end to end!

And then compare your results to the benchmarks above.
