Template repository for building PyTorch from source on any version of PyTorch, CUDA, and cuDNN.

Overview

The Ultimate PyTorch Source-Build Template


Translations: 한국어

TL;DR

PyTorch built from source can be x4 faster than a naïve PyTorch install. This repository provides a template for building PyTorch pip wheel binaries from source for any PyTorch version on any CUDA version, in any environment. The resulting wheels can be used in any project environment, including local conda environments, on any CUDA-capable GPU.

In addition, a new MLOps paradigm for deep learning development using Docker Compose is proposed here. Hopefully, this method will become best practice in both academia and industry.

Preamble

Recent years have seen tremendous academic effort go into the design and implementation of efficient neural networks to cope with the ever-increasing amount of data on ever-smaller and more efficient devices. Yet, as of the time of writing, most deep learning practitioners are unaware of even the most basic GPU acceleration techniques. Especially in academia, many do not even use Automatic Mixed Precision (AMP), which can reduce memory requirements to 1/4 and increase speeds by x4~5. This is the case even though AMP can be enabled without much hassle using the HuggingFace Accelerate or PyTorch Lightning libraries. The Accelerate library in particular can be integrated into any pre-existing PyTorch project with only a few lines of code.
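
For illustration, the following is a minimal sketch (not code from this template) of how AMP might be enabled with the Accelerate library in an ordinary PyTorch training loop. The model, data, and the mixed_precision flag are stand-ins; older Accelerate releases use a slightly different argument (fp16=True).

import torch
from torch import nn
from accelerate import Accelerator

# Enable mixed precision; the exact argument depends on the Accelerate version.
accelerator = Accelerator(mixed_precision="fp16")

model = nn.Linear(128, 10)                         # stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
data = [(torch.randn(32, 128), torch.randint(0, 10, (32,))) for _ in range(8)]
loader = torch.utils.data.DataLoader(data, batch_size=None)

# Accelerate handles device placement and autocasting for the wrapped objects.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

loss_fn = nn.CrossEntropyLoss()
for inputs, targets in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    accelerator.backward(loss)                     # replaces loss.backward()
    optimizer.step()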

Even the novice who has only just dipped their toes into the mysteries of deep learning knows that more compute is a key ingredient for success. No matter how brilliant the scientist, outperforming a rival with x10 more compute is no mean feat. This template was created with the aim of enabling researchers and engineers without much knowledge of GPUs, CUDA, Docker, etc. to squeeze every last drop of performance from their GPUs using the same hardware and neural networks.

Although Docker images with PyTorch source builds are already available in the official PyTorch Docker Hub repository and the NVIDIA NGC repository, these images have a multitude of other packages installed with them, making it difficult to integrate them into pre-existing projects. Moreover, many practitioners prefer local environments over Docker images.

This project is different from any other. It installs no additional libraries apart from those specified by the user. Even better, the generated wheels can be extracted for use in any environment with no need to use Docker, though the second part of this project provides a docker-compose.yaml file to make using Docker much easier.

If you are among those who could but only yearn for a quicker end to the long hours endured staring at Tensorboard as your models inched past the epochs, this project may be the answer to your woes. When using a source build of PyTorch with the latest version of CUDA, combined with AMP, one may achieve training speeds up to x10 faster than in a naïve PyTorch environment.

I sincerely hope that my project will be of service to practitioners in both academia and industry. Users who find my work beneficial are more than welcome to show their appreciation by starring this repository.

Warning

Before using this template, first check whether you are actually using your GPU!

In most scenarios, slow training is caused by an inefficient Extract, Transform, Load (ETL) pipeline. Training is slow because the data is not getting to the GPU fast enough, not because the GPU is running slowly. Run watch nvidia-smi to check whether GPU utilization is high enough to justify compute optimizations. If GPU utilization is low or peaks sporadically, first design an efficient ETL pipeline. Otherwise, faster compute will not help very much as it will not be the bottleneck.

See https://www.tensorflow.org/guide/data_performance for a guide on designing an efficient ETL pipeline.

The NVIDIA DALI library may also be helpful. The DALI PyTorch plugin provides an API for efficient ETL pipelines in PyTorch.
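
As a point of reference, the sketch below (general PyTorch usage, not part of this template) shows the DataLoader settings that most often relieve ETL bottlenecks; the dataset is a random stand-in.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Random stand-in dataset; replace with the real dataset and augmentations.
dataset = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))

loader = DataLoader(
    dataset,
    batch_size=64,
    shuffle=True,
    num_workers=4,            # parallel CPU workers for loading and augmentation
    pin_memory=True,          # page-locked host memory for faster host-to-GPU copies
    persistent_workers=True,  # keep workers alive between epochs (PyTorch 1.7+)
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for images, labels in loader:
    # non_blocking=True overlaps the copy with compute when pin_memory=True.
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward/backward pass here ...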

Introduction

To use this template for a new project, press the green Use this template button at the top of the repository page. This is more convenient than forking or cloning this repository. Delete any unnecessary files and start working on your project.

The first part of the README will explain the purpose of the Dockerfile and the advantages of using a custom source build of PyTorch. The second part proposes a new paradigm for deep learning development using Docker Compose.

PyTorch built from source can be much faster than PyTorch installed from pip/conda, but building from source is an arduous and bug-prone process.

This repository is a highly modular template to build any version of PyTorch from source on any version of CUDA. It provides an easy-to-use Dockerfile that can be integrated into any Linux-based image or project.

For researchers unfamiliar with Docker, the generated wheel files, located in /tmp/dist/, can be extracted to install PyTorch in their local environments. Windows users may also use this project via WSL.

A Makefile is provided both as an interface for easy use and as a tutorial for building custom images. A docker-compose.yaml file is also provided as a simple MLOps system. It provides a convenient interactive development experience using Docker. See here to get started with Docker Compose on your system.

The speed gains from this template come from the following factors:

  1. Using the latest version of CUDA and associated libraries (cuDNN, cuBLAS, etc.).
  2. Using a source build made specifically for the target machine with the latest software customizations instead of a build that must be compatible with different hardware and software environments.
  3. Using the latest version of PyTorch and subsidiary libraries. Many users do not update their PyTorch version because of compatibility issues with their pre-existing environment.
  4. Informing users on where to look for solutions to their speed problems (this may be the most important factor).

Combined with techniques such as AMP and cuDNN benchmarking, computational throughput can be increased dramatically (possibly x10) on the same hardware.
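
As an illustration, a training step using the native PyTorch AMP API together with cuDNN benchmark mode might look like the sketch below (illustrative only; the model, optimizer, and loss function are placeholders supplied by the caller).

import torch

torch.backends.cudnn.benchmark = True   # let cuDNN autotune convolution algorithms

scaler = torch.cuda.amp.GradScaler()    # scales the loss to avoid fp16 underflow

def train_step(model, optimizer, loss_fn, inputs, targets):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():     # run the forward pass in mixed precision
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.detach()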

Even if you do not wish to use Docker in your project, you may still find this template useful.

The wheel files generated by the build can be used in any Python environment with no dependency on Docker.

This project can thus be used to generate custom wheel files, providing dramatic compute speedups for any environment (conda, pip, etc.).

Quickstart

This project is a template, and users are expected to customize it to fit their needs. Users are free to customize the train stage of the Dockerfile as they please. However, do not change the build stages unless absolutely necessary as this will cause a build cache miss. If a new package must be built, add a new build layer.

The code is assumed to be running on a Linux host with the necessary NVIDIA drivers and a recent version of Docker & Docker Compose V2 pre-installed. If this is not the case, install these first. Older versions may not be compatible with this project. The NVIDIA driver is an especially common source of errors. Please check the compatibility matrix to verify that your driver version is compatible with your GPU hardware and the CUDA version of the image.

To build a training image, first edit the Dockerfile train stage and requirements.txt file to include desired packages from apt/conda/pip.

Then, visit https://developer.nvidia.com/cuda-gpus to find the Compute Capability (CC) of the target GPU device.

Finally, run make all CC=TARGET_CC(s).

Examples

(1) make all CC="8.6" for RTX 3090, (2) make all CC="7.5 8.6" for both RTX 2080Ti and RTX 3090 (building for many GPU CCs will lengthen build times).

This will result in an image, pytorch_source:train, which can be used for training. Note that the image can be built with CCs for devices that are not present on the build machine. For example, if the image must be used on an RTX 2080Ti machine but the user only has an RTX 3090, the user can set CC="7.5" to enable the image to run on the RTX 2080Ti GPU. See https://pytorch.org/docs/stable/cpp_extension.html for an in-depth guide on how to set TORCH_CUDA_ARCH_LIST, which is specified by CC in the Makefile.
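
If a working PyTorch installation is already available on the target machine, the Compute Capability can also be queried programmatically instead of looking it up on the website. A small sketch, assuming a visible CUDA device:

import torch

for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    print(f"GPU {i}: {torch.cuda.get_device_name(i)} -> CC {major}.{minor}")
# An RTX 3090 prints CC 8.6, matching make all CC="8.6".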

Makefile Explanation

The provided Makefile is designed to simplify the user experience. Many practitioners use custom shell scripts for their environment setup. However, this often leads to a clutter of script files that only the author knows how to use, and which even the author forgets after a while. The Makefile gathers all instructions and environment variables into a single file, making project management much simpler.

The first image to be created is pytorch_source:build_install, which contains all packages necessary for the build. The installation image is created separately to cache downloads.

The second image is pytorch_source:build_torch-$(PYTORCH_VERSION_TAG) (by default), which contains the wheels for PyTorch, TorchVision, TorchText, and TorchAudio. The second image exists merely to cache the build process artifacts. No programs or environment variables will be available, only the artifacts. Any attempt to run python or any other program in this image will therefore fail.

If you do not wish to use Docker and would like to only extract the .whl wheel files for a pip install on your environment, the generated wheel files can be found in the /tmp/dist directory.

The final image is pytorch_source:train, which is the image to be used for actual training. It relies on the previous stages only for the build artifacts (wheels, etc.) and nothing else. This makes it very simple to create separate training images optimized for different environments and GPU devices.

Because PyTorch has already been built, the training image only needs to download the remaining apt/conda/pip packages. Caching is also implemented to speed up even this process.

Timezone Settings

International users may find this section helpful.

The train image has its timezone set by the TZ variable using the tzdata package. The default timezone is Asia/Seoul but this can be changed by specifying the TZ variable when calling make. Use IANA timezone names to specify the desired timezone.

Example: make all CC="8.6" TZ=America/Los_Angeles uses L.A. time on the training image.

N.B. Only the training image has timezone settings. The installation and build images do not use timezone information.

In addition, the training image has apt and pip installation URLs updated for Korean users. If you wish to speed up your installs, please find URLs optimized for your location, though the installation caches may make this unnecessary.

Specific PyTorch Version

PyTorch subsidiary libraries only work with matching versions of PyTorch.

To change the version of PyTorch, set the PYTORCH_VERSION_TAG, TORCHVISION_VERSION_TAG, TORCHTEXT_VERSION_TAG, and TORCHAUDIO_VERSION_TAG variables to matching versions.

The *_TAG variables must be GitHub tags or branch names of those repositories. Visit the GitHub repositories of each library to find the appropriate tags.

Example: To build on an RTX 3090 GPU with PyTorch 1.9.1, use the following command:

make all CC="8.6" PYTORCH_VERSION_TAG=v1.9.1 TORCHVISION_VERSION_TAG=v0.10.1 TORCHTEXT_VERSION_TAG=v0.10.1 TORCHAUDIO_VERSION_TAG=v0.9.1.

The resulting image, pytorch_source:train, can be used for training with PyTorch 1.9.1 on GPUs with Compute Capability 8.6.

Multiple Training Images

To use multiple training images on the same host, give a different name to TRAIN_NAME, which has a default value of train.

New training images can be created without having to rebuild PyTorch if the same build image is used for different training images. Creating new training images takes only a few minutes.

This is useful for the following use cases.

  1. Allowing different users with different UID/GIDs to use separate training images.
  2. Using different versions of the final training image with different library installations and configurations.
  3. Using this template for multiple PyTorch projects, each with different libraries and settings.

For example, if pytorch_source:build_torch-v1.9.1 has already been built, Alice and Bob would use the following commands to create separate images.

Alice: make build-train CC="8.6" TORCH_NAME=build_torch-v1.9.1 PYTORCH_VERSION_TAG=v1.9.1 TORCHVISION_VERSION_TAG=v0.10.1 TORCHTEXT_VERSION_TAG=v0.10.1 TORCHAUDIO_VERSION_TAG=v0.9.1 TRAIN_NAME=train_alice

Bob: make build-train CC="8.6" TORCH_NAME=build_torch-v1.9.1 PYTORCH_VERSION_TAG=v1.9.1 TORCHVISION_VERSION_TAG=v0.10.1 TORCHTEXT_VERSION_TAG=v0.10.1 TORCHAUDIO_VERSION_TAG=v0.9.1 TRAIN_NAME=train_bob

This way, Alice's image would have her UID/GID while Bob's image would have his UID/GID. This procedure is necessary because training images have their users set during the build. Also, different users may install different libraries in their training images. Their environment variables and other settings may also be different.

Word of Caution

When using build images such as pytorch_source:build_torch-v1.9.1 as a build cache for creating new training images, the user must re-specify all build arguments (variables specified by ARG and ENV using --build-arg) of all previous layers.

Otherwise, the default values for these arguments will be given to the Dockerfile and a cache miss will occur because of the different input values. This will both waste time rebuilding previous layers and, more importantly, cause inconsistency in the training images due to environment mismatch.

This applies to the docker-compose.yaml file as well. All arguments given to the Dockerfile during the build must be re-specified, including default values present in the Makefile but not in the Dockerfile, such as the version tags.

If Docker starts to rebuild layers that you have already built, suspect that build arguments have been specified incorrectly.

See https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache for more information.

Users must set BUILDKIT_INLINE_CACHE=1 during the image build to use it as a cache later. See https://docs.docker.com/engine/reference/commandline/build/#specifying-external-cache-sources for more information.

Advanced Usage

The Makefile provides the *-full commands for advanced usage.

make all-full CC=YOUR_GPU_CC TRAIN_NAME=full will create pytorch_source:build_install-ubuntu18.04-cuda10.2-cudnn8-py3.9, pytorch_source:build_torch-$(PYTORCH_VERSION_TAG)-ubuntu18.04-cuda10.2-cudnn8-py3.9, and pytorch_source:full by default.

The default images shown above can be used for training/deployment on CUDA 10 devices such as the GTX 1080Ti.

Also, the *-clean commands are provided to check for cache reliance on previous builds.

Specific CUDA Version

Set CUDA_VERSION, CUDNN_VERSION, and MAGMA_VERSION to change CUDA versions. PYTHON_VERSION may also be changed if necessary.

This will create a build image that can be used as a cache to create training images with the build-train command.

Also, the extensive use of caching in the Dockerfile means that the second build is much faster than the first build. This may be advantageous if many images must be created for multiple PyTorch/CUDA versions.

Specific Linux Distro

CentOS and UBI images can be created with only minor edits to the Dockerfile. Read the Dockerfile for full instructions.

Set the LINUX_DISTRO and DISTRO_VERSION arguments afterwards.

Windows

Windows users may use this template by updating to Windows 11 and installing Windows Subsystem for Linux (WSL). WSL on Windows 11 gives a similar experience to using native Linux.

This project has been tested on Windows 11 WSL with the Windows CUDA driver and Docker Desktop for Windows. There is no need to install a separate WSL CUDA driver or Docker for Linux inside WSL.

N.B. Windows Security real-time protection causes significant slowdown if enabled. Disable any active antivirus programs on Windows for best performance. However, this will create obvious security risks.

Interactive Development & MLOps with Docker Compose

Raison d'Être

The purpose of this section is to introduce a new paradigm for deep learning development. I hope that, eventually, using Docker Compose for deep learning development will become best practice.

Developing in local environments with conda or pip is commonplace in the deep learning community. However, this risks rendering the development environment, and the code meant to run on it, unreproducible. This is a serious detriment to scientific progress that many readers of this article will have experienced at first-hand.

Docker containers are the standard method for providing reproducible programs across different computing environments. They create isolated environments where programs can run without interference from the host or from one another. See https://www.docker.com/resources/what-container for details.

But in practice, Docker containers are often misused. Containers are meant to be transient. Best practice dictates that a new container be created for each run. This, however, is very inconvenient for development, especially for deep learning applications, where new libraries must constantly be installed and bugs are often only evident at runtime. This leads many researchers to develop inside interactive containers. Docker users often have run.sh files with commands such as docker run -v my_data:/mnt/data -p 8080:22 -t --name my_container my_image:latest /bin/bash (look familiar, anyone?) and use SSH to connect to running containers. VSCode also provides a remote development mode to code inside containers.

The problem with this approach is that these interactive containers become just as unreproducible as local development environments. A running container cannot connect to a new port or attach a new volume. But if the computing environment within the container was created over several months of installs and builds, the only way to keep it is to save the container as an image and create a new container from the saved image. After a few iterations of this process, the resulting images become bloated and no less scrambled than the local environments that they were meant to replace.

Problems become even more evident when preparing for deployment. MLOps, defined as a set of practices that aims to deploy and maintain machine learning models reliably and efficiently, has gained enormous popularity of late as many practitioners have come to realize the importance of continuously maintaining ML systems long after the initial development phase ends.

However, bad practices such as those mentioned above mean that much coffee has been spilled turning research code into anything resembling a production-ready product. Often, even the original developers cannot retrain the same model after a few months. Many firms thus have entire teams dedicated to model translation, a huge expenditure.

To alleviate these problems, I propose the use of Docker Compose as a simple MLOps solution. Using Docker and Docker Compose, the entire training environment can be reproduced. Compose has not yet caught on in the deep learning community, possibly because it is usually advertised as a multi-container solution. This is a misunderstanding as it can be used for single-container development just as well.

A docker-compose.yaml file is provided for easy management of containers. Using the provided docker-compose.yaml file will create an interactive environment, providing a programming experience very similar to using a terminal on a remote server. Integrations with popular IDEs (PyCharm, VSCode) are also available. Moreover, it also allows the user to specify settings for both build and run, removing the need to manage the environment with custom shell scripts. Connecting a new volume is as simple as removing the current container, adding a line in the docker-compose.yaml/Dockerfile file, then creating a new container from the same image. Build caches allow new images to be built very quickly, removing another barrier to Docker adoption, the long initial build time. For more information on Compose, visit the documentation.

Docker Compose can also be used directly for deployment with swarm mode, which is useful for small-scale deployments. See https://docs.docker.com/engine/swarm for documentation. If and when large-scale deployment with Kubernetes becomes necessary, having used Docker from the very beginning will accelerate the development process and smooth the path to MLOps adoption. Accelerating time-to-market by streamlining the development process is a competitive edge for any firm, whether lean startup or tech titan.

With luck, the techniques I propose here will enable the deep learning community to "write once, train anywhere". But even if I fail in persuading the majority of users of the merits of my method, I may still spare many a hapless grad student from the sisyphean labor of setting up their conda environment, only to have it crash and burn right before their paper submission is due.

Usage

Docker images created by the Makefile are fully compatible with the docker-compose.yaml file. Do not erase them when using Docker Compose.

Initial Setup

If this is your first time using this project, follow these steps:

  1. Install Docker Compose V2 for Linux as described in https://docs.docker.com/compose/cli-command/#install-on-linux. Visit the website for the latest installation information. Installation does not require root permissions. Please check the version and architecture tags in the URL before installing. The following commands will install Docker Compose V2 (v2.1.0, Linux x86_64) for a single user.
mkdir -p ~/.docker/cli-plugins/
curl -SL https://github.com/docker/compose/releases/download/v2.1.0/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose

The instructions above are for Linux hosts. WSL users should instead enable "Use Docker Compose V2" on Docker Desktop for Windows.

  2. Run make env on the terminal to create a basic .env file containing the UID and GID. Environment variables can be saved in a .env file placed in the project root, allowing different projects and different users to set their own variables as required. Then read the docker-compose.yaml file to fill in any extra variables, and edit docker-compose.yaml as necessary for your project. Feel free to change service names, hostnames, etc. for different projects and configurations.

  3. Run docker compose up -d --build train or docker compose up -d --build full. The train service corresponds to the default make all ... build while the full service corresponds to the make all-full ... build. If you have already run make all ... or make all-full ..., check that the docker-compose.yaml file has the same configurations as the make command used to create the Docker images. Otherwise, a cache miss will occur, rebuilding the image with the new configurations.

  4. After docker compose up -d --build SERVICE_NAME has finished, if you have not yet run make all(-full) ..., run the corresponding make build with the same settings as those in the docker-compose.yaml and .env files. This will save the build cache as images, preventing it from being cleared by the system later on. If no cache miss occurs, this will only take a few minutes.

  5. Run docker compose exec SERVICE_NAME zsh and start coding.

General Usage

Using Docker Compose V2 (see https://docs.docker.com/compose/cli-command), follow the steps below, where train is the default service name in the provided docker-compose.yaml file.

  1. Read docker-compose.yaml and set variables in the .env file (first time only).
  2. docker compose up -d train
  3. docker compose exec train zsh

This will open an interactive shell with settings specified by the train service in the docker-compose.yaml file.

Example .env file for RTX 3090 GPUs:

UID=1000
GID=1000
CC=8.6

This is extremely convenient for managing reproducible development environments. For example, if a new pip or apt package must be installed for the project, users can simply edit the train layer of the Dockerfile by adding the package to the apt-get install or pip install commands, then run the following command:

docker compose up -d --build train.

This will remove the current train session, rebuild the image, and start a new train session. It will not, however, rebuild PyTorch (assuming no cache miss occurs). Users thus need only wait a few minutes for the additional downloads, which are accelerated by caching and fast mirror URLs.

To stop and restart a service after editing the Dockerfile or docker-compose.yaml file, simply run docker compose up -d --build train again.

To stop services and remove containers, use the following command:

docker compose down.

Users with remote servers may use Docker contexts (see https://docs.docker.com/engine/context/working-with-contexts) to access their containers from their local environments. For more information on Docker Compose, see https://github.com/compose-spec/compose-spec/blob/master/spec.md. For more information on Docker Compose CLI commands, see https://docs.docker.com/compose/reference.

Also, if an error occurs because BuildKit is not available, add COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 to any docker compose commands being used.

Tip

The .env file does not work with the Makefile by default. However, typing in the configurations for each run can be tedious. To use the .env file for the make commands, use the following technique to give all the variables in the .env file to the make command.

make COMMAND $(tr '\n' ' ' < .env)

Example: make all-full $(tr '\n' ' ' < .env).

Compose as Best Practice

Docker Compose is a far superior option to using custom shell scripts for each environment. Not only does it gather all variables and commands for both build and run into a single file, but its native integration with Docker means that it makes complicated Docker build/run setups simple to implement.

I wish to emphasize that using Docker Compose this way is a general-purpose technique that does not depend on anything about this project. As an example, an image from the NVIDIA NGC PyTorch repository has been used as the base image in ngc.Dockerfile. The NVIDIA NGC PyTorch images contain many optimizations for the latest GPU architectures and provide a multitude of pre-installed machine learning libraries. For those starting new projects, and thus with no dependencies, using the latest NGC image is recommended.

To use the NGC images, use the following commands:

  1. docker compose up -d ngc
  2. docker compose exec ngc zsh

The only difference from the previous example is the service name.

Using Compose with PyCharm and VSCode

The Docker Compose container environment can be used with popular Python IDEs, not just in the terminal. PyCharm and Visual Studio Code, both very popular in the deep learning community, are compatible with Docker Compose.

  1. If you are using a remote server, first create a Docker context to connect your local Docker with the remote Docker.

  2. PyCharm (Professional only): Both Docker and Docker Compose are natively available as Python interpreters. See tutorials for Docker and Compose for details. JetBrains Gateway can also be used to connect to running containers. JetBrains Fleet IDE, with much more advanced features, will become available in early 2022. N.B. PyCharm Professional and other JetBrains IDEs are available free of charge to anyone with a valid university e-mail address.

  3. VSCode: Install the Remote Development extension pack. See tutorial for details.

Known Issues

  1. Connecting to a running container by ssh will remove all variables set by ENV. This is because sshd starts a new environment, wiping out all previous variables. Using docker/docker compose to enter containers is strongly recommended.

  2. Building on CUDA 11.4.x is not available as of December 2021 because magma-cuda114 has not been released on the pytorch channel of anaconda. Bizarrely, magma-cuda115 is available. Users may attempt building with older versions of magma-cuda or try the version available on conda-forge. A source build of magma would be welcome as a pull request. The NVIDIA NGC images use NVIDIA's in-house build of magma.

  3. Ubuntu 16.04 build fails because the default git installed by apt on Ubuntu 16.04 does not support the --jobs flag. Remove the --jobs 0 argument from the git clone commands to make it work. Also, PyTorch v1.9+ may not build on Ubuntu 16.04. Lower the version tag to v1.8.2 to build. This project will not be modified to accommodate Ubuntu 16.04 builds as Xenial Xerus has already reached EOL.

  4. If the Docker Compose build fails with an error message that BuildKit is required, add COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 in front of the command. This issue occurs because Docker Compose V2 is not configured to use BuildKit on the host by default. One can tell if BuildKit is enabled by checking if the terminal outputs are in color. BuildKit outputs are colored blue, whereas the old Docker has no color. Example command: COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose up -d --build train.

  5. WSL users using Compose must disable the ipc: host setting, as WSL does not support this option.

  6. torch.cuda.is_available() will return a ... UserWarning: CUDA initialization:... error, or the image will simply not start, if the CUDA driver on the host is incompatible with the CUDA version on the Docker image. Either upgrade the host CUDA driver or downgrade the CUDA version of the image. Check the compatibility matrix to see whether the host CUDA driver is compatible with the desired version of CUDA, and check that the CUDA driver has been configured correctly on the host. The driver version can be found with the nvidia-smi command. A short diagnostic sketch is given after this list.
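
The following is a small diagnostic sketch (not part of this template) that prints the version information needed to compare against the host driver and the compatibility matrix:

import torch

print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))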

Desiderata

  1. MORE STARS. If you are reading this, please star this repository immediately. No Contribution Without Appreciation!

  2. CentOS and UBI images have not been implemented yet. As they require only simple modifications, pull requests implementing them are very much welcome.

  3. Translations into other languages and updates to existing translations are welcome. Please make a separate LANG.README.md file and create a PR.

  4. A method to build magma from source would be greatly appreciated. Although the code for building the magma package is available at https://github.com/pytorch/builder/tree/main/magma, it is updated several months after a new CUDA version is released. A source build as a layer on the image would be welcome.

  5. Please feel free to share this project! I wish you good luck and happy coding!

Comments
  • Error during build

    Hi,

    I cloned the GitHub repository on the main branch and then executed the command make all-full CC="8.0" TRAIN_NAME=train_cu102, but I got the following error:

    DOCKER_BUILDKIT=1 docker build \
    	--target build-install \
    	--tag pytorch_source:build_install-ubuntu18.04-cuda10.2-cudnn8-py3.9 \
    	--build-arg LINUX_DISTRO=ubuntu \
    	--build-arg DISTRO_VERSION=18.04 \
    	--build-arg CUDA_VERSION=10.2 \
    	--build-arg CUDNN_VERSION=8 \
    	--build-arg MAGMA_VERSION=102   \
    	--build-arg PYTHON_VERSION=3.9 \
    	--build-arg BUILDKIT_INLINE_CACHE=1 \
    	- < Dockerfile
    [+] Building 0.7s (21/21) FINISHED                                                                       
     => [internal] load build definition from Dockerfile                                                0.0s
     => => transferring dockerfile: 12.61kB                                                             0.0s
     => [internal] load .dockerignore                                                                   0.0s
     => => transferring context: 2B                                                                     0.0s
     => resolve image config for docker.io/docker/dockerfile:1.3.0-labs                                 0.4s
     => CACHED docker-image://docker.io/docker/dockerfile:1.3.0-labs@sha256:03ca0e50aa4b6e76365fa9a560  0.0s
     => [internal] load build definition from Dockerfile                                                0.0s
     => [internal] load .dockerignore                                                                   0.0s
     => [internal] load metadata for docker.io/nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04                0.0s
     => [build-base-ubuntu 1/3] FROM docker.io/nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04                0.0s
     => CACHED [build-base-ubuntu 2/3] RUN rm -f /etc/apt/apt.conf.d/docker-clean;     echo 'Binary::a  0.0s
     => CACHED [build-base-ubuntu 3/3] RUN --mount=type=cache,id=apt-cache-build,target=/var/cache/apt  0.0s
     => CACHED [build-base 1/3] RUN /usr/sbin/update-ccache-symlinks                                    0.0s
     => CACHED [build-base 2/3] RUN mkdir /opt/ccache && ccache --set-config=cache_dir=/opt/ccache &&   0.0s
     => CACHED [build-base 3/3] RUN curl -fsSL -v -o ~/miniconda.sh -O  https://repo.anaconda.com/mini  0.0s
     => CACHED [build-install 1/6] RUN --mount=type=cache,id=conda-build,target=/opt/conda/pkgs     co  0.0s
     => CACHED [build-install 2/6] WORKDIR /opt                                                         0.0s
     => CACHED [build-install 3/6] RUN git clone --recursive --jobs 0 https://github.com/pytorch/pytor  0.0s
     => CACHED [build-install 4/6] RUN git clone --recursive --jobs 0 https://github.com/pytorch/visio  0.0s
     => CACHED [build-install 5/6] RUN git clone --recursive --jobs 0 https://github.com/pytorch/text.  0.0s
     => CACHED [build-install 6/6] RUN git clone --recursive --jobs 0 https://github.com/pytorch/audio  0.0s
     => exporting to image                                                                              0.0s
     => => exporting layers                                                                             0.0s
     => => writing image sha256:81330a777d1f8a68c315ec6f1859c16ea71076ebd84a5c0bbd88fa6a333949b5        0.0s
     => => naming to docker.io/library/pytorch_source:build_install-ubuntu18.04-cuda10.2-cudnn8-py3.9   0.0s
     => exporting cache                                                                                 0.0s
     => => preparing build cache for export                                                             0.0s
    DOCKER_BUILDKIT=1 docker build \
    	--target train-builds \
    	--cache-from=pytorch_source:build_install-ubuntu18.04-cuda10.2-cudnn8-py3.9 \
    	--tag pytorch_source:build_torch-v1.9.1-ubuntu18.04-cuda10.2-cudnn8-py3.9 \
    	--build-arg TORCH_CUDA_ARCH_LIST="8.0" \
    	--build-arg PYTORCH_VERSION_TAG=v1.9.1 \
    	--build-arg TORCHVISION_VERSION_TAG=v0.10.1 \
    	--build-arg TORCHTEXT_VERSION_TAG=v0.10.1 \
    	--build-arg TORCHAUDIO_VERSION_TAG=v0.9.1 \
    	--build-arg LINUX_DISTRO=ubuntu \
    	--build-arg DISTRO_VERSION=18.04 \
    	--build-arg CUDA_VERSION=10.2 \
    	--build-arg CUDNN_VERSION=8 \
    	--build-arg MAGMA_VERSION=102   \
    	--build-arg PYTHON_VERSION=3.9 \
    	--build-arg BUILDKIT_INLINE_CACHE=1 \
    	- < Dockerfile
    [+] Building 29.1s (24/37)                                                                               
     => [internal] load build definition from Dockerfile                                                0.0s
     => => transferring dockerfile: 12.61kB                                                             0.0s
     => [internal] load .dockerignore                                                                   0.0s
     => => transferring context: 2B                                                                     0.0s
     => resolve image config for docker.io/docker/dockerfile:1.3.0-labs                                 0.5s
     => CACHED docker-image://docker.io/docker/dockerfile:1.3.0-labs@sha256:03ca0e50aa4b6e76365fa9a560  0.0s
     => [internal] load build definition from Dockerfile                                                0.0s
     => [internal] load .dockerignore                                                                   0.0s
     => [internal] load metadata for docker.io/nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04                0.0s
     => importing cache manifest from pytorch_source:build_install-ubuntu18.04-cuda10.2-cudnn8-py3.9    0.0s
     => [train-builds 1/5] FROM docker.io/nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04                     0.0s
     => CACHED [build-base-ubuntu 2/3] RUN rm -f /etc/apt/apt.conf.d/docker-clean;     echo 'Binary::a  0.0s
     => CACHED [build-base-ubuntu 3/3] RUN --mount=type=cache,id=apt-cache-build,target=/var/cache/apt  0.0s
     => CACHED [build-base 1/3] RUN /usr/sbin/update-ccache-symlinks                                    0.0s
     => CACHED [build-base 2/3] RUN mkdir /opt/ccache && ccache --set-config=cache_dir=/opt/ccache &&   0.0s
     => CACHED [build-base 3/3] RUN curl -fsSL -v -o ~/miniconda.sh -O  https://repo.anaconda.com/mini  0.0s
     => CACHED [build-install 1/6] RUN --mount=type=cache,id=conda-build,target=/opt/conda/pkgs     co  0.0s
     => CACHED [build-install 2/6] WORKDIR /opt                                                         0.0s
     => CACHED [build-install 3/6] RUN git clone --recursive --jobs 0 https://github.com/pytorch/pytor  0.0s
     => CACHED [build-install 4/6] RUN git clone --recursive --jobs 0 https://github.com/pytorch/visio  0.0s
     => CACHED [build-install 5/6] RUN git clone --recursive --jobs 0 https://github.com/pytorch/text.  0.0s
     => CACHED [build-install 6/6] RUN git clone --recursive --jobs 0 https://github.com/pytorch/audio  0.0s
     => CACHED [train-builds 2/5] COPY --from=build-install /opt/conda /opt/conda                       0.0s
     => CACHED [build-torch 1/4] WORKDIR /opt/pytorch                                                   0.0s
     => CACHED [build-torch 2/4] RUN if [ -n v1.9.1 ]; then     git checkout v1.9.1 &&     git submodu  0.0s
     => ERROR [build-torch 3/4] RUN --mount=type=cache,target=/opt/ccache     USE_CUDA=1 USE_CUDNN=1   28.3s
    ------                                                                                                   
     > [build-torch 3/4] RUN --mount=type=cache,target=/opt/ccache     USE_CUDA=1 USE_CUDNN=1     TORCH_NVCC_FLAGS=-Xfatbin -compress-all     TORCH_CUDA_ARCH_LIST=8.0     CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"     python setup.py bdist_wheel -d /tmp/dist:                                                     
    #24 0.500 Building wheel torch-1.9.0a0+gitdfbd030                                                        
    #24 0.534 -- Building version 1.9.0a0+gitdfbd030                                                         
    #24 0.549 cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/opt/pytorch/torch -DCMAKE_PREFIX_PATH=/opt/conda/bin/../ -DNUMPY_INCLUDE_DIR=/opt/conda/lib/python3.9/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/opt/conda/bin/python -DPYTHON_INCLUDE_DIR=/opt/conda/include/python3.9 -DPYTHON_LIBRARY=/opt/conda/lib/libpython3.9.a -DTORCH_BUILD_VERSION=1.9.0a0+gitdfbd030 -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_NUMPY=True /opt/pytorch
    #24 0.647 -- The CXX compiler identification is GNU 7.5.0
    #24 0.716 -- The C compiler identification is GNU 7.5.0
    #24 0.729 -- Detecting CXX compiler ABI info
    #24 0.810 -- Detecting CXX compiler ABI info - done
    #24 0.825 -- Check for working CXX compiler: /usr/bin/c++ - skipped
    #24 0.825 -- Detecting CXX compile features
    #24 0.826 -- Detecting CXX compile features - done
    #24 0.829 -- Detecting C compiler ABI info
    #24 0.900 -- Detecting C compiler ABI info - done
    #24 0.913 -- Check for working C compiler: /usr/bin/cc - skipped
    #24 0.913 -- Detecting C compile features
    #24 0.915 -- Detecting C compile features - done
    #24 0.916 -- Not forcing any particular BLAS to be found
    #24 0.926 -- Performing Test COMPILER_WORKS
    #24 1.001 -- Performing Test COMPILER_WORKS - Success
    #24 1.001 -- Performing Test SUPPORT_GLIBCXX_USE_C99
    #24 1.211 -- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success
    #24 1.212 -- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
    #24 1.409 -- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
    #24 1.410 -- std::exception_ptr is supported.
    #24 1.410 -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
    #24 1.446 -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed
    #24 1.446 -- Turning off deprecation warning due to glog.
    #24 1.446 -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
    #24 1.696 -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
    #24 1.696 -- Current compiler supports avx2 extension. Will build perfkernels.
    #24 1.697 -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
    #24 1.904 -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
    #24 1.904 -- Current compiler supports avx512f extension. Will build fbgemm.
    #24 1.905 -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
    #24 1.988 -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success
    #24 1.990 -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
    #24 2.068 -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success
    #24 2.070 -- Performing Test COMPILER_SUPPORTS_RDYNAMIC
    #24 2.152 -- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Success
    #24 2.170 -- Building using own protobuf under third_party per request.
    #24 2.170 -- Use custom protobuf build.
    #24 2.171 -- 
    #24 2.171 -- 3.11.4.0
    #24 2.172 -- Looking for pthread.h
    #24 2.248 -- Looking for pthread.h - found
    #24 2.248 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
    #24 2.327 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
    #24 2.327 -- Check if compiler accepts -pthread
    #24 2.402 -- Check if compiler accepts -pthread - yes
    #24 2.404 -- Found Threads: TRUE  
    #24 2.405 -- Performing Test protobuf_HAVE_BUILTIN_ATOMICS
    #24 2.512 -- Performing Test protobuf_HAVE_BUILTIN_ATOMICS - Success
    #24 2.529 -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/opt/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
    #24 2.530 -- Trying to find preferred BLAS backend of choice: MKL
    #24 2.531 -- MKL_THREADING = OMP
    #24 2.532 -- Looking for sys/types.h
    #24 2.604 -- Looking for sys/types.h - found
    #24 2.605 -- Looking for stdint.h
    #24 2.672 -- Looking for stdint.h - found
    #24 2.673 -- Looking for stddef.h
    #24 2.743 -- Looking for stddef.h - found
    #24 2.743 -- Check size of void*
    #24 2.816 -- Check size of void* - done
    #24 3.083 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 3.083   The package name passed to `find_package_handle_standard_args` (OpenMP_C)
    #24 3.083   does not match the name of the calling package (OpenMP).  This can lead to
    #24 3.083   problems in calling code that expects `find_package` result variables
    #24 3.083   (e.g., `_FOUND`) to follow a certain pattern.
    #24 3.083 Call Stack (most recent call first):
    #24 3.083   cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
    #24 3.083   cmake/Modules/FindMKL.cmake:213 (FIND_PACKAGE)
    #24 3.083   cmake/Modules/FindMKL.cmake:307 (CHECK_ALL_LIBRARIES)
    #24 3.083   cmake/Dependencies.cmake:144 (find_package)
    #24 3.083   CMakeLists.txt:621 (include)
    #24 3.083 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 3.083 
    #24 3.167 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 3.167   The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
    #24 3.167   does not match the name of the calling package (OpenMP).  This can lead to
    #24 3.167   problems in calling code that expects `find_package` result variables
    #24 3.167   (e.g., `_FOUND`) to follow a certain pattern.
    #24 3.167 Call Stack (most recent call first):
    #24 3.167   cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
    #24 3.167   cmake/Modules/FindMKL.cmake:213 (FIND_PACKAGE)
    #24 3.167   cmake/Modules/FindMKL.cmake:307 (CHECK_ALL_LIBRARIES)
    #24 3.167   cmake/Dependencies.cmake:144 (find_package)
    #24 3.167   CMakeLists.txt:621 (include)
    #24 3.167 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 3.167 
    #24 3.172 -- Looking for cblas_sgemm
    #24 3.420 -- Looking for cblas_sgemm - found
    #24 3.426 -- MKL libraries: /opt/conda/lib/libmkl_intel_lp64.so;/opt/conda/lib/libmkl_gnu_thread.so;/opt/conda/lib/libmkl_core.so;-fopenmp;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/libdl.so
    #24 3.426 -- MKL include directory: /opt/conda/include
    #24 3.426 -- MKL OpenMP type: GNU
    #24 3.426 -- MKL OpenMP library: -fopenmp
    #24 3.449 -- The ASM compiler identification is GNU
    #24 3.452 -- Found assembler: /usr/bin/cc
    #24 3.467 -- Brace yourself, we are building NNPACK
    #24 3.471 -- Performing Test NNPACK_ARCH_IS_X86_32
    #24 3.506 -- Performing Test NNPACK_ARCH_IS_X86_32 - Failed
    #24 3.529 -- Found PythonInterp: /opt/conda/bin/python (found version "3.9.7") 
    #24 3.529 -- NNPACK backend is x86-64
    #24 3.553 CMake Deprecation Warning at third_party/googletest/CMakeLists.txt:1 (cmake_minimum_required):
    #24 3.553   Compatibility with CMake < 2.8.12 will be removed from a future version of
    #24 3.553   CMake.
    #24 3.553 
    #24 3.553   Update the VERSION argument <min> value or use a ...<max> suffix to tell
    #24 3.553   CMake that the project does not need compatibility with older versions.
    #24 3.553 
    #24 3.553 
    #24 3.555 CMake Deprecation Warning at third_party/googletest/googlemock/CMakeLists.txt:42 (cmake_minimum_required):
    #24 3.555   Compatibility with CMake < 2.8.12 will be removed from a future version of
    #24 3.555   CMake.
    #24 3.555 
    #24 3.555   Update the VERSION argument <min> value or use a ...<max> suffix to tell
    #24 3.555   CMake that the project does not need compatibility with older versions.
    #24 3.555 
    #24 3.555 
    #24 3.556 CMake Deprecation Warning at third_party/googletest/googletest/CMakeLists.txt:49 (cmake_minimum_required):
    #24 3.556   Compatibility with CMake < 2.8.12 will be removed from a future version of
    #24 3.556   CMake.
    #24 3.556 
    #24 3.556   Update the VERSION argument <min> value or use a ...<max> suffix to tell
    #24 3.556   CMake that the project does not need compatibility with older versions.
    #24 3.556 
    #24 3.556 
    #24 3.586 -- Failed to find LLVM FileCheck
    #24 3.592 -- Found Git: /usr/bin/git (found version "2.17.1") 
    #24 3.614 -- git Version: v1.4.0-505be96a
    #24 3.614 -- Version: 1.4.0
    #24 3.617 -- Performing Test HAVE_CXX_FLAG_STD_CXX11
    #24 3.700 -- Performing Test HAVE_CXX_FLAG_STD_CXX11 - Success
    #24 3.702 -- Performing Test HAVE_CXX_FLAG_WALL
    #24 3.783 -- Performing Test HAVE_CXX_FLAG_WALL - Success
    #24 3.785 -- Performing Test HAVE_CXX_FLAG_WEXTRA
    #24 3.868 -- Performing Test HAVE_CXX_FLAG_WEXTRA - Success
    #24 3.870 -- Performing Test HAVE_CXX_FLAG_WSHADOW
    #24 3.952 -- Performing Test HAVE_CXX_FLAG_WSHADOW - Success
    #24 3.954 -- Performing Test HAVE_CXX_FLAG_WERROR
    #24 4.031 -- Performing Test HAVE_CXX_FLAG_WERROR - Success
    #24 4.033 -- Performing Test HAVE_CXX_FLAG_PEDANTIC
    #24 4.112 -- Performing Test HAVE_CXX_FLAG_PEDANTIC - Success
    #24 4.114 -- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS
    #24 4.197 -- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS - Success
    #24 4.199 -- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32
    #24 4.240 -- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 - Failed
    #24 4.240 -- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL
    #24 4.320 -- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL - Success
    #24 4.322 -- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING
    #24 4.402 -- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING - Success
    #24 4.404 -- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS
    #24 4.484 -- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS - Success
    #24 4.486 -- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING
    #24 4.566 -- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING - Success
    #24 4.568 -- Performing Test HAVE_CXX_FLAG_WD654
    #24 4.598 -- Performing Test HAVE_CXX_FLAG_WD654 - Failed
    #24 4.600 -- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY
    #24 4.634 -- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY - Failed
    #24 4.635 -- Performing Test HAVE_CXX_FLAG_COVERAGE
    #24 4.720 -- Performing Test HAVE_CXX_FLAG_COVERAGE - Success
    #24 4.720 -- Performing Test HAVE_STD_REGEX
    #24 4.720 -- Performing Test HAVE_STD_REGEX
    #24 6.312 -- Performing Test HAVE_STD_REGEX -- success
    #24 6.312 -- Performing Test HAVE_GNU_POSIX_REGEX
    #24 6.312 -- Performing Test HAVE_GNU_POSIX_REGEX
    #24 6.350 -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
    #24 6.350 -- Performing Test HAVE_POSIX_REGEX
    #24 6.350 -- Performing Test HAVE_POSIX_REGEX
    #24 6.543 -- Performing Test HAVE_POSIX_REGEX -- success
    #24 6.543 -- Performing Test HAVE_STEADY_CLOCK
    #24 6.543 -- Performing Test HAVE_STEADY_CLOCK
    #24 6.681 -- Performing Test HAVE_STEADY_CLOCK -- success
    #24 6.714 -- Performing Test COMPILER_SUPPORTS_AVX512
    #24 6.798 -- Performing Test COMPILER_SUPPORTS_AVX512 - Success
    #24 6.802 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 6.802   The package name passed to `find_package_handle_standard_args` (OpenMP_C)
    #24 6.802   does not match the name of the calling package (OpenMP).  This can lead to
    #24 6.802   problems in calling code that expects `find_package` result variables
    #24 6.802   (e.g., `_FOUND`) to follow a certain pattern.
    #24 6.802 Call Stack (most recent call first):
    #24 6.802   cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
    #24 6.802   third_party/fbgemm/CMakeLists.txt:60 (find_package)
    #24 6.802 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 6.802 
    #24 6.803 -- Found OpenMP_C: -fopenmp (found version "4.5") 
    #24 6.803 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 6.803   The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
    #24 6.803   does not match the name of the calling package (OpenMP).  This can lead to
    #24 6.803   problems in calling code that expects `find_package` result variables
    #24 6.803   (e.g., `_FOUND`) to follow a certain pattern.
    #24 6.803 Call Stack (most recent call first):
    #24 6.803   cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
    #24 6.803   third_party/fbgemm/CMakeLists.txt:60 (find_package)
    #24 6.803 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 6.803 
    #24 6.804 -- Found OpenMP_CXX: -fopenmp (found version "4.5") 
    #24 6.804 -- Found OpenMP: TRUE (found version "4.5")  
    #24 6.804 CMake Warning at third_party/fbgemm/CMakeLists.txt:62 (message):
    #24 6.804   OpenMP found! OpenMP_C_INCLUDE_DIRS =
    #24 6.804 
    #24 6.804 
    #24 6.916 CMake Warning at third_party/fbgemm/CMakeLists.txt:142 (message):
    #24 6.916   ==========
    #24 6.916 
    #24 6.916 
    #24 6.916 CMake Warning at third_party/fbgemm/CMakeLists.txt:143 (message):
    #24 6.916   CMAKE_BUILD_TYPE = Release
    #24 6.916 
    #24 6.916 
    #24 6.916 CMake Warning at third_party/fbgemm/CMakeLists.txt:144 (message):
    #24 6.916   CMAKE_CXX_FLAGS_DEBUG is -g
    #24 6.916 
    #24 6.916 
    #24 6.916 CMake Warning at third_party/fbgemm/CMakeLists.txt:145 (message):
    #24 6.916   CMAKE_CXX_FLAGS_RELEASE is -O3 -DNDEBUG
    #24 6.916 
    #24 6.916 
    #24 6.916 CMake Warning at third_party/fbgemm/CMakeLists.txt:146 (message):
    #24 6.916   ==========
    #24 6.916 
    #24 6.916 
    #24 6.923 -- Performing Test __CxxFlag__fno_threadsafe_statics
    #24 7.006 -- Performing Test __CxxFlag__fno_threadsafe_statics - Success
    #24 7.008 -- Performing Test __CxxFlag__fno_semantic_interposition
    #24 7.087 -- Performing Test __CxxFlag__fno_semantic_interposition - Success
    #24 7.088 -- Performing Test __CxxFlag__fmerge_all_constants
    #24 7.172 -- Performing Test __CxxFlag__fmerge_all_constants - Success
    #24 7.174 -- Performing Test __CxxFlag__fno_enforce_eh_specs
    #24 7.254 -- Performing Test __CxxFlag__fno_enforce_eh_specs - Success
    #24 7.258 ** AsmJit Summary **
    #24 7.258    ASMJIT_DIR=/opt/pytorch/third_party/fbgemm/third_party/asmjit
    #24 7.258    ASMJIT_TEST=FALSE
    #24 7.258    ASMJIT_TARGET_TYPE=STATIC
    #24 7.258    ASMJIT_DEPS=pthread;rt
    #24 7.258    ASMJIT_LIBS=asmjit;pthread;rt
    #24 7.258    ASMJIT_CFLAGS=-DASMJIT_STATIC
    #24 7.258    ASMJIT_PRIVATE_CFLAGS=-Wall;-Wextra;-Wconversion;-fno-math-errno;-fno-threadsafe-statics;-fno-semantic-interposition;-DASMJIT_STATIC
    #24 7.258    ASMJIT_PRIVATE_CFLAGS_DBG=
    #24 7.258    ASMJIT_PRIVATE_CFLAGS_REL=-O2;-fmerge-all-constants;-fno-enforce-eh-specs
    #24 7.267 CMake Warning at cmake/Dependencies.cmake:793 (message):
    #24 7.267   Not compiling with NUMA.  Suppress this warning with -DUSE_NUMA=OFF
    #24 7.267 Call Stack (most recent call first):
    #24 7.267   CMakeLists.txt:621 (include)
    #24 7.267 
    #24 7.267 
    #24 7.267 -- Could NOT find Numa (missing: Numa_INCLUDE_DIR Numa_LIBRARIES) 
    #24 7.268 -- Using third party subdirectory Eigen.
    #24 7.291 -- Found PythonInterp: /opt/conda/bin/python (found suitable version "3.9.7", minimum required is "3.0") 
    #24 7.296 -- Found PythonLibs: /opt/conda/lib/libpython3.9.a (found suitable version "3.9.7", minimum required is "3.0") 
    #24 7.301 -- Could NOT find pybind11 (missing: pybind11_DIR)
    #24 7.302 -- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) 
    #24 7.302 -- Using third_party/pybind11.
    #24 7.303 -- pybind11 include dirs: /opt/pytorch/cmake/../third_party/pybind11/include
    #24 7.426 -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) 
    #24 7.506 -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) 
    #24 7.507 -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) 
    #24 7.507     Reason given by package: MPI component 'Fortran' was requested, but language Fortran is not enabled.  
    #24 7.507 
    #24 7.507 CMake Warning at cmake/Dependencies.cmake:1050 (message):
    #24 7.507   Not compiling with MPI.  Suppress this warning with -DUSE_MPI=OFF
    #24 7.507 Call Stack (most recent call first):
    #24 7.507   CMakeLists.txt:621 (include)
    #24 7.507 
    #24 7.507 
    #24 7.509 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 7.509   The package name passed to `find_package_handle_standard_args` (OpenMP_C)
    #24 7.509   does not match the name of the calling package (OpenMP).  This can lead to
    #24 7.509   problems in calling code that expects `find_package` result variables
    #24 7.509   (e.g., `_FOUND`) to follow a certain pattern.
    #24 7.509 Call Stack (most recent call first):
    #24 7.509   cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
    #24 7.509   cmake/Dependencies.cmake:1105 (find_package)
    #24 7.509   CMakeLists.txt:621 (include)
    #24 7.509 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 7.509 
    #24 7.509 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 7.509   The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
    #24 7.509   does not match the name of the calling package (OpenMP).  This can lead to
    #24 7.509   problems in calling code that expects `find_package` result variables
    #24 7.509   (e.g., `_FOUND`) to follow a certain pattern.
    #24 7.509 Call Stack (most recent call first):
    #24 7.509   cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
    #24 7.509   cmake/Dependencies.cmake:1105 (find_package)
    #24 7.509   CMakeLists.txt:621 (include)
    #24 7.509 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 7.509 
    #24 7.510 -- Adding OpenMP CXX_FLAGS: -fopenmp
    #24 7.510 -- Will link against OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/7/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so
    #24 7.523 -- Found CUDA: /usr/local/cuda (found version "10.2") 
    #24 7.525 -- Caffe2: CUDA detected: 10.2
    #24 7.525 -- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
    #24 7.525 -- Caffe2: CUDA toolkit directory: /usr/local/cuda
    #24 7.628 -- Caffe2: Header version is: 10.2
    #24 7.633 -- Found CUDNN: /usr/lib/x86_64-linux-gnu/libcudnn.so  
    #24 7.633 -- Found cuDNN: v8.2.0  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libcudnn.so)
    #24 7.677 -- /usr/local/cuda/lib64/libnvrtc.so shorthash is 08c4863f
    #24 7.678 CMake Warning at cmake/public/utils.cmake:365 (message):
    #24 7.678   In the future we will require one to explicitly pass TORCH_CUDA_ARCH_LIST
    #24 7.678   to cmake instead of implicitly setting it as an env variable.  This will
    #24 7.678   become a FATAL_ERROR in future version of pytorch.
    #24 7.678 Call Stack (most recent call first):
    #24 7.678   cmake/public/cuda.cmake:511 (torch_cuda_get_nvcc_gencode_flag)
    #24 7.678   cmake/Dependencies.cmake:1155 (include)
    #24 7.678   CMakeLists.txt:621 (include)
    #24 7.678 
    #24 7.678 
    #24 7.679 -- Added CUDA NVCC flags for: -gencode;arch=compute_80,code=sm_80
    #24 7.680 CMake Warning at cmake/public/utils.cmake:365 (message):
    #24 7.680   In the future we will require one to explicitly pass TORCH_CUDA_ARCH_LIST
    #24 7.680   to cmake instead of implicitly setting it as an env variable.  This will
    #24 7.680   become a FATAL_ERROR in future version of pytorch.
    #24 7.680 Call Stack (most recent call first):
    #24 7.680   cmake/External/nccl.cmake:13 (torch_cuda_get_nvcc_gencode_flag)
    #24 7.680   cmake/Dependencies.cmake:1288 (include)
    #24 7.680   CMakeLists.txt:621 (include)
    #24 7.680 
    #24 7.680 
    #24 7.698 CMake Warning at cmake/External/nccl.cmake:62 (message):
    #24 7.698   Objcopy version is too old to support NCCL library slimming
    #24 7.698 Call Stack (most recent call first):
    #24 7.698   cmake/Dependencies.cmake:1288 (include)
    #24 7.698   CMakeLists.txt:621 (include)
    #24 7.698 
    #24 7.698 
    #24 7.700 -- Could NOT find CUB (missing: CUB_INCLUDE_DIR) 
    #24 7.703 CMake Warning (dev) at third_party/gloo/CMakeLists.txt:21 (option):
    #24 7.703   Policy CMP0077 is not set: option() honors normal variables.  Run "cmake
    #24 7.703   --help-policy CMP0077" for policy details.  Use the cmake_policy command to
    #24 7.703   set the policy and suppress this warning.
    #24 7.703 
    #24 7.703   For compatibility with older versions of CMake, option is clearing the
    #24 7.703   normal variable 'BUILD_BENCHMARK'.
    #24 7.703 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 7.703 
    #24 7.703 -- Gloo build as SHARED library
    #24 7.711 -- Found CUDA: /usr/local/cuda (found suitable version "10.2", minimum required is "7.0") 
    #24 7.712 -- CUDA detected: 10.2
    #24 7.714 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 7.714   The package name passed to `find_package_handle_standard_args` (NCCL) does
    #24 7.714   not match the name of the calling package (nccl).  This can lead to
    #24 7.714   problems in calling code that expects `find_package` result variables
    #24 7.714   (e.g., `_FOUND`) to follow a certain pattern.
    #24 7.714 Call Stack (most recent call first):
    #24 7.714   third_party/gloo/cmake/Modules/Findnccl.cmake:45 (find_package_handle_standard_args)
    #24 7.714   third_party/gloo/cmake/Dependencies.cmake:128 (find_package)
    #24 7.714   third_party/gloo/CMakeLists.txt:109 (include)
    #24 7.714 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 7.714 
    #24 7.714 -- Found NCCL: /usr/include  
    #24 7.714 -- Determining NCCL version from the header file: /usr/include/nccl.h
    #24 7.714 -- NCCL_MAJOR_VERSION: 2
    #24 7.714 -- Found NCCL (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnccl.so)
    #24 7.733 -- Found CUDA: /usr/local/cuda (found version "10.2") 
    #24 7.779 -- Performing Test UV_LINT_W4
    #24 7.806 -- Performing Test UV_LINT_W4 - Failed
    #24 7.808 -- Performing Test UV_LINT_NO_UNUSED_PARAMETER_MSVC
    #24 7.836 -- Performing Test UV_LINT_NO_UNUSED_PARAMETER_MSVC - Failed
    #24 7.837 -- Performing Test UV_LINT_NO_CONDITIONAL_CONSTANT_MSVC
    #24 7.866 -- Performing Test UV_LINT_NO_CONDITIONAL_CONSTANT_MSVC - Failed
    #24 7.867 -- Performing Test UV_LINT_NO_NONSTANDARD_MSVC
    #24 7.895 -- Performing Test UV_LINT_NO_NONSTANDARD_MSVC - Failed
    #24 7.896 -- Performing Test UV_LINT_NO_NONSTANDARD_EMPTY_TU_MSVC
    #24 7.923 -- Performing Test UV_LINT_NO_NONSTANDARD_EMPTY_TU_MSVC - Failed
    #24 7.924 -- Performing Test UV_LINT_NO_NONSTANDARD_FILE_SCOPE_MSVC
    #24 7.951 -- Performing Test UV_LINT_NO_NONSTANDARD_FILE_SCOPE_MSVC - Failed
    #24 7.953 -- Performing Test UV_LINT_NO_NONSTANDARD_NONSTATIC_DLIMPORT_MSVC
    #24 7.979 -- Performing Test UV_LINT_NO_NONSTANDARD_NONSTATIC_DLIMPORT_MSVC - Failed
    #24 7.981 -- Performing Test UV_LINT_NO_HIDES_LOCAL
    #24 8.006 -- Performing Test UV_LINT_NO_HIDES_LOCAL - Failed
    #24 8.008 -- Performing Test UV_LINT_NO_HIDES_PARAM
    #24 8.034 -- Performing Test UV_LINT_NO_HIDES_PARAM - Failed
    #24 8.036 -- Performing Test UV_LINT_NO_HIDES_GLOBAL
    #24 8.061 -- Performing Test UV_LINT_NO_HIDES_GLOBAL - Failed
    #24 8.063 -- Performing Test UV_LINT_NO_CONDITIONAL_ASSIGNMENT_MSVC
    #24 8.091 -- Performing Test UV_LINT_NO_CONDITIONAL_ASSIGNMENT_MSVC - Failed
    #24 8.092 -- Performing Test UV_LINT_NO_UNSAFE_MSVC
    #24 8.117 -- Performing Test UV_LINT_NO_UNSAFE_MSVC - Failed
    #24 8.118 -- Performing Test UV_LINT_WALL
    #24 8.190 -- Performing Test UV_LINT_WALL - Success
    #24 8.192 -- Performing Test UV_LINT_NO_UNUSED_PARAMETER
    #24 8.263 -- Performing Test UV_LINT_NO_UNUSED_PARAMETER - Success
    #24 8.265 -- Performing Test UV_LINT_STRICT_PROTOTYPES
    #24 8.340 -- Performing Test UV_LINT_STRICT_PROTOTYPES - Success
    #24 8.341 -- Performing Test UV_LINT_EXTRA
    #24 8.414 -- Performing Test UV_LINT_EXTRA - Success
    #24 8.416 -- Performing Test UV_LINT_UTF8_MSVC
    #24 8.440 -- Performing Test UV_LINT_UTF8_MSVC - Failed
    #24 8.442 -- Performing Test UV_F_STRICT_ALIASING
    #24 8.512 -- Performing Test UV_F_STRICT_ALIASING - Success
    #24 8.514 -- summary of build options:
    #24 8.514     Install prefix:  /opt/pytorch/torch
    #24 8.514     Target system:   Linux
    #24 8.514     Compiler:
    #24 8.514       C compiler:    /usr/bin/cc
    #24 8.514       CFLAGS:          -fopenmp
    #24 8.514 
    #24 8.517 -- Found uv: 1.38.1 (found version "1.38.1") 
    #24 8.519 CMake Warning at cmake/Dependencies.cmake:1406 (message):
    #24 8.519   Metal is only used in ios builds.
    #24 8.519 Call Stack (most recent call first):
    #24 8.519   CMakeLists.txt:621 (include)
    #24 8.519 
    #24 8.519 
    #24 8.522 Generated: /opt/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
    #24 8.522 Generated: /opt/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
    #24 8.523 Generated: /opt/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
    #24 8.626 -- 
    #24 8.626 -- ******** Summary ********
    #24 8.626 --   CMake version         : 3.19.6
    #24 8.626 --   CMake command         : /opt/conda/bin/cmake
    #24 8.626 --   System                : Linux
    #24 8.626 --   C++ compiler          : /usr/bin/c++
    #24 8.626 --   C++ compiler version  : 7.5.0
    #24 8.626 --   CXX flags             :  -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -Wnon-virtual-dtor
    #24 8.626 --   Build type            : Release
    #24 8.626 --   Compile definitions   : TH_BLAS_MKL;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
    #24 8.626 --   CMAKE_PREFIX_PATH     : /opt/conda/bin/../;/usr/local/cuda
    #24 8.626 --   CMAKE_INSTALL_PREFIX  : /opt/pytorch/torch
    #24 8.626 --   CMAKE_MODULE_PATH     : /opt/pytorch/cmake/Modules;/opt/pytorch/cmake/public/../Modules_CUDA_fix
    #24 8.626 -- 
    #24 8.626 --   ONNX version          : 1.8.0
    #24 8.626 --   ONNX NAMESPACE        : onnx_torch
    #24 8.626 --   ONNX_BUILD_TESTS      : OFF
    #24 8.626 --   ONNX_BUILD_BENCHMARKS : OFF
    #24 8.626 --   ONNX_USE_LITE_PROTO   : OFF
    #24 8.626 --   ONNXIFI_DUMMY_BACKEND : OFF
    #24 8.626 --   ONNXIFI_ENABLE_EXT    : OFF
    #24 8.626 -- 
    #24 8.626 --   Protobuf compiler     : 
    #24 8.626 --   Protobuf includes     : 
    #24 8.626 --   Protobuf libraries    : 
    #24 8.626 --   BUILD_ONNX_PYTHON     : OFF
    #24 8.627 -- 
    #24 8.627 -- ******** Summary ********
    #24 8.627 --   CMake version         : 3.19.6
    #24 8.627 --   CMake command         : /opt/conda/bin/cmake
    #24 8.627 --   System                : Linux
    #24 8.627 --   C++ compiler          : /usr/bin/c++
    #24 8.627 --   C++ compiler version  : 7.5.0
    #24 8.627 --   CXX flags             :  -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -Wnon-virtual-dtor
    #24 8.627 --   Build type            : Release
    #24 8.627 --   Compile definitions   : TH_BLAS_MKL;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
    #24 8.627 --   CMAKE_PREFIX_PATH     : /opt/conda/bin/../;/usr/local/cuda
    #24 8.627 --   CMAKE_INSTALL_PREFIX  : /opt/pytorch/torch
    #24 8.627 --   CMAKE_MODULE_PATH     : /opt/pytorch/cmake/Modules;/opt/pytorch/cmake/public/../Modules_CUDA_fix
    #24 8.627 -- 
    #24 8.627 --   ONNX version          : 1.4.1
    #24 8.627 --   ONNX NAMESPACE        : onnx_torch
    #24 8.627 --   ONNX_BUILD_TESTS      : OFF
    #24 8.627 --   ONNX_BUILD_BENCHMARKS : OFF
    #24 8.627 --   ONNX_USE_LITE_PROTO   : OFF
    #24 8.627 --   ONNXIFI_DUMMY_BACKEND : OFF
    #24 8.627 -- 
    #24 8.627 --   Protobuf compiler     : 
    #24 8.627 --   Protobuf includes     : 
    #24 8.627 --   Protobuf libraries    : 
    #24 8.627 --   BUILD_ONNX_PYTHON     : OFF
    #24 8.628 -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
    #24 8.628 -- Adding -DNDEBUG to compile flags
    #24 8.629 -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2
    #24 8.755 -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - True
    #24 8.755 -- Compiling with MAGMA support
    #24 8.755 -- MAGMA INCLUDE DIRECTORIES: /opt/conda/include
    #24 8.755 -- MAGMA LIBRARIES: /opt/conda/lib/libmagma.a
    #24 8.755 -- MAGMA V2 check: 1
    #24 8.792 -- Could not find hardware support for NEON on this machine.
    #24 8.792 -- No OMAP3 processor on this machine.
    #24 8.792 -- No OMAP4 processor on this machine.
    #24 8.793 -- Looking for cpuid.h
    #24 8.867 -- Looking for cpuid.h - found
    #24 8.868 -- Performing Test HAVE_GCC_GET_CPUID
    #24 8.948 -- Performing Test HAVE_GCC_GET_CPUID - Success
    #24 8.948 -- Performing Test NO_GCC_EBX_FPIC_BUG
    #24 9.022 -- Performing Test NO_GCC_EBX_FPIC_BUG - Success
    #24 9.022 -- <FindVSX>
    #24 9.025 -- Performing Test C_VSX_FOUND
    #24 9.056 -- Performing Test C_VSX_FOUND - Failed
    #24 9.056 -- Performing Test CXX_VSX_FOUND
    #24 9.086 -- Performing Test CXX_VSX_FOUND - Failed
    #24 9.086 -- </FindVSX>
    #24 9.087 -- Performing Test C_HAS_AVX_1
    #24 9.215 -- Performing Test C_HAS_AVX_1 - Failed
    #24 9.215 -- Performing Test C_HAS_AVX_2
    #24 9.380 -- Performing Test C_HAS_AVX_2 - Success
    #24 9.381 -- Performing Test C_HAS_AVX2_1
    #24 9.507 -- Performing Test C_HAS_AVX2_1 - Failed
    #24 9.507 -- Performing Test C_HAS_AVX2_2
    #24 9.665 -- Performing Test C_HAS_AVX2_2 - Success
    #24 9.666 -- Performing Test CXX_HAS_AVX_1
    #24 9.797 -- Performing Test CXX_HAS_AVX_1 - Failed
    #24 9.797 -- Performing Test CXX_HAS_AVX_2
    #24 9.960 -- Performing Test CXX_HAS_AVX_2 - Success
    #24 9.961 -- Performing Test CXX_HAS_AVX2_1
    #24 10.09 -- Performing Test CXX_HAS_AVX2_1 - Failed
    #24 10.09 -- Performing Test CXX_HAS_AVX2_2
    #24 10.25 -- Performing Test CXX_HAS_AVX2_2 - Success
    #24 10.25 -- AVX compiler support found
    #24 10.25 -- AVX2 compiler support found
    #24 10.25 -- Performing Test BLAS_F2C_DOUBLE_WORKS
    #24 10.50 -- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed
    #24 10.50 -- Performing Test BLAS_F2C_FLOAT_WORKS
    #24 10.74 -- Performing Test BLAS_F2C_FLOAT_WORKS - Success
    #24 10.74 -- Performing Test BLAS_USE_CBLAS_DOT
    #24 11.03 -- Performing Test BLAS_USE_CBLAS_DOT - Success
    #24 11.03 -- Found a library with BLAS API (mkl). Full path: (/opt/conda/lib/libmkl_intel_lp64.so;/opt/conda/lib/libmkl_gnu_thread.so;/opt/conda/lib/libmkl_core.so;-fopenmp;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/libdl.so)
    #24 11.03 -- Found a library with LAPACK API (mkl).
    #24 11.03 disabling ROCM because NOT USE_ROCM is set
    #24 11.03 -- MIOpen not found. Compiling without MIOpen support
    #24 11.04 -- MKLDNN_CPU_RUNTIME = OMP
    #24 11.04 CMake Deprecation Warning at third_party/ideep/mkl-dnn/CMakeLists.txt:17 (cmake_minimum_required):
    #24 11.04   Compatibility with CMake < 2.8.12 will be removed from a future version of
    #24 11.04   CMake.
    #24 11.04 
    #24 11.04   Update the VERSION argument <min> value or use a ...<max> suffix to tell
    #24 11.04   CMake that the project does not need compatibility with older versions.
    #24 11.04 
    #24 11.04 
    #24 11.04 -- Intel MKL-DNN compat: set DNNL_ENABLE_CONCURRENT_EXEC to MKLDNN_ENABLE_CONCURRENT_EXEC with value `ON`
    #24 11.04 -- Intel MKL-DNN compat: set DNNL_BUILD_EXAMPLES to MKLDNN_BUILD_EXAMPLES with value `FALSE`
    #24 11.04 -- Intel MKL-DNN compat: set DNNL_BUILD_TESTS to MKLDNN_BUILD_TESTS with value `FALSE`
    #24 11.04 -- Intel MKL-DNN compat: set DNNL_LIBRARY_TYPE to MKLDNN_LIBRARY_TYPE with value `STATIC`
    #24 11.04 -- Intel MKL-DNN compat: set DNNL_ARCH_OPT_FLAGS to MKLDNN_ARCH_OPT_FLAGS with value `-msse4`
    #24 11.04 -- Intel MKL-DNN compat: set DNNL_CPU_RUNTIME to MKLDNN_CPU_RUNTIME with value `OMP`
    #24 11.05 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 11.05   The package name passed to `find_package_handle_standard_args` (OpenMP_C)
    #24 11.05   does not match the name of the calling package (OpenMP).  This can lead to
    #24 11.05   problems in calling code that expects `find_package` result variables
    #24 11.05   (e.g., `_FOUND`) to follow a certain pattern.
    #24 11.05 Call Stack (most recent call first):
    #24 11.05   cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
    #24 11.05   third_party/ideep/mkl-dnn/cmake/OpenMP.cmake:61 (find_package)
    #24 11.05   third_party/ideep/mkl-dnn/CMakeLists.txt:119 (include)
    #24 11.05 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 11.05 
    #24 11.05 -- Found OpenMP_C: -fopenmp (found version "4.5") 
    #24 11.05 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 11.05   The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
    #24 11.05   does not match the name of the calling package (OpenMP).  This can lead to
    #24 11.05   problems in calling code that expects `find_package` result variables
    #24 11.05   (e.g., `_FOUND`) to follow a certain pattern.
    #24 11.05 Call Stack (most recent call first):
    #24 11.05   cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
    #24 11.05   third_party/ideep/mkl-dnn/cmake/OpenMP.cmake:61 (find_package)
    #24 11.05   third_party/ideep/mkl-dnn/CMakeLists.txt:119 (include)
    #24 11.05 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 11.05 
    #24 11.05 -- Found OpenMP_CXX: -fopenmp (found version "4.5") 
    #24 11.07 -- Primitive cache is enabled
    #24 11.09 -- Found MKL-DNN: TRUE
    #24 11.09 -- Looking for clock_gettime in rt
    #24 11.17 -- Looking for clock_gettime in rt - found
    #24 11.17 -- Looking for mmap
    #24 11.25 -- Looking for mmap - found
    #24 11.25 -- Looking for shm_open
    #24 11.33 -- Looking for shm_open - found
    #24 11.33 -- Looking for shm_unlink
    #24 11.41 -- Looking for shm_unlink - found
    #24 11.41 -- Looking for malloc_usable_size
    #24 11.48 -- Looking for malloc_usable_size - found
    #24 11.48 -- Performing Test C_HAS_THREAD
    #24 11.56 -- Performing Test C_HAS_THREAD - Success
    #24 11.57 -- Version: 7.0.3
    #24 11.57 -- Build type: Release
    #24 11.57 -- CXX_STANDARD: 14
    #24 11.57 -- Performing Test has_std_14_flag
    #24 11.66 -- Performing Test has_std_14_flag - Success
    #24 11.66 -- Performing Test has_std_1y_flag
    #24 11.74 -- Performing Test has_std_1y_flag - Success
    #24 11.74 -- Performing Test SUPPORTS_USER_DEFINED_LITERALS
    #24 11.83 -- Performing Test SUPPORTS_USER_DEFINED_LITERALS - Success
    #24 11.83 -- Performing Test FMT_HAS_VARIANT
    #24 11.96 -- Performing Test FMT_HAS_VARIANT - Success
    #24 11.96 -- Required features: cxx_variadic_templates
    #24 11.96 -- Looking for strtod_l
    #24 12.00 -- Looking for strtod_l - not found
    #24 12.01 -- Using Kineto with CUPTI support
    #24 12.01 -- Configuring Kineto dependency:
    #24 12.01 --   KINETO_SOURCE_DIR = /opt/pytorch/third_party/kineto/libkineto
    #24 12.01 --   KINETO_BUILD_TESTS = OFF
    #24 12.01 --   KINETO_LIBRARY_TYPE = static
    #24 12.01 --   CUDA_SOURCE_DIR = /usr/local/cuda
    #24 12.01 --   CUDA_INCLUDE_DIRS = /usr/local/cuda/include
    #24 12.01 --   CUPTI_INCLUDE_DIR = /usr/local/cuda/include
    #24 12.01 --   CUDA_cupti_LIBRARY = /usr/local/cuda/lib64/libcupti_static.a
    #24 12.01 -- Found CUPTI
    #24 12.03 -- Found PythonInterp: /opt/conda/bin/python (found version "3.9.7") 
    #24 12.09 -- Kineto: FMT_SOURCE_DIR = /opt/pytorch/third_party/fmt
    #24 12.09 -- Kineto: FMT_INCLUDE_DIR = /opt/pytorch/third_party/fmt/include
    #24 12.09 INFO CUPTI_INCLUDE_DIR = /usr/local/cuda/include
    #24 12.09 -- Configured Kineto
    #24 12.09 -- GCC 7.5.0: Adding gcc and gcc_s libs to link line
    #24 12.09 -- Performing Test HAS_WERROR_FORMAT
    #24 12.18 -- Performing Test HAS_WERROR_FORMAT - Success
    #24 12.18 -- Performing Test HAS_WERROR_CAST_FUNCTION_TYPE
    #24 12.22 -- Performing Test HAS_WERROR_CAST_FUNCTION_TYPE - Failed
    #24 12.23 -- Looking for backtrace
    #24 12.30 -- Looking for backtrace - found
    #24 12.30 -- backtrace facility detected in default set of libraries
    #24 12.30 -- Found Backtrace: /usr/include  
    #24 12.30 -- don't use NUMA
    #24 12.32 -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
    #24 12.40 -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Success
    #24 13.54 -- Using ATen parallel backend: OMP
    #24 13.61 CMake Deprecation Warning at third_party/sleef/CMakeLists.txt:91 (cmake_policy):
    #24 13.61   The OLD behavior for policy CMP0066 will be removed from a future version
    #24 13.61   of CMake.
    #24 13.61 
    #24 13.61   The cmake-policies(7) manual explains that the OLD behaviors of all
    #24 13.61   policies are deprecated and that a policy should be set to OLD only under
    #24 13.61   specific short-term circumstances.  Projects should be ported to the NEW
    #24 13.61   behavior and not rely on setting a policy to OLD.
    #24 13.61 
    #24 13.61 
    #24 14.09 -- Found OpenSSL: /opt/conda/lib/libcrypto.so (found version "1.1.1l")  
    #24 14.10 -- Check size of long double
    #24 14.18 -- Check size of long double - done
    #24 14.19 -- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE
    #24 14.27 -- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success
    #24 14.27 -- Performing Test COMPILER_SUPPORTS_FLOAT128
    #24 14.34 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Success
    #24 14.34 -- Performing Test COMPILER_SUPPORTS_SSE2
    #24 14.53 -- Performing Test COMPILER_SUPPORTS_SSE2 - Success
    #24 14.53 -- Performing Test COMPILER_SUPPORTS_SSE4
    #24 14.71 -- Performing Test COMPILER_SUPPORTS_SSE4 - Success
    #24 14.71 -- Performing Test COMPILER_SUPPORTS_AVX
    #24 14.89 -- Performing Test COMPILER_SUPPORTS_AVX - Success
    #24 14.89 -- Performing Test COMPILER_SUPPORTS_FMA4
    #24 15.07 -- Performing Test COMPILER_SUPPORTS_FMA4 - Success
    #24 15.07 -- Performing Test COMPILER_SUPPORTS_AVX2
    #24 15.26 -- Performing Test COMPILER_SUPPORTS_AVX2 - Success
    #24 15.26 -- Performing Test COMPILER_SUPPORTS_AVX512F
    #24 15.44 -- Performing Test COMPILER_SUPPORTS_AVX512F - Success
    #24 15.44 -- Performing Test COMPILER_SUPPORTS_OPENMP
    #24 15.53 -- Performing Test COMPILER_SUPPORTS_OPENMP - Success
    #24 15.53 -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
    #24 15.61 -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Success
    #24 15.61 -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
    #24 15.69 -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Success
    #24 15.69 -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM
    #24 15.77 -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Success
    #24 15.79 -- Configuring build for SLEEF-v3.6.0
    #24 15.79 -- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
    #24 15.79 -- Building shared libs : OFF
    #24 15.79    Target system: Linux-5.4.0-88-generic
    #24 15.79    Target processor: x86_64
    #24 15.79    Host system: Linux-5.4.0-88-generic
    #24 15.79    Host processor: x86_64
    #24 15.79    Detected C compiler: GNU @ /usr/bin/cc
    #24 15.79    CMake: 3.19.6
    #24 15.79    Make program: /opt/conda/bin/ninja
    #24 15.79 -- Building static test bins: OFF
    #24 15.79 -- MPFR : LIB_MPFR-NOTFOUND
    #24 15.79 -- GMP : LIBGMP-NOTFOUND
    #24 15.79 -- RT : /usr/lib/x86_64-linux-gnu/librt.so
    #24 15.79 -- FFTW3 : LIBFFTW3-NOTFOUND
    #24 15.79 -- OPENSSL : 1.1.1l
    #24 15.79 -- SDE : SDE_COMMAND-NOTFOUND
    #24 15.79 -- RUNNING_ON_TRAVIS : 
    #24 15.79 -- COMPILER_SUPPORTS_OPENMP : 1
    #24 15.80 AT_INSTALL_INCLUDE_DIR include/ATen/core
    #24 15.80 core header install: /opt/pytorch/build/aten/src/ATen/core/TensorBody.h
    #24 15.84 -- Include NCCL operators
    #24 15.84 -- Excluding FakeLowP operators
    #24 15.85 -- Including IDEEP operators
    #24 15.85 -- Excluding image processing operators due to no opencv
    #24 15.85 -- Excluding video processing operators due to no opencv
    #24 15.85 -- MPI operators skipped due to no MPI support
    #24 15.85 -- Include Observer library
    #24 17.11 -- breakpad library not found
    #24 17.11 -- /usr/bin/c++ /opt/pytorch/torch/abi-check.cpp -o /opt/pytorch/build/abi-check
    #24 17.33 -- Determined _GLIBCXX_USE_CXX11_ABI=1
    #24 17.34 CMake Warning (dev) at torch/CMakeLists.txt:348:
    #24 17.34   Syntax Warning in cmake code at column 107
    #24 17.34 
    #24 17.34   Argument not separated from preceding token by whitespace.
    #24 17.34 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 17.34 
    #24 17.34 CMake Warning (dev) at torch/CMakeLists.txt:348:
    #24 17.34   Syntax Warning in cmake code at column 115
    #24 17.34 
    #24 17.34   Argument not separated from preceding token by whitespace.
    #24 17.34 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 17.34 
    #24 17.42 CMake Warning at cmake/public/utils.cmake:365 (message):
    #24 17.42   In the future we will require one to explicitly pass TORCH_CUDA_ARCH_LIST
    #24 17.42   to cmake instead of implicitly setting it as an env variable.  This will
    #24 17.42   become a FATAL_ERROR in future version of pytorch.
    #24 17.42 Call Stack (most recent call first):
    #24 17.42   torch/CMakeLists.txt:315 (torch_cuda_get_nvcc_gencode_flag)
    #24 17.42 
    #24 17.42 
    #24 17.42 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 17.42   The package name passed to `find_package_handle_standard_args` (OpenMP_C)
    #24 17.42   does not match the name of the calling package (OpenMP).  This can lead to
    #24 17.42   problems in calling code that expects `find_package` result variables
    #24 17.42   (e.g., `_FOUND`) to follow a certain pattern.
    #24 17.42 Call Stack (most recent call first):
    #24 17.42   cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
    #24 17.42   caffe2/CMakeLists.txt:1155 (find_package)
    #24 17.42 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 17.42 
    #24 17.42 CMake Warning (dev) at /opt/conda/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
    #24 17.42   The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
    #24 17.42   does not match the name of the calling package (OpenMP).  This can lead to
    #24 17.42   problems in calling code that expects `find_package` result variables
    #24 17.42   (e.g., `_FOUND`) to follow a certain pattern.
    #24 17.42 Call Stack (most recent call first):
    #24 17.42   cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
    #24 17.42   caffe2/CMakeLists.txt:1155 (find_package)
    #24 17.42 This warning is for project developers.  Use -Wno-dev to suppress it.
    #24 17.42 
    #24 17.42 -- pytorch is compiling with OpenMP. 
    #24 17.42 OpenMP CXX_FLAGS: -fopenmp. 
    #24 17.42 OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/7/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so.
    #24 17.42 -- Caffe2 is compiling with OpenMP. 
    #24 17.42 OpenMP CXX_FLAGS: -fopenmp. 
    #24 17.42 OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/7/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so.
    #24 17.53 -- Using lib/python3.9/site-packages as python relative installation path
    #24 17.63 CMake Warning at CMakeLists.txt:941 (message):
    #24 17.63   Generated cmake files are only fully tested if one builds with system glog,
    #24 17.63   gflags, and protobuf.  Other settings may generate files that are not well
    #24 17.63   tested.
    #24 17.63 
    #24 17.63 
    #24 17.64 -- 
    #24 17.64 -- ******** Summary ********
    #24 17.64 -- General:
    #24 17.64 --   CMake version         : 3.19.6
    #24 17.64 --   CMake command         : /opt/conda/bin/cmake
    #24 17.64 --   System                : Linux
    #24 17.64 --   C++ compiler          : /usr/bin/c++
    #24 17.64 --   C++ compiler id       : GNU
    #24 17.64 --   C++ compiler version  : 7.5.0
    #24 17.64 --   Using ccache if found : ON
    #24 17.64 --   Found ccache          : /usr/bin/ccache
    #24 17.64 --   CXX flags             :  -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow
    #24 17.64 --   Build type            : Release
    #24 17.64 --   Compile definitions   : TH_BLAS_MKL;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;MAGMA_V2;IDEEP_USE_MKL;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
    #24 17.64 --   CMAKE_PREFIX_PATH     : /opt/conda/bin/../;/usr/local/cuda
    #24 17.64 --   CMAKE_INSTALL_PREFIX  : /opt/pytorch/torch
    #24 17.64 --   USE_GOLD_LINKER       : OFF
    #24 17.64 -- 
    #24 17.64 --   TORCH_VERSION         : 1.9.0
    #24 17.64 --   CAFFE2_VERSION        : 1.9.0
    #24 17.64 --   BUILD_CAFFE2          : ON
    #24 17.64 --   BUILD_CAFFE2_OPS      : ON
    #24 17.64 --   BUILD_CAFFE2_MOBILE   : OFF
    #24 17.64 --   BUILD_STATIC_RUNTIME_BENCHMARK: OFF
    #24 17.64 --   BUILD_TENSOREXPR_BENCHMARK: OFF
    #24 17.64 --   BUILD_BINARY          : OFF
    #24 17.64 --   BUILD_CUSTOM_PROTOBUF : ON
    #24 17.64 --     Link local protobuf : ON
    #24 17.64 --   BUILD_DOCS            : OFF
    #24 17.64 --   BUILD_PYTHON          : True
    #24 17.64 --     Python version      : 3.9.7
    #24 17.64 --     Python executable   : /opt/conda/bin/python
    #24 17.64 --     Pythonlibs version  : 3.9.7
    #24 17.64 --     Python library      : /opt/conda/lib/libpython3.9.a
    #24 17.64 --     Python includes     : /opt/conda/include/python3.9
    #24 17.64 --     Python site-packages: lib/python3.9/site-packages
    #24 17.64 --   BUILD_SHARED_LIBS     : ON
    #24 17.64 --   CAFFE2_USE_MSVC_STATIC_RUNTIME     : OFF
    #24 17.64 --   BUILD_TEST            : True
    #24 17.64 --   BUILD_JNI             : OFF
    #24 17.64 --   BUILD_MOBILE_AUTOGRAD : OFF
    #24 17.64 --   BUILD_LITE_INTERPRETER: OFF
    #24 17.64 --   INTERN_BUILD_MOBILE   : 
    #24 17.64 --   USE_BLAS              : 1
    #24 17.64 --     BLAS                : mkl
    #24 17.64 --   USE_LAPACK            : 1
    #24 17.64 --     LAPACK              : mkl
    #24 17.64 --   USE_ASAN              : OFF
    #24 17.64 --   USE_CPP_CODE_COVERAGE : OFF
    #24 17.64 --   USE_CUDA              : 1
    #24 17.64 --     Split CUDA          : OFF
    #24 17.64 --     CUDA static link    : OFF
    #24 17.64 --     USE_CUDNN           : 1
    #24 17.64 --     CUDA version        : 10.2
    #24 17.64 --     cuDNN version       : 8.2.0
    #24 17.64 --     CUDA root directory : /usr/local/cuda
    #24 17.64 --     CUDA library        : /usr/local/cuda/lib64/stubs/libcuda.so
    #24 17.64 --     cudart library      : /usr/local/cuda/lib64/libcudart.so
    #24 17.64 --     cublas library      : /usr/lib/x86_64-linux-gnu/libcublas.so
    #24 17.64 --     cufft library       : /usr/local/cuda/lib64/libcufft.so
    #24 17.64 --     curand library      : /usr/local/cuda/lib64/libcurand.so
    #24 17.64 --     cuDNN library       : /usr/lib/x86_64-linux-gnu/libcudnn.so
    #24 17.64 --     nvrtc               : /usr/local/cuda/lib64/libnvrtc.so
    #24 17.64 --     CUDA include path   : /usr/local/cuda/include
    #24 17.64 --     NVCC executable     : /usr/local/cuda/bin/nvcc
    #24 17.64 --     NVCC flags          : -Xfatbin;-compress-all;-DONNX_NAMESPACE=onnx_torch;-gencode;arch=compute_80,code=sm_80;-Xcudafe;--diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl;-std=c++14;-Xcompiler;-fPIC;--expt-relaxed-constexpr;--expt-extended-lambda;-Wno-deprecated-gpu-targets;--expt-extended-lambda;-Xfatbin;-compress-all;-Xcompiler;-fPIC;-DCUDA_HAS_FP16=1;-D__CUDA_NO_HALF_OPERATORS__;-D__CUDA_NO_HALF_CONVERSIONS__;-D__CUDA_NO_BFLOAT16_CONVERSIONS__;-D__CUDA_NO_HALF2_OPERATORS__
    #24 17.64 --     CUDA host compiler  : /usr/bin/cc
    #24 17.64 --     NVCC --device-c     : OFF
    #24 17.64 --     USE_TENSORRT        : OFF
    #24 17.64 --   USE_ROCM              : OFF
    #24 17.64 --   USE_EIGEN_FOR_BLAS    : 
    #24 17.64 --   USE_FBGEMM            : ON
    #24 17.64 --     USE_FAKELOWP          : OFF
    #24 17.64 --   USE_KINETO            : ON
    #24 17.64 --   USE_FFMPEG            : OFF
    #24 17.64 --   USE_GFLAGS            : OFF
    #24 17.64 --   USE_GLOG              : OFF
    #24 17.64 --   USE_LEVELDB           : OFF
    #24 17.64 --   USE_LITE_PROTO        : OFF
    #24 17.64 --   USE_LMDB              : OFF
    #24 17.64 --   USE_METAL             : OFF
    #24 17.64 --   USE_PYTORCH_METAL     : OFF
    #24 17.64 --   USE_FFTW              : OFF
    #24 17.64 --   USE_MKL               : ON
    #24 17.64 --   USE_MKLDNN            : ON
    #24 17.64 --   USE_MKLDNN_ACL        : OFF
    #24 17.64 --   USE_MKLDNN_CBLAS      : OFF
    #24 17.64 --   USE_NCCL              : ON
    #24 17.64 --     USE_SYSTEM_NCCL     : OFF
    #24 17.64 --   USE_NNPACK            : ON
    #24 17.64 --   USE_NUMPY             : ON
    #24 17.64 --   USE_OBSERVERS         : ON
    #24 17.64 --   USE_OPENCL            : OFF
    #24 17.64 --   USE_OPENCV            : OFF
    #24 17.64 --   USE_OPENMP            : ON
    #24 17.64 --   USE_TBB               : OFF
    #24 17.64 --   USE_VULKAN            : OFF
    #24 17.64 --   USE_PROF              : OFF
    #24 17.64 --   USE_QNNPACK           : ON
    #24 17.64 --   USE_PYTORCH_QNNPACK   : ON
    #24 17.64 --   USE_REDIS             : OFF
    #24 17.64 --   USE_ROCKSDB           : OFF
    #24 17.64 --   USE_ZMQ               : OFF
    #24 17.64 --   USE_DISTRIBUTED       : ON
    #24 17.64 --     USE_MPI             : OFF
    #24 17.64 --     USE_GLOO            : ON
    #24 17.64 --     USE_TENSORPIPE      : ON
    #24 17.64 --   USE_DEPLOY           : OFF
    #24 17.64 --   Public Dependencies  : Threads::Threads;caffe2::mkl;caffe2::mkldnn
    #24 17.64 --   Private Dependencies : pthreadpool;cpuinfo;qnnpack;pytorch_qnnpack;nnpack;XNNPACK;fbgemm;fp16;gloo;tensorpipe;aten_op_header_gen;foxi_loader;rt;fmt::fmt-header-only;kineto;gcc_s;gcc;dl
    #24 17.65 -- Configuring done
    #24 18.73 CMake Warning at caffe2/CMakeLists.txt:791 (add_library):
    #24 18.73   Cannot generate a safe runtime search path for target torch_cpu because
    #24 18.73   files in some directories may conflict with libraries in implicit
    #24 18.73   directories:
    #24 18.73 
    #24 18.73     runtime library [libgomp.so.1] in /usr/lib/gcc/x86_64-linux-gnu/7 may be hidden by files in:
    #24 18.73       /opt/conda/lib
    #24 18.73 
    #24 18.73   Some of these libraries may not be found correctly.
    #24 18.73 
    #24 18.73 
    #24 19.13 CMake Warning at cmake/Modules_CUDA_fix/upstream/FindCUDA.cmake:1865 (add_library):
    #24 19.13   Cannot generate a safe runtime search path for target
    #24 19.13   caffe2_detectron_ops_gpu because files in some directories may conflict
    #24 19.13   with libraries in implicit directories:
    #24 19.13 
    #24 19.13     runtime library [libgomp.so.1] in /usr/lib/gcc/x86_64-linux-gnu/7 may be hidden by files in:
    #24 19.13       /opt/conda/lib
    #24 19.13 
    #24 19.13   Some of these libraries may not be found correctly.
    #24 19.13 Call Stack (most recent call first):
    #24 19.13   modules/detectron/CMakeLists.txt:13 (CUDA_ADD_LIBRARY)
    #24 19.13 
    #24 19.13 
    #24 19.15 -- Generating done
    #24 19.26 -- Build files have been written to: /opt/pytorch/build
    #24 19.35 cmake --build . --target install --config Release -- -j 64
    #24 19.59 [1/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/any_lite.cc.o
    #24 19.59 [2/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/arena.cc.o
    #24 19.60 [3/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/extension_set.cc.o
    #24 19.60 [4/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/generated_enum_util.cc.o
    #24 19.60 [5/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/generated_message_table_driven_lite.cc.o
    #24 19.60 [6/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/generated_message_util.cc.o
    #24 19.60 [7/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/implicit_weak_message.cc.o
    #24 19.60 [8/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/io/coded_stream.cc.o
    #24 19.60 [9/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/io/io_win32.cc.o
    #24 19.60 [10/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/io/strtod.cc.o
    #24 19.60 [11/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/io/zero_copy_stream.cc.o
    #24 19.61 [12/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/io/zero_copy_stream_impl.cc.o
    #24 19.61 [13/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/io/zero_copy_stream_impl_lite.cc.o
    #24 19.61 [14/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/parse_context.cc.o
    #24 19.61 [15/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/bytestream.cc.o
    #24 19.61 [16/6115] Creating directories for 'nccl_external'
    #24 19.61 [17/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/repeated_field.cc.o
    #24 19.61 [18/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/common.cc.o
    #24 19.62 [19/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/int128.cc.o
    #24 19.62 [20/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/status.cc.o
    #24 19.62 [21/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/statusor.cc.o
    #24 19.62 [22/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/stringpiece.cc.o
    #24 19.62 [23/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/stringprintf.cc.o
    #24 19.62 [24/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/structurally_valid.cc.o
    #24 19.62 [25/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/strutil.cc.o
    #24 19.63 [26/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/time.cc.o
    #24 19.63 [27/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/wire_format_lite.cc.o
    #24 19.63 [28/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/message_lite.cc.o
    #24 19.63 In file included from /usr/include/string.h:494:0,
    #24 19.63                  from ../third_party/protobuf/src/google/protobuf/stubs/port.h:39,
    #24 19.63                  from ../third_party/protobuf/src/google/protobuf/stubs/common.h:46,
    #24 19.63                  from ../third_party/protobuf/src/google/protobuf/message_lite.h:45,
    #24 19.63                  from ../third_party/protobuf/src/google/protobuf/message_lite.cc:36:
    #24 19.63 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.63     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializeToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/message_lite.cc:372:49:
    #24 19.63 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.63    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.63                                                                        ^
    #24 19.63 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.63     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35:
    #24 19.63 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.63    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.63                                                                        ^
    #24 19.63 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.63     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializeToFileDescriptor(int) const’ at ../third_party/protobuf/src/google/protobuf/message_lite.cc:372:49:
    #24 19.63 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.63    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.63                                                                        ^
    #24 19.63 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.63     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToFileDescriptor(int) const’ at ../third_party/protobuf/src/google/protobuf/message_lite.cc:401:42:
    #24 19.63 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.63    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.63                                                                        ^
    #24 19.63 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.63     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializeToOstream(std::ostream*) const’ at ../third_party/protobuf/src/google/protobuf/message_lite.cc:372:49:
    #24 19.63 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.63    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.63                                                                        ^
    #24 19.63 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.63     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35,
    #24 19.63     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToOstream(std::ostream*) const’ at ../third_party/protobuf/src/google/protobuf/message_lite.cc:414:60:
    #24 19.63 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.63    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.63                                                                        ^
    #24 19.63 [29/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/code_generator.cc.o
    #24 19.63 [30/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_enum.cc.o
    #24 19.63 [31/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_enum_field.cc.o
    #24 19.64 [32/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_extension.cc.o
    #24 19.64 [33/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_field.cc.o
    #24 19.64 [34/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_generator.cc.o
    #24 19.64 [35/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_map_field.cc.o
    #24 19.64 [36/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_message.cc.o
    #24 19.64 [37/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_message_field.cc.o
    #24 19.64 [38/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_padding_optimizer.cc.o
    #24 19.64 [39/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_primitive_field.cc.o
    #24 19.64 [40/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_string_field.cc.o
    #24 19.64 [41/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_doc_comment.cc.o
    #24 19.65 [42/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_reflection_class.cc.o
    #24 19.65 [43/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/command_line_interface.cc.o
    #24 19.65 [44/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_file.cc.o
    #24 19.65 [45/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_helpers.cc.o
    #24 19.65 [46/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_service.cc.o
    #24 19.65 [47/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_enum.cc.o
    #24 19.65 [48/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_enum_field.cc.o
    #24 19.65 [49/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_field_base.cc.o
    #24 19.65 [50/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_generator.cc.o
    #24 19.66 [51/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_helpers.cc.o
    #24 19.66 [52/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_map_field.cc.o
    #24 19.66 [53/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_message.cc.o
    #24 19.66 [54/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_message_field.cc.o
    #24 19.66 [55/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_primitive_field.cc.o
    #24 19.66 [56/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_repeated_enum_field.cc.o
    #24 19.66 [57/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_repeated_message_field.cc.o
    #24 19.66 [58/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_repeated_primitive_field.cc.o
    #24 19.67 [59/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_source_generator_base.cc.o
    #24 19.67 [60/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_wrapper_field.cc.o
    #24 19.67 [61/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_doc_comment.cc.o
    #24 19.67 [62/6115] Building C object confu-deps/pthreadpool/CMakeFiles/pthreadpool.dir/src/pthreads.c.o
    #24 19.68 [63/6115] No download step for 'nccl_external'
    #24 19.68 [64/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/js/well_known_types_embed.cc.o
    #24 19.68 [65/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_context.cc.o
    #24 19.68 [66/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_enum.cc.o
    #24 19.68 [67/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_enum_field.cc.o
    #24 19.68 [68/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_extension_lite.cc.o
    #24 19.69 [69/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_field.cc.o
    #24 19.69 [70/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_generator.cc.o
    #24 19.69 [71/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_map_field_lite.cc.o
    #24 19.69 [72/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/io_win32.cc.o
    #24 19.69 [73/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_enum_field_lite.cc.o
    #24 19.69 [74/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_enum_lite.cc.o
    #24 19.69 [75/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_extension.cc.o
    #24 19.70 [76/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_file.cc.o
    #24 19.70 [77/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_generator_factory.cc.o
    #24 19.70 [78/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_helpers.cc.o
    #24 19.70 [79/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_map_field.cc.o
    #24 19.71 [80/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message.cc.o
    #24 19.71 [81/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message_builder_lite.cc.o
    #24 19.71 [82/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message_field.cc.o
    #24 19.71 [83/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message_field_lite.cc.o
    #24 19.71 [84/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message_lite.cc.o
    #24 19.71 [85/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_name_resolver.cc.o
    #24 19.71 [86/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_primitive_field.cc.o
    #24 19.72 [87/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_primitive_field_lite.cc.o
    #24 19.72 [88/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_shared_code_generator.cc.o
    #24 19.72 [89/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_string_field_lite.cc.o
    #24 19.72 [90/6115] No update step for 'nccl_external'
    #24 19.72 [91/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message_builder.cc.o
    #24 19.73 [92/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_service.cc.o
    #24 19.73 [93/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/js/js_generator.cc.o
    #24 19.73 [94/6115] Linking CXX static library lib/libprotobuf-lite.a
    #24 19.73 [95/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_enum.cc.o
    #24 19.74 [96/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_string_field.cc.o
    #24 19.74 [97/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_enum_field.cc.o
    #24 19.74 [98/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_extension.cc.o
    #24 19.74 [99/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_generator.cc.o
    #24 19.74 [100/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_helpers.cc.o
    #24 19.74 [101/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_map_field.cc.o
    #24 19.74 [102/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_oneof.cc.o
    #24 19.75 [103/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/python/python_generator.cc.o
    #24 19.75 [104/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/arena.cc.o
    #24 19.75 [105/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_field.cc.o
    #24 19.75 [106/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_file.cc.o
    #24 19.75 [107/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_message.cc.o
    #24 19.75 [108/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_message_field.cc.o
    #24 19.75 [109/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_primitive_field.cc.o
    #24 19.75 [110/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/php/php_generator.cc.o
    #24 19.75 [111/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/plugin.cc.o
    #24 19.75 [112/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/plugin.pb.cc.o
    #24 19.76 [113/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/zip_writer.cc.o
    #24 19.76 [114/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/protoc.dir/__/src/google/protobuf/compiler/main.cc.o
    #24 19.76 [115/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/any_lite.cc.o
    #24 19.76 [116/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/extension_set.cc.o
    #24 19.76 [117/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/coded_stream.cc.o
    #24 19.76 [118/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/strtod.cc.o
    #24 19.76 [119/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/zero_copy_stream.cc.o
    #24 19.76 [120/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/ruby/ruby_generator.cc.o
    #24 19.77 [121/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/subprocess.cc.o
    #24 19.77 [122/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/generated_enum_util.cc.o
    #24 19.77 [123/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/generated_message_table_driven_lite.cc.o
    #24 19.77 [124/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/generated_message_util.cc.o
    #24 19.77 [125/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/implicit_weak_message.cc.o
    #24 19.78 [126/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/zero_copy_stream_impl.cc.o
    #24 19.78 [127/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/zero_copy_stream_impl_lite.cc.o
    #24 19.79 [128/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/parse_context.cc.o
    #24 19.79 [129/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/repeated_field.cc.o
    #24 19.79 [130/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/bytestream.cc.o
    #24 19.80 [131/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/common.cc.o
    #24 19.80 [132/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/int128.cc.o
    #24 19.80 [133/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/status.cc.o
    #24 19.80 [134/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/statusor.cc.o
    #24 19.80 [135/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/stringpiece.cc.o
    #24 19.80 [136/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/stringprintf.cc.o
    #24 19.80 [137/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/structurally_valid.cc.o
    #24 19.80 [138/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/strutil.cc.o
    #24 19.81 [139/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/time.cc.o
    #24 19.82 [140/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/wire_format_lite.cc.o
    #24 19.84 [141/6115] No patch step for 'nccl_external'
    #24 19.85 [142/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/gzip_stream.cc.o
    #24 19.85 [143/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/message_lite.cc.o
    #24 19.85 In file included from /usr/include/string.h:494:0,
    #24 19.85                  from ../third_party/protobuf/src/google/protobuf/stubs/port.h:39,
    #24 19.85                  from ../third_party/protobuf/src/google/protobuf/stubs/common.h:46,
    #24 19.85                  from ../third_party/protobuf/src/google/protobuf/message_lite.h:45,
    #24 19.85                  from ../third_party/protobuf/src/google/protobuf/message_lite.cc:36:
    #24 19.85 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.85     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializeToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/message_lite.cc:372:49:
    #24 19.85 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.85    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.85                                                                        ^
    #24 19.85 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.85     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35:
    #24 19.85 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.85    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.85                                                                        ^
    #24 19.85 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.85     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializeToFileDescriptor(int) const’ at ../third_party/protobuf/src/google/protobuf/message_lite.cc:372:49:
    #24 19.85 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.85    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.85                                                                        ^
    #24 19.85 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.85     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToFileDescriptor(int) const’ at ../third_party/protobuf/src/google/protobuf/message_lite.cc:401:42:
    #24 19.85 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.85    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.85                                                                        ^
    #24 19.85 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.85     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializeToOstream(std::ostream*) const’ at ../third_party/protobuf/src/google/protobuf/message_lite.cc:372:49:
    #24 19.85 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.85    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.85                                                                        ^
    #24 19.85 In function ‘void* memcpy(void*, const void*, size_t)’,
    #24 19.85     inlined from ‘google::protobuf::uint8* google::protobuf::io::EpsCopyOutputStream::WriteRaw(const void*, int, google::protobuf::uint8*)’ at ../third_party/protobuf/src/google/protobuf/io/coded_stream.h:699:16,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToZeroCopyStream(google::protobuf::io::ZeroCopyOutputStream*) const’ at ../third_party/protobuf/src/google/protobuf/implicit_weak_message.h:88:35,
    #24 19.85     inlined from ‘bool google::protobuf::MessageLite::SerializePartialToOstream(std::ostream*) const’ at ../third_party/protobuf/src/google/protobuf/message_lite.cc:414:60:
    #24 19.85 /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34:71: warning: ‘void* __builtin___memcpy_chk(void*, const void*, long unsigned int, long unsigned int)’: specified size between 18446744071562067968 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
    #24 19.85    return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
    #24 19.85                                                                        ^
    #24 19.87 [144/6115] No configure step for 'nccl_external'
    #24 19.93 [145/6115] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/convolution-input-gradient.c.o
    #24 19.94 [146/6115] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/convolution-output.c.o
    #24 19.94 [147/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/cache.c.o
    #24 19.96 [148/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/vendor.c.o
    #24 19.98 [149/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/info.c.o
    #24 20.01 [150/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/init.c.o
    #24 20.06 [151/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/uarch.c.o
    #24 20.06 [152/6115] Building C object confu-deps/pthreadpool/CMakeFiles/pthreadpool.dir/src/memory.c.o
    #24 20.08 [153/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/init.c.o
    #24 20.08 [154/6115] Building C object confu-deps/pthreadpool/CMakeFiles/pthreadpool.dir/src/legacy-api.c.o
    #24 20.11 [155/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/api.c.o
    #24 20.13 [156/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/topology.c.o
    #24 20.13 [157/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/isa.c.o
    #24 20.16 [158/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/name.c.o
    #24 20.19 [159/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/cache/descriptor.c.o
    #24 20.21 [160/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/cache/init.c.o
    #24 20.27 [161/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/cache/deterministic.c.o
    #24 20.35 [162/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/linux/cpuinfo.c.o
    #24 20.40 [163/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/linux/init.c.o
    #24 20.42 [164/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/linux/smallfile.c.o
    #24 20.44 [165/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/linux/multiline.c.o
    #24 20.44 [166/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/init.c.o
    #24 20.44 [167/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/cache.c.o
    #24 20.45 [168/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/linux/cpulist.c.o
    #24 20.46 [169/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/linux/processors.c.o
    #24 20.48 [170/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/info.c.o
    #24 20.49 [171/6115] Building C object confu-deps/pthreadpool/CMakeFiles/pthreadpool.dir/src/fastpath.c.o
    #24 20.50 [172/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/init.c.o
    #24 20.51 [173/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/api.c.o
    #24 20.52 [174/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/service.cc.o
    #24 20.57 [175/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/vendor.c.o
    #24 20.58 [176/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/topology.c.o
    #24 20.60 [177/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/uarch.c.o
    #24 20.71 [178/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/cache/descriptor.c.o
    #24 20.73 [179/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/substitute.cc.o
    #24 20.74 [180/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/json_escaping.cc.o
    #24 20.78 [181/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/error_listener.cc.o
    #24 20.80 [182/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/cache/deterministic.c.o
    #24 20.82 [183/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/name.c.o
    #24 20.84 [184/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/isa.c.o
    #24 20.84 [185/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/cache/init.c.o
    #24 20.85 [186/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/linux/cpuinfo.c.o
    #24 20.85 [187/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/linux/smallfile.c.o
    #24 20.85 [188/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/linux/multiline.c.o
    #24 20.86 [189/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/linux/cpulist.c.o
    #24 20.89 [190/6115] Building C object confu-deps/cpuinfo/deps/clog/CMakeFiles/clog.dir/src/clog.c.o
    #24 20.89 ../third_party/cpuinfo/deps/clog/src/clog.c: In function ‘clog_vlog_fatal’:
    #24 20.89 ../third_party/cpuinfo/deps/clog/src/clog.c:112:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
    #24 20.89     write(STDERR_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
    #24 20.89     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #24 20.89 ../third_party/cpuinfo/deps/clog/src/clog.c: In function ‘clog_vlog_error’:
    #24 20.89 ../third_party/cpuinfo/deps/clog/src/clog.c:188:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
    #24 20.89     write(STDERR_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
    #24 20.89     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #24 20.89 ../third_party/cpuinfo/deps/clog/src/clog.c: In function ‘clog_vlog_warning’:
    #24 20.89 ../third_party/cpuinfo/deps/clog/src/clog.c:264:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
    #24 20.89     write(STDERR_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
    #24 20.89     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #24 20.89 ../third_party/cpuinfo/deps/clog/src/clog.c: In function ‘clog_vlog_info’:
    #24 20.89 ../third_party/cpuinfo/deps/clog/src/clog.c:340:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
    #24 20.89     write(STDOUT_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
    #24 20.89     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #24 20.89 ../third_party/cpuinfo/deps/clog/src/clog.c: In function ‘clog_vlog_debug’:
    #24 20.89 ../third_party/cpuinfo/deps/clog/src/clog.c:416:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
    #24 20.89     write(STDOUT_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
    #24 20.89     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #24 20.92 [191/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/linux/processors.c.o
    #24 20.95 [192/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/delimited_message_util.cc.o
    #24 20.97 [193/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/init.c.o
    #24 20.98 [194/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/any.cc.o
    #24 20.99 [195/6115] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/linux/init.c.o
    #24 21.00 [196/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/add.c.o
    #24 21.02 [197/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/global-average-pooling.c.o
    #24 21.04 [198/6115] Linking C static library lib/libclog.a
    #24 21.05 [199/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/average-pooling.c.o
    #24 21.05 [200/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/channel-shuffle.c.o
    #24 21.06 [201/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/clamp.c.o
    #24 21.07 [202/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/leaky-relu.c.o
    #24 21.09 [203/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/max-pooling.c.o
    #24 21.11 [204/6115] Linking C static library lib/libcpuinfo_internals.a
    #24 21.13 [205/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/fully-connected.c.o
    #24 21.14 [206/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/sigmoid.c.o
    #24 21.15 [207/6115] Linking C static library lib/libcpuinfo.a
    #24 21.15 [208/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/operator-delete.c.o
    #24 21.16 [209/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/u8lut32norm/scalar.c.o
    #24 21.16 [210/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/deconvolution.c.o
    #24 21.17 [211/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/empty.pb.cc.o
    #24 21.19 [212/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/convolution.c.o
    #24 21.19 [213/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/sgemm/6x8-psimd.c.o
    #24 21.21 [214/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/softargmax.c.o
    #24 21.21 [215/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/duration.pb.cc.o
    #24 21.21 [216/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/printer.cc.o
    #24 21.22 [217/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/x8lut/scalar.c.o
    #24 21.23 [218/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/field_mask.pb.cc.o
    #24 21.24 [219/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/object_writer.cc.o
    #24 21.28 [220/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/timestamp.pb.cc.o
    #24 21.29 [221/6115] Building C object confu-deps/pthreadpool/CMakeFiles/pthreadpool.dir/src/portable-api.c.o
    #24 21.31 [222/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/indirection.c.o
    #24 21.31 [223/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/source_context.pb.cc.o
    #24 21.32 [224/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8avgpool/up8x9-sse2.c.o
    #24 21.34 [225/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8avgpool/up8xm-sse2.c.o
    #24 21.35 [226/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/any.pb.cc.o
    #24 21.35 [227/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/u8clamp/sse2.c.o
    #24 21.36 [228/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/x8zip/x2-sse2.c.o
    #24 21.37 [229/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/field_comparator.cc.o
    #24 21.38 [230/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/operator-run.c.o
    #24 21.38 [231/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8gavgpool/up8x7-sse2.c.o
    #24 21.39 [232/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8gavgpool/up8xm-sse2.c.o
    #24 21.40 [233/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/u8maxpool/sub16-sse2.c.o
    #24 21.40 [234/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/u8rmax/sse2.c.o
    #24 21.42 [235/6115] Linking C static library lib/libpthreadpool.a
    #24 21.43 [236/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/type_info_test_helper.cc.o
    #24 21.44 [237/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8avgpool/mp8x9p8q-sse2.c.o
    #24 21.44 [238/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8gavgpool/mp8x7p7q-sse2.c.o
    #24 21.45 [239/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/u8maxpool/16x9p8q-sse2.c.o
    #24 21.46 [240/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/x8zip/x3-sse2.c.o
    #24 21.50 [241/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/x8zip/x4-sse2.c.o
    #24 21.50 [242/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/tokenizer.cc.o
    #24 21.52 [243/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/field_mask_utility.cc.o
    #24 21.54 [244/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/time_util.cc.o
    #24 21.56 [245/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/x8zip/xm-sse2.c.o
    #24 21.58 [246/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/channel-shuffle.c.o
    #24 21.60 [247/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/json_objectwriter.cc.o
    #24 21.61 [248/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8conv/4x4c2-sse2.c.o
    #24 21.61 [249/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/init.c.o
    #24 21.61 [250/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/add.c.o
    #24 21.61 [251/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/average-pooling.c.o
    #24 21.62 [252/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/fully-connected.c.o
    #24 21.63 [253/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/global-average-pooling.c.o
    #24 21.64 [254/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/json_stream_parser.cc.o
    #24 21.64 [255/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8gemm/4x4c2-sse2.c.o
    #24 21.66 [256/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8vadd/sse2.c.o
    #24 21.66 [257/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/clamp.c.o
    #24 21.67 [258/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/hardsigmoid.c.o
    #24 21.67 [259/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/hardswish.c.o
    #24 21.68 [260/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/sigmoid.c.o
    #24 21.68 [261/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8dwconv/up8x9-sse2.c.o
    #24 21.70 [262/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8gemm/2x4c8-sse2.c.o
    #24 21.72 [263/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/deconvolution.c.o
    #24 21.74 [264/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/fully-connected-sparse.c.o
    #24 21.75 [265/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/leaky-relu.c.o
    #24 21.76 [266/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/max-pooling.c.o
    #24 21.76 [267/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/softargmax.c.o
    #24 21.77 [268/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/reflection_ops.cc.o
    #24 21.77 [269/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/json_util.cc.o
    #24 21.77 [270/6115] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8dwconv/mp8x25-sse2.c.o
    #24 21.77 [271/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/convolution.c.o
    #24 21.78 [272/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/tanh.c.o
    #24 21.81 [273/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/operator-delete.c.o
    #24 21.83 [274/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/x8lut/scalar.c.o
    #24 21.84 [275/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/u8lut32norm/scalar.c.o
    #24 21.84 [276/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/sgemm/6x8-psimd.c.o
    #24 21.85 [277/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8avgpool/up8xm-sse2.c.o
    #24 21.86 [278/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8avgpool/up8x9-sse2.c.o
    #24 21.88 [279/6115] Linking C static library lib/libqnnpack.a
    #24 21.89 [280/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/utility.cc.o
    #24 21.90 [281/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/type_info.cc.o
    #24 21.91 [282/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/indirection.c.o
    #24 21.91 [283/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8avgpool/mp8x9p8q-sse2.c.o
    #24 21.92 [284/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8gavgpool/up8xm-sse2.c.o
    #24 21.93 [285/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/dynamic_message.cc.o
    #24 21.94 [286/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8gavgpool/up8x7-sse2.c.o
    #24 21.94 [287/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/unknown_field_set.cc.o
    #24 21.95 [288/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8gavgpool/mp8x7p7q-sse2.c.o
    #24 21.95 [289/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/extension_set_heavy.cc.o
    #24 21.95 [290/6115] Building CXX object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/fc-prepack.cc.o
    #24 21.97 [291/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/compiler/importer.cc.o
    #24 21.97 [292/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/api.pb.cc.o
    #24 21.99 [293/6115] Building CXX object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/conv-prepack.cc.o
    #24 22.02 [294/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/u8clamp/sse2.c.o
    #24 22.02 [295/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/operator-run.c.o
    #24 22.03 [296/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/u8maxpool/sub16-sse2.c.o
    #24 22.03 [297/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/math/roundz-scalar-trunc.c.o
    #24 22.03 [298/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/math/sigmoid-scalar-rr2-lut64-p2-div.c.o
    #24 22.04 [299/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/x32-unpool/scalar.c.o
    #24 22.04 [300/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/u8maxpool/16x9p8q-sse2.c.o
    #24 22.04 [301/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/u8rmax/sse2.c.o
    #24 22.04 [302/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/x8zip/x2-sse2.c.o
    #24 22.04 [303/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/x8zip/x3-sse2.c.o
    #24 22.04 [304/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-requantization/fp32-scalar-lrintf.c.o
    #24 22.04 [305/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/x32-zip/x2-scalar.c.o
    #24 22.05 [306/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/field_mask_util.cc.o
    #24 22.05 [307/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/math/sigmoid-scalar-rr2-lut2048-p1-div.c.o
    #24 22.05 [308/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/x32-pad/scalar-int.c.o
    #24 22.06 [309/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/x8zip/x4-sse2.c.o
    #24 22.06 [310/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/math/sigmoid-scalar-rr2-p5-div.c.o
    #24 22.06 [311/6115] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/fully-connected-output.c.o
    #24 22.06 [312/6115] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/convolution-kernel.c.o
    #24 22.07 [313/6115] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/softmax-output.c.o
    #24 22.07 [314/6115] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/max-pooling-output.c.o
    #24 22.08 [315/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/x8zip/xm-sse2.c.o
    #24 22.09 [316/6115] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/relu-output.c.o
    #24 22.10 [317/6115] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/relu-input-gradient.c.o
    #24 22.11 [318/6115] Building CXX object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/fc-dynamic-run.cc.o
    #24 22.12 [319/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8gemm/2x4c8-sse2.c.o
    #24 22.14 [320/6115] Linking C static library lib/libnnpack_reference_layers.a
    #24 22.15 [321/6115] Building CXX object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/fc-run.cc.o
    #24 22.17 [322/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8conv/4x4c2-sse2.c.o
    #24 22.19 [323/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/type_resolver_util.cc.o
    #24 22.20 [324/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8dwconv/up8x9-sse2-per-channel.c.o
    #24 22.22 [325/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/wrappers.pb.cc.o
    #24 22.22 [326/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8gemm/4x4c2-sse2.c.o
    #24 22.22 [327/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8vadd/sse2.c.o
    #24 22.22 [328/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/channel-shuffle-nc.c.o
    #24 22.23 [329/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/depth-to-space-nchw2nhwc.c.o
    #24 22.23 [330/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/depth-to-space-nhwc.c.o
    #24 22.23 [331/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8dwconv/up8x9-sse2.c.o
    #24 22.23 [332/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8gemm_sparse/8x4-packA-sse2.c.o
    #24 22.23 [333/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/argmax-pooling-nhwc.c.o
    #24 22.24 [334/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/add2.c.o
    #24 22.25 [335/6115] Building CXX object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/deconv-run.cc.o
    #24 22.25 [336/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8gemm/4x4c2-dq-sse2.c.o
    #24 22.26 [337/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8gemm_sparse/8x4c1x4-dq-packedA-sse2.c.o
    #24 22.26 [338/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/leaky-relu-nc.c.o
    #24 22.26 [339/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/proto_writer.cc.o
    #24 22.28 [340/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/prelu-nc.c.o
    #24 22.29 [341/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8dwconv/mp8x25-sse2.c.o
    #24 22.29 [342/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/constant-pad-nd.c.o
    #24 22.29 [343/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/global-average-pooling-ncw.c.o
    #24 22.29 [344/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/resize-bilinear-nchw.c.o
    #24 22.29 [345/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/resize-bilinear-nhwc.c.o
    #24 22.30 [346/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/sigmoid-nc.c.o
    #24 22.30 [347/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/unpooling-nhwc.c.o
    #24 22.30 [348/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/argmax-pooling-2d.c.o
    #24 22.30 [349/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/average-pooling-2d.c.o
    #24 22.30 [350/6115] Building C object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/q8dwconv/mp8x25-sse2-per-channel.c.o
    #24 22.31 [351/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/fully-connected-nc.c.o
    #24 22.31 [352/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/max-pooling-nhwc.c.o
    #24 22.32 [353/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/softmax-nc.c.o
    #24 22.32 [354/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/abs.c.o
    #24 22.32 [355/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/message.cc.o
    #24 22.33 [356/6115] Building CXX object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/conv-run.cc.o
    #24 22.33 [357/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/binary-elementwise-nd.c.o
    #24 22.35 [358/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/global-average-pooling-nwc.c.o
    #24 22.35 [359/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/bankers-rounding.c.o
    #24 22.35 [360/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/ceiling.c.o
    #24 22.36 [361/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/average-pooling-nhwc.c.o
    #24 22.36 [362/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/convolution-nchw.c.o
    #24 22.37 [363/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/protostream_objectsource.cc.o
    #24 22.38 [364/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/deconvolution-nhwc.c.o
    #24 22.39 [365/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/divide.c.o
    #24 22.39 [366/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/depth-to-space.c.o
    #24 22.39 [367/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/clamp.c.o
    #24 22.40 [368/6115] Building CXX object confu-deps/pytorch_qnnpack/CMakeFiles/pytorch_qnnpack.dir/src/pack_block_sparse.cc.o
    #24 22.40 [369/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/convolution-2d.c.o
    #24 22.40 [370/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/deconvolution-2d.c.o
    #24 22.41 [371/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/minimum2.c.o
    #24 22.41 [372/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/tables/exp2-k-over-64.c.o
    #24 22.41 [373/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/elu.c.o
    #24 22.41 [374/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/floor.c.o
    #24 22.42 [375/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/fully-connected.c.o
    #24 22.42 [376/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/tables/exp2-k-over-2048.c.o
    #24 22.42 [377/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/unary-elementwise-nc.c.o
    #24 22.43 [378/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/depthwise-convolution-2d.c.o
    #24 22.43 [379/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/hardswish.c.o
    #24 22.43 [380/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/tables/exp2minus-k-over-8.c.o
    #24 22.43 [381/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/global-average-pooling-2d.c.o
    #24 22.43 [382/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/max-pooling-2d.c.o
    #24 22.43 [383/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/maximum2.c.o
    #24 22.44 [384/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/tables/exp2minus-k-over-64.c.o
    #24 22.44 [385/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/leaky-relu.c.o
    #24 22.45 [386/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/tables/exp2minus-k-over-4.c.o
    #24 22.45 [387/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/tables/exp2minus-k-over-16.c.o
    #24 22.45 [388/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/datapiece.cc.o
    #24 22.45 [389/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/negate.c.o
    #24 22.45 [390/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/prelu.c.o
    #24 22.45 [391/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/memory.c.o
    #24 22.46 [392/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/tables/exp2minus-k-over-2048.c.o
    #24 22.46 [393/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/multiply2.c.o
    #24 22.46 [394/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/softmax.c.o
    #24 22.47 [395/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/squared-difference.c.o
    #24 22.47 [396/6115] Linking CXX static library lib/libpytorch_qnnpack.a
    #24 22.48 [397/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/sigmoid.c.o
    #24 22.48 [398/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operator-delete.c.o
    #24 22.49 [399/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/unpooling-2d.c.o
    #24 22.49 [400/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operators/convolution-nhwc.c.o
    #24 22.49 [401/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/operator-strings.c.o
    #24 22.49 [402/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-argmaxpool/4x-scalar-c1.c.o
    #24 22.50 [403/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/static-resize-bilinear-2d.c.o
    #24 22.50 [404/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/square-root.c.o
    #24 22.50 [405/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/subtract.c.o
    #24 22.50 [406/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph-strings.c.o
    #24 22.51 [407/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/static-constant-pad.c.o
    #24 22.51 [408/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/square.c.o
    #24 22.51 [409/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-clamp/gen/scalar-x4.c.o
    #24 22.52 [410/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-clamp/gen/scalar-x2.c.o
    #24 22.53 [411/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-clamp/gen/scalar-x1.c.o
    #24 22.54 [412/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/subgraph/static-reshape.c.o
    #24 22.54 [413/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/tensor.c.o
    #24 22.54 [414/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-argmaxpool/9x-scalar-c1.c.o
    #24 22.54 [415/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-avgpool/9x-minmax-scalar-c1.c.o
    #24 22.55 [416/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x4-minmax-scalar-acc2.c.o
    #24 22.55 [417/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x4-minmax-scalar.c.o
    #24 22.55 [418/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/memory-planner.c.o
    #24 22.55 [419/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-argmaxpool/9p8x-scalar-c1.c.o
    #24 22.55 [420/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x4-scalar-acc2.c.o
    #24 22.56 [421/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x9-minmax-scalar-acc2.c.o
    #24 22.56 [422/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x4-scalar.c.o
    #24 22.56 [423/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x9-scalar.c.o
    #24 22.57 [424/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x9-scalar-acc2.c.o
    #24 22.57 [425/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up2x4-scalar.c.o
    #24 22.58 [426/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-avgpool/9p8x-minmax-scalar-c1.c.o
    #24 22.58 [427/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up2x4-minmax-scalar.c.o
    #24 22.58 [428/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up2x4-scalar-acc2.c.o
    #24 22.58 [429/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x9-minmax-scalar.c.o
    #24 22.59 [430/6115] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/default_value_objectwriter.cc.o
    #24 22.59 [431/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv2d-chw/gen/3x3p1-minmax-scalar-1x1-acc4.c.o
    #24 22.59 [432/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-conv-hwc/3x3s2p1c3x4-scalar-1x1.c.o
    #24 22.60 [433/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x25-scalar.c.o
    #24 22.60 [434/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv2d-chw/gen/3x3p1-minmax-scalar-1x1-acc3.c.o
    #24 22.60 [435/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv2d-chw/gen/3x3p1-minmax-scalar-1x1.c.o
    #24 22.60 [436/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x25-minmax-scalar.c.o
    #24 22.60 [437/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up1x25-scalar-acc2.c.o
    #24 22.60 [438/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up2x9-minmax-scalar.c.o
    #24 22.61 [439/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/gen/up2x9-minmax-scalar-acc2.c.o
    #24 22.61 [440/6115] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv2d-chw/gen/3x3p1-minmax-scalar-1x1-acc2.c.o
    #24 [... build progress lines [441/6115] through [618/6115] omitted: XNNPACK and protobuf object files compile successfully ...]
    #24 23.05 [619/6115] Performing build step for 'nccl_external'
    #24 23.05 FAILED: nccl_external-prefix/src/nccl_external-stamp/nccl_external-build nccl/lib/libnccl_static.a 
    #24 23.05 cd /opt/pytorch/third_party/nccl/nccl && env make CXX=/usr/bin/c++ CUDA_HOME=/usr/local/cuda NVCC=/usr/local/cuda/bin/nvcc NVCC_GENCODE=-gencode=arch=compute_80,code=sm_80 BUILDDIR=/opt/pytorch/build/nccl VERBOSE=0 -j && /opt/conda/bin/cmake -E touch /opt/pytorch/build/nccl_external-prefix/src/nccl_external-stamp/nccl_external-build
    #24 23.05 make -C src build BUILDDIR=/opt/pytorch/build/nccl
    #24 23.05 make[1]: Entering directory '/opt/pytorch/third_party/nccl/nccl/src'
    #24 23.05 Grabbing   include/nccl_net.h                  > /opt/pytorch/build/nccl/include/nccl_net.h
    #24 23.05 Generating nccl.h.in                           > /opt/pytorch/build/nccl/include/nccl.h
    #24 23.05 Generating nccl.pc.in                          > /opt/pytorch/build/nccl/lib/pkgconfig/nccl.pc
    #24 23.05 Compiling  init.cc                             > /opt/pytorch/build/nccl/obj/init.o
    #24 23.05 Compiling  channel.cc                          > /opt/pytorch/build/nccl/obj/channel.o
    #24 23.05 Compiling  bootstrap.cc                        > /opt/pytorch/build/nccl/obj/bootstrap.o
    #24 23.05 Compiling  transport.cc                        > /opt/pytorch/build/nccl/obj/transport.o
    #24 23.05 Compiling  enqueue.cc                          > /opt/pytorch/build/nccl/obj/enqueue.o
    #24 23.05 Compiling  group.cc                            > /opt/pytorch/build/nccl/obj/group.o
    #24 23.05 Compiling  debug.cc                            > /opt/pytorch/build/nccl/obj/debug.o
    #24 23.05 Compiling  proxy.cc                            > /opt/pytorch/build/nccl/obj/proxy.o
    #24 23.05 Compiling  misc/nvmlwrap.cc                    > /opt/pytorch/build/nccl/obj/misc/nvmlwrap.o
    #24 23.05 Compiling  misc/ibvwrap.cc                     > /opt/pytorch/build/nccl/obj/misc/ibvwrap.o
    #24 23.05 Compiling  misc/utils.cc                       > /opt/pytorch/build/nccl/obj/misc/utils.o
    #24 23.05 Compiling  misc/argcheck.cc                    > /opt/pytorch/build/nccl/obj/misc/argcheck.o
    #24 23.05 Compiling  transport/p2p.cc                    > /opt/pytorch/build/nccl/obj/transport/p2p.o
    #24 23.05 Compiling  transport/shm.cc                    > /opt/pytorch/build/nccl/obj/transport/shm.o
    #24 23.05 Compiling  transport/net.cc                    > /opt/pytorch/build/nccl/obj/transport/net.o
    #24 23.05 Compiling  transport/net_socket.cc             > /opt/pytorch/build/nccl/obj/transport/net_socket.o
    #24 23.05 Compiling  transport/net_ib.cc                 > /opt/pytorch/build/nccl/obj/transport/net_ib.o
    #24 23.05 Compiling  transport/coll_net.cc               > /opt/pytorch/build/nccl/obj/transport/coll_net.o
    #24 23.05 Compiling  collectives/sendrecv.cc             > /opt/pytorch/build/nccl/obj/collectives/sendrecv.o
    #24 23.05 Compiling  collectives/all_reduce.cc           > /opt/pytorch/build/nccl/obj/collectives/all_reduce.o
    #24 23.05 Compiling  collectives/all_gather.cc           > /opt/pytorch/build/nccl/obj/collectives/all_gather.o
    #24 23.05 Compiling  collectives/broadcast.cc            > /opt/pytorch/build/nccl/obj/collectives/broadcast.o
    #24 23.05 Compiling  collectives/reduce.cc               > /opt/pytorch/build/nccl/obj/collectives/reduce.o
    #24 23.05 Compiling  collectives/reduce_scatter.cc       > /opt/pytorch/build/nccl/obj/collectives/reduce_scatter.o
    #24 23.05 Compiling  graph/topo.cc                       > /opt/pytorch/build/nccl/obj/graph/topo.o
    #24 23.05 Compiling  graph/paths.cc                      > /opt/pytorch/build/nccl/obj/graph/paths.o
    #24 23.05 Compiling  graph/search.cc                     > /opt/pytorch/build/nccl/obj/graph/search.o
    #24 23.05 Compiling  graph/connect.cc                    > /opt/pytorch/build/nccl/obj/graph/connect.o
    #24 23.05 Compiling  graph/rings.cc                      > /opt/pytorch/build/nccl/obj/graph/rings.o
    #24 23.05 Compiling  graph/trees.cc                      > /opt/pytorch/build/nccl/obj/graph/trees.o
    #24 23.05 Compiling  graph/tuning.cc                     > /opt/pytorch/build/nccl/obj/graph/tuning.o
    #24 23.05 Compiling  graph/xml.cc                        > /opt/pytorch/build/nccl/obj/graph/xml.o
    #24 23.05 make[2]: Entering directory '/opt/pytorch/third_party/nccl/nccl/src/collectives/device'
    #24 23.05 Generating rules                               > /opt/pytorch/build/nccl/obj/collectives/device/Makefile.rules
    #24 23.05 nvcc fatal   : Unsupported gpu architecture 'compute_80'
    #24 23.05 nvcc fatal   : Unsupported gpu architecture 'compute_80'
    #24 23.05 Makefile:52: recipe for target '/opt/pytorch/build/nccl/obj/collectives/device/sendrecv.dep' failed
    #24 23.05 make[2]: *** [/opt/pytorch/build/nccl/obj/collectives/device/sendrecv.dep] Error 1
    #24 23.05 make[2]: *** Waiting for unfinished jobs....
    #24 23.05 Makefile:52: recipe for target '/opt/pytorch/build/nccl/obj/collectives/device/all_reduce.dep' failed
    #24 23.05 make[2]: *** [/opt/pytorch/build/nccl/obj/collectives/device/all_reduce.dep] Error 1
    #24 23.05 nvcc fatal   : Unsupported gpu architecture 'compute_80'
    #24 23.05 Makefile:52: recipe for target '/opt/pytorch/build/nccl/obj/collectives/device/all_gather.dep' failed
    #24 23.05 make[2]: *** [/opt/pytorch/build/nccl/obj/collectives/device/all_gather.dep] Error 1
    #24 23.05 make[2]: *** wait: No child processes.  Stop.
    #24 23.05 Makefile:50: recipe for target '/opt/pytorch/build/nccl/obj/collectives/device/colldevice.a' failed
    #24 23.05 make[1]: *** [/opt/pytorch/build/nccl/obj/collectives/device/colldevice.a] Error 2
    #24 23.05 make[1]: *** Waiting for unfinished jobs....
    #24 23.05 make[1]: Leaving directory '/opt/pytorch/third_party/nccl/nccl/src'
    #24 23.05 Makefile:25: recipe for target 'src.build' failed
    #24 23.05 make: *** [src.build] Error 2
    #24 [... build progress lines [620/6115] through [682/6115] omitted: the remaining XNNPACK and protobuf objects continue to compile after the NCCL failure ...]
    #24 28.10 ninja: build stopped: subcommand failed.
    ------
    executor failed running [/bin/sh -c USE_CUDA=1 USE_CUDNN=1     TORCH_NVCC_FLAGS=${TORCH_NVCC_FLAGS}     TORCH_CUDA_ARCH_LIST=${TORCH_CUDA_ARCH_LIST}     CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"     python setup.py bdist_wheel -d /tmp/dist]: exit code: 1
    make: *** [Makefile:104: build-torch-full] Error 1
    

    Is this normal? What should I do? Thank you in advance!

    opened by pvnieo 11
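    For context on the failure above: compute_80 targets Ampere GPUs and is only understood by CUDA 11.0 and later, so an older CUDA toolkit inside the build image fails exactly as shown. A minimal sanity check, assuming the image's nvcc is on the PATH (the --list-gpu-arch flag is available on recent CUDA toolkits):

    # Check which toolkit the build image actually uses and which architectures
    # its nvcc can target. If compute_80 is missing, either raise CUDA_VERSION to
    # 11.0+ or lower TORCH_CUDA_ARCH_LIST / CC to match the toolkit.
    nvcc --version
    nvcc --list-gpu-arch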
  • GPUs Not Detected (RTX 3090; Torch 1.9.1)

    GPUs Not Detected (RTX 3090; Torch 1.9.1)

    Hello, and thank you for putting together this resource!

    I tried following the steps in the README without deviation, but the resulting Docker container does not detect the GPUs from Python. I am using a system with 3x RTX 3090s.

    Steps to Reproduce

    1. Clone this repository.
    2. Run the following command (taken straight from the README):
       make all CC="8.6" PYTORCH_VERSION_TAG=v1.9.1 TORCHVISION_VERSION_TAG=v0.10.1 TORCHTEXT_VERSION_TAG=v0.10.1 TORCHAUDIO_VERSION_TAG=v0.9.1
    3. Install docker-compose v2.
    4. Edit docker-compose.yaml. Before editing: https://github.com/veritas9872/PyTorch-Universal-Docker-Template/blob/f8708d44c9d3a013155cd91985534da1a4eee7fb/docker-compose.yaml#L28 After editing:
            TORCH_CUDA_ARCH_LIST: ${CC:-'8.6+PTX'}
       (PS: also edited the GID and UID as suggested, but not posting those here as they are system-specific.)
    5. Run the following commands (taken from the README):
       docker compose up -d train
       docker compose exec train /bin/bash
    6. Open up ipython and run the following:
       import torch
       torch.cuda.is_available()
    

    to get this error:

    /opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW (Triggered internally at  ../c10/cuda/CUDAFunctions.cpp:115.)
      return torch._C._cuda_getDeviceCount() > 0
    

    Thank you!

    opened by rsomani95 10
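    A note on the error above: CUDA error 804 ("forward compatibility was attempted on non supported HW") usually means the host NVIDIA driver is older than the CUDA runtime inside the container, so the forward-compatibility path is attempted and fails on GeForce hardware. A minimal check, assuming the container was started with the train service shown above:

    # Compare the host driver with the CUDA runtime shipped in the container.
    # If the driver's supported CUDA version is older than the container's CUDA,
    # error 804 is the typical symptom on non-datacenter GPUs such as the RTX 3090.
    nvidia-smi --query-gpu=driver_version --format=csv,noheader   # on the host
    docker compose exec train nvidia-smi                          # as seen inside the container
    docker compose exec train python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"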
  • git clone --jobs error

    git clone --jobs error

    The git clone command fails with an "unknown option `jobs'" error. Is there a known workaround?


    [build-install 3/6] RUN git clone --recursive --jobs 0 https://github.com/pytorch/pytorch.git:
    #16 0.633 error: unknown option `jobs'
    #16 0.633 usage: git clone [<options>] [--] <repo> [<dir>]
    #16 0.633 [... full git clone option listing omitted; the available options do not include --jobs ...]


    executor failed running [/bin/sh -c git clone --recursive --jobs 0 https://github.com/pytorch/pytorch.git]: exit code: 129
    Makefile:23: recipe for target 'build-install' failed
    make: *** [build-install] Error 1

    opened by kboseong 8
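    The usage dump above shows a Git build that predates the --jobs option for git clone, so the flag is rejected outright. Two possible workarounds, sketched here without assuming anything beyond the clone command in the log:

    # Option 1: drop the flag; submodules are still cloned, just not in parallel.
    git clone --recursive https://github.com/pytorch/pytorch.git

    # Option 2: clone first, then fetch the submodules in a separate step.
    git clone https://github.com/pytorch/pytorch.git
    cd pytorch && git submodule update --init --recursive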
  • A minimal way to adopt the method in an existing project?

    A minimal way to adopt the method in an existing project?

    Hello,

    I came across this template when having trouble installing the environment of an existing project, given as a conda environment, but without more details about system dependencies. I wanted to build a portable environment with Docker but existing PyTorch images do not support my old GPU kernel, so I was looking for a way to compile Pytorch and Torchvision (CUDA enabled) from source in a Docker environment.

    This project template addresses my problem and doubles down with best practices; however, I found it overwhelming to use and adapt for my current project, mainly because so much of the complexity of building the environment is transferred to the user (e.g. setting up the different env variables).

    Is there a minimal way to take this template and integrate it into an existing project?

    Thanks!

    opened by skandermoalla 7
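    One minimal route, sketched with a hypothetical image tag: build only the wheels with this template, copy them out of the image, and pip-install them into the existing environment, leaving the rest of the project untouched. The /tmp/dist path is where the build stages place the wheels (visible in the build logs above); the image name below is an assumption and should be replaced with whatever the build actually produced.

    # Copy the built wheels out of the (hypothetical) build image and install them
    # into an existing environment without adopting the full template.
    docker create --name torch-wheels pytorch_source:build_torch
    docker cp torch-wheels:/tmp/dist ./wheels
    docker rm torch-wheels
    pip install ./wheels/*.whl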
  • Image size is 756GB.

    Image size is 756GB.

    The image size is 756GB after running `make all-full CC="7.5"` on a server with 40 cores and 8 GTX 2080 Ti GPUs.

    I was not sure whether the image really takes up this much space, but when I tried to push it to Docker Hub, it really did come to 756GB. (I am pushing it now; currently 41.48GB/756GB pushed.)

    opened by WantToLearnJapanese 7
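    A generic way to see where the space is going, with no project-specific assumptions: list the image's layers and their sizes, and check how much of the total is build cache rather than the image itself.

    # Show each layer of the built image with its size to find the largest contributors.
    docker history --no-trunc <image:tag>
    # Also show how much space images, volumes, and the build cache occupy overall.
    docker system df -v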
  • Torchvision does not build PYTORCH_VERSION_TAG:-v1.10.2 TORCHVISION_VERSION_TAG:-v0.11.3

    Torchvision does not build PYTORCH_VERSION_TAG:-v1.10.2 TORCHVISION_VERSION_TAG:-v0.11.3

    Hello,

    I'm building the docker image with the following config (all others being default)

    BUILD_MODE=include
    CCA=3.5
    CUDA_VERSION:-11.3.1
    PYTHON_VERSION:-3.8
    PYTORCH_VERSION_TAG:-v1.10.2
    TORCHVISION_VERSION_TAG:-v0.11.3
    

    The build fails at the build-vision stage with the following error:

    #0 85.06 [14/16] c++ -MMD -MF /opt/vision/build/temp.linux-x86_64-3.8/opt/vision/torchvision/csrc/io/video_reader/video_reader.o.d -pthread -B /opt/conda/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/include -fPIC -O2 -isystem /opt/conda/include -fPIC -I/opt/vision/torchvision/csrc/io/decoder -I/opt/vision/torchvision/csrc/io/video_reader -I/opt/vision/torchvision/csrc/io/video -I/opt/vision/torchvision/csrc -I/opt/conda/include -I/opt/conda/include/x86_64-linux-gnu -I/opt/vision/torchvision/csrc -I/opt/conda/lib/python3.8/site-packages/torch/include -I/opt/conda/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.8/site-packages/torch/include/TH -I/opt/conda/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/lib/python3.8/site-packages/torch/include -I/opt/conda/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.8/site-packages/torch/include/TH -I/opt/conda/lib/python3.8/site-packages/torch/include/THC -I/opt/conda/include/python3.8 -c -c /opt/vision/torchvision/csrc/io/video_reader/video_reader.cpp -o /opt/vision/build/temp.linux-x86_64-3.8/opt/vision/torchvision/csrc/io/video_reader/video_reader.o -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=video_reader -D_GLIBCXX_USE_CXX11_ABI=1
    #0 85.06 ninja: build stopped: subcommand failed.
    #0 85.06 Traceback (most recent call last):
    #0 85.06   File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1717, in _run_ninja_build
    #0 85.06     subprocess.run(
    #0 85.06   File "/opt/conda/lib/python3.8/subprocess.py", line 516, in run
    #0 85.06     raise CalledProcessError(retcode, process.args,
    #0 85.06 subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
    #0 85.06
    #0 85.06 The above exception was the direct cause of the following exception:                                                    
    #0 85.06
    #0 85.06 Traceback (most recent call last):
    #0 85.06   File "setup.py", line 484, in <module>
    #0 85.06     setup(
    #0 85.06   File "/opt/conda/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup
    #0 85.06     return distutils.core.setup(**attrs)
    #0 85.06   File "/opt/conda/lib/python3.8/distutils/core.py", line 148, in setup
    #0 85.06     dist.run_commands()
    #0 85.06   File "/opt/conda/lib/python3.8/distutils/dist.py", line 966, in run_commands
    #0 85.06     self.run_command(cmd)
    #0 85.06   File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
    #0 85.06     cmd_obj.run()
    #0 85.06   File "/opt/conda/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 299, in run
    #0 85.06     self.run_command('build')
    #0 85.06   File "/opt/conda/lib/python3.8/distutils/cmd.py", line 313, in run_command
    #0 85.06     self.distribution.run_command(command)
    #0 85.06   File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
    #0 85.06     cmd_obj.run()
    #0 85.06   File "/opt/conda/lib/python3.8/distutils/command/build.py", line 135, in run
    #0 85.06     self.run_command(cmd_name)
    #0 85.06   File "/opt/conda/lib/python3.8/distutils/cmd.py", line 313, in run_command
    #0 85.06     self.distribution.run_command(command)
    #0 85.06   File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
    #0 85.06     cmd_obj.run()
    #0 85.06   File "/opt/conda/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
    #0 85.07     _build_ext.run(self)
    #0 85.07   File "/opt/conda/lib/python3.8/distutils/command/build_ext.py", line 340, in run
    #0 85.07     self.build_extensions()
    #0 85.07   File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 735, in build_extensions
    #0 85.07     build_ext.build_extensions(self)
    #0 85.07   File "/opt/conda/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
    #0 85.07     self._build_extensions_serial()
    #0 85.07   File "/opt/conda/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial
    #0 85.07     self.build_extension(ext)
    #0 85.07   File "/opt/conda/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 202, in build_extension
    #0 85.07     _build_ext.build_extension(self, ext)
    #0 85.07   File "/opt/conda/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
    #0 85.07     objects = self.compiler.compile(sources,
    #0 85.07   File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 556, in unix_wrap_ninja_compile
    #0 85.07     _write_ninja_file_and_compile_objects(
    #0 85.07   File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1399, in _write_ninja_file_and_compile_objects
    #0 85.07     _run_ninja_build(
    #0 85.07   File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1733, in _run_ninja_build
    #0 85.07     raise RuntimeError(message) from e
    #0 85.07 RuntimeError: Error compiling objects for extension
    ------
    failed to solve: executor failed running [/bin/sh -c python setup.py bdist_wheel -d /tmp/dist]: exit code: 1
    make: *** [Makefile:67: build] Error 17
    

    When ignoring the build-vision stage, the build works fine, i.e. by changing train-builds-include as follows:

    FROM ${BUILD_IMAGE} AS train-builds-include
    
    ...
    
    COPY --link --from=install-base /opt/conda /opt/conda
    COPY --link --from=build-pillow /tmp/dist  /tmp/dist
    
    COPY --link --from=build-torch /tmp/dist  /tmp/dist <--- here 
    
    COPY --link --from=fetch-pure   /opt/zsh   /opt
    

    I get the same error with different versions of torchvision. I tried 0.11.3, 0.11.2 and 0.11.1.

    Could you suggest any help?

    Thanks a lot!

    opened by skandermoalla 5
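    A generic debugging sketch (nothing project-specific assumed): the traceback above only reports that ninja stopped; the actual compiler error is printed earlier and is easy to miss in BuildKit's folded output. Forcing plain progress output makes the first real error visible, assuming the Makefile's build target drives a BuildKit-based docker build.

    # Re-run the failing build with unfolded BuildKit output so the first compiler
    # error from the torchvision step is actually shown, then search the log for it.
    BUILDKIT_PROGRESS=plain make build 2>&1 | tee build.log
    grep -n -m1 "error:" build.log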
  • Conda dependencies in `train` stage

    Conda dependencies in `train` stage

    Hello,

    At the moment, the template supports installing dependencies with pip in the train stage, specified in pip-train.requirements.txt. Is there a way to add conda dependencies as well? Best practice is to install the conda dependencies first and then the pip ones.

    Thanks!

    opened by skandermoalla 5
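    A sketch of what such a step could look like, assuming a hypothetical conda-train.requirements.txt placed next to the existing pip requirements file (the file name is an assumption, not part of the template):

    # Inside the train stage, before the pip install step: install the conda
    # packages first, then the pip packages, per conda best practice.
    conda install --yes --file conda-train.requirements.txt && conda clean -ya
    pip install -r pip-train.requirements.txt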
  • Building the template for x86_64 or VM

    Building the template for x86_64 or VM

    Hi

    Thanks for the detailed build steps and documentation. Can I build the image for an Intel desktop or a VM? If I use (build/train) on a VM, what TARGET_ARCH options need to be given?

    Thank you so much Srini

    opened by skanduru 5
  • Building from source having x4 speedup vs. naive install

    Building from source having x4 speedup vs. naive install

    Hi. Thank you for releasing this library.

    I had a question about this tidbit in the README:

    PyTorch built from source can be x4 faster than a naïve PyTorch install.

    Is there any past discussion or justification for this speedup? Is a source install faster on both CPU and GPU? Could it be attributed to better optimization of BLAS or MKL?

    Would love to know more about this.

    opened by seongminp 5
  • Fix PyTorch download URL variables

    Fix PyTorch download URL variables

    Fix inconsistencies in the PyTorch download URL variables:

    • the Dockerfile defines PYTORCH_INDEX_URL,
    • but the compose file defines PYTORCH_URL.

    Also, delete the default values in the Dockerfile, as the compose file specifies them.

    opened by skandermoalla 4
  • make build failed

    make build failed

    .env is below:

    CUDA_VERSION=10.2
    CUDNN_VERSION=8
    PYTHON_VERSION=3.7
    TORCH_CUDA_ARCH_LIST=8.0
    PYTORCH_VERSION_TAG=v1.11.0
    

    make build met error:

     > [internal] load metadata for docker.io/nvidia/cuda:11.2-cudnn8-devel-ubuntu18.04:
    ------
    failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to solve with frontend gateway.v0: rpc error: code = Unknown desc = docker.io/nvidia/cuda:11.2-cudnn8-devel-ubuntu18.04: not found
    Makefile:55: recipe for target 'build' failed
    

    It seems NVIDIA changed the image names?

    opened by Jack47 4
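    A likely explanation: the nvidia/cuda images on Docker Hub are tagged with fully qualified versions (for example 11.2.2-cudnn8-devel-ubuntu18.04 rather than 11.2-cudnn8-devel-ubuntu18.04), and NVIDIA periodically removes old tags, so a bare minor version in CUDA_VERSION can stop resolving. A quick check, using an example tag that may itself have been retired upstream:

    # Verify that the base image tag actually exists before starting the build.
    docker manifest inspect nvidia/cuda:11.2.2-cudnn8-devel-ubuntu18.04 > /dev/null && echo "tag exists"
    # or simply try pulling it:
    docker pull nvidia/cuda:11.2.2-cudnn8-devel-ubuntu18.04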
Releases(v0.5.0)
  • v0.5.0(Jan 4, 2023)

    Changed the training environment so that conda is now the preferred package manager instead of serving only as the virtual environment manager. Fixed many bugs and issues to make the project easier to use.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.1(Sep 12, 2022)

    Separate out installing PyTorch and related libraries. Rename download-only stages as fetch for better contrast with build stages. Updates in the documentation.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Aug 30, 2022)

  • v0.3.0(Jun 1, 2022)

    Support for external files for installation. The Nsight Systems Debian binary cannot be installed via the command line, so an external .deb file is used instead. Also, Git Large File Storage (LFS) is used to prevent the .git directory from bloating. The build guidelines have also changed in order to make the meaning clearer (I think). CCA is no longer mandatory and the .env file is guarded by a separate recipe. Python package installation is now fully parallel with apt package installation. Finally figured out how to get separate volume paths for different hosts; using docker-compose.override.yaml was chosen as the best approach. Documentation still needs work.

    Source code(tar.gz)
    Source code(zip)
  • v0.2.2(Apr 30, 2022)

  • v0.2.1(Apr 3, 2022)

    This is a hack release forced by the sudden failure of URLs in TorchAudio and the Kakao mirror for PyPI. Although both have simple fixes, these issues mean that users must become more involved with the details. To ensure that the build does not fail for new users, TorchAudio source builds and using PyPI mirrors have been temporarily disabled. These functionalities will be restored ASAP.

    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Mar 26, 2022)

    Summary

    Total makeover of the entire project from the ground up. Most documentation concerning how to use the Makefile for building Docker images and the resulting speed gains has been removed. More documentation and examples will come soon.

    What's Changed

    • Hotfix password by @veritas9872 in https://github.com/cresset-template/cresset/pull/24
    • Updates for Cresset by @veritas9872 in https://github.com/cresset-template/cresset/pull/25
    • Version 0.2 breaking update. by @veritas9872 in https://github.com/cresset-template/cresset/pull/27

    Full Changelog: https://github.com/cresset-template/cresset/compare/v0.1.2...v0.2.0

    Source code(tar.gz)
    Source code(zip)
  • v0.2.0-rc2(Jan 28, 2022)

    What's Changed

    • Hotfix password by @veritas9872 in https://github.com/cresset-template/cresset/pull/24
    • Updates for Cresset by @veritas9872 in https://github.com/cresset-template/cresset/pull/25
    • Version 0.2 breaking update. by @veritas9872 in https://github.com/cresset-template/cresset/pull/27

    Full Changelog: https://github.com/cresset-template/cresset/compare/v0.1.2...v0.2.0-rc2

    Source code(tar.gz)
    Source code(zip)
  • v0.2.0-rc1(Jan 23, 2022)

    New Features:

    1. Ubuntu 16.04 LTS support (not tested rigorously).
    2. CentOS and UBI support (not tested at all).
    3. MKL removal support for non-intel CPUs.
    4. Implementation of apt and pip package requirements files.
    5. Build-time optimizations for much faster build speeds. Caffe2 is disabled by default. Build tests are also disabled by default.
    6. LLD v11.2 used for faster linking during compilation.
    7. Jemalloc 5.2.1 used as the memory allocator for faster build and runtimes.
    8. Utilization of conda-forge and intel channels for optimized libraries.
    9. Addition of new docker-compose services for development.
    10. Inclusion of missing variable guards for the compute capability variable CCA during initialization of Docker Compose and Makefile build.

    Breaking Changes:

    1. The Compute Capability variable CC has been renamed CCA to avoid clashing with the C Compiler CC variable.
    2. Project renamed Cresset for greater memorability.

    Also fixed many bugs that I found.

    Source code(tar.gz)
    Source code(zip)
  • v0.1.2(Dec 30, 2021)

    Added new zsh functionality such as auto-complete and syntax highlighting. If these distract more than they help, simply comment out the installation. The NGC image had a bug where the shell was fixed to bash and could not be changed to sh; this was (sort of) fixed by using printf instead of echo. Also, the NGC image was not using the mirror link because of an additional pip.conf file inside /opt/conda, and the code was adjusted to handle this. Finally, the full service in docker-compose.yaml had a bug in listing the images to cache from. This too was fixed.

    Source code(tar.gz)
    Source code(zip)
  • v0.1.1(Dec 27, 2021)

    Mostly stylistic updates and some minor bugfixes. The benchmarking code is now cleaner, with CPU benchmarking also available. An explanation of how to use the .env file in make commands is now included.

    Source code(tar.gz)
    Source code(zip)
  • v0.1.0(Dec 26, 2021)

    Initial Stable Release

    First release with semantic versioning.

    The project is now rather stable and thoroughly debugged, seeing as how there have been no bug reports for over a month.

    I have decided to adopt semantic versioning, even though I am worried that some people will not use the project now that there is a scary v0.x in front. Rest assured that the project works very well and that bugs are most likely due to the host environment or something the user did, not the project code itself.

    What's changed

    Unlike the previous versions (which I will call the alpha version from now on), the default shell is now zsh instead of bash. I have set the pure prompt as the default because of its compatibility with most terminals.

    The project can now use requirements.txt files, which was a feature in very high demand.

    I have also included benchmark.py for easy benchmarking. This was the most highly sought after functionality due to my claims of speed enhancements. Now anyone can measure the speed differences for themselves.

    Slightly friendlier README organization. No images though.

    Source code(tar.gz)
    Source code(zip)
Owner
Joonhyung Lee/이준형
Sapere Aude! Dare to know. Dare to understand.