======
pyNVML
======

*** Patched to support Python 3 (and Python 2) ***

------------------------------------------------
Python bindings to the NVIDIA Management Library
------------------------------------------------

Provides a Python interface to GPU management and monitoring functions.

This is a wrapper around the NVML library. For information about the NVML
library, see the NVML developer page:
http://developer.nvidia.com/nvidia-management-library-nvml

Download the latest package from:
https://pypi.python.org/pypi/nvidia-ml-py3

Note this file can be run with 'python -m doctest -v README.txt', although
the results are system dependent.

REQUIRES
--------
Python 2.5 or later (including Python 3), or an earlier version with the
ctypes module installed.

INSTALLATION
------------
    sudo python setup.py install

or

    pip install nvidia-ml-py3

USAGE
-----

    >>> from pynvml import *
    >>> nvmlInit()
    >>> print("Driver Version:", nvmlSystemGetDriverVersion())
    Driver Version: 352.00
    >>> deviceCount = nvmlDeviceGetCount()
    >>> for i in range(deviceCount):
    ...     handle = nvmlDeviceGetHandleByIndex(i)
    ...     print("Device", i, ":", nvmlDeviceGetName(handle))
    ...
    Device 0 : Tesla K40c
    >>> nvmlShutdown()

Additionally, see nvidia_smi.py, a sample application.

FUNCTIONS
---------
Python methods wrap NVML functions, which are implemented in a C shared
library. Each function's use is the same, with the following exceptions:

- Calls that fail raise a Python exception instead of returning an error
  code:

    >>> try:
    ...     nvmlDeviceGetCount()
    ... except NVMLError as error:
    ...     print(error)
    ...
    Uninitialized

- C function output parameters are returned from the corresponding Python
  function, left to right::

    nvmlReturn_t nvmlDeviceGetEccMode(nvmlDevice_t device,
                                      nvmlEnableState_t *current,
                                      nvmlEnableState_t *pending);

    >>> nvmlInit()
    >>> handle = nvmlDeviceGetHandleByIndex(0)
    >>> (current, pending) = nvmlDeviceGetEccMode(handle)

- C structs are converted into Python classes.
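For illustration, that struct-to-class conversion can be sketched with the
ctypes module. This is a stand-in, not the library's actual internals: the
class name `c_nvmlMemory_t` and the hand-filled values are illustrative,
but the field layout mirrors the nvmlMemory_st struct shown in the example
that follows.

```python
import ctypes

class c_nvmlMemory_t(ctypes.Structure):
    # Mirrors the nvmlMemory_st layout: three unsigned long long fields.
    # Each C struct field becomes a plain attribute on the Python object.
    _fields_ = [
        ("total", ctypes.c_ulonglong),
        ("free", ctypes.c_ulonglong),
        ("used", ctypes.c_ulonglong),
    ]

# In the real wrapper the C call fills this struct through a pointer;
# here it is filled by hand so attribute access can be shown without a GPU.
info = c_nvmlMemory_t(total=8, free=5, used=3)
```

Callers then read `info.total`, `info.free`, and `info.used`, exactly as in
the nvmlDeviceGetMemoryInfo example.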
  ::

    nvmlReturn_t DECLDIR nvmlDeviceGetMemoryInfo(nvmlDevice_t device,
                                                 nvmlMemory_t *memory);

    typedef struct nvmlMemory_st {
        unsigned long long total;
        unsigned long long free;
        unsigned long long used;
    } nvmlMemory_t;

    >>> info = nvmlDeviceGetMemoryInfo(handle)
    >>> print("Total memory:", info.total)
    Total memory: 5636292608
    >>> print("Free memory:", info.free)
    Free memory: 5578420224
    >>> print("Used memory:", info.used)
    Used memory: 57872384

- Python handles string buffer creation::

    nvmlReturn_t nvmlSystemGetDriverVersion(char* version, unsigned int length);

    >>> version = nvmlSystemGetDriverVersion()
    >>> nvmlShutdown()

For usage information see the NVML documentation.

VARIABLES
---------
All meaningful NVML constants and enums are exposed in Python.

The NVML_VALUE_NOT_AVAILABLE constant is not used. Instead, None is mapped
to such fields.

RELEASE NOTES
-------------
Version 2.285.0
- Added new functions for NVML 2.285. See NVML documentation for more
  information.
- Ported to support both Python 2 and Python 3 syntax.
- Added the nvidia_smi.py tool as a sample app.

Version 3.295.0
- Added new functions for NVML 3.295. See NVML documentation for more
  information.
- Updated the nvidia_smi.py tool.
- Includes additional error handling.

Version 4.304.0
- Added new functions for NVML 4.304. See NVML documentation for more
  information.
- Updated the nvidia_smi.py tool.

Version 4.304.3
- Fixed a bug in nvmlUnitGetDeviceCount.

Version 5.319.0
- Added new functions for NVML 5.319. See NVML documentation for more
  information.

Version 6.340.0
- Added new functions for NVML 6.340. See NVML documentation for more
  information.

Version 7.346.0
- Added new functions for NVML 7.346. See NVML documentation for more
  information.

Version 7.352.0
- Added new functions for NVML 7.352. See NVML documentation for more
  information.

COPYRIGHT
---------
Copyright (c) 2011-2015, NVIDIA Corporation. All rights reserved.
LICENSE
-------
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

- Redistributions of source code must retain the above copyright notice,
  this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.
- Neither the name of the NVIDIA Corporation nor the names of its
  contributors may be used to endorse or promote products derived from this
  software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.