
Orin installs CUDA environment

2022-07-05 06:43:00 Enlaihe

There are two ways to install the CUDA environment.

1. Command-line installation

On the Orin machine, run the following commands:

a. sudo apt update

b. sudo apt upgrade

c. sudo apt install nvidia-jetpack -y

If an error is reported, check the version in the file /etc/apt/sources.list.d/nvidia-l4t-apt-source.list; the latest release is 34.1.

Change the entries to that version.

After that, the installation completes. If you flash the board with flash.sh from the latest l4t_for_tegra release, no error is reported in the first place.
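For reference, on JetPack 5.x that file typically contains two deb entries. A sketch of what they look like after updating the release tag to r34.1 (the repo URLs are the standard Jetson apt mirrors and t234 is the Orin SoC ID — verify both against your own board before editing):

```
# /etc/apt/sources.list.d/nvidia-l4t-apt-source.list (sketch)
deb https://repo.download.nvidia.com/jetson/common r34.1 main
deb https://repo.download.nvidia.com/jetson/t234 r34.1 main
```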

a. CUDA: check whether the installation succeeded:

nvcc -V

If the command is not found, add nvcc to your PATH environment variable.
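A minimal sketch of doing so, assuming the default JetPack install location /usr/local/cuda-11.4 (adjust the version number to match your installation):

```shell
# Add CUDA 11.4 (default JetPack location) to the current shell's environment.
# Append these lines to ~/.bashrc to make the change permanent.
export PATH=/usr/local/cuda-11.4/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH
```

After reloading the shell (e.g. source ~/.bashrc), nvcc -V should work.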

Output like the following indicates a correct installation:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_11_23:44:05_PST_2021
Cuda compilation tools, release 11.4, V11.4.166
Build cuda_11.4.r11.4/compiler.30645359_0

b. cuDNN: check with

dpkg -l libcudnn8

It should display:

Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version             Architecture Description
+++-==============-===================-============-======================
ii  libcudnn8      8.3.2.49-1+cuda11.4 arm64        cuDNN runtime libraries

c. TensorRT: check with dpkg -l tensorrt

It should display:

Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version             Architecture Description
+++-==============-===================-============-=====================
ii  tensorrt       8.4.0.11-1+cuda11.4 arm64        Meta package of TensorRT

d. OpenCV: check with dpkg -l libopencv

It should display:

Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version             Architecture Description
+++-==============-===================-============-=======================
ii  libopencv      4.5.4-8-g3e4c170df4 arm64        Open Computer Vision Library
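The package checks above can be wrapped in one small script. A sketch, with the package names taken from the dpkg listings above (the script only reports versions, so it is safe to run on any Debian-based system):

```shell
# Report the installed version of each JetPack component, or flag it as missing.
check_pkg() {
    if dpkg -s "$1" >/dev/null 2>&1; then
        printf '%s: %s\n' "$1" "$(dpkg-query -W -f='${Version}' "$1")"
    else
        printf '%s: not installed\n' "$1"
    fi
}

check_pkg libcudnn8
check_pkg tensorrt
check_pkg libopencv
```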

2. Installation with SDK Manager

Download the required files in SDK Manager.

Then perform steps 1/2/3/4; the operation is straightforward.

An overview of the installed components:

TensorRT

TensorRT is a high-performance deep-learning inference optimizer that provides low-latency, high-throughput inference for deep-learning applications. It can accelerate inference in large data centers, on embedded platforms, and on autonomous-driving platforms. TensorRT supports TensorFlow, Caffe, MXNet, PyTorch, and almost every other deep-learning framework; combined with NVIDIA GPUs, it enables fast and efficient inference deployment for models from virtually any framework.
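As a quick way to exercise TensorRT after installation, JetPack bundles the trtexec benchmarking tool with TensorRT, typically under /usr/src/tensorrt/bin. A hedged sketch; model.onnx is a placeholder for your own network file:

```shell
# Build a TensorRT engine from an ONNX model and time inference with trtexec.
# /usr/src/tensorrt/bin is the usual JetPack location; adjust if yours differs.
TRTEXEC=/usr/src/tensorrt/bin/trtexec
if [ -x "$TRTEXEC" ]; then
    "$TRTEXEC" --onnx=model.onnx --fp16   # FP16 build for faster GPU inference
else
    echo "trtexec not found at $TRTEXEC"
fi
```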

cuDNN

cuDNN is a GPU-accelerated library for deep neural networks. It emphasizes performance, ease of use, and low memory overhead. NVIDIA cuDNN can be integrated into higher-level machine-learning frameworks, such as Google's TensorFlow and UC Berkeley's popular Caffe. Its simple plug-in design lets developers focus on designing and implementing neural-network models rather than tuning performance, while still achieving high-performance modern parallel computing on the GPU.

CUDA

CUDA (Compute Unified Device Architecture) is a computing platform introduced by the GPU vendor NVIDIA. It is a general-purpose parallel computing architecture that enables GPUs to solve complex computational problems.

The relationship between CUDA and cuDNN

Think of CUDA as a workbench equipped with many tools, such as hammers and screwdrivers. cuDNN is a CUDA-based GPU acceleration library for deep learning; with it, deep-learning computations can run on the GPU. It is like a specialized tool, say a wrench, that the workbench does not come with. To run deep neural networks on CUDA, you install cuDNN, just as you buy a wrench when you need to turn a nut. Only then can the GPU do deep-neural-network work, much faster than a CPU.

Copyright notice: this article was written by [Enlaihe]. Please include the original link when reposting:
https://yzsam.com/2022/186/202207050627492075.html