NVIDIA Jetson: notes on installing and testing yolox
2022-07-08 00:46:00 【submarineas】
Introduction
This article summarizes some usage notes on NVIDIA Jetson edge boxes. The main idea is to record the process of getting yolox running over an SSH connection to the box, so that it can serve as a reference if related work comes up later.
Jetson overview
The test environment in this article is not the host system itself; instead I chose the image NVIDIA publishes specifically for Jetson. There are several reasons. The first is environment isolation, since I am not the only user of the host machine. The second is that I ran into many bugs on the host: for the packages I wanted to use, too much of the environment would have had to be modified, and the preinstalled ARM-based Python 3 environment on the host already carries too many packages, most with their own dependencies. For example, libgeos_c.so, the geometry dynamic library that shapely needs, was simply missing. So I went with the official Docker image. As for the host itself, I switched to Docker as soon as I hit problems installing conda on ARM, so there are not many pitfalls to report there.
Preinstalled environment
The preinstalled environment here is generally whatever comes on the edge box when you receive it. The two Jetson Xavier NX boxes I have gotten so far were both like that: jtop and some corresponding basic dependencies were there, vim you have to install yourself, and beyond that there is basically nothing. The main thing is that jtop, the performance monitoring tool, should be installed; it plays the same role as nvtop, the monitoring tool for RTX graphics cards on a host server, and both are excellent tools.
If it is not there, here are my installation notes, based on material found online, covering everything from Python 3 up to jtop. The quickest way to install it is:
"""1. The foundation depends on """
sudo apt-get install git cmake
sudo apt-get install python3-dev
sudo apt-get install libhdf5-serial-dev hdf5-tools
sudo apt-get install libatlas-base-dev gfortran
"""2. pip3 install """
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python3-pip
"""3. pip install """
pip3 install jetson-stats
"""4. Source code compilation jtop( If 3 Failure )"""
git clone https://github.com/rbonghi/jetson_stats.git
cd jetson_stats/
sudo python3 ./setup.py install
That is the whole process of installing the tool. Of course, if everything can be handled through apt or pip, so much the better, but some people prefer compiling and installing from source, so everything from Python to the Jetson tooling can also be built from source; it just depends on whether apt or compilation is faster. Once it is installed, typing jtop at the terminal brings up the page:
The page you land on first is the ALL page, bound to number key 1. Number key 2 is the GPU visualization page, 3 the CPU visualization page, 4 the memory management page, 5 the settings page controlling fan, power mode, and clocks, and finally 6 displays the edge box's information:
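Besides the interactive pages, jetson-stats also exposes a Python API, which can be handy for logging over SSH. A minimal sketch for grabbing one snapshot of the same statistics non-interactively, assuming a recent jetson-stats release that provides the jtop context manager:
python3 - <<'EOF'
from jtop import jtop

# Open a session with the jtop service and read one snapshot of the stats
with jtop() as jetson:
    if jetson.ok():
        print(jetson.stats)   # dict with CPU/GPU utilization, temperatures, power, etc.
EOF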
Jetson environment installation
This part has three pieces. The first is entering the Docker image from the host; the second is updating pytorch and torchvision inside the container, because the versions shipped in the container basically never match the current code; and the last is running yolox directly.
Docker image
Edge boxes generally come with Docker preinstalled too, but it may be a version below 19.03. If you run into library problems while using it, it can simply be reinstalled: on the Docker website the installation method for the ARM architecture is basically the same as for x86, i.e. you follow the official steps and pick the architecture that matches the current Ubuntu system.
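If a reinstall is needed, a minimal sketch using the generic convenience script from docs.docker.com (which detects the architecture by itself, aarch64 included):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER    # optional: use docker without sudo (re-login required)
docker --version                 # confirm the installed version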
I have run into cases installing Docker on a Deepin system where the server architecture was not recognized and had to be handled by hand; on NVIDIA Jetson, after updating, there were basically no problems. The installation process is covered in my earlier post, docker usage notes (1): docker introduction and installation. After a successful install you can check the version information and docker info; if docker --version shows the version correctly, you can head over to the NVIDIA site to choose an image:
On the NVIDIA website, enter the container catalog and search for jetson; many L4T-based images will come up, and these can be used directly on Jetson. The page link is:
https://catalog.ngc.nvidia.com/containers?filters=&orderBy=dateModifiedDESC&query=jetson%20
I chose NVIDIA L4T ML here. After setting up Docker I changed Docker's storage path: the Jetson box's own disk is fairly small, so I added a mechanical disk and moved all the storage over to it (a sketch of that change follows the pull command below). The L4T ML image came to nearly 10 GB for me, but the bigger the image, the more complete the environment and the harder it is to hit bugs; opinions differ on that trade-off. The pull command for this tag is:
docker pull nvcr.io/nvidia/l4t-ml:r34.1.1-py3
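As an aside, the storage-path change mentioned above goes through /etc/docker/daemon.json. A minimal sketch, assuming the mechanical disk is mounted at /mnt/sata (the path is only an example; the nvidia runtime entry is what JetPack normally ships and should be kept):
sudo tee /etc/docker/daemon.json <<'EOF'
{
    "data-root": "/mnt/sata/docker",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo systemctl restart docker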
After pulling it down, the start command is the same as on the x86 architecture; you just add the options that bring the GPU in (see the --runtime nvidia flag and the -e environment variable in the command below). For details see my earlier post, docker learning notes (9): nvidia-docker installation, deployment and use:
My command was:
docker run -it --net=host --name jetson --runtime nvidia --restart always --privileged=true -e LD_LIBRARY_PATH=/usr/local/ffmpeg/lib/ --entrypoint="./home//program/xxx/start.sh" submarineas/nvidia:v5.0 /bin/bash
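For later sessions, the container does not need to be re-created; a quick way back in, using the container name from the command above, is:
docker start jetson
docker exec -it jetson /bin/bash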
There could also have been a CUDA version problem, but on Jetson both the host and the image are based on CUDA 10.2 by default. I only thought of this after everything was installed, and the two turned out to be fully consistent. After installation you can check the driver environment in the image and on the host:
nvcc -V
cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
pytorch and torchvision installation
Enter Docker. The container I chose above already contains basically all the dependencies and the ML-related pip packages; inside it I saw that the torch version was 0.6.0 with torchvision 0.1 above it, which is the matching pair, but as I remember the torch version yolox needs has to be at 0.9.0, so both have to be uninstalled and reinstalled. The order is to install pytorch first and torchvision after it, because if the order is reversed the torchvision that gets installed is still the CPU build: pip defaults to PyPI, and I made exactly that mistake, since PyPI does carry prebuilt pytorch packages for the ARM architecture and at the time I pulled one down and installed it without looking closely.
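Before uninstalling anything, it is worth confirming what the image actually ships; a quick check (plain torch/torchvision attributes, nothing image-specific assumed):
python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"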
The right way is still the NVIDIA website, which provides builds for every pytorch version. Link:
https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048
Pull down the offline pytorch whl and install it, then go further and install torchvision. The steps here come from torchvision's GitHub: compile and install from source, otherwise you end up in the same situation as I did above, where a plain pip install succeeds but torch.cuda.is_available() returns False.
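For reference, the torch wheel step itself looks roughly like the sketch below; the wheel filename is a placeholder, and the actual download link for your JetPack and Python version should be taken from the forum thread above. The torchvision build then follows:
# System libraries the prebuilt wheel links against (as listed in the forum instructions, to my recollection)
sudo apt-get install -y python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython numpy
# Hypothetical filename -- substitute the wheel you actually downloaded
pip3 install ./torch-1.x.0-cp36-cp36m-linux_aarch64.whl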
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
$ git clone --branch <version> https://github.com/pytorch/vision torchvision # see below for version of torchvision to download
$ cd torchvision
$ export BUILD_VERSION=0.x.0 # where 0.x.0 is the torchvision version
$ python3 setup.py install --user
$ cd ../ # attempting to load torchvision from build dir will result in import error
$ pip install 'pillow<7' # always needed for Python 2.7, not needed torchvision v0.5.0+ with Python 3.6
yolox environment installation
With torch and torchvision installed, the remaining packages basically cause no problems. Same as on GitHub:
git clone git@github.com:Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip3 install -v -e . # or python3 setup.py develop
At this point, the image environment is fully installed.
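As a smoke test, the demo from the YOLOX README can be run inside the container; yolox_s.pth here stands for a weights file downloaded separately from the YOLOX releases, and the exact flags may vary slightly between YOLOX versions:
python3 tools/demo.py image -n yolox-s -c ./yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu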