
Running LeNet MNIST handwritten digit recognition with TensorFlow 1.15 on Ascend 910

2022-07-07 14:40:00 Hua Weiyun

1. Environment and preparation

CPU/GPU reproduction: Huawei Cloud ModelArts CodeLab platform
Ascend reproduction: Huawei Cloud ModelArts development environment (Notebook)
Original LeNet code: https://gitee.com/lai-pengfei/LeNet

2. Running the original code on CPU/GPU

Step 1: Open CodeLab
1657162576592.png

Note: if you need to switch to GPU resources, click:
1657163555864.png

Select GPU as the resource type. It can be used for 1 hour at a time; after 1 hour it has to be renewed manually.

Click Terminal to open the terminal:
1657162624622.png

Terminal interface:
1657162647171.png

Step 2: Enter the work directory and git clone the code

git clone https://gitee.com/lai-pengfei/LeNet

The cloned code appears in the file browser on the left
1657162699591.png

Step 3: Switch to the TensorFlow runtime environment

source activate /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/

This environment is TensorFlow 1.13 rather than 1.15, but that is not a big problem: the two versions differ very little for this code, and it runs much as it would on 1.15.
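
To confirm which version the activated environment actually provides, a quick check is:

python -c "import tensorflow as tf; print(tf.__version__)"

In this environment it should print something close to 1.13.1.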

Step 4: Enter the folder and run the original code

cd LeNet
python Train.py

Running:
image.png

Running results:
1657163606127.png
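
For reference, here is a minimal sketch of what a TF 1.x LeNet MNIST training script such as Train.py typically contains. This is not the repository's actual code; the layer sizes, optimizer, batch size, and data path below are illustrative assumptions only. It is shown here because it makes clear where the session creation and variable initialization sit, which is exactly what the migration steps in the next section modify.

# lenet_mnist_sketch.py -- illustrative sketch only, NOT the repository's Train.py
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Download/load MNIST (the path is an assumption; the repo may store data elsewhere)
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
images = tf.reshape(x, [-1, 28, 28, 1])

# LeNet-style stack: two conv+pool blocks followed by fully connected layers
conv1 = tf.layers.conv2d(images, 6, 5, padding="same", activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(conv1, 2, 2)
conv2 = tf.layers.conv2d(pool1, 16, 5, activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(conv2, 2, 2)
fc1 = tf.layers.dense(tf.layers.flatten(pool2), 120, activation=tf.nn.relu)
fc2 = tf.layers.dense(fc1, 84, activation=tf.nn.relu)
logits = tf.layers.dense(fc2, 10)

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
accuracy = tf.reduce_mean(
    tf.cast(tf.equal(tf.argmax(logits, 1), tf.argmax(y_, 1)), tf.float32))

sess = tf.Session()                      # the NPU migration adds a config argument here
sess.run(tf.initialize_all_variables())  # deprecated alias of tf.global_variables_initializer(),
                                         # kept because the migration step below refers to it
for step in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(64)
    sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
    if step % 100 == 0:
        acc = sess.run(accuracy,
                       feed_dict={x: mnist.test.images, y_: mnist.test.labels})
        print("step %d, test accuracy %.4f" % (step, acc))
sess.close()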

3. Model migration

This part uses the Huawei Cloud ModelArts development environment (Notebook).

Create an environment

For the image, select the one checked below:
1657163720689.png
The specifications and other settings are shown in the figure below:
1657163760843.png

For the working directory and code download, follow the same steps as in the CPU/GPU section.

Switch to the TensorFlow 1.15 runtime environment for Ascend:

source activate /home/ma-user/anaconda3/envs/TensorFlow-1.15.0/
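
Before touching the code, you can quickly confirm that this environment provides TensorFlow 1.15 and that the NPU adapter package can be imported (the second check assumes the npu_bridge package name used in the import below; if it fails, the selected image is probably not the Ascend one):

python -c "import tensorflow as tf; print(tf.__version__)"
python -c "import npu_bridge; print('npu_bridge available')"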

Code changes

Modify Train.py
Reference documentation: https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/51RC2alpha007/moddevg/tfmigr/atlasmprtg_13_0011.html

Add the import for the NPU package:

from npu_bridge.npu_init import *

Modify the code that creates the session and initializes resources.
The main change is to add the following lines before sess.run(tf.initialize_all_variables()):

config = tf.ConfigProto()
custom_op = config.graph_options.rewrite_options.custom_optimizers.add()
custom_op.name = "NpuOptimizer"
config.graph_options.rewrite_options.remapping = RewriterConfig.OFF  # must be disabled explicitly
config.graph_options.rewrite_options.memory_optimization = RewriterConfig.OFF  # must be disabled explicitly
sess = tf.Session(config=config)
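
Put together, the session-creation part of the script looks roughly like this after migration. This is a sketch following the pattern in the Ascend migration docs rather than the repository's exact code; the explicit RewriterConfig import is shown for clarity, although the npu_init wildcard import is generally reported to expose it as well:

from npu_bridge.npu_init import *
from tensorflow.core.protobuf.rewriter_config_pb2 import RewriterConfig  # provides RewriterConfig.OFF

config = tf.ConfigProto()
custom_op = config.graph_options.rewrite_options.custom_optimizers.add()
custom_op.name = "NpuOptimizer"
config.graph_options.rewrite_options.remapping = RewriterConfig.OFF             # must be disabled explicitly
config.graph_options.rewrite_options.memory_optimization = RewriterConfig.OFF   # must be disabled explicitly

sess = tf.Session(config=config)         # replaces the original plain tf.Session()
sess.run(tf.initialize_all_variables())  # unchanged; the config lines go before this call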

For this demo, the library import on the first line also needs to be changed.
Original code:

import tensorflow.examples.tutorials.mnist.input_data as input_data

Change it to:

from tensorflow.examples.tutorials.mnist import input_data
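
For context, input_data is the standard TF 1.x MNIST tutorial helper, so the rest of the script can keep using it unchanged. A typical usage pattern looks like this (the data path and batch size are illustrative assumptions):

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)  # downloads MNIST on first run
batch_xs, batch_ys = mnist.train.next_batch(64)                 # batch_xs: (64, 784), batch_ys: (64, 10)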

Run the code

python Train.py

1657164296238.png

Log messages such as "W tf_adapt..." indicate that the NPU resources are being used.

While it runs:
1657164355562.png

You can open another Terminal to check whether the Ascend device is really being used. In the new Terminal, run:

npu-smi info

1657164405042.png

Running results:
1657165276744.png

Summary

That completes the code changes needed to port this simple TensorFlow model to the Ascend platform. The overall process is fairly straightforward. The real difficulties in model migration are operators that may not be supported, and tuning for accuracy and performance.


Copyright notice
This article was written by [Hua Weiyun]. Please include a link to the original when reposting:
https://yzsam.com/2022/188/202207071235115083.html