Machine learning plant leaf recognition
2022-07-06 06:39:00 【Nothing (sybh)】
Plant leaf identification: given the leaf dataset "Leaf shape.csv", which describes three leaf characteristics (edge, shape, and texture) with 64 numerical variables each (64 × 3 = 192 variables in total), plus 1 categorical variable recording the plant species of each leaf, for 193 variables altogether. Apply feature selection methods and compare the similarities and differences of their results (20 points). Then build a model to complete leaf recognition (30 points).
Approach
1. Exploratory data analysis and visualization
2. Feature engineering (select features based on the correlation matrix; this includes data preprocessing, filling missing values, normalizing the data, etc.)
3. Build and validate machine learning models
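The task asks for a comparison of feature selection methods, which the code below does not show explicitly. As a starting point, here is a minimal sketch, using synthetic data in place of "Leaf shape.csv"; the choice k=20 and all variable names are illustrative assumptions, not part of the original post. It compares the feature sets kept by two filter methods from sklearn.feature_selection:

```python
# Sketch: comparing two filter-style feature selection methods.
# Synthetic data stands in for "Leaf shape.csv" (192 numeric features).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif

# Synthetic stand-in: 192 features, a few of them informative
X, y = make_classification(n_samples=300, n_features=192, n_informative=10,
                           random_state=0)

k = 20  # number of features to keep (an assumed value)
sel_f = SelectKBest(f_classif, k=k).fit(X, y)
sel_mi = SelectKBest(mutual_info_classif, k=k).fit(X, y)

# Compare which feature indices each method keeps
idx_f = set(np.flatnonzero(sel_f.get_support()))
idx_mi = set(np.flatnonzero(sel_mi.get_support()))
overlap = idx_f & idx_mi
print("ANOVA F-test picks:      ", sorted(idx_f))
print("Mutual information picks:", sorted(idx_mi))
print("Features chosen by both: ", len(overlap))
```

The two methods usually agree on strongly informative features but differ on borderline ones, which is exactly the kind of similarity/difference the task asks to discuss.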
1 Import package
import pandas as pd
from sklearn import svm
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
2 Draw the correlation matrix (and use it to guide feature selection)
Train = pd.read_csv("Leaf shape.csv")
Train.drop(['id'], inplace=True, axis=1)
# Map each species name to an integer label
map_dic = {name: i for i, name in enumerate(Train['species'].unique())}
Train['species'].replace(map_dic.keys(), map_dic.values(), inplace=True)
X = Train.drop(['species'], axis=1)
Y = Train['species']
# Draw the correlation matrix
corr = Train.corr()
f, ax = plt.subplots(figsize=(25, 25))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5)
plt.show()
Check for missing values
pd.isnull(Train).any().any()
# False — no missing values, so nothing needs to be filled
Train/test split (80% training set, 20% test set)
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=123)
Normalize the data
standardScaler = StandardScaler()
x_train = standardScaler.fit_transform(x_train)
x_test = standardScaler.transform(x_test)  # reuse the training-set statistics; do not refit on the test set
3 PCA dimensionality reduction
pca = PCA(n_components=0.9)
x_train_1 = pca.fit_transform(x_train)
x_test_1 = pca.transform(x_test)
# 44 components are retained (enough to explain 90% of the variance)
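When n_components is a float in (0, 1), sklearn keeps the smallest number of components whose cumulative explained variance reaches that fraction (44 on the post's data). A minimal sketch of how to verify this, using random synthetic data in place of the leaf features (variable names are illustrative):

```python
# Sketch: inspecting how many components PCA keeps when n_components=0.9.
# Random synthetic data; on the post's leaf data this yields 44 components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(200, 192)          # stand-in for the 192 leaf features

pca = PCA(n_components=0.9)      # keep 90% of the variance
X_reduced = pca.fit_transform(X)

print("components kept:", pca.n_components_)
print("cumulative explained variance: {:.3f}".format(
    pca.explained_variance_ratio_.sum()))
```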
4 KNN with grid search optimization, before and after PCA
from sklearn.neighbors import KNeighborsClassifier

print("KNN before PCA")
knn_clf0 = KNeighborsClassifier()
knn_clf0.fit(x_train, y_train)
y_predict = knn_clf0.predict(x_test)
score = accuracy_score(y_test, y_predict)
print("Accuracy: {:.4%}".format(score))

print("KNN after PCA")
knn_clf1 = KNeighborsClassifier()
knn_clf1.fit(x_train_1, y_train)
y_predict = knn_clf1.predict(x_test_1)
score = accuracy_score(y_test, y_predict)
print("Accuracy: {:.4%}".format(score))
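The section title mentions grid search, but the KNN models above use default parameters. A minimal sketch of how the tuning could be done with GridSearchCV; the parameter grid and the synthetic data are assumptions, not taken from the post:

```python
# Sketch: grid search over KNN hyperparameters with cross-validation.
# Synthetic data stands in for the (standardized) leaf features.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
x_tr, x_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=123)

param_grid = {
    "n_neighbors": [3, 5, 7, 9],          # assumed candidate values
    "weights": ["uniform", "distance"],
}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid.fit(x_tr, y_tr)

print("best params:", grid.best_params_)
print("test accuracy: {:.4%}".format(grid.score(x_te, y_te)))
```

The same grid can be refit on the PCA-reduced features to compare tuned accuracy before and after dimensionality reduction.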
5 SVC, before and after PCA
print("SVC before PCA")
svc_clf = SVC(probability=True)
svc_clf.fit(x_train, y_train)
y_predict = svc_clf.predict(x_test)
score = accuracy_score(y_test, y_predict)
print("Accuracy: {:.4%}".format(score))

print("SVC after PCA")
svc_clf1 = SVC(probability=True)
svc_clf1.fit(x_train_1, y_train)
y_predict1 = svc_clf1.predict(x_test_1)
score = accuracy_score(y_test, y_predict1)
print("Accuracy: {:.4%}".format(score))
6 Logistic regression
from sklearn.linear_model import LogisticRegressionCV
lr = LogisticRegressionCV(multi_class="ovr",
                          fit_intercept=True,
                          Cs=np.logspace(-2, 2, 20),
                          cv=2,
                          penalty="l2",
                          solver="lbfgs",
                          tol=0.01)
lr.fit(x_train,y_train)
print('Logistic regression')
y_predict = lr.predict(x_test)
score = accuracy_score(y_test, y_predict)
print("Accuracy: {:.4%}".format(score))
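LogisticRegressionCV searches the Cs grid by cross-validation and stores the selected regularization strength per class in the C_ attribute. A simplified sketch on synthetic data (the data and the three-class setup are assumptions; some settings from the post's call are omitted for brevity):

```python
# Sketch: how LogisticRegressionCV picks C from the Cs grid via CV.
# Synthetic 3-class data stands in for the leaf features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

lr = LogisticRegressionCV(Cs=np.logspace(-2, 2, 20),
                          cv=2, penalty="l2", solver="lbfgs", tol=0.01)
lr.fit(X, y)

# One selected C per class, each drawn from the Cs grid
print("selected C per class:", lr.C_)
```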
Logistic regression achieves the highest accuracy, 98.65%.
Note that feature selection and principal component analysis do not necessarily improve accuracy.