Kaggle competition Two Sigma Connect: Rental Listing Inquiries
2022-07-06 12:00:00 【Want to be a kite】
Kaggle competition, link: Two Sigma Connect: Rental Listing Inquiries

Given the information posted for a listing on the rental website, the task is to predict how popular the listing will be. This is a classification problem, and the data contains categorical, integer, and text variables.
Random forest model
We use sklearn for modeling and prediction. The dataset can be downloaded from the competition's official website.
import numpy as np
import pandas as pd
import zipfile  # the official datasets are zip archives, so open them with zipfile
import os
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss
for dirname, _, filenames in os.walk(r'E:\Kaggle\Kaggle_dataset01\two_sigma'):  # change to your own path
    for filename in filenames:
        print(os.path.join(dirname, filename))
train_df = pd.read_json(zipfile.ZipFile(r'E:\Kaggle\Kaggle_dataset01\two_sigma\train.json.zip').open('train.json'))
test_df = pd.read_json(zipfile.ZipFile(r'E:\Kaggle\Kaggle_dataset01\two_sigma\test.json.zip').open('test.json'))
# A custom preprocessing function: extract date parts from 'created', count description words,
# features and photos, and keep the numeric columns.
def data_preprocessing(data):
    data['created_year'] = pd.to_datetime(data['created']).dt.year
    data['created_month'] = pd.to_datetime(data['created']).dt.month
    data['created_day'] = pd.to_datetime(data['created']).dt.day
    data['num_description_words'] = data['description'].apply(lambda x: len(x.split(' ')))
    data['num_features'] = data['features'].apply(len)
    data['num_photos'] = data['photos'].apply(len)
    new_data = data[['created_year', 'created_month', 'created_day', 'num_description_words',
                     'num_features', 'num_photos', 'bathrooms', 'bedrooms', 'latitude', 'longitude', 'price']]
    return new_data
train_x = data_preprocessing(train_df)
train_y = train_df['interest_level']
test_x = data_preprocessing(test_df)
X_train, X_val, y_train, y_val = train_test_split(train_x, train_y, test_size=0.33)  # hold out a validation set
clf = RandomForestClassifier(n_estimators=1000)  # random forest model
clf.fit(X_train, y_train)
y_val_pred = clf.predict_proba(X_val)
print(log_loss(y_val, y_val_pred))  # validation multi-class log loss (the competition metric)
y_test_predict = clf.predict_proba(test_x)
labels2idx = {label: i for i, label in enumerate(clf.classes_)}  # map each class label to its column in predict_proba
sub = pd.DataFrame()
sub['listing_id'] = test_df['listing_id']
for label in labels2idx.keys():
    sub[label] = y_test_predict[:, labels2idx[label]]
# Save the submission
# sub.to_csv('submission.csv', index=False)  # competition submission!
Running the code above, the random forest does not perform particularly well. Some readers may ask why the data was not normalized first. In fact, tree-based models such as random forests do not need feature normalization, so it was skipped here; if you want to, you can verify this yourself. If you want to get more out of the random forest, focus on the feature engineering: better features will help far more than model tweaks.
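As a starting point for that feature engineering, here is a minimal sketch of a few extra features one might try on top of data_preprocessing. The ratio and count features below are illustrative assumptions, not part of the original code, and should be kept only if they actually lower the validation log loss.

def add_extra_features(data):
    # Hypothetical extra features (not in the original pipeline) -- ideas to evaluate, not a fixed recipe.
    data = data.copy()
    data['price_per_bedroom'] = data['price'] / (data['bedrooms'] + 1)    # +1 avoids division by zero for studios
    data['price_per_bathroom'] = data['price'] / (data['bathrooms'] + 1)
    data['rooms_total'] = data['bedrooms'] + data['bathrooms']
    return data

# Example usage: extend the existing preprocessing, then repeat the split / fit / log_loss steps above
train_x_ext = add_extra_features(data_preprocessing(train_df))
test_x_ext = add_extra_features(data_preprocessing(test_df))

Whether these help has to be checked with the same split-and-score procedure as above; the text description and the remaining categorical fields in the raw data are untouched here and are a natural next place to look.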