Financial Data Acquisition (III): When Your Crawler Meets a Page That Only Loads Data as You Scroll (Nanny-Level Tutorial)
2022-07-07 12:28:00 【Simon Cao】
Catalog

1. Who pulls a stunt like this?
2. Crawling with a Selenium-simulated browser
2.1 Installation and preparation
2.2 Scrolling the page "with the mouse"
3. Crawling the content
4. Complete code and results
1. Who pulls a stunt like this?

What? Sina no longer provides historical stock data directly!

A few days ago I needed some data on the Australian market. No API seems to have taken root in Australia, so I had no choice but to pin my hopes on a crawler. But when I clicked into the relevant data on Sina Finance, I was surprised to find the page already empty. I tried looking at A-share data instead: the familiar trading tables were long gone, replaced by something called a "data center," which has none of the data I want.

Data that used to be easy to find is now a case of "the place remains, but everything else is gone." My old code is useless, which is depressing. No wonder fewer and fewer people crawl this data; with the API situation being what it is, who would take on such a thankless job?

I promptly switched to Yahoo Finance and, sure enough, found the data I wanted. So I happily wrote a little crawler:
import requests
import pandas as pd

headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.53 Safari/537.36 Edg/103.0.1264.37"}
url = "https://finance.yahoo.com/quote/%5EDJI/history?period1=1601510400&period2=1656460800&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true"
resp = requests.get(url, headers=headers)  # fetch the history page (named resp rather than re, to avoid shadowing the stdlib re module)
print(pd.read_html(resp.text)[0])          # parse the first HTML table into a DataFrame
But before I could celebrate, I discovered that the crawl had returned a measly 100 rows???

I was puzzled: the URL that resp requests clearly covers three years of data, so why only 100 rows? After a long while I finally figured it out: Yahoo only loads more rows as you scroll down with the mouse wheel, and the page returned by the original request contains just the first 100. Problem identified; now, what's the solution?
2. Crawling with a Selenium-simulated browser

Selenium offers us a good way out. The traditional requests module can only fetch whatever a page returns to a single fixed request; it is powerless against data that appears only after clicking or scrolling with the mouse wheel.

I had only used Selenium back when I first learned web crawling; pages this awkward don't come up often. So below is a from-scratch, nanny-level tutorial for crawling this kind of page.

2.1 Installation and preparation

First install the package and import the modules:
pip install selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
Next comes the opening move. After some searching around on CSDN, the consensus is that a page like this should first be opened with a Selenium-simulated browser:
url = "https://finance.yahoo.com/quote/%5EDJI/history?period1=1601510400&period2=1656460800&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true"
driver = webdriver.Chrome()  # start the simulated browser
driver.get(url)              # open the address we want to crawl
Running this, I stepped on the first landmine, the error WebDriverException: Message: 'chromedriver' executable needs to be in PATH:

After some more fiddling I found the fix. First, download the executable that drives the simulated browser from ChromeDriver - WebDriver for Chrome - Downloads: https://chromedriver.chromium.org/downloads

Note that you must download the version matching your Chrome browser:

Unzipping is all the installation there is, but the file must sit in the same folder as your python.exe:

Then add the location of chromedriver.exe to the PATH environment variable; I simply put the file on the D: drive:

After clicking OK through the dialogs there is no need to restart the computer: open CMD directly and launch chromedriver.exe. If it reports that it started successfully, as in the screenshot, you're set, and the code above will now run.
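If you would rather not touch PATH at all, Selenium can also be pointed at the driver binary directly. A minimal sketch, assuming Selenium 4+ (older versions take an executable_path keyword instead); the D:/chromedriver.exe path is just an example, adjust it to wherever you unpacked the file:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

service = Service("D:/chromedriver.exe")    # example path to the unpacked driver
driver = webdriver.Chrome(service=service)  # no PATH entry needed this way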
When the code runs, the page opens directly, with a banner saying the browser is under the control of automated test software:
2.2 Scrolling the page "with the mouse"

Of course we don't literally scroll with a mouse; Selenium does the controlling. There are two JavaScript calls for scrolling, scrollTo (where to scroll to) and scrollBy (how much to scroll by); pass x and y arguments to control them:

driver.execute_script('window.scrollBy(x, y)') # scroll x horizontally and y vertically, relative to the current position
driver.execute_script('window.scrollTo(x, y)') # scroll to the absolute position (x, y)
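To make the difference concrete, here is a tiny sketch (assuming driver is already on a page): it scrolls down one viewport at a time with scrollBy, then jumps back to the top with scrollTo:

viewport = driver.execute_script('return window.innerHeight')  # height of the visible area
driver.execute_script(f'window.scrollBy(0, {viewport})')       # relative: down one screen
driver.execute_script(f'window.scrollBy(0, {viewport})')       # relative: down another screen
driver.execute_script('window.scrollTo(0, 0)')                 # absolute: back to the top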
By writing a loop you can keep scrolling down until the bottom is reached and all the data has loaded. Below are two scrolling strategies, one borrowed from others and one written by me:
2.2.1 Judging by page height

The idea: get the page height, scrollTo it, get the page height again, and compare the two. If they are equal, we have reached the bottom, so end the loop.
while True:
    h_before = driver.execute_script('return document.body.scrollHeight;')  # page height before scrolling
    time.sleep(2)                                            # give new rows time to load
    driver.execute_script(f'window.scrollTo(0,{h_before})')  # jump to the current bottom
    time.sleep(2)
    h_after = driver.execute_script('return document.body.scrollHeight;')   # page height afterwards
    if h_before == h_after:  # height stopped growing: we are at the bottom
        break
However! On Yahoo this strategy turned out to be useless: no matter how far the page moves, the reported page height stays at one fixed value.

I am leaving it here for reference, with that caveat; it may well work on other pages.
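One variation worth trying before abandoning the strategy entirely (an assumption on my part, not something I verified against Yahoo): some pages report their true height on document.documentElement rather than document.body, so swapping the property may be all it takes:

while True:
    h_before = driver.execute_script('return document.documentElement.scrollHeight;')  # measure the root element instead of body
    time.sleep(2)
    driver.execute_script(f'window.scrollTo(0,{h_before})')
    time.sleep(2)
    h_after = driver.execute_script('return document.documentElement.scrollHeight;')
    if h_before == h_after:  # height stopped growing: assume we hit the bottom
        break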
2.2.2 Judging by distance from the top

So I wrote another one, which uses the distance to the top of the page to decide whether we have reached the end:

driver.execute_script('return document.documentElement.scrollTop')

It works much like the previous loop: get the current distance to the top of the page, scrollTo a fixed increment further down, get the distance again, and compare the two. If they are equal, the page refused to move any further, so we are at the bottom and the loop ends.
roll = 500                                    # absolute scroll target, grown each pass
while True:
    h_before = driver.execute_script('return document.documentElement.scrollTop')  # offset before scrolling
    time.sleep(1)
    driver.execute_script(f'window.scrollTo(0,{roll})')  # scroll down to the target offset
    time.sleep(1)
    h_after = driver.execute_script('return document.documentElement.scrollTop')   # offset afterwards
    roll += 500                               # push the target further down for the next pass
    print(h_after, h_before)                  # progress printout
    if h_before == h_after:                   # position stopped changing: bottom reached
        break
Scrolling 500 pixels at a time may be a little slow; feel free to change the step size.

This approach does work on Yahoo: you can watch the simulated browser actually sliding down the page. With that, the whole problem of data not being displayed is solved.
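If you want an extra sanity check that the lazy-loaded rows really arrived, you can count the table rows in the DOM. This finally uses the By import from earlier; treating the data as a plain HTML table is an assumption about the page's markup:

rows = driver.find_elements(By.TAG_NAME, 'tr')  # every table row currently in the DOM
print(f'{len(rows)} table rows loaded')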
3. Crawling the content

page_source exports everything that has been scrolled into view: what comes back is a str containing a pile of web page tags.
driver.page_source
The hardest ridge, the scrolling, is behind us; the rest is basic crawler work. Since my target this time is tabular data, pandas can read it directly: store the fully scrolled page in a variable, then let pandas parse it. If you are crawling text data instead, you will need BeautifulSoup or regular expressions for further parsing (see the sketch after this code):
content = driver.page_source    # the full HTML after all the scrolling
data = pd.read_html(content)    # a list of every table found on the page
table = pd.DataFrame(data[0])   # the first table is the price history
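For the text-data case, here is a sketch of the BeautifulSoup route mentioned above (assuming pip install beautifulsoup4; grabbing every td cell is a generic example, not something specific to Yahoo's markup):

from bs4 import BeautifulSoup

soup = BeautifulSoup(content, 'html.parser')                     # parse the rendered HTML
cells = [td.get_text(strip=True) for td in soup.find_all('td')]  # text of every table cell
print(cells[:10])                                                # peek at the first few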
4. Complete code and results
import time
import pandas as pd
from selenium import webdriver

url = "..."  # the website you want to crawl (e.g. the Yahoo Finance URL from section 2.1)
driver = webdriver.Chrome()  # start the simulated browser
driver.get(url)              # open the page

roll = 1000
while True:
    h_before = driver.execute_script('return document.documentElement.scrollTop')
    time.sleep(1)
    driver.execute_script(f'window.scrollTo(0,{roll})')
    time.sleep(1)
    h_after = driver.execute_script('return document.documentElement.scrollTop')
    roll += 1000
    print(h_after, h_before)
    if h_before == h_after:  # page no longer scrolls: all rows are loaded
        break

content = driver.page_source     # grab the fully rendered HTML
data = pd.read_html(content)     # parse all tables on the page
table = pd.DataFrame(data[0])    # the price-history table
print(table)
table.to_csv("market_data.csv")  # save the result
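Two finishing touches worth considering (my own additions, not part of the run above): shut the browser down when you are done, and coerce the scraped columns to numbers, since read_html keeps as strings anything it cannot parse and Yahoo's table mixes in non-price rows such as dividend entries. The column handling below is an assumption about the table layout:

driver.quit()  # close the simulated browser and the chromedriver process

# Convert every column except the date to numeric; non-price rows become NaN and are dropped.
for col in table.columns[1:]:
    table[col] = pd.to_numeric(table[col].astype(str).str.replace(',', ''), errors='coerce')
table = table.dropna(how='all', subset=table.columns[1:])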
And there it is: all of the Dow Jones Industrial Average's data for the past few years:

Scrolling a page: that simple. So, have you got it down?

Likes, comments, and a follow, all three if you please; as long as you don't abandon me, we will weather the storm together.