scihub-cn: a powerful paper-downloading tool that fetches papers with a single command
2022-06-11 12:24:00
Introduction
Project address: GitHub - Ckend/scihub-cn: a Sci-Hub paper downloader that works inside mainland China's network environment.
What can this project do? In short: it lets you download the papers you need with a single command.

1. Preparation
Before starting, make sure you have a working Python and pip environment. (If you installed Anaconda3 directly, it is better to create a virtual environment.)
On Windows, open Cmd (Start → Run → CMD), or run inside an Anaconda3 virtual environment.
On macOS, open Terminal (Command + Space, then type "Terminal"). Enter the following command to install the dependency:
pip install scihub-cn
This will download a series of packages; when you see "Successfully installed ...", scihub-cn has been installed successfully.
scihub-cn relies on the aiohttp module for concurrent downloads; the minimum supported Python version is 3.6.
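For readers curious what "concurrent download" means here, below is a minimal sketch of the asyncio pattern that an aiohttp-based downloader builds on. This is not scihub-cn's actual code: the HTTP request is simulated with asyncio.sleep so the example runs offline, and the function names are made up for illustration.

```python
import asyncio

# Illustrative sketch only -- in real code, fake_download would issue an
# aiohttp GET request. Here the network round-trip is simulated with
# asyncio.sleep so the example runs without network access.

async def fake_download(doi: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for the HTTP request
    return f"{doi} -> saved"

async def download_all(dois):
    # asyncio.gather schedules all downloads concurrently,
    # instead of waiting for each one to finish in turn
    return await asyncio.gather(*(fake_download(d) for d in dois))

if __name__ == "__main__":
    for line in asyncio.run(download_all([
        "10.1038/s41524-017-0032-0",
        "10.1063/1.3149495",
    ])):
        print(line)
```

Because the simulated waits overlap, downloading N papers takes roughly as long as downloading one; that is the advantage of aiohttp over sequential requests.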
2. How to use scihub-cn
2.1 Download a paper by DOI
First, let's try downloading a paper by its DOI:
scihub-cn -d 10.1038/s41524-017-0032-0
The downloaded paper is saved in the current folder:

To download it to a different directory, just add the -o parameter:
scihub-cn -d 10.1038/s41524-017-0032-0 -o D:\papers
This downloads the paper into the papers folder on drive D.
2.2 Download papers by keyword
Use the -w parameter to specify a keyword and download papers matching it:
scihub-cn -w reinforcement
The effect is as follows:

As before, the -o parameter is also supported here to choose the output folder. In addition, the default search engine is Baidu Xueshu (Baidu Scholar); Google Scholar, Publons, ScienceDirect, and others can also be used, selected via the -e parameter:
scihub-cn -w reinforcement -e google_scholar
In case Google Scholar cannot be reached, you can add a proxy with the -p parameter:
scihub-cn -w reinforcement -e google_scholar -p http://127.0.0.1:10808
When accessing an external data source, adding a proxy can avoid problems such as "Connection closed".
You can also limit the number of downloads. For example, to download 100 papers:
scihub-cn -w reinforcement -l 100
2.3 Download a paper by URL
Given any paper page address, scihub-cn will try to download the paper: use the -u parameter to specify the link.
scihub-cn -u https://ieeexplore.ieee.org/document/26502
3. Download papers in batches
Several batch download methods have been added:
1. Download from a txt file listing paper titles.
2. Download from a txt file listing paper URLs.
3. Download from a txt file listing paper DOIs.
4. Download from a bibtex file.
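As a side note, if you already have a bibtex file but prefer the DOI batch mode, a short script can pull the DOI fields out into a dois.txt. This helper is not part of scihub-cn (which can read .bib files directly); it is only a sketch, assuming ordinary `doi = {...}` or `doi = "..."` fields in the bibtex.

```python
import re

def extract_dois(bib_text: str) -> list:
    # Pull the value of every `doi = {...}` or `doi = "..."` field.
    # Assumes ordinary bibtex formatting; a real bibtex parser would
    # be more robust than this regex sketch.
    pattern = re.compile(r'doi\s*=\s*[{"]([^}"]+)[}"]', re.IGNORECASE)
    return pattern.findall(bib_text)

if __name__ == "__main__":
    # Inline sample entry for demonstration; normally you would read
    # the text from your own .bib file instead.
    sample = '@article{key1, title = {Some Paper}, doi = {10.1038/s41524-017-0032-0}}'
    with open("dois.txt", "w") as f:  # one DOI per line, for `-i dois.txt --doi`
        f.write("\n".join(extract_dois(sample)))
```

The resulting dois.txt can then be fed to the batch command shown below.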
For example, to download from a txt file of paper URLs:
scihub-cn -i urls.txt --url
The effect is as follows:

As you can see, the file contains 4 paper links, and all 4 papers were downloaded successfully.
Now try batch downloading from a txt file of DOI numbers:
scihub-cn -i dois.txt --doi
The effect is as follows:

Parameter description
Run scihub-cn --help to see the full parameter descriptions:
$scihub-cn --help
... ...
optional arguments:
-h, --help show this help message and exit
-u URL input the download url
-d DOI input the download doi
--input INPUTFILE, -i INPUTFILE
input download file
-w WORDS, --words WORDS
download from some key words,keywords are linked by
_,like machine_learning.
--title download from paper titles file
-p PROXY, --proxy PROXY
use proxy to download papers
--output OUTPUT, -o OUTPUT
setting output path
--doi download paper from dois file
--bib download papers from bibtex file
--url download paper from url file
-e SEARCH_ENGINE, --engine SEARCH_ENGINE
set the search engine
-l LIMIT, --limit LIMIT
limit the number of search result
Overview of main usage
Download papers in batches using DOIs, paper titles, or a bibtex file.
Supports Python 3.6 and above.
Install:
pip install scihub-cn
Usage examples:
1. Give a bibtex file:
$scihub-cn -i input.bib --bib
2. Give a paper's DOI:
$scihub-cn -d 10.1038/s41524-017-0032-0
3. Give a paper's URL:
$scihub-cn -u https://ieeexplore.ieee.org/document/9429985
4. Give keywords (joined by _, e.g. machine_learning):
$scihub-cn -w word1_words2_words3
5. Give a txt text file of paper DOIs, e.g.:
10.1038/s41524-017-0032-0
10.1063/1.3149495
$scihub-cn -i dois.txt --doi
6. Give a txt text file of paper titles:
Some Title 1
Some Title 2
$scihub-cn -i titles.txt --title
7. Give a txt file of paper URLs:
url 1
url 2
$scihub-cn -i urls.txt --url
At the end of any command you can append -p (--proxy), -o (--output), -e (--engine), and -l (--limit) to specify a proxy, an output folder, a search engine, and a limit on the number of search results. Supported search engines are google_scholar, baidu_xueshu, publons, and science_direct.
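If you find yourself always passing the same proxy and output folder, those flags can be bundled into a small shell wrapper. This wrapper is hypothetical (it is not shipped with scihub-cn), and the proxy address and output directory are example values to replace with your own:

```shell
#!/bin/sh
# Hypothetical convenience wrapper around scihub-cn.
# PROXY and OUTDIR below are example values -- adjust to your setup.

PROXY="http://127.0.0.1:10808"
OUTDIR="./papers"

fetch_paper() {
    cmd="scihub-cn $* -p $PROXY -o $OUTDIR"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # DRY_RUN=1 prints the command instead of running it,
        # useful for checking the flags before a real download
        echo "$cmd"
    else
        $cmd
    fi
}

# Example:
#   DRY_RUN=1 fetch_paper -d 10.1038/s41524-017-0032-0
#   fetch_paper -w reinforcement -l 100
```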
For more detailed usage, see the source project: GitHub - Ckend/scihub-cn: a Sci-Hub paper downloader that works inside mainland China's network environment.