RedditImageScraper

Console application for downloading images from Reddit in Python

Screenshot

Introduction

This short Python script was created for mass-downloading images from Reddit. It will later be used to build datasets for several machine learning projects.

In order to use the script, you need a Reddit account and must register it for API access as a developer. You will be assigned a client_id and a client_secret, which you must enter in config.ini before running the script.
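
A config.ini might look like the sketch below. Only client_id and client_secret are mentioned above; the section name and the user_agent field are assumptions, so check the template that ships with the repository:

[reddit]
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
user_agent = RedditImageScraper by u/your_username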

Usage

The -r parameter provides a list of sub-reddits to search.
The -st parameter specifies the maximum number of images to download from each sub-reddit. Defaults to 1000.
The -t parameter specifies the total number of images to download across all sub-reddits before the script stops. Defaults to 10000.
The -f parameter specifies the folder into which the images are downloaded. Defaults to download.
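
These flags map naturally onto argparse. The sketch below shows how such an interface could be declared; the defaults follow the descriptions above, but the details are an assumption about the implementation rather than the project's actual code.

import argparse

def parse_args():
    # Hypothetical reconstruction of the command-line interface described above.
    parser = argparse.ArgumentParser(description="Download images from Reddit")
    parser.add_argument("-r", nargs="+", required=True,
                        help="list of sub-reddits to search")
    parser.add_argument("-st", type=int, default=1000,
                        help="maximum images per sub-reddit")
    parser.add_argument("-t", type=int, default=10000,
                        help="total images across all sub-reddits")
    parser.add_argument("-f", default="download",
                        help="folder to save images into")
    return parser.parse_args()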

For example, to download at most 20 pictures from each of the dogpictures, dogswithjobs, GuiltyDogs, and dog sub-reddits, stopping once we have 50 files in total, and saving the files to a folder titled dogs, we would use the following:

python RedditImageScraper.py -r dogpictures dogswithjobs GuiltyDogs dog -st 20 -t 50 -f dogs

This should produce something like the following:

Downloading from dogpictures
3tdfpayjp1k51.jpg
cuz4a5np90k51.jpg
1e1g882z40k51.jpg
xX3OQgP.jpg
fuatv49mizj51.jpg
tk9khtspb1k51.jpg
pgbxxakm63k51.jpg
oeUI8Iy.jpg
vqklutghc1k51.jpg
1ctn7f0390k51.jpg
r7995a1dd3k51.jpg
qbjj06vaa0k51.jpg
q6nl05omyzj51.jpg
bl8bi5tsu2k51.jpg
gxc78spvxxj51.jpg
w0pdsr1hsyj51.jpg
9h19nq1k5vj51.jpg
y67tpittfyj51.jpg
Downloaded 18 from dogpictures

Downloading from dogswithjobs
5xxpn6xs7xj51.png
3mrgwnlum1k51.jpg
rs2uecgnb1k51.jpg
y077mg1974k51.jpg
kci6u8pc02k51.jpg
iho9wex0qrj51.jpg
109eyp6kjyj51.jpg
i86x3o6dutj51.jpg
Downloaded 8 from dogswithjobs

Downloading from GuiltyDogs
8z89s7a89dj51.jpg
c9rf2r516li51.jpg
pbdqr853rsh51.jpg
e9xihfbqdeh51.jpg
53gamygu9ch51.jpg
d3tq02dbbyg51.jpg
ifsmwutou2h51.jpg
Downloaded 7 from GuiltyDogs

Downloading from dog
1kloilrhc1k51.jpg
bwe1go65h1k51.jpg
8118vyqeg1k51.jpg
bajprhddg0k51.jpg
rlc7n4m6q0k51.jpg
z9p8llkuyyj51.jpg
dhdi10myx2k51.jpg
6zflnt9hozj51.jpg
niptrbxzf2k51.jpg
jxi3vrd901k51.jpg
u8eykob35yj51.jpg
5hwj8cce6zj51.png
9nr2t4f0vzj51.jpg
ozs8tuu7mzj51.jpg
0h1fwfqhh3k51.jpg
Downloaded 15 from dog

Downloaded 48 in total
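
For readers curious about how a downloader like this can work internally, here is a rough, hypothetical sketch of the per-subreddit loop using PRAW and requests. It is illustrative only, does not reproduce the project's actual code, and assumes the credentials from config.ini have already been loaded into a praw.Reddit instance.

import os
import requests

def download_images(reddit, subreddit_name, folder, per_sub_limit):
    # Hypothetical core loop: walk hot submissions and save direct image links.
    os.makedirs(folder, exist_ok=True)
    count = 0
    print(f"Downloading from {subreddit_name}")
    for submission in reddit.subreddit(subreddit_name).hot(limit=None):
        url = submission.url
        if not url.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        filename = url.rsplit("/", 1)[-1]
        response = requests.get(url, timeout=30)
        if response.status_code == 200:
            with open(os.path.join(folder, filename), "wb") as f:
                f.write(response.content)
            print(filename)
            count += 1
        if count >= per_sub_limit:
            break
    print(f"Downloaded {count} from {subreddit_name}")
    return count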
Owner

James