Scraping followers of an Instagram account

Overview

ScrapInsta

A script to scrape data from Instagram

Install

First of all, install the package:

pip install scrapinsta

After that, you need to install these requirements.

You can install them one by one:

  • selenium
    pip install selenium
  • webdriver_manager
    pip install webdriver_manager
  • cryptography
    pip install cryptography

Or install them all at once from requirements.txt:

    pip install -r requirements.txt
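
For reference, a minimal requirements.txt covering the packages listed above could look like the sketch below (versions are left unpinned here; pin them if you need reproducible installs):

    selenium
    webdriver_manager
    cryptography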

Scraping user followers

Usage

  1. Import the class with from scrapinsta import Scrapinsta and instantiate it.

  2. Call Scrapinsta.get_user_followers(account, amount, method, print_followers)

    account: Account whose followers you want to get
    amount: Number of followers to scrape
    method: Defaults to 'list' (returns a list); with 'txt', the followers are written to a .txt file
    print_followers: Defaults to 'false' (followers are not printed); with 'true', the followers are printed

    Example code:
     
        from scrapinsta import Scrapinsta

        account = 'nasa'  # Account to get info
        amount = 50

        # Instantiate Scrapinsta
        s = Scrapinsta()

        # Testing: method = 'list'
        list_followers = s.get_user_followers(account, amount, method='list', print_followers='true')

        # Testing: method = 'txt'
        s.get_user_followers(account, amount, method='txt', print_followers='true')
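
    If you need the followers in another format, the returned list can be post-processed with standard Python. A minimal sketch, assuming get_user_followers with method='list' returns a list of username strings (an assumption based on the description above):

        import csv

        from scrapinsta import Scrapinsta

        s = Scrapinsta()
        followers = s.get_user_followers('nasa', 50, method='list')

        # Write one username per row (assumes the list contains plain strings)
        with open('nasa_followers.csv', 'w', newline='') as f:
            csv.writer(f).writerows([follower] for follower in followers)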

Scraping accounts followed by a user

Usage

  1. Import the class with from scrapinsta import Scrapinsta and instantiate it.

  2. Call Scrapinsta.get_user_followings(account, amount, method, print_following)

    account: Account whose followed users you want to get
    amount: Number of followed users to scrape
    method: Defaults to 'list' (returns a list); with 'txt', the followed users are written to a .txt file
    print_following: Defaults to 'false' (followed users are not printed); with 'true', the followed users are printed

    Example code:
     
        from scrapinsta import Scrapinsta

        account = 'nasa'  # Account to get info
        amount = 50

        # Instantiate Scrapinsta
        s = Scrapinsta()

        # Testing: method = 'list'
        list_following = s.get_user_followings(account, amount, method='list', print_following='true')

        # Testing: method = 'txt'
        s.get_user_followings(account, amount, method='txt', print_following='true')
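
    Since both calls can return plain lists, the results can be combined. A minimal sketch, assuming both methods return lists of username strings, that finds accounts 'nasa' follows which do not appear in the scraped follower sample:

        from scrapinsta import Scrapinsta

        s = Scrapinsta()
        account = 'nasa'
        amount = 50

        followers = set(s.get_user_followers(account, amount, method='list'))
        followings = set(s.get_user_followings(account, amount, method='list'))

        # Only meaningful within the scraped sample of 'amount' entries
        not_following_back = followings - followers
        print(not_following_back)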

Contact:

LinkedIn

Twitter

Instagram

E-mail

Owner
Matheus Kolln
Computer Engineering student at UFSC (Universidade Federal de Santa Catarina, Brazil). Happy coding :)