Pseudo API for Google Trends


pytrends

Introduction

Unofficial API for Google Trends

Provides a simple interface for automating the download of reports from Google Trends. Only good until Google changes their backend again :-P. When that happens, feel free to contribute!

Looking for maintainers!

Table of contents

Installation

pip install pytrends

Requirements

  • Written for Python 3.3+
  • Requires Requests, lxml, Pandas

back to top

API

Connect to Google

from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-US', tz=360)

or, if you want to use proxies because you are blocked due to Google's rate limit:

from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-US', tz=360, timeout=(10,25), proxies=['https://34.203.233.13:80',], retries=2, backoff_factor=0.1, requests_args={'verify':False})

  • timeout

    • a (connect, read) tuple of timeouts in seconds, passed through to requests

  • tz

    • Timezone offset, in minutes
    • For example US CST is '360' (note NOT -360; Google expects the offset in minutes west of UTC)
  • proxies

    • only HTTPS proxies that Google accepts will work
    • list ['https://34.203.233.13:80','https://35.201.123.31:880', ..., ...]
  • retries

    • total number of retries; total/connect/read are all represented by this one scalar
  • backoff_factor

    • A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay). urllib3 will sleep for: {backoff factor} * (2 ^ ({number of total retries} - 1)) seconds. If the backoff_factor is 0.1, then sleep() will sleep for [0.0s, 0.2s, 0.4s, …] between retries. It will never be longer than Retry.BACKOFF_MAX. By default, backoff is disabled (set to 0).
  • requests_args

    • A dict with additional parameters to pass along to the underlying requests library, for example verify=False to ignore SSL errors

Note: the parameter hl specifies the host language for accessing Google Trends. Note: only HTTPS proxies will work, and you need to append the port number to the proxy IP address.
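The sign convention for tz trips people up; a tiny helper sketch (the function name is mine, not part of pytrends):

```python
def utc_offset_to_tz(offset_hours):
    """Convert a conventional UTC offset in hours (e.g. -6 for US CST)
    to the minutes-west value Google Trends expects for `tz`."""
    return int(-offset_hours * 60)

# US CST is UTC-6, so Google expects tz=360 (not -360)
print(utc_offset_to_tz(-6))  # 360
```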

Build Payload

kw_list = ["Blockchain"]
pytrends.build_payload(kw_list, cat=0, timeframe='today 5-y', geo='', gprop='')

Parameters

  • kw_list

    • Required
    • Keywords to get data for

back to top

API Methods

The following API methods are available:

  • Interest Over Time: returns historical, indexed data for when the keyword was searched most as shown on Google Trends' Interest Over Time section.

  • Historical Hourly Interest: returns historical, indexed, hourly data for when the keyword was searched most as shown on Google Trends' Interest Over Time section. It sends multiple requests to Google, each retrieving one week of hourly data. It seems like this would be the only way to get historical, hourly data.

  • Interest by Region: returns data for where the keyword is most searched as shown on Google Trends' Interest by Region section.

  • Related Topics: returns data for the related keywords to a provided keyword shown on Google Trends' Related Topics section.

  • Related Queries: returns data for the related keywords to a provided keyword shown on Google Trends' Related Queries section.

  • Trending Searches: returns data for latest trending searches shown on Google Trends' Trending Searches section.

  • Top Charts: returns the data for a given topic shown in Google Trends' Top Charts section.

  • Suggestions: returns a list of additional suggested keywords that can be used to refine a trend search.

back to top

Common API parameters

Many API methods use the following:

  • kw_list

    • keywords to get data for

    • Example ['Pizza']

    • Up to five terms in a list: ['Pizza', 'Italian', 'Spaghetti', 'Breadsticks', 'Sausage']

      • Advanced Keywords

        • When using the Google Trends dashboard, Google may provide suggested narrowed search terms.
        • For example "iron" will have a drop-down of "Iron Chemical Element, Iron Cross, Iron Man, etc".
        • Find the encoded topic by using the suggestions() method and choose the most relevant one for you.
        • For example: https://www.google.com/trends/explore#q=%2Fm%2F025rw19&cmpt=q
        • "%2Fm%2F025rw19" is the encoded topic for "Iron Chemical Element"; pass this value in place of the keyword to use it with pytrends
        • You can also use pytrends.suggestions() to automate this.
  • cat

    • Category to narrow results
    • Find available categories by inspecting the URL when manually using Google Trends: the category ID starts after cat= and ends before the next &. Alternatively, view the wiki page listing all available categories.
    • For example: "https://www.google.com/trends/explore#q=pizza&cat=71"
    • '71' is the category
    • Defaults to no category
  • geo

    • Two letter country abbreviation
    • For example United States is 'US'
    • Defaults to World
    • More detail is available for states/provinces by specifying additional abbreviations
    • For example: Alabama would be 'US-AL'
    • For example: England would be 'GB-ENG'
  • tz

    • Timezone offset in minutes, as described above
  • timeframe

    • Date range to get data for

    • Defaults to the last 5 years: 'today 5-y'

    • Everything: 'all'

    • Specific dates, 'YYYY-MM-DD YYYY-MM-DD' example '2016-12-14 2017-01-25'

    • Specific datetimes, 'YYYY-MM-DDTHH YYYY-MM-DDTHH' example '2017-02-06T10 2017-02-12T07'

      • Note: the time component is based on UTC
    • Current Time Minus Time Pattern:

      • By Month: 'today #-m' where # is the number of months from that date to pull data for

        • For example: 'today 3-m' would get data from today back to 3 months ago
        • NOTE Google uses UTC date as 'today'
        • Works for 1, 3, 12 months only!
      • Daily: 'now #-d' where # is the number of days from that date to pull data for

        • For example: 'now 7-d' would get data from the last week
        • Works for 1, 7 days only!
      • Hourly: 'now #-H' where # is the number of hours from that date to pull data for

        • For example: 'now 1-H' would get data from the last hour
        • Works for 1, 4 hours only!
  • gprop

    • What Google property to filter to
    • Example 'images'
    • Defaults to web searches
    • Can be images, news, youtube or froogle (for Google Shopping results)
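The specific-date timeframe strings above can be built from date objects; a minimal sketch (the helper name is mine, not part of pytrends):

```python
from datetime import date

def specific_dates_timeframe(start, end):
    """Format two dates as the 'YYYY-MM-DD YYYY-MM-DD' timeframe string."""
    return f"{start:%Y-%m-%d} {end:%Y-%m-%d}"

# Matches the example timeframe shown above
print(specific_dates_timeframe(date(2016, 12, 14), date(2017, 1, 25)))
```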

back to top

Interest Over Time

pytrends.interest_over_time()

Returns pandas.DataFrame

back to top

Historical Hourly Interest

pytrends.get_historical_interest(kw_list, year_start=2018, month_start=1, day_start=1, hour_start=0, year_end=2018, month_end=2, day_end=1, hour_end=0, cat=0, geo='', gprop='', sleep=0)

Parameters

  • kw_list

    • Required
    • list of keywords you would like historical data for
  • year_start, month_start, day_start, hour_start, year_end, month_end, day_end, hour_end

    • the time period for which you would like the historical data
  • sleep

    • If you are rate-limited by Google, set this to a number of seconds (e.g. 60) to wait between each API call.

Returns pandas.DataFrame
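The one-week-per-request strategy described above can be sketched as a date-chunking helper (this mirrors the behaviour described, not the library's actual code):

```python
from datetime import datetime, timedelta

def week_windows(start, end):
    """Split [start, end] into consecutive windows of at most 7 days,
    one window per hourly-data request."""
    windows = []
    cursor = start
    while cursor < end:
        window_end = min(cursor + timedelta(days=7), end)
        windows.append((cursor, window_end))
        cursor = window_end
    return windows

# 31 days -> 4 full weeks plus a 3-day remainder = 5 requests
print(len(week_windows(datetime(2018, 1, 1), datetime(2018, 2, 1))))  # 5
```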

back to top

Interest by Region

pytrends.interest_by_region(resolution='COUNTRY', inc_low_vol=True, inc_geo_code=False)

Parameters

  • resolution

    • 'CITY' returns city level data
    • 'COUNTRY' returns country level data
    • 'DMA' returns Metro level data
    • 'REGION' returns Region level data
  • inc_low_vol

    • True/False (includes google trends data for low volume countries/regions as well)
  • inc_geo_code

    • True/False (includes ISO codes of countries along with the names in the data)

Returns pandas.DataFrame

back to top

Related Topics

pytrends.related_topics()

Returns dictionary of pandas.DataFrames

back to top

Related Queries

pytrends.related_queries()

Returns dictionary of pandas.DataFrames
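The returned dictionary maps each keyword to a 'top' and a 'rising' table; a sketch of navigating that shape (plain lists of dicts stand in for the real DataFrames, and the sample values are illustrative, not real Google data):

```python
# Hypothetical shape of a related_queries() result for one keyword.
result = {
    "Blockchain": {
        "top": [{"query": "bitcoin", "value": 100}],
        "rising": [{"query": "nft", "value": 4350}],
    }
}

# Index first by keyword, then by 'top' or 'rising'.
top = result["Blockchain"]["top"]
rising = result["Blockchain"]["rising"]
print(top[0]["query"], rising[0]["query"])
```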

back to top

Trending Searches

pytrends.trending_searches(pn='united_states') # trending searches in real time for United States
pytrends.trending_searches(pn='japan') # Japan

Returns pandas.DataFrame

back to top

Top Charts

pytrends.top_charts(date, hl='en-US', tz=300, geo='GLOBAL')

Parameters

  • date

    • Required
    • YYYY integer
    • Example 2019 for the year 2019 Top Chart data
    • Note Google removed support for monthly queries (e.g. YYYY-MM)
    • Note Google does not return data for the current year

Returns pandas.DataFrame

back to top

Suggestions

pytrends.suggestions(keyword)

Parameters

  • keyword

    • Required
    • keyword to get suggestions for

Returns dictionary
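Each suggestion entry is assumed to carry keys like 'mid', 'title' and 'type', where 'mid' is the encoded topic to pass in kw_list; a sketch under that assumption (sample entries are illustrative, not real API output):

```python
# Illustrative sample in the assumed shape of suggestions() output;
# '/m/025rw19' is the encoded "Iron Chemical Element" topic from above.
suggestions = [
    {"mid": "/m/025rw19", "title": "Iron", "type": "Chemical element"},
    {"mid": "/m/09rvn", "title": "Iron Man", "type": "Film character"},
]

# Pick the entry whose type matches the topic we actually want.
chosen = next(s for s in suggestions if s["type"] == "Chemical element")
print(chosen["mid"])  # /m/025rw19
```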

back to top

Categories

pytrends.categories()

Returns dictionary

back to top

Caveats

  • This is not an official or supported API
  • Google may change aggregation level for items with very large or very small search volume
  • Rate Limit is not publicly known, let me know if you have a consistent estimate
    • One user reports that 1,400 sequential requests of a 4 hours timeframe got them to the limit. (Replicated on 2 networks)
    • It has been tested, and 60 seconds of sleep between requests (successful or not) is the correct amount once you reach the limit.
  • For certain configurations the dependency lib certifi requires the environment variable REQUESTS_CA_BUNDLE to be explicitly set and exported. This variable must contain the path where the CA certificates are saved, or an SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] error is raised at runtime.
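The backoff_factor formula quoted in the API section determines how retries are spaced once requests start failing; a sketch of that documented behaviour (not pytrends code):

```python
def backoff_sleep(backoff_factor, retry_number):
    """Sleep before the Nth retry, following the urllib3 behaviour
    quoted above: no delay before the first retry, then
    {backoff factor} * (2 ** (N - 1)) seconds."""
    if retry_number <= 1:
        return 0.0
    return backoff_factor * (2 ** (retry_number - 1))

# backoff_factor=0.1 gives sleeps of [0.0s, 0.2s, 0.4s, ...]
print([backoff_sleep(0.1, n) for n in (1, 2, 3)])
```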

Credits

Comments
  • 429 After first request?


    Howdy,

    I'm attempting to use the library and I'm getting hit with a 429 error after copying the example code. Here's my script:

    from pytrends.request import TrendReq
    
    pytrends = TrendReq(hl='en-US', tz=360)
    kw_list = ["Blockchain"]
    pytrends.build_payload(kw_list, cat=0, timeframe='today 5-y', geo='', gprop='')
    pytrends.interest_over_time()
    

    I can visit the trends website fine, and I can copy and paste the URL produced by the API and get a json file just fine. It's hard for me to imagine being rate limited on my first request and still being able to visit the site normally.

    Any ideas?

    opened by alexsullivan114 41
  • RateLimitError


    Hi!

    Thank you for the updates of the code. I tried to run the new updated version. After about 10 downloads, I receive the following traceback:

    Traceback (most recent call last):
      File "C:/Users/Documents/Python Scripts/collect_gtrends.py", line 34, in <module>
        trend=pytrend.trend(trend_payload, return_type='dataframe')
      File "C:\Users\AppData\Roaming\Python\Python27\site-packages\pytrends\request.py", line 62, in trend
        raise RateLimitError
    pytrends.request.RateLimitError

    I don't think this is the quota limit problem. Maybe I was downloading too frequently? How many seconds do you guys wait in between requests? My current program lets it sleep for 5-10 seconds. Is that not enough? Thank you!

    opened by sarahjohns 29
  • Is it just me or have 429 errors really increase these past few days?


    I used to get 429 error every time I request more than 6 items per hour or so.

    But recently, especially today I am not able to request more than 1 per hour without getting 429. Is it just my IP acting up?

    opened by igalci 23
  • ModuleNotFoundError: No module named 'pandas.io.json.normalize'


    I have the pandas, lxml, numpy and json modules. But I get this error when I run the pytrends example code.

    ModuleNotFoundError: No module named 'pandas.io.json.normalize'

    opened by kubilaykilinc 20
  • Python 2 compatibility issues


    Hi all,

    I'm having trouble using this library with Python 2 when there is an error in the response. In fact, the JSONDecodeError that is caught when parsing the response is not defined in Python 2 (as stated in https://docs.python.org/3/library/json.html#json.JSONDecodeError). It is also stated that JSONDecodeError is a subclass of ValueError, which could be used for the Python 2 version.

    Thanks, Luca

    bug 
    opened by covix 20
  • Google Quota limit - IP Address changer


    Hi, as mentioned in previous issues, I hit the Google quota limit after barely 10 requests. I tried to change my IP address by routing the requests through Tor. However, I have not yet been able to bypass the limitation. I raised the issue on the following page: http://stackoverflow.com/questions/40406458/google-trends-quota-limit-ip-address-changer

    opened by jblemoine 18
  • interest_over_time doesn't work


    Hi, I have the following issue:

    Using your example I execute the following code: pytrend.build_payload(kw_list=['pizza', 'bagel']) pytrend.interest_over_time()

    After the last one I have an answer "ValueError: year is out of range"

    And the following: pytrend.interest_by_region() gives me : ValueError: No JSON object could be decoded

    At the same time pytrend.related_queries() works well.

    What could be wrong here?

    opened by FourthWiz 15
  • request for tests


    Merging PRs is difficult for the maintainers because of a lack of robust tests. If somebody writes a broad set of tests, it will significantly improve the ability to merge updates with less risk.

    Thank you for your help!

    help wanted good first issue 
    opened by emlazzarin 14
  • Script stopped working.. 400 Bad Request error


    I have been using this Python script for a long time... suddenly today this script stopped working. I am getting a 400 Bad Request error and am not able to download any Google Trends CSV file from the script.

    Getting error for Connector as well. "connector = pyGTrends(google_username, google_password)"

    I think this is the main issue.

    opened by ravimevcha 14
  • Trends with daily granularity


    I think this is not really an issue, because Google decides the granularity of the results (daily, weekly or monthly) depending on the search time frame. So I decided to implement a method that splits big time frames into smaller ones (e.g. 90 days) with a one-day overlap to normalize the scale between the data. Do you think this could be a good improvement for pytrends?

    help wanted 
    opened by bigme666 13
  • SSL cert failure on VPN


    When I try to login using TrendReq(..) I am getting a SSL error. From what I've been able to figure out is that the form from https://accounts.google.com/ServiceLogin doesn't seem to accept the Passwd arguments until you make the request with the email and then you have to make a new request again with the password. Not sure if this is on the right track or I'm just being an idiot.

    opened by ZenW00kie 13
  • Combination of interest_over_time() and interest_by_region()


    Hey there, I was wondering if it is possible to request data over time with a defined geographical resolution. Currently, it is only possible to have either a temporal or a spatial differentiation, but not both at the same time. Since different Google Trends API URLs are used for the two requests, I think Google Trends may restrict this option. Thanks!

    opened by MoritzDPTV 0
  • Getting incomplete data requesting timeframe=all


    So I'm searching for a specific term and my request comes back with a data gap, while checking the data on the Google website and downloading it gives complete data. Also, when requesting data for the specific gap, I get the correct data points.

    kw_list = ['Arbeitslosigkeit']
    pytrends.build_payload(kw_list, cat=0, timeframe='2010-01-01 2012-12-25', geo='DE', gprop='')

    pytrends.build_payload(kw_list, cat=0, timeframe='all', geo='DE', gprop='')


    opened by RogerRendon 1
  • Cannot get the same result as the webpage

    Using interest_over_time() I cannot get the same result as the webpage. I noticed the webpage's headers['req'] is different from my request's; I changed mine to match the webpage's, but I still cannot get the same result. What should I do?

    The webpage's headers['req'] is below. Some fields do not seem to exist before; is this the reason? req: {"time":"2004-01-01 2022-11-17","resolution":"MONTH","locale":"zh-CN","comparisonItem":[{"geo":{"country":"BR"},"complexKeywordsRestriction":{"keyword":[{"type":"ENTITY","value":"/m/01hpbc"}]}},{"geo":{"country":"BR"},"complexKeywordsRestriction":{"keyword":[{"type":"ENTITY","value":"/g/11dymw9wxl"}]}}],"requestOptions":{"property":"","backend":"IZG","category":0},"userConfig":{"userType":"USER_TYPE_LEGIT_USER"}}

    opened by jmz1996 2
  • Interest_over_time missing data

    Today I started facing an issue with the Interest_over_time missing data.

    The trend data just drops to 0 for about a year or so then the data picks back up.

    Last night I had no issues then this morning it started. Tested on multiple machines and different networks.

    For example, try running Interest_over_time for the keyword "barefoot shoes" you'll see around 2020 the data goes to 0 and then returns to normal.

    It only happens for some keywords while others are fine.

    Anyone else facing this issue?

    opened by nicktba 3
  • Newbie: specification of years "today 5-y" works but not "today 10-y"

    This could be a newbie issue. Are there restrictions on the years valid in the timeframe parameter? I can get the payload to work with "today 5-y" but not "today 10-y". The "all" parameter works. I note that in other parts of the API there are specific limits; are years restricted to 5 or 'all'? Thanks

    opened by loquor 0
  • No way to know what changed between versions


    Currently there is no way to know what changed between versions except to download both versions from PyPI and check the differences in the source code. This makes it very risky to depend on this library for anything non-amateurish.

    Please consider adding one or more of the following:

    • Releases on GitHub with a changelog.
    • Annotated tags on the commit where each version is released.
    • A CHANGELOG.md file in the repo with a header for every version released; bonus points if you follow the Keep a Changelog format.
    • A changelog section in README.md with a header for every version released.
    opened by Terseus 2
Releases(v4.8.0)
Owner: General Mills