Web Scraping Framework

Overview

Grab Framework Documentation

Installation

    $ pip install -U grab

See the installation guide for details about installing Grab on different platforms: http://docs.grablib.org/en/latest/usage/installation.html

Support

Documentation: https://grablab.org/docs/

Russian telegram chat: https://t.me/grablab_ru

English telegram chat: https://t.me/grablab

To report a bug, please use the GitHub issue tracker: https://github.com/lorien/grab/issues

What is Grab?

Grab is a Python web scraping framework. It provides a number of helpful methods for performing network requests, scraping web sites, and processing the scraped content:

  • Automatic cookies (session) support
  • HTTP and SOCKS proxy with/without authorization
  • Keep-Alive support
  • IDN support
  • Tools to work with web forms
  • Easy multipart file uploading
  • Flexible customization of HTTP requests
  • Automatic charset detection
  • Powerful API to extract data from DOM tree of HTML documents with XPATH queries
  • Asynchronous API for making thousands of simultaneous queries. This part of the library is called Spider; see the list of its features below.
  • Python 3 ready

Spider is a framework for writing web-site scrapers. Features:

  • Rules and conventions to organize the request/parse logic in separate blocks of code
  • Multiple parallel network requests
  • Automatic processing of network errors (failed tasks go back to task queue)
  • You can create network requests and parse responses with the Grab API (see above)
  • HTTP proxy support
  • Caching network results in permanent storage
  • Different backends for task queue (in-memory, redis, mongodb)
  • Tools to debug and collect statistics

Grab Example

    import logging

    from grab import Grab

    logging.basicConfig(level=logging.DEBUG)

    g = Grab()

    g.go('https://github.com/login')
    g.doc.set_input('login', '****')
    g.doc.set_input('password', '****')
    g.doc.submit()

    g.doc.save('/tmp/x.html')

    g.doc('//ul[@id="user-links"]//button[contains(@class, "signout")]').assert_exists()

    home_url = g.doc('//a[contains(@class, "header-nav-link name")]/@href').text()
    repo_url = home_url + '?tab=repositories'

    g.go(repo_url)

    for elem in g.doc.select('//h3[@class="repo-list-name"]/a'):
        print('%s: %s' % (elem.text(),
                          g.make_url_absolute(elem.attr('href'))))

Grab::Spider Example

    import logging

    from grab.spider import Spider, Task

    logging.basicConfig(level=logging.DEBUG)


    class ExampleSpider(Spider):
        def task_generator(self):
            for lang in 'python', 'ruby', 'perl':
                url = 'https://www.google.com/search?q=%s' % lang
                yield Task('search', url=url, lang=lang)

        def task_search(self, grab, task):
            print('%s: %s' % (task.lang,
                              grab.doc('//div[@class="s"]//cite').text()))


    bot = ExampleSpider(thread_number=2)
    bot.run()
Comments
  • Spider hangs randomly during work

    Now I have an issue I can't crack myself. The last log entries look like this:

    [12.05.2018] [15:23:07] [DEBUG] [1535-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:08] [DEBUG] RPS: 2.02 [error:grab-connection-error=283, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:08] [DEBUG] RPS: 0.51 [error:grab-connection-error=283, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:08] [DEBUG] [1536-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:08] [DEBUG] [1537-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:10] [DEBUG] RPS: 0.76 [error:grab-connection-error=283, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:10] [DEBUG] RPS: 0.76 [error:grab-connection-error=283, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:10] [DEBUG] [1538-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:10] [DEBUG] [1539-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:10] [DEBUG] [1540-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:11] [DEBUG] RPS: 1.17 [error:grab-connection-error=284, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:11] [DEBUG] RPS: 0.59 [error:grab-connection-error=284, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:11] [DEBUG] [1541-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:11] [DEBUG] [1542-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:12] [DEBUG] [1543-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:12] [DEBUG] RPS: 1.73 [error:grab-connection-error=285, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:12] [DEBUG] [1544-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:12] [DEBUG] [1545-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:14] [DEBUG] RPS: 3.69 [error:grab-connection-error=285, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    

    or this:

    [13.05.2018] [07:13:53] [DEBUG] RPS: 1.49 [fatal=24]
    [13.05.2018] [07:13:53] [DEBUG] RPS: 0.75 [fatal=24]
    [13.05.2018] [07:13:53] [DEBUG] [363-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:53] [DEBUG] [364-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:53] [DEBUG] [365-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:54] [DEBUG] RPS: 1.29 [fatal=24]
    [13.05.2018] [07:13:54] [DEBUG] RPS: 0.65 [fatal=24]
    [13.05.2018] [07:13:54] [DEBUG] [366-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:54] [DEBUG] [367-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:55] [DEBUG] [368-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:56] [DEBUG] RPS: 1.21 [fatal=24]
    [13.05.2018] [07:13:56] [DEBUG] RPS: 0.61 [fatal=24]
    [13.05.2018] [07:13:56] [DEBUG] [369-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:56] [DEBUG] [370-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:56] [DEBUG] [371-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:58] [DEBUG] RPS: 1.38 [fatal=24]
    [13.05.2018] [07:13:58] [DEBUG] RPS: 0.69 [fatal=24]
    [13.05.2018] [07:13:58] [DEBUG] [372-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:58] [DEBUG] [373-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:58] [DEBUG] [374-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:59] [DEBUG] RPS: 1.77 [fatal=24]
    [13.05.2018] [07:13:59] [DEBUG] RPS: 0.00 [fatal=24]
    [13.05.2018] [07:13:59] [DEBUG] [375-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:59] [DEBUG] [376-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:59] [DEBUG] [377-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:14:01] [DEBUG] RPS: 1.21 [fatal=24]
    [13.05.2018] [07:14:01] [DEBUG] RPS: 0.60 [fatal=24]
    [13.05.2018] [07:14:01] [DEBUG] [378-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:14:02] [DEBUG] RPS: 2.20 [fatal=24]
    

    These are the final entries I can see in the log. No matter how long I wait (I waited 20 hours for a result), the spider just eats one CPU core at 100% and does absolutely nothing.

    My bot has 5 stages of work and it can hang at any stage.

    This issue is very difficult to debug because there are no errors in the log and it happens randomly.

    I am running the bot this way:

    bot = ExampleSpider(thread_number=3, network_service='threaded', grab_transport='pycurl')
    bot.load_proxylist("./proxy_tor.txt", "text_file", "socks5")
    bot.run()
    

    any ideas guys? :confused:
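    One generic way to see where a silently hung process is stuck is the stdlib faulthandler module. A minimal sketch follows; the timeout value is an arbitrary example, and this is general Python advice, not a Grab feature:

```python
import faulthandler

# If the process is still running after `timeout` seconds, print every
# thread's traceback to stderr; repeat=True re-arms the timer so a long
# run keeps reporting where its threads are blocked.
faulthandler.dump_traceback_later(timeout=600, repeat=True)

# ... bot.run() would go here ...

faulthandler.cancel_dump_traceback_later()  # disarm once work finishes
```

    Wrapping bot.run() like this should show which queue or socket call the worker threads are parked in when the hang occurs.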

    bug 
    opened by EnzoRondo 25
  • Error pycurl.error: (0, '') when submitting a form

    There is an HTML page with a form:

    <form enctype="multipart/form-data" action="http://example.com/" method="post" accept-charset="UTF-8">
        <textarea name="body">Beställa</textarea>
        <input type="submit" name="op" value="Save"/>
    </form>
    

    And here is the code that submits the form:

    from grab import Grab
    
    g = Grab()
    g.setup(debug=True)
    
    g.go('file:///C:/page.html')  # insert the path to the file shown above
    
    g.doc.set_input('op', 'Save')
    g.doc.submit(submit_name='op')
    

    Submitting the form produces this error:

    pycurl.error: (0, '')
    

    But if the textarea content is replaced with something else, for example:

    <textarea name="body">123</textarea>
    

    then everything submits fine.

    How can this be fixed?
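    For reference, the non-ASCII 'ä' in the textarea looks like the trigger. A stdlib-only sketch, independent of Grab/pycurl, of how such a form field is expected to reach the wire, percent-encoded as UTF-8:

```python
from urllib.parse import urlencode

# Non-ASCII form values should be sent as percent-encoded UTF-8 bytes;
# urlencode handles the encoding automatically.
data = {'body': 'Beställa', 'op': 'Save'}
payload = urlencode(data)
print(payload)  # body=Best%C3%A4lla&op=Save
```

    If pycurl chokes on the raw unicode value, pre-encoding the payload this way and posting the encoded string is one possible workaround.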

    opened by InputError 23
  • Python 3.5 - Unable to build DOM tree.

    File "src/lxml/lxml.etree.pyx", line 3427, in lxml.etree.parse (src/lxml/lxml.etree.c:79801)
      File "src/lxml/parser.pxi", line 1799, in lxml.etree._parseDocument (src/lxml/lxml.etree.c:116219)
      File "src/lxml/parser.pxi", line 1819, in lxml.etree._parseMemoryDocument (src/lxml/lxml.etree.c:116494)
      File "src/lxml/parser.pxi", line 1700, in lxml.etree._parseDoc (src/lxml/lxml.etree.c:115040)
      File "src/lxml/parser.pxi", line 1040, in lxml.etree._BaseParser._parseUnicodeDoc (src/lxml/lxml.etree.c:109165)
      File "src/lxml/parser.pxi", line 573, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:103404)
      File "src/lxml/parser.pxi", line 683, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:105058)
      File "src/lxml/parser.pxi", line 613, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:103967)
      File "<string>", line None
    lxml.etree.XMLSyntaxError: switching encoding: encoder error, line 1, column 1
    

    With preceding:

    encoding error : input conversion failed due to input error, bytes 0x21 0x00 0x00 0x00
    encoding error : input conversion failed due to input error, bytes 0x44 0x00 0x00 0x00
    I/O error : encoder error
    

    Example:

    class Scraper(Spider):
        def task_generator(self):
            urls = [
                'https://au.linkedin.com/directory/people-a/',
                'https://www.linkedin.com/directory/people-a/'
            ]
            for url in urls:
                yield Task('url', url=url)
    
        def task_url(self, grab, task):
            links = grab.doc('//div[@class="columns"]//ul/li[@class="content"]/a')
    
    
    bot = Scraper()
    bot.run()
    

    This happens on some pages; perhaps lxml fails to detect the correct encoding.
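    The reported bytes (0x21 0x00 0x00 0x00 is '!' in UTF-32-LE) suggest these pages are served as UTF-32, which the charset detection does not expect. A stdlib sketch of pre-decoding the raw body before parsing; the heuristic here is illustrative only, not Grab's actual detection logic:

```python
def decode_body(raw: bytes) -> str:
    """Guess UTF-32/16/8 from the null-byte pattern after the first
    byte and decode, so the HTML parser receives clean text.
    (Illustrative heuristic, not Grab's real charset detection.)"""
    if raw[1:4] == b'\x00\x00\x00':
        return raw.decode('utf-32-le', errors='replace')
    if raw[1:2] == b'\x00':
        return raw.decode('utf-16-le', errors='replace')
    return raw.decode('utf-8', errors='replace')

print(decode_body('<p>Hi!</p>'.encode('utf-32-le')))  # <p>Hi!</p>
```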

    bug 
    opened by oiwn 23
  • Empty cookies

    from grab import Grab
    url = 'https://www.fiverr.com/'
    user_agent = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36'
    session_f_name1 = 'fiverr_session1.txt'
    session_f_name2 = 'fiverr_session2.txt'
    
    g1 = Grab()
    g1.setup(cookiefile=session_f_name1)
    g1.go(url)
    print 'g1', g1.cookies.cookiejar
    
    g2 = Grab()
    g2.setup(cookiefile=session_f_name2, user_agent=user_agent)
    g2.go(url)
    print 'g2', g2.cookies.cookiejar
    

    In the first case the cookie is present; in the second it is not.

    opened by nodermann 22
  • Cookies issue on Windows with pycurl 7.43.0.1

    Tested on:

    Microsoft Windows Server 2012 Standard
    Microsoft Windows 7 Ultimate
    

    Python version on both machines: Python 3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit (AMD64)] on win32

    grab=0.6.3.8

    Code:

    from grab import Grab, error
    import sys
    import logging
    import base64
    
    g = Grab()
    g.setup(timeout=60)
    g.setup(debug=True, debug_post=True)
    
    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
    
    # hidden for privacy reasons
    url, login, pwd = base64.b64decode("v1WZktzbtVGZ7AHaw5CelRmbp9icvRXYyR3cp5WatRWYvwGcu0Wdw12LvoDc0RHa"[::-1])\
        .decode("utf-8").split(";")
    
    g.go(url)
    g.doc.set_input('username', login)
    g.doc.set_input('passwd', pwd)
    try:
        g.doc.set_input('lang', 'en-GB')
    except:
        pass
    g.doc.submit()
    
    is_logged = g.doc.text_search("task=logout")
    if not is_logged:
        raise error.GrabError("не вошли")  # "not logged in"
    
    print("all right!")
    

    With pycurl-7.19.5.3 installed, the result is: all right!

    With the latest pycurl-7.43.0.1 we get: grab.error.GrabError: не вошли

    This is not specific to the site; I tested in other places and it is the same there. I compared the POST request body by hand against what the browser sends; the contents are identical.

    opened by InputError 20
  • GrabTimeoutError

    Hello, I have an exception like the one in https://github.com/lorien/grab/issues/140, but I don't have any DNS errors. OS and network: Windows 8 (x64), fixed DNS: 8.8.8.8.

    Script:

    import pycurl; 
    from grab import Grab
    import logging
    
    print(pycurl.version); 
    print(pycurl.__file__);
    
    logging.basicConfig(level=logging.DEBUG)
    g = Grab(verbose_logging=True, debug=True)
    g.go('http://github.com')
    print g.xpath_text('//title')
    

    script output:

    PycURL/7.43.0 libcurl/7.47.0 OpenSSL/1.0.2f zlib/1.2.8 c-ares/1.10.0 libssh2/1.6.0
    c:\Python27\lib\site-packages\pycurl.pyd
    DEBUG:grab.network:[01] GET http://github.com
    DEBUG:grab.transport.curl:i: Rebuilt URL to: http://github.com/
    DEBUG:grab.transport.curl:i: Resolving timed out after 3000 milliseconds
    DEBUG:grab.transport.curl:i: Closing connection 0
    Traceback (most recent call last):
      File "D:\pr_files\source\python\planned\htmlParser\bgsParser\NewPythonProject\src\bgsParser.py", line 10, in <module>
        g.go('http://github.com')
      File "c:\Python27\lib\site-packages\grab-0.6.30-py2.7.egg\grab\base.py", line 377, in go
        return self.request(url=url, **kwargs)
      File "c:\Python27\lib\site-packages\grab-0.6.30-py2.7.egg\grab\base.py", line 450, in request
        self.transport.request()
      File "c:\Python27\lib\site-packages\grab-0.6.30-py2.7.egg\grab\transport\curl.py", line 489, in request
        raise error.GrabTimeoutError(ex.args[0], ex.args[1])
    grab.error.GrabTimeoutError: [Errno 28] Resolving timed out after 3000 milliseconds
    

    I tried to reinstall pycurl:

    C:\Python27\Scripts>pip install pycurl-7.43.0-cp27-none-win_amd64.whl --upgrade
    Processing c:\python27\scripts\pycurl-7.43.0-cp27-none-win_amd64.whl
    Installing collected packages: pycurl
      Found existing installation: pycurl 7.43.0
        Uninstalling pycurl-7.43.0:
          Successfully uninstalled pycurl-7.43.0
    Successfully installed pycurl-7.43.0
    

    but it doesn't work. What could be wrong?

    Thank you for help.
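    "Resolving timed out" points at name resolution rather than Grab itself (note the c-ares resolver in the pycurl build above, which bypasses some OS settings). A small stdlib check, sketched here, can help separate resolver trouble from pycurl trouble, keeping in mind that it uses the OS resolver rather than c-ares:

```python
import socket
import time

def resolve_time(host):
    """Return (seconds taken, first resolved address), or
    (seconds taken, None) if resolution fails."""
    start = time.monotonic()
    try:
        infos = socket.getaddrinfo(host, 443)
        return time.monotonic() - start, infos[0][4][0]
    except socket.gaierror:
        return time.monotonic() - start, None

# On the affected machine, try e.g. resolve_time('github.com'); a slow
# or failing result implicates the resolver/DNS, not Grab or pycurl.
print(resolve_time('localhost'))
```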

    opened by tofflife 20
  • socks5 ip mismatch

    Hi, I am trying to use socks5 proxy list with grab.spider

    Here is my small test script:

    from grab.spider import Spider, Task
    import logging
    
    
    class TestSpider(Spider):
        def prepare(self):
            self.load_proxylist(
                'proxy.list',
                source_type='text_file', proxy_type='socks5',
                auto_change=True,
                read_timeout=180
            )
            self.set_proxy = set()
            self.real_proxy = set()
    
        def task_generator(self):
            for i in range(200):
                yield Task('2ip', 'http://2ip.ru/')
    
        def task_2ip(self, grab, task):
            ip = grab.doc.select('//big[@id="d_clip_button"]').text()
            self.real_proxy.add(ip)
    
            proxy = grab.config['proxy'].split(':')[0]
            self.set_proxy.add(proxy)
    
            # if proxy != ip:
            #     print proxy, ip
    
        def shutdown(self):
            print len(self.set_proxy), len(self.real_proxy)
    
    
    logging.basicConfig(level=logging.DEBUG)
    TestSpider(thread_number=16).run()
    

    The result is: 197 16

    As we can see, the real number of used proxies is only 16, the same as thread_number.

    I am using the Grab version from pip. The curl version is:

    curl 7.38.0 (x86_64-pc-linux-gnu) libcurl/7.38.0 OpenSSL/1.0.1f zlib/1.2.8 libidn/1.28 librtmp/2.3
    Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp smtp smtps telnet tftp 
    Features: AsynchDNS IDN IPv6 Largefile GSS-API SPNEGO NTLM NTLM_WB SSL libz TLS-SRP 
    

    I think the problem is with libcurl. What should I do to fix it?
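    One plausible explanation: libcurl keeps a live (keep-alive) connection per worker, so each of the 16 threads keeps going through whichever proxy its connection was opened with. A toy stdlib simulation of that effect (hypothetical names, not Grab internals):

```python
import itertools

proxies = ['proxy%d' % i for i in range(200)]
rotation = itertools.cycle(proxies)
connection_cache = {}  # worker id -> proxy its live connection uses

def fetch(worker_id):
    # With keep-alive, a worker reuses its cached connection (and so
    # its proxy) instead of taking the next proxy in rotation.
    if worker_id not in connection_cache:
        connection_cache[worker_id] = next(rotation)
    return connection_cache[worker_id]

# 200 requests spread over 16 workers touch only 16 distinct proxies:
used = {fetch(i % 16) for i in range(200)}
print(len(used))  # 16
```

    If this is the cause, forcing a fresh connection per request (the connection_reuse option that appears later in the v0.6.39 changelog seems related) would restore full rotation, at the cost of a new connection setup per request.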

    opened by asluchevskiy 16
  • accidental work of grab.spider

    Hey there (@lorien), thanks a lot for the great library :smiley:

    I am learning your library and I see unexpected behavior; here is my code sample, based on an example from the documentation:

    import csv
    import logging
    import re
    
    from grab.spider import Spider, Task
    
    
    class ExampleSpider(Spider):
        def create_grab_instance(self, **kwargs):
            g = super(ExampleSpider, self).create_grab_instance(**kwargs)
            g.setup(proxy='127.0.0.1:8090', proxy_type='socks5', timeout=60, connect_timeout=15)
            return g
    
        def task_generator(self):
            for i in range(1, 1 + 1):
                page_url = "{}{}/".format("https://www.mourjan.com/properties/", i)
                # print("page url: {}".format(page_url))
                yield Task('stage_two', url=page_url)
    
        def prepare(self):
            # Prepare the file handler to save results.
            # The method `prepare` is called one time before the
            # spider has started working
            self.result_file = csv.writer(open('result.txt', 'w'))
    
            # This counter will be used to enumerate found images
            # to simplify image file naming
            self.result_counter = 0
    
        def task_stage_two(self, grab, task):
            for elem in grab.doc.select("//li[@itemprop='itemListElement']//p")[0:4]:
                part = elem.attr("onclick")
                url_part = re.search(r"(?<=wo\(\').*(?=\'\))", part).group()
                end_url = grab.make_url_absolute(url_part)
                yield Task('stage_three', url=end_url)
    
        def task_stage_three(self, grab, task):
            # First, save URL and title into dictionary
            post = {
                'url': task.url,
                'title': grab.doc.xpath_text("//title/text()"),
            }
            self.result_file.writerow([
                post['url'],
                post['title'],
            ])
            # Increment image counter
            self.result_counter += 1
    
    
    if __name__ == '__main__':
        logging.basicConfig(level=logging.DEBUG)
        # Let's start spider with two network concurrent streams
        bot = ExampleSpider(thread_number=2)
        bot.run()
    
    

    first run:

    DEBUG:grab.spider.base:Using memory backend for task queue
    DEBUG:grab.network:[01] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[02] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[03] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[04] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[05] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.stat:RPS: 7.35 [error:multi-added-already=5, network-count-rejected=1]
    DEBUG:grab.spider.parser_pipeline:Started shutdown of parser process: Thread-1
    DEBUG:grab.spider.parser_pipeline:Finished joining parser process: Thread-1
    DEBUG:grab.spider.base:Main process [pid=4064]: work done
    

    :confused:

    Then I run the code again, about 20 more attempts with the same result, but the 21st attempt succeeds and I see what I want to see:

    DEBUG:grab.spider.base:Using memory backend for task queue
    DEBUG:grab.network:[01] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.stat:RPS: 0.52 []
    DEBUG:grab.network:[02] GET https://www.mourjan.com/kw/kuwait/warehouses/rental/10854564/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[03] GET https://www.mourjan.com/ae/abu-dhabi/apartments/rental/11047384/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[04] GET https://www.mourjan.com/kw/kuwait/villas-and-houses/rental/11041455/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[05] GET https://www.mourjan.com/ae/abu-dhabi/apartments/rental/11009663/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.stat:RPS: 2.36 []
    DEBUG:grab.stat:RPS: 1.28 []
    DEBUG:grab.spider.parser_pipeline:Started shutdown of parser process: Thread-1
    DEBUG:grab.spider.parser_pipeline:Finished joining parser process: Thread-1
    DEBUG:grab.spider.base:Main process [pid=4860]: work done
    

    Why does this happen?

    bug 
    opened by EnzoRondo 15
  • Bug with POST requests on Windows x64

    With 32-bit systems everything is simpler: a kind soul built curl for them. Is there any chance you could do the same for x64? Our whole team is struggling because of this bug. The library is powerful and a lot of work has gone into it, but this bug makes it useless for us. Alternatively, could it be built not on curl but on sockets, for example?

    opened by ArturFis 15
  • Add the ability to accumulate proxies between proxy-list refreshes

    Previously, each refresh of the proxy list overwrote the old list with the new one. I added the ability to keep old proxies in the list: on refresh, the list is simply extended with proxies that are not in it yet (i.e. only fresh SOCKS proxies are added, without duplicating old ones). Also, previously the iterator was recreated on every refresh, which leads to this situation: if the refresh interval is relatively short, the iterator never reaches the end of the list before being reset, so the last proxies in the list are never used. Now the iterator is created only once, but instead of itertools.cycle I use my own cycle, because itertools caches the list, which would prevent updating it dynamically.
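    The caching this PR works around can be sketched in a few lines: itertools.cycle snapshots the sequence it first sees, while a hand-rolled cycle can re-read the list on every pass (a minimal illustration, not the PR's actual code):

```python
def live_cycle(items):
    """Cycle over a list forever, re-reading it on each pass so items
    appended between passes join the rotation (unlike itertools.cycle,
    which caches the sequence it saw first)."""
    while items:
        for item in list(items):  # copy: safe against mid-pass appends
            yield item

proxies = ['a', 'b']
it = live_cycle(proxies)
print(next(it), next(it))            # a b
proxies.append('c')                  # proxy list refreshed in place
print(next(it), next(it), next(it))  # a b c
```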

    opened by temptask 13
  • How do I get rid of "operation-timeouted"?

    Hello. I have only just started using Grab, but I like it a lot.

    The problem is this: I am scraping more than 3 million inventories from Steam. Inventories are just JSON files, some large, some small. Small inventories are parsed by Spider without problems, but larger ones badly hang the thread and then produce an error like:

    DEBUG:grab.stat:RPS: 0.26 [error:operation-timeouted=7]

    I searched everywhere: here, on GitHub, in the Google Groups, in the documentation, but found nothing about what this error means or how to fight it. I tried manually creating Grab instances, passing them connection_timeout and timeout, and putting them into Tasks, but saw no visible effect.

    opened by seniorjoinu 10
Releases(v0.6.40)
  • v0.6.40(May 14, 2018)

  • v0.6.39(May 10, 2018)

    Fixed

    • Fix bug: task generator works incorrectly
    • Fix bug: pypi package misses http api html file
    • Fix bug: dictionary changed size during iteration in stat logging
    • Fix bug: multiple errors in urllib3 transport and threaded network service
    • Fix short names of errors in stat logging
    • Improve error handling in urllib3 transport
    • Fix #299: multi-added errors
    • Fix #285: pyquery extension parses html incorrectly
    • Fix #267: normalize handling of too many redirect error
    • Fix #268: fix processing of utf cookies
    • Fix #241: form_fields() fails on some HTML forms
    • Fix normalize_unicode issue in debug post method
    • Fix #323: urllib3 transport fails with UnicodeError on some invalid URLs
    • Fix #31: support for multivalue form inputs
    • Fix #328, fix #67: remove hard link between document and grab
    • Fix #284: option headers affects content of common_headers
    • Fix #293: processing non-latin chars in Location header
    • Fix #324: refactor response header processing

    Changed

    • Refactor Spider into set of async. services
    • Add certifi dependency into grab[full] setup target
    • Fix #315: use psycopg2-binary package for postgres cache
    • Related to #206: do not use connection_reuse=False for proxy connections in spider

    Removed

    • Remove cache timeout option
    • Remove structured extension
    Source code(tar.gz)
    Source code(zip)
  • v0.6.38(May 10, 2018)

    Fixed

    • Fix "error:None" in spider rps logging
    • Fix race condition bug in task generator

    Added

    • Add original_exc attribute to GrabNetworkError (and subclasses) that points to original exception

    Changed

    • Remove IOError from the ancestors of GrabNetworkError
    • Add default values to --spider-transport and --grab-transport options of crawl script
    Source code(tar.gz)
    Source code(zip)
  • v0.6.37(May 10, 2018)

    Added

    • Add --spider-transport and --grab-transport options to crawl script
    • Add SOCKS5 proxy support in urllib3 transport

    Fixed

    • Fix #237: urllib3 transport fails without pycurl installed
    • Fix bug: incorrect spider request logging when cache is enabled
    • Fix bug: crawl script fails while trying to process a lock key
    • Fix bug: urllib3 transport fails while trying to throw GrabConnectionError exception
    • Fix bug: Spider add_task method fails while trying to log invalid URL error

    Removed

    • Remove obsoleted hammer_mode and hammer_timeout config options
    Source code(tar.gz)
    Source code(zip)
  • v0.6.36(May 10, 2018)

    Added

    • Add pylint to default test set

    Fixed

    • Fix #229: using deprecated response object inside Grab

    Removed

    • Remove spider project template and start_project script
    Source code(tar.gz)
    Source code(zip)