A minimal HTTP client. ⚙️


HTTP Core


Do one thing, and do it well.

The HTTP Core package provides a minimal low-level HTTP client, which does one thing only: sending HTTP requests.

It does not provide any high-level model abstractions over the API; it does not handle redirects, multipart uploads, building authentication headers, transparent HTTP caching, URL parsing, session cookie handling, content or charset decoding, JSON handling, environment-based configuration defaults, or any of that jazz.

Some things HTTP Core does do:

  • Sending HTTP requests.
  • Sync and async interfaces.
  • HTTP/1.1 and HTTP/2 support.
  • Async backend support for asyncio, trio and curio.
  • Automatic connection pooling.
  • HTTP(S) proxy support.

Installation

For HTTP/1.1 only support, install with...

$ pip install httpcore

For HTTP/1.1 and HTTP/2 support, install with...

$ pip install httpcore[http2]

Quickstart

Here's an example of making an HTTP GET request using httpcore...

import httpcore

with httpcore.SyncConnectionPool() as http:
    status_code, headers, stream, ext = http.request(
        method=b'GET',
        url=(b'https', b'example.org', 443, b'/'),
        headers=[(b'host', b'example.org'), (b'user-agent', b'httpcore')]
    )

    try:
        body = b''.join([chunk for chunk in stream])
    finally:
        stream.close()

    print(status_code, body)

Or, using async...

async with httpcore.AsyncConnectionPool() as http:
    status_code, headers, stream, ext = await http.arequest(
        method=b'GET',
        url=(b'https', b'example.org', 443, b'/'),
        headers=[(b'host', b'example.org'), (b'user-agent', b'httpcore')]
    )

    try:
        body = b''.join([chunk async for chunk in stream])
    finally:
        await stream.aclose()

    print(status_code, body)

Motivation

You probably don't want to be using HTTP Core directly. It might make sense if you're writing something like a proxy service in Python, and you just want something at the lowest possible level, but more typically you'll want to use a higher level client library, such as httpx.

The motivation for httpcore is:

  • To provide a reusable low-level client library that other packages can then build on top of.
  • To provide a really clear interface split between the networking code and client logic, so that each is easier to understand and reason about in isolation.
Comments
  • anyio.BrokenResourceError exception not caught?


    Hello,

    I spotted several cases where anyio exceptions made my program crash.

    The error mentioned every time is anyio.BrokenResourceError, but the traceback may change.

    I collected the following ones:

    Future exception was never retrieved
    future: <Future finished exception=BrokenResourceError()>
    Traceback (most recent call last):
      File "/usr/lib64/python3.8/asyncio/selector_events.py", line 848, in _read_ready__data_received
        data = self._sock.recv(self.max_size)
    ConnectionResetError: [Errno 104] Connection reset by peer
    
    The above exception was the direct cause of the following exception:
    
    anyio.BrokenResourceError
    
    Future exception was never retrieved
    future: <Future finished exception=BrokenResourceError()>
    Traceback (most recent call last):
      File "/usr/lib64/python3.8/asyncio/selector_events.py", line 848, in _read_ready__data_received
        data = self._sock.recv(self.max_size)
    TimeoutError: [Errno 110] Connection timed out
    
    The above exception was the direct cause of the following exception:
    
    anyio.BrokenResourceError
    
    Future exception was never retrieved
    future: <Future finished exception=BrokenResourceError()>
    Traceback (most recent call last):
      File "/usr/lib64/python3.8/asyncio/selector_events.py", line 848, in _read_ready__data_received
        data = self._sock.recv(self.max_size)
    ConnectionResetError: [Errno 104] Connection reset by peer
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/home/devloop/.local/share/virtualenvs/wapiti-p7I6n6KS/lib/python3.8/site-packages/httpcore/_backends/anyio.py", line 60, in read
        return await self.stream.receive(n)
      File "/home/devloop/.local/share/virtualenvs/wapiti-p7I6n6KS/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 1093, in receive
        raise self._protocol.exception
    anyio.BrokenResourceError
    

    What I was expecting: an httpx exception instead (RequestError).
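    A likely direction for a fix (and the pattern httpcore uses elsewhere for translating backend errors) is to wrap backend reads in an exception-mapping context manager, so that anyio.BrokenResourceError surfaces as an httpcore read error that httpx can then translate into a RequestError. A minimal sketch of that pattern; the names below are illustrative, not httpcore's exact code:

```python
import contextlib

class ReadError(Exception):
    """Illustrative high-level error raised when a network read fails."""

@contextlib.contextmanager
def map_exceptions(mapping):
    # Re-raise any listed low-level exception as its mapped high-level
    # equivalent, preserving the original exception as __cause__.
    try:
        yield
    except Exception as exc:
        for from_exc, to_exc in mapping.items():
            if isinstance(exc, from_exc):
                raise to_exc(exc) from exc
        raise

def read(sock_recv):
    # Example backend read that maps ConnectionResetError to ReadError;
    # a real fix would also map anyio.BrokenResourceError here.
    with map_exceptions({ConnectionResetError: ReadError}):
        return sock_recv()
```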

    Used libraries and versions:

    aiocache==0.11.1
    anyio==3.2.1
    async-timeout==3.0.1
    httpcore==0.13.6
    httpx==0.18.2
    

    Python 3.8.10

    OS: Linux (openSUSE Tumbleweed)

    I managed to reproduce with the following script. Unfortunately it doesn't happen for every website.

    import asyncio
    import signal
    import sys
    
    import httpx
    
    MAX_TASKS = 30
    
    stop_event = asyncio.Event()
    
    
    def stop_attack_process():
        global stop_event
        print("Stopping tasks")
        stop_event.set()
    
    
    class Buster:
        def __init__(self, root_url: str, payloads_file: str, event: asyncio.Event):
            self._client = httpx.AsyncClient(timeout=10)
            self._root_url = root_url
            self._payloads_file = payloads_file
            self._stop_event = event
            self.network_errors = 0
    
        async def close(self):
            await self._client.aclose()
    
        async def check_url(self, url):
            response = await self._client.get(url)
            if response.status_code == 200:
                return True, url
            return False, url
    
        async def brute(self):
            tasks = set()
            pending_count = 0
            payload_iterator = open(self._payloads_file, errors="ignore")
    
            while True:
                if pending_count < MAX_TASKS and not self._stop_event.is_set():
                    try:
                        candidate = next(payload_iterator)
                    except StopIteration:
                        pass
                    else:
                        candidate = candidate.strip()
                        if not candidate:
                            continue
    
                        url = self._root_url + candidate
                        task = asyncio.create_task(self.check_url(url))
                        tasks.add(task)
    
                if not tasks:
                    break
    
                done_tasks, pending_tasks = await asyncio.wait(
                    tasks,
                    timeout=0.01,
                    return_when=asyncio.FIRST_COMPLETED
                )
                pending_count = len(pending_tasks)
                for task in done_tasks:
                    try:
                        result, url = await task
                    except httpx.RequestError:
                        self.network_errors += 1
                    else:
                        if result:
                            print(f"Found {url}")
                    tasks.remove(task)
    
                if self._stop_event.is_set():
                    print("pending tasks:", pending_count)
                    for task in pending_tasks:
                        task.cancel()
                        tasks.remove(task)
    
    
    async def main(root_url: str):
        global stop_event
        filename = "busterPayloads.txt"
        loop = asyncio.get_event_loop()
        loop.add_signal_handler(signal.SIGINT, stop_attack_process)
        buster = Buster(root_url, filename, stop_event)
        await buster.brute()
        loop.remove_signal_handler(signal.SIGINT)
        await buster.close()
    
    if __name__ == "__main__":
        asyncio.run(main(sys.argv[1]))
    
    opened by devl00p 43
  • Local address support.


    Based on the discussion in #88.

    This isn't really complete yet. The open issues are (at least):

    • There's no testing. I haven't even done any manual testing; I mostly wanted to verify that the changes look like a good start.

    • The raised exceptions should be improved.

    • mypy reports a bogus error. See https://github.com/python/typeshed/issues/4116

    • It would be nice to support passing a raw string as local_addr (if you just want to specify an address, and don't care about port). I haven't done that yet.
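    As a sketch of what the sync side of this feature amounts to: the stdlib's socket.create_connection already accepts a source_address pair to bind the client side of the connection (asyncio's loop.create_connection similarly takes a local_addr argument). The helper name below is illustrative, not the PR's code:

```python
import socket

def open_tcp_stream(host, port, local_address=None):
    # local_address is an optional (host, port) pair to bind the client
    # side of the connection to; port 0 means "any available port".
    return socket.create_connection((host, port), source_address=local_address)
```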

    enhancement 
    opened by bwelling 28
  • Write Error on reusing potentially closed connection


    With a service deployed on the httpcore-based version of httpx, I started receiving the following error:

    <class 'httpcore._exceptions.WriteError'>: [Errno 104] Connection reset by peer

    The only problem is that this happens when connecting to a service that other applications have no problems with. The connection is (or should be) HTTP/1.1 in this case.
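    One common mitigation for this race (the server closing a connection while it sits idle in the pool) is to retry the request once on a fresh connection when the write fails. A hedged sketch of that pattern, not httpcore's code:

```python
class WriteError(Exception):
    """Illustrative error raised when sending the request fails."""

def request_with_retry(send, retries=1):
    # If the server closed the connection while it sat idle in the pool,
    # the first write fails; retrying uses a fresh connection. As-is this
    # is only safe for idempotent requests.
    for attempt in range(retries + 1):
        try:
            return send()
        except WriteError:
            if attempt == retries:
                raise
```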

    bug 
    opened by victoraugustolls 25
  • Tweak dropped connection detection


    Closes #182

    urllib3 had to deal with an issue similar to #182 a while ago: https://github.com/urllib3/urllib3/issues/589

    Their approach was to backport a bunch of 3.5+ logic to maintain 2.7 compatibility, but we're lucky enough to be able to use that logic from the stdlib in the form of the selectors module, a "higher-level" alternative to select that handles cross-platform subtleties among other things.

    This PR switches the sync backend to use selectors.select() instead of select.select() for detecting whether a connection was dropped (i.e. whether the underlying socket has become immediately readable).

    Confirmed that this solves #182, since the CI build passes here but fails on #219.
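    The core of the check is small; a sketch of the selectors-based version (illustrative, not the PR's exact code):

```python
import selectors

def is_socket_readable(sock):
    # An idle keep-alive socket that polls as readable either has
    # unexpected buffered data or was closed by the peer; in both cases
    # the connection should not be reused.
    sel = selectors.DefaultSelector()
    try:
        sel.register(sock, selectors.EVENT_READ)
        return bool(sel.select(timeout=0))
    finally:
        sel.close()
```

    Using selectors.DefaultSelector rather than select.select avoids the platform-specific limits of select (such as the FD_SETSIZE cap on file descriptors that #182 ran into).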

    bug 
    opened by florimondmanca 21
  • Support secure proxies by implementing HTTPS-in-HTTPS.


    I can't establish a connection to the target via an HTTPS proxy.

    :me: <-[ssh-tunnel]-> :protected-host: <-> :squid-proxy: <-> :target:

    Here is my proxy settings for AsyncClient:

    PROXIES = {
        'https': 'https://user:[email protected]:8443',
    }
    

    And I try to connect to 'https://target-hostname.com:8080' via those proxies.

    When I use uvloop, everything works fine. When I use asyncio, the connection cannot be established and fails with this traceback:

    Traceback
    Traceback (most recent call last):
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpcore/_async/http_proxy.py", line 239, in _tunnel_request
        await proxy_connection.start_tls(host, timeout)
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpcore/_async/connection.py", line 177, in start_tls
        self.socket = await self.connection.start_tls(hostname, timeout)
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpcore/_async/http11.py", line 87, in start_tls
        self.socket = await self.socket.start_tls(hostname, self.ssl_context, timeout)
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpcore/_backends/asyncio.py", line 113, in start_tls
        transport = await asyncio.wait_for(
      File "/Users/spumer/.pyenv/versions/3.8.6/lib/python3.8/asyncio/tasks.py", line 455, in wait_for
        return await fut
      File "/Users/spumer/.pyenv/versions/3.8.6/lib/python3.8/asyncio/base_events.py", line 1181, in start_tls
        raise TypeError(
    TypeError: transport <asyncio.sslproto._SSLProtocolTransport object at 0x108d3bbe0> is not supported by start_tls()
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpx/_exceptions.py", line 326, in map_exceptions
        yield
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpx/_client.py", line 1502, in _send_single_request
        (status_code, headers, stream, ext,) = await transport.arequest(
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpcore/_async/http_proxy.py", line 124, in arequest
        return await self._tunnel_request(
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpcore/_async/http_proxy.py", line 242, in _tunnel_request
        raise ProxyError(exc)
    httpcore.ProxyError: transport <asyncio.sslproto._SSLProtocolTransport object at 0x108d3bbe0> is not supported by start_tls()
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/Users/spumer/example/probe.py", line 23, in <module>
        asyncio.run(main())
      File "/Users/spumer/.pyenv/versions/3.8.6/lib/python3.8/asyncio/runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "/Users/spumer/.pyenv/versions/3.8.6/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
        return future.result()
      File "/Users/spumer/example/probe.py", line 15, in main
        fns = await client_factory.create_client()
      File "/Users/spumer/example/src/npd/fns_npd/client.py", line 166, in create_client
        message = await self._send_auth_request()
      File "/Users/spumer/example/src/npd/fns_npd/client.py", line 180, in _send_auth_request
        message = await self._auth_client.service.GetMessage(Message=value)
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/zeep/proxy.py", line 64, in __call__
        return await self._proxy._binding.send_async(
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/zeep/wsdl/bindings/soap.py", line 156, in send_async
        response = await client.transport.post_xml(
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/zeep/transports.py", line 230, in post_xml
        response = await self.post(address, message, headers)
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/zeep/transports.py", line 215, in post
        response = await self.client.post(
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpx/_client.py", line 1633, in post
        return await self.request(
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpx/_client.py", line 1371, in request
        response = await self.send(
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpx/_client.py", line 1406, in send
        response = await self._send_handling_auth(
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpx/_client.py", line 1444, in _send_handling_auth
        response = await self._send_handling_redirects(
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpx/_client.py", line 1476, in _send_handling_redirects
        response = await self._send_single_request(request, timeout)
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpx/_client.py", line 1502, in _send_single_request
        (status_code, headers, stream, ext,) = await transport.arequest(
      File "/Users/spumer/.pyenv/versions/3.8.6/lib/python3.8/contextlib.py", line 131, in __exit__
        self.gen.throw(type, value, traceback)
      File "/Users/spumer/example/.venv/lib/python3.8/site-packages/httpx/_exceptions.py", line 343, in map_exceptions
        raise mapped_exc(message, **kwargs) from exc  # type: ignore
    httpx.ProxyError: transport <asyncio.sslproto._SSLProtocolTransport object at 0x108d3bbe0> is not supported by start_tls()
    
    wontfix 
    opened by spumer 19
  • Migrate docs to Sphinx + MyST


    Refs https://github.com/encode/httpx/discussions/1220

    There are pending discussions on "MkDocs vs Sphinx" for HTTPX.

    This PR goes ahead and experiments with migrating to modern Sphinx for HTTPCore.

    This is a fully functional PR. I find the result so much better from a reader experience perspective that I went ahead and deployed, see https://www.encode.io/httpcore/.

    Preview

    (Screenshot of the deployed docs, 2021-03-21.)

    Pieces

    • Sphinx
    • sphinx-autobuild: "watch" behavior for the scripts/docs serve script.
    • sphinx.ext.autodoc: API autodoc
      • Type hints are supported — used description mode so type hints show in param descriptions (similarly to our ad-hoc style) rather than in signatures (which I find unreadable).
        • Caveat: type hints aren't added in class constructors params. But this is tracked by Sphinx and should be fixed soon by https://github.com/sphinx-doc/sphinx/pull/8539.
    • sphinx.ext.viewcode: automatic [source] links in API reference and source pages
    • MyST: Markdown support for Sphinx — allows us to keep writing docs in Markdown.
    • Furo (cc @pradyunsg): theming (inspired by mkdocs-material and gitbooks). Also seems to support light/dark theming?
    • ghp-import: GitHub Pages deployment (used by MkDocs too)

    Some resources I used to help me set things up:

    • https://noumenal.es/notes/sphinx/ - Carlton Gibson's notes on Sphinx + MyST.
    • MyST docs - They contain helpful "how-to" guides and hints for setting up and using eg autodoc and cross-references.
    • Lots of web searching to figure out the specifics of autodoc syntax / viewcode / etc. :-)

    Highlights

    We now get the following:

    • Cross-references (Sphinx "interlinks") to code items (classes, methods, etc) from prose, function signatures, etc. Just look at https://www.encode.io/httpcore/api.html — the API reference is so much more useful and usable as a result.
    • All of Sphinx's default niceties: zero-config syntax highlighting, source links, built-in search index, etc.

    What changed in the development workflow:

    • Docstrings: switched from our ad-hoc style to the NumPy style. I chose it because it is visually very close to our initial style, while still being easy to read (imo). The Sphinx napoleon extension parses these docstrings to generate the corresponding output.
      • Caveat: prose in docstrings is reST, not Markdown. So inline code should be written as ``code`` (double backticks) rather than `code` (single backtick), and code fences should be written as indented code preceded by a line ending with :: (see Example:: in _bytestreams.py).
    • Docs prose: instead of mkautodoc blocks such as ::: mkautodoc, we now have to use MyST-Parser special :::{directive} style combined with Sphinx directive contents inside it. It's a bit odd at first, but it makes sense. MyST supports reStructuredText things like refs in MD-friendly forms, eg class references can be done via [SomeClass](httpcore.SomeClass) (regular links).
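    For instance, a NumPy-style docstring with reST inline code and an indented example block looks like this (hypothetical function, not from the codebase):

```python
def fetch(url, timeout=None):
    """Fetch a URL and return the response body.

    Parameters
    ----------
    url : str
        The URL to fetch, e.g. ``"https://example.org"``.
    timeout : float, optional
        Time limit in seconds, or ``None`` for no limit.

    Returns
    -------
    bytes
        The raw response body.

    Example::

        body = fetch("https://example.org")
    """
```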

    What DID NOT change:

    • Scripts: scripts/docs serve and scripts/docs gh-deploy work the same way they did when using mkdocs. To deploy to GH Pages, we invoke ghp-import directly (MkDocs' gh-deploy is a wrapper around that) in the shell script. The resulting script remains pretty lightweight. :-)
    documentation 
    opened by florimondmanca 15
  • UDS Support


    This httpx changelog PR, as part of a movement to sunset usage of urllib3 in favor of httpcore, mentions UDS support as a temporary casualty of the process, because "maintaining that would have meant having to push back our work towards a 1.0 release".

    Regarding putting this support back into httpcore, there has been recent work done in this proof of concept (thanks @florimondmanca ) that suggests that including UDS support inside the library would not be an overwhelming task.

    I personally use a lot of inter-service communication via unix sockets so this would be (opinion) a welcome addition as a first-class citizen of the new release.

    I am brand new to this library; I have only used pre-httpcore httpx. After upgrading past 0.12 of httpx, I was surprised that my code could no longer use the uds= keyword when creating clients, enough to blow up on the trio gitter (apologies). I now understand that keeping to a release schedule and making everyone happy is an extremely hard task!

    @tomchristie suggested this issue be created to start a discussion here. Go!

    enhancement 
    opened by parity3 15
  • Enable Mypy `--strict` internally


    I think being compatible with mypy --strict benefits end users who are also using it. If https://github.com/encode/httpcore/pull/513 is merged, there isn't much work left to be compatible with strict mode.

    See also https://github.com/encode/httpcore/issues/512

    opened by michaeloliverx 13
  • Switching to `anyio` as the default backend when running under `asyncio`.


    Having been bitten by some rough edges in asyncio's SSL support, and having taken a look over anyio's TLSStream implementation (which just plain makes sense*), I'm now very warm on the idea of us switching over to anyio as the default backend for the asyncio case.

    We could consider deprecating and later removing the native curio.py and asyncio.py modules, but we don't necessarily need to do that to start with.

    First step here would be switching the default, and issuing a release, in order to make sure we hit any unexpected issues that might crop up as a result.

    Paging @agronholm. 😀


    *: In contrast to this, what the heck is loop.start_tls? Why is that a property of the event loop? Etc.

    opened by tomchristie 13
  • Advanced connection options.


    Prompted by a comment from @hynek

    We ought to add more __init__ controls to AsyncConnectionPool(...) and SyncConnectionPool(...) for advanced connection options, including...

    • UDS support. https://github.com/encode/httpx/pull/511, https://github.com/encode/httpx/pull/726
    • System IP to use https://github.com/encode/httpx/issues/755
    • Socket family and related options.

    It'd be useful to do a comprehensive review of...

    • What connection options are offered by asyncio?
    • What connection options are offered by stdlib's standard sync networking API?
    • What connection options are offered by trio?

    Compare against controls available in urllib3, aiohttp.

    opened by tomchristie 13
  • Fix tunnel proxy: HTTP requests only


    Fixes #54. Split from #55, excluding support for tunneling HTTPS requests. Changes:

    • First establish an HTTP connection to our proxy, since CONNECT is an HTTP method handled by h11.
    • Once successful, discard that connection but prevent closing the socket.
    • Allow passing a socket to the AsyncHTTPConnection and create a new h11 connection to the target.
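    The CONNECT handshake behind these steps is a plain HTTP/1.1 request; a sketch of the bytes involved (illustrative, not the PR's h11-based code):

```python
def build_connect_request(host, port):
    # CONNECT asks the proxy to open a raw TCP tunnel to host:port.
    # After the proxy replies "200 Connection established", the same
    # socket carries the end-to-end HTTP (or TLS) traffic.
    target = f"{host}:{port}".encode("ascii")
    return b"CONNECT " + target + b" HTTP/1.1\r\nHost: " + target + b"\r\n\r\n"
```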
    opened by yeraydiazdiaz 13
  • Add `DEBUG` level logging.


    Related to https://github.com/encode/httpx/pull/2547

    Example...

    import httpcore
    import logging
    
    
    logging.basicConfig(
        format="%(levelname)s [%(asctime)s] %(name)s - %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
        level=logging.DEBUG
    )
    
    httpcore.request('GET', 'https://www.example.com')
    

    Output...

    DEBUG [2023-01-06 10:41:02] httpcore - connection.connect_tcp.started host='www.example.com' port=443 local_address=None timeout=None
    DEBUG [2023-01-06 10:41:02] httpcore - connection.connect_tcp.complete return_value=<httpcore.backends.sync.SyncStream object at 0x108a11978>
    DEBUG [2023-01-06 10:41:02] httpcore - connection.start_tls.started ssl_context=<ssl.SSLContext object at 0x1089ecd68> server_hostname='www.example.com' timeout=None
    DEBUG [2023-01-06 10:41:02] httpcore - connection.start_tls.complete return_value=<httpcore.backends.sync.SyncStream object at 0x108a11940>
    DEBUG [2023-01-06 10:41:02] httpcore - http11.send_request_headers.started request=<Request [b'GET']>
    DEBUG [2023-01-06 10:41:02] httpcore - http11.send_request_headers.complete
    DEBUG [2023-01-06 10:41:02] httpcore - http11.send_request_body.started request=<Request [b'GET']>
    DEBUG [2023-01-06 10:41:02] httpcore - http11.send_request_body.complete
    DEBUG [2023-01-06 10:41:02] httpcore - http11.receive_response_headers.started request=<Request [b'GET']>
    DEBUG [2023-01-06 10:41:02] httpcore - http11.receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Age', b'541826'), (b'Cache-Control', b'max-age=604800'), (b'Content-Type', b'text/html; charset=UTF-8'), (b'Date', b'Fri, 06 Jan 2023 10:41:02 GMT'), (b'Etag', b'"3147526947+ident"'), (b'Expires', b'Fri, 13 Jan 2023 10:41:02 GMT'), (b'Last-Modified', b'Thu, 17 Oct 2019 07:18:26 GMT'), (b'Server', b'ECS (nyb/1D1B)'), (b'Vary', b'Accept-Encoding'), (b'X-Cache', b'HIT'), (b'Content-Length', b'1256')])
    DEBUG [2023-01-06 10:41:02] httpcore - http11.receive_response_body.started request=<Request [b'GET']>
    DEBUG [2023-01-06 10:41:02] httpcore - http11.receive_response_body.complete
    DEBUG [2023-01-06 10:41:02] httpcore - http11.response_closed.started
    DEBUG [2023-01-06 10:41:02] httpcore - http11.response_closed.complete
    

    There's different design decisions that we could make here about...

    • The logger name. More granular logger names like httpcore.connection, httpcore.http11, httpcore.http2, would probably be preferable really.
    • Additional logging points. Connection close, and pool information stand out here.
    • INFO vs DEBUG. We could have INFO level logs just for the basic call-in/exit points of ConnectionPool.handle_request.
    • Improving the arguments passed. In plenty of places we're internally just passing request around, which is sufficient, but leads to less clear logging and trace info. As an example send_request_headers could be broken down to method, target, headers, timeout.
    • Improved __repr__ implementations. For example <httpcore.Request 'GET' 'https://www.example.com'>, or <httpcore.SyncStream [bytes read: 0, bytes written: 0, OPEN]>

    But this pull request is a minimal starting point that just hooks into our existing trace without making any further changes.

    enhancement 
    opened by tomchristie 0
  • Teardown errors when tasks are cancelled or time out


    Over at @anthropics we're keen users of httpx on Trio, and I've noticed that we sometimes have issues where cancelling a task doesn't get all the teardown logic right. For example, if you try to await (or async for/with) in an except BaseException: or finally: block, Trio will immediately re-raise a Cancelled exception instead of running your teardown logic.

    This can be pretty subtle, since it's happening as you abandon a task anyway, but can cause various subtle or unsubtle problems that motivated us to build flake8-trio, including the TRIO102 error for this case. I'm pretty sure that httpcore has multiple such issues, e.g.

    https://github.com/encode/httpcore/blob/f0657cb43cb707d1672b93b61bb53b4cfb166820/httpcore/_async/connection_pool.py#L251-L253

    doesn't look cancel-safe to me. Do you consider "this doesn't close the connection properly when cancelled" a bug? If so, please consider this a fairly general bug-report!

    (I've also found it infeasible to consistently get this right without tool support. If you want to try flake8-trio many of the checks are framework-agnostic; and if it'd be useful for httpcore and httpx we'd be happy to add anyio support to the linter 🙂)
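    Trio's idiom for this is to run teardown inside a shielded CancelScope so a pending cancellation cannot abandon it. The asyncio analogue below sketches the same idea with asyncio.shield (illustrative; the names are made up for the example):

```python
import asyncio

async def close_connection(log):
    await asyncio.sleep(0)          # teardown itself needs an await
    log.append("closed")

async def worker(log):
    try:
        await asyncio.sleep(10)     # the work that gets cancelled
    finally:
        # Shield the teardown so the in-flight cancellation cannot
        # abandon it part-way through.
        await asyncio.shield(close_connection(log))

async def main():
    log = []
    task = asyncio.create_task(worker(log))
    await asyncio.sleep(0)          # let the worker start
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return log                      # teardown completed despite the cancel
```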

    opened by Zac-HD 1
  • If writing the complete request fails, then still attempt to read a response.


    Handle cases like HTTP 413, where the request write fails part way through sending, but a properly formed HTTP error response is then sent.

    Raised in discussion https://github.com/encode/httpx/discussions/2503
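    The intended behaviour can be sketched as: swallow the write failure, then attempt the read anyway (illustrative pattern, not httpcore's code):

```python
class WriteError(Exception):
    """Illustrative error raised when sending the request body fails."""

def send_request(write_body, read_response):
    # A server may reject the body part-way through the upload (e.g.
    # 413 Payload Too Large) while still sending a well-formed error
    # response, so attempt the read even if the write failed.
    try:
        write_body()
    except WriteError:
        pass
    return read_response()
```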

    bug 
    opened by tomchristie 2
  • Document and cleanup `scripts`.


    Prompted by @agronholm on our chat channel.

    We should add a README to our scripts directory, explaining that we're using the GitHub "scripts to rule them all" pattern, and showing how that ties in with our test and publish workflows.

    That'd also be a good opportunity for a bit of clean-up.

    It might be neatest for our scripts directory to strictly have a 1:1 mapping with the steps we have in our workflows.

    Here's how our two workflows currently look...

          - name: "Install dependencies"
            run: "scripts/install"
          - name: "Run linting checks"
            run: "scripts/check"
          - name: "Build package & docs"
            run: "scripts/build"
          - name: "Run tests"
            run: "scripts/test"
          - name: "Enforce coverage"
            run: "scripts/coverage"
    
          - name: "Install dependencies"
            run: "scripts/install"
          - name: "Build package & docs"
            run: "scripts/build"
          - name: "Publish to PyPI & deploy docs"
            run: "scripts/publish"
    

    And our scripts...

    $ ls -1 ./scripts/
    build
    check
    clean  # This doesn't exist in a workflow.
    coverage  # Called into by `test` when running locally.
    install
    lint  # This doesn't exist in a workflow.
    publish
    test
    unasync  # Called into by `check` and `lint`
    

    Perhaps a neater approach would be...

    • scripts/install - When running locally this should always start by removing any existing venv, coverage and build directories. We don't need scripts/clean then.
    • scripts/check and scripts/lint - I'd suggest we just have scripts/lint. Run scripts/lint --fix to update files in-place.
    • scripts/unasync - Should be a separate step in the workflow. Run scripts/unasync --fix to update files in-place.

    So that each script is always a single workflow step.

    opened by tomchristie 0
  • Bump pytest-trio from 0.7.0 to 0.8.0


    Bumps pytest-trio from 0.7.0 to 0.8.0.

    Commits
    • 250ec45 Bump version to 0.8.0
    • 2ba99e6 Merge pull request #130 from Zac-HD/use-exceptiongroup
    • 6fbe63e Adjust warnings config
    • 648f9e9 Unwrap magic exceptions from single-leaf groups
    • ed732d3 Remove use of MultiError
    • cbd6197 Merge pull request #127 from python-trio/spdx-compliant-license
    • ea0cd1c Merge branch 'master' into spdx-compliant-license
    • 9f7dd7f Merge pull request #129 from Zac-HD/general-maintenance
    • f788fe7 TEMP: ignore warnings
    • ce850d6 Create 129.misc.rst
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0
  • recv on httpcore/backends/sync.py raise BlockingIOError error when tcp keep alive enabled

    recv on httpcore/backends/sync.py raise BlockingIOError error when tcp keep alive enabled

    I'm using httpx with HTTP/2, which uses this module as a dependency (Python 3.9.14 embedded in an Android app).

    In the past an httpx dev suggested that I enable TCP keep-alive in this way: https://github.com/CastagnaIT/plugin.video.netflix/commit/86e15b610f5bd6c30742f59cb876d58d5ea71d1a (I need it, otherwise I always receive connection reset errors). This always worked well, but the httpx version was 0.18.2 and the related httpcore dependency was 0.13.6.

    Now I am updating httpx to the latest version, 0.23.0, which requires httpcore 0.15.0. I have reapplied the TCP keep-alive in httpcore as follows: https://github.com/CastagnaIT/plugin.video.netflix/pull/1485/commits/30f6f64b4cb27f707419662dbd88a6cc03c41da5

    But the problem is that httpcore now always raises a BlockingIOError, which did not happen in the old version 0.13.6.

    Full stack trace here: https://paste.kodi.tv/urozubekiv stacktrace.txt

    I have no idea how to fix this. It seems the recent 0.14 rework may have caused a kind of regression.

    opened by CastagnaIT 2
Releases(0.16.3)
  • 0.16.3(Dec 20, 2022)

    0.16.3 (December 20th, 2022)

    • Allow ws and wss schemes. Allows us to properly support websocket upgrade connections. (#625)
    • Forwarding HTTP proxies use a connection-per-remote-host. Required by some proxy implementations. (#637)
    • Don't raise RuntimeError when closing a connection pool with active connections. Removes some error cases when cancellations are used. (#631)
    • Lazy import anyio, so that it's no longer a hard dependency, and isn't imported if unused. (#639)
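    The lazy-import change above follows a common Python pattern: defer importing an optional dependency until the code path that needs it actually runs. A minimal generic sketch of the idea (illustrative only; this is not httpcore's actual code):

    ```python
    # A generic sketch of the lazy-import pattern: the import statement lives
    # inside the function, so the module is only loaded on first call.
    def parse_json(text):
        import json  # deferred import; only paid for if this code path runs
        return json.loads(text)

    print(parse_json('{"ok": true}'))
    ```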
    Source code(tar.gz)
    Source code(zip)
  • 0.16.2(Nov 25, 2022)

    0.16.2 (November 25th, 2022)

    • Revert 'Fix async cancellation behaviour', which introduced race conditions. (#627)
    • Raise RuntimeError if attempting to use UNIX domain sockets on Windows. (#619)
    Source code(tar.gz)
    Source code(zip)
  • 0.16.1(Nov 17, 2022)

  • 0.16.0(Nov 7, 2022)

  • 0.15.0(May 17, 2022)

    0.15.0 (May 17th, 2022)

    • Drop Python 3.6 support. (#535)
    • Ensure HTTP proxy CONNECT requests include timeout configuration. (#506)
    • Switch to explicit typing.Optional for type hints. (#513)
    • For trio map OSError exceptions to ConnectError. (#543)
    Source code(tar.gz)
    Source code(zip)
  • 0.14.7(Feb 4, 2022)

    0.14.7 (February 4th, 2022)

    • Requests which raise a PoolTimeout need to be removed from the pool queue. (#502)
    • Fix AttributeError that happened when a Socks5Connection was terminated. (#501)
    Source code(tar.gz)
    Source code(zip)
  • 0.14.6(Feb 1, 2022)

    0.14.6 (February 1st, 2022)

    • Fix SOCKS support for http:// URLs. (#492)
    • Resolve race condition around exceptions during streaming a response. (#491)
    Source code(tar.gz)
    Source code(zip)
  • 0.14.5(Jan 18, 2022)

    0.14.5 (January 18th, 2022)

    • SOCKS proxy support. (#478)
    • Add proxy_auth argument to HTTPProxy. (#481)
    • Improve error message on 'RemoteProtocolError' exception when the server disconnects without sending a response. (#479)
    Source code(tar.gz)
    Source code(zip)
  • 0.14.4(Jan 5, 2022)

    0.14.4 (January 5th, 2022)

    • Support HTTP/2 on HTTPS tunnelling proxies. (#468)
    • Fix proxy headers missing on HTTP forwarding. (#456)
    • Only instantiate SSL context if required. (#457)
    • More robust HTTP/2 handling. (#253, #439, #440, #441)
    Source code(tar.gz)
    Source code(zip)
  • 0.14.3(Nov 17, 2021)

  • 0.14.2(Nov 16, 2021)

  • 0.14.1(Nov 12, 2021)

    0.14.1 (November 12th, 2021)

    • max_connections becomes optional. (Pull #429)
    • certifi is now included in the install dependencies. (Pull #428)
    • h2 is now strictly optional. (Pull #428)
    Source code(tar.gz)
    Source code(zip)
  • 0.14.0(Nov 11, 2021)

    0.14.0 (November 11th, 2021)

    The 0.14 release is a complete reworking of httpcore, comprehensively addressing some underlying issues in the connection pooling, as well as substantially redesigning the API to be more user friendly.

    Some of the lower-level API design also makes the components more easily testable in isolation, and the package now has 100% test coverage.

    See discussion #419 for a little more background.

    There are some other neat bits in there too, such as the "trace" extension, which gives a hook into inspecting the internal events that occur during the request/response cycle. This extension is needed for the HTTPX CLI, in order to...

    • Log the point at which the connection is established, and the IP/port on which it is made.
    • Determine if the outgoing request should log as HTTP/1.1 or HTTP/2, rather than having to assume it's HTTP/2 if the --http2 flag was passed. (Which may not actually be true.)
    • Log SSL version info / certificate info.

    Note that curio support is not currently available in 0.14.0. If you're using httpcore with curio please get in touch, so we can assess if we ought to prioritize it as a feature or not.
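    To illustrate the idea behind the "trace" extension, here is a minimal sketch of an event-hook mechanism in plain Python. The event names and the fake_request function are hypothetical, for illustration only; consult the httpcore documentation for the actual trace API:

    ```python
    # Sketch of a trace hook: the client calls a user-supplied function with a
    # named event and an info dict at each step of the request/response cycle.
    events = []

    def trace(event_name, info):
        events.append((event_name, info))

    def fake_request(trace_hook):
        # Hypothetical events, loosely modelled on connection setup and
        # request sending; a real client would emit these during actual I/O.
        trace_hook("connect_tcp.started", {"host": "example.org", "port": 443})
        trace_hook("connect_tcp.complete", {})
        trace_hook("send_request_headers.started", {"method": "GET"})
        return 200

    status = fake_request(trace)
    print(status, [name for name, info in events])
    ```

    A hook like this is what lets a caller observe when the connection was established, and whether the request actually went out over HTTP/1.1 or HTTP/2.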

    Source code(tar.gz)
    Source code(zip)
  • 0.13.7(Sep 13, 2021)

  • 0.13.6(Jun 15, 2021)

  • 0.13.5(Jun 14, 2021)

  • 0.13.4(Jun 9, 2021)

    0.13.4 (June 9th, 2021)

    Added

    • Improved error messaging when the URL scheme is missing, or a non-HTTP(S) scheme is used. (Pull #354)

    Fixed

    • Switched to anyio as the default backend implementation when running with asyncio. Resolves some awkward TLS timeout issues.
    Source code(tar.gz)
    Source code(zip)
  • 0.13.3(May 6, 2021)

    0.13.3 (May 6th, 2021)

    Added

    • Support HTTP/2 prior knowledge, using httpcore.SyncConnectionPool(http1=False). (Pull #333)

    Fixed

    • Handle cases where environment does not provide select.poll support. (Pull #331)
    Source code(tar.gz)
    Source code(zip)
  • 0.13.2(Apr 29, 2021)

    0.13.2 (April 29th, 2021)

    Added

    • Improve error message for specific case of RemoteProtocolError where server disconnects without sending a response. (Pull #313)
    Source code(tar.gz)
    Source code(zip)
  • 0.13.1(Apr 28, 2021)

    0.13.1 (April 28th, 2021)

    Fixed

    • More resilient testing for closed connections. (Pull #311)
    • Don't raise exceptions on ungraceful connection closes. (Pull #310)
    Source code(tar.gz)
    Source code(zip)
  • 0.13.0(Apr 21, 2021)

    0.13.0 (April 21st, 2021)

    The 0.13 release updates the core API in order to match the HTTPX Transport API, introduced in HTTPX 0.18 onwards.

    An example of making requests with the new interface is:

    with httpcore.SyncConnectionPool() as http:
        status_code, headers, stream, extensions = http.handle_request(
            method=b'GET',
            url=(b'https', b'example.org', 443, b'/'),
            headers=[(b'host', b'example.org'), (b'user-agent', b'httpcore')],
            stream=httpcore.ByteStream(b''),
            extensions={}
        )
        body = stream.read()
        print(status_code, body)
    

    Changed

    • The .request() method is now .handle_request(). (Pull #296)
    • The .arequest() method is now .handle_async_request(). (Pull #296)
    • The headers argument is no longer optional. (Pull #296)
    • The stream argument is no longer optional. (Pull #296)
    • The ext argument is now named extensions, and is no longer optional. (Pull #296)
    • The "reason" extension keyword is now named "reason_phrase". (Pull #296)
    • The "reason_phrase" and "http_version" extensions now use byte strings for their values. (Pull #296)
    • The httpcore.PlainByteStream() class becomes httpcore.ByteStream(). (Pull #296)

    Added

    • Streams now support a .read() interface. (Pull #296)

    Fixed

    • Task cancellation no longer leaks connections from the connection pool. (Pull #305)
    Source code(tar.gz)
    Source code(zip)
  • 0.12.3(Jan 28, 2021)

    0.12.3 (December 7th, 2020)

    Fixed

    • Abort SSL connections on close rather than waiting for remote EOF when using asyncio. (Pull #167)
    • Fix exception raised in case of connect timeouts when using the anyio backend. (Pull #236)
    • Fix Host header precedence for :authority in HTTP/2. (Pull #241, #243)
    • Handle extra edge case when detecting for socket readability when using asyncio. (Pull #242, #244)
    • Fix asyncio SSL warning when using proxy tunneling. (Pull #249)
    Source code(tar.gz)
    Source code(zip)
  • 0.12.2(Nov 20, 2020)

    0.12.2 (November 20th, 2020)

    Fixed

    • Properly wrap connect errors on the asyncio backend. (Pull #235)
    • Fix ImportError occurring on Python 3.9 when using the HTTP/1.1 sync client in a multithreaded context. (Pull #237)
    Source code(tar.gz)
    Source code(zip)
  • 0.12.1(Nov 7, 2020)

    0.12.1 (November 7th, 2020)

    Added

    • Add connect retries. (Pull #221)
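    Connect retries can be sketched generically as retry-with-exponential-backoff around a connect callable. This is an illustration of the idea only, not httpcore's implementation; connect_with_retries and flaky_connect are hypothetical names:

    ```python
    import time

    def connect_with_retries(connect, retries=3, base_delay=0.0):
        # Try the connect callable, retrying on ConnectionError with
        # exponentially increasing delays between attempts.
        for attempt in range(retries + 1):
            try:
                return connect()
            except ConnectionError:
                if attempt == retries:
                    raise
                time.sleep(base_delay * (2 ** attempt))

    # Simulated flaky endpoint: fails twice, then succeeds.
    attempts = {"count": 0}

    def flaky_connect():
        attempts["count"] += 1
        if attempts["count"] < 3:
            raise ConnectionError("connection refused")
        return "connected"

    print(connect_with_retries(flaky_connect, retries=3))
    ```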

    Fixed

    • Tweak detection of dropped connections, resolving an issue with open files limits on Linux. (Pull #185)
    • Avoid leaking connections when establishing an HTTP tunnel to a proxy has failed. (Pull #223)
    • Properly wrap OS errors when using trio. (Pull #225)
    Source code(tar.gz)
    Source code(zip)
  • 0.12.0(Oct 6, 2020)

    0.12.0 (October 6th, 2020)

    Changed

    • HTTP header casing is now preserved, rather than always sent in lowercase. (#216 and python-hyper/h11#104)

    Added

    • Add Python 3.9 to officially supported versions.

    Fixed

    • Gracefully handle a stdlib asyncio bug when a connection is closed while it is in a paused-for-reading state. (#201)
    Source code(tar.gz)
    Source code(zip)
  • 0.11.1(Sep 28, 2020)

  • 0.11.0(Sep 22, 2020)

    0.11.0 (September 22nd, 2020)

    The Transport API with 0.11.0 has a couple of significant changes.

    Firstly, we've changed the request interface in order to allow extensions, which will later enable us to support features such as trailing headers, HTTP/2 server push, and CONNECT/Upgrade connections.

    The interface changes from:

    def request(method, url, headers, stream, timeout):
        return (http_version, status_code, reason, headers, stream)
    

    To instead include an optional dictionary of extensions on the request and response:

    def request(method, url, headers, stream, ext):
        return (status_code, headers, stream, ext)
    

    Having an open-ended extensions point will allow us to later add support for various optional features that wouldn't otherwise be supported without these API changes.

    In particular:

    • Trailing headers support.
    • HTTP/2 server push.
    • sendfile.
    • Exposing the raw connection on CONNECT, Upgrade, and HTTP/2 bi-di streaming.
    • Exposing debug information out of the API, including template name and template context.

    Currently extensions are limited to:

    • request: timeout - Optional. Timeout dictionary.
    • response: http_version - Optional. Include the HTTP version used on the response.
    • response: reason - Optional. Include the reason phrase used on the response. Only valid with HTTP/1.*.

    See https://github.com/encode/httpx/issues/1274#issuecomment-694884553 for the history behind this.

    Secondly, the async version of request is now namespaced as arequest.

    This allows concrete transports to support both sync and async implementations on the same class.
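    A concrete transport implementing this 0.11-style interface can be sketched with a mock that returns a canned response. MockTransport and MockByteStream are hypothetical names for illustration; a real transport would perform network I/O:

    ```python
    class MockByteStream:
        # Minimal sync byte stream: iterable chunks plus a close() method.
        def __init__(self, content):
            self._content = content

        def __iter__(self):
            yield self._content

        def close(self):
            pass

    class MockTransport:
        def request(self, method, url, headers=None, stream=None, ext=None):
            # Return the (status_code, headers, stream, ext) tuple described above.
            response_ext = {"http_version": "HTTP/1.1", "reason": "OK"}
            response_headers = [(b"content-type", b"text/plain")]
            return 200, response_headers, MockByteStream(b"Hello, world!"), response_ext

    transport = MockTransport()
    status_code, headers, stream, ext = transport.request(
        method=b"GET", url=(b"https", b"example.org", 443, b"/")
    )
    try:
        body = b"".join(chunk for chunk in stream)
    finally:
        stream.close()
    print(status_code, body, ext["reason"])
    ```

    Because the signature is plain sync methods, the same class could grow an arequest variant alongside it, which is exactly what the namespacing change enables.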

    Added

    • Add curio support. (Pull #168)
    • Add anyio support, with backend="anyio". (Pull #169)

    Changed

    • Update the Transport API to use 'ext' for optional extensions. (Pull #190)
    • Update the Transport API to use .request and .arequest so implementations can support both sync and async. (Pull #189)
    Source code(tar.gz)
    Source code(zip)
  • 0.10.2(Aug 20, 2020)

    0.10.2 (August 20th, 2020)

    Added

    • Added Unix Domain Socket support. (Pull #139)

    Fixed

    • Always include the port on proxy CONNECT requests. (Pull #154)
    • Fix max_keepalive_connections configuration. (Pull #153)
    • Fix behaviour in HTTP/1.1 where server disconnects can be used to signal the end of the response body. (Pull #164)
    Source code(tar.gz)
    Source code(zip)
  • 0.10.1(Aug 7, 2020)

  • 0.10.0(Aug 7, 2020)

    0.10.0 (August 7th, 2020)

    The most notable change in the 0.10.0 release is that HTTP/2 support is now fully optional.

    Use either pip install httpcore for HTTP/1.1 support only, or pip install httpcore[http2] for HTTP/1.1 and HTTP/2 support.

    Added

    • HTTP/2 support becomes optional. (Pull #121, #130)
    • Add local_address=... support. (Pull #100, #134)
    • Add PlainByteStream, IteratorByteStream, AsyncIteratorByteStream. The AsyncByteStream and SyncByteStream classes are now pure interface classes. (#133)
    • Add LocalProtocolError, RemoteProtocolError exceptions. (Pull #129)
    • Add UnsupportedProtocol exception. (Pull #128)
    • Add .get_connection_info() method. (Pull #102, #137)
    • Add better TRACE logs. (Pull #101)
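    The split between pure interface classes and concrete byte streams can be illustrated with a rough sketch of an iterator-backed stream (illustrative only; httpcore's actual classes differ in detail):

    ```python
    class IteratorByteStream:
        # Wraps any iterator of bytes chunks so it can be consumed
        # as a sync response stream.
        def __init__(self, iterator):
            self._iterator = iterator

        def __iter__(self):
            for chunk in self._iterator:
                yield chunk

        def close(self):
            pass

    stream = IteratorByteStream(iter([b"Hello, ", b"world!"]))
    print(b"".join(stream))
    ```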

    Changed

    • max_keepalive is deprecated in favour of max_keepalive_connections. (Pull #140)

    Fixed

    • Improve handling of server disconnects. (Pull #112)
    Source code(tar.gz)
    Source code(zip)
Owner
Encode
Collaboratively funded software development.