Official Python low-level client for Elasticsearch

Overview

Python Elasticsearch Client

Official low-level client for Elasticsearch. Its goal is to provide common ground for all Elasticsearch-related code in Python; because of this it tries to be opinion-free and easily extensible.

Installation

Install the elasticsearch package with pip:

$ python -m pip install elasticsearch

If your application uses async/await in Python, you can install the client with the async extra:

$ python -m pip install elasticsearch[async]

Read more about how to use asyncio with this project.
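
A minimal sketch of the async client, assuming the async extra is installed and a cluster is reachable on localhost:9200:

import asyncio
from elasticsearch import AsyncElasticsearch

async def main():
    es = AsyncElasticsearch()   # same defaults as the synchronous client
    print(await es.info())      # every API call is awaitable
    await es.close()            # close the underlying connections

asyncio.run(main())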

Compatibility

The library is compatible with all Elasticsearch versions since 0.90.x, but you have to use a matching major version:

For Elasticsearch 7.0 and later, use the major version 7 (7.x.y) of the library.

For Elasticsearch 6.0 and later, use the major version 6 (6.x.y) of the library.

For Elasticsearch 5.0 and later, use the major version 5 (5.x.y) of the library.

For Elasticsearch 2.0 and later, use the major version 2 (2.x.y) of the library, and so on.

The recommended way to set your requirements in your setup.py or requirements.txt is:

# Elasticsearch 7.x
elasticsearch>=7.0.0,<8.0.0

# Elasticsearch 6.x
elasticsearch>=6.0.0,<7.0.0

# Elasticsearch 5.x
elasticsearch>=5.0.0,<6.0.0

# Elasticsearch 2.x
elasticsearch>=2.0.0,<3.0.0

If you need to have multiple versions installed at the same time, older versions are also released as elasticsearch2 and elasticsearch5.

Example use

Simple use-case:

>>> from datetime import datetime
>>> from elasticsearch import Elasticsearch

# by default we connect to localhost:9200
>>> es = Elasticsearch()

# create an index in elasticsearch, ignore status code 400 (index already exists)
>>> es.indices.create(index='my-index', ignore=400)
{'acknowledged': True, 'shards_acknowledged': True, 'index': 'my-index'}

# datetimes will be serialized
>>> es.index(index="my-index", id=42, body={"any": "data", "timestamp": datetime.now()})
{'_index': 'my-index',
 '_type': '_doc',
 '_id': '42',
 '_version': 1,
 'result': 'created',
 '_shards': {'total': 2, 'successful': 1, 'failed': 0},
 '_seq_no': 0,
 '_primary_term': 1}

# but not deserialized
>>> es.get(index="my-index", id=42)['_source']
{'any': 'data', 'timestamp': '2019-05-17T17:28:10.329598'}
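
If you need datetime objects back, a small sketch of parsing the stored ISO-8601 string yourself (datetime was imported above):

>>> doc = es.get(index="my-index", id=42)['_source']
>>> datetime.fromisoformat(doc['timestamp'])
datetime.datetime(2019, 5, 17, 17, 28, 10, 329598)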

Full documentation.

Elastic Cloud (and SSL) use-case:

>>> from elasticsearch import Elasticsearch
>>> es = Elasticsearch(cloud_id="<some_long_cloud_id>", http_auth=('elastic','yourpassword'))
>>> es.info()

Using SSL Context with a self-signed cert use-case:

>>> from elasticsearch import Elasticsearch
>>> from ssl import create_default_context

>>> context = create_default_context(cafile="path/to/cafile.pem")
>>> es = Elasticsearch("https://elasticsearch.url:port", ssl_context=context, http_auth=('elastic','yourpassword'))
>>> es.info()

Features

The client's features include:

  • translating basic Python data types to and from JSON (datetimes are not decoded for performance reasons)
  • configurable automatic discovery of cluster nodes (see the sketch after this list)
  • persistent connections
  • load balancing (with pluggable selection strategy) across all available nodes
  • failed connection penalization (time-based - failed connections won't be retried until a timeout is reached)
  • support for SSL and HTTP authentication
  • thread safety
  • pluggable architecture
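
For example, a minimal sketch of node discovery and retry options, assuming a multi-node cluster (the host names are placeholders; the keyword arguments shown are available on the 7.x client):

from elasticsearch import Elasticsearch

es = Elasticsearch(
    ["es-node1:9200", "es-node2:9200"],
    sniff_on_start=True,            # discover the other cluster nodes on startup
    sniff_on_connection_fail=True,  # re-sniff when a node stops responding
    sniffer_timeout=60,             # refresh the node list every 60 seconds
    retry_on_timeout=True,
    max_retries=3,
)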

Elasticsearch-DSL

For a higher-level client library with a more limited scope, have a look at elasticsearch-dsl, a more Pythonic library sitting on top of elasticsearch-py.

elasticsearch-dsl provides a more convenient and idiomatic way to write and manipulate queries by mirroring the terminology and structure of the Elasticsearch JSON DSL, while exposing the whole range of the DSL from Python, either directly using defined classes or queryset-like expressions.

It also provides an optional persistence layer for working with documents as Python objects in an ORM-like fashion: defining mappings, retrieving and saving documents, wrapping the document data in user-defined classes.
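
As a taste of that persistence layer, a short sketch using the elasticsearch-dsl API (the Article class and index name are made up for illustration):

from datetime import datetime
from elasticsearch_dsl import Date, Document, Text, connections

connections.create_connection(hosts=["localhost"])  # shared low-level client

class Article(Document):
    title = Text()
    published = Date()

    class Index:
        name = "articles"  # hypothetical index name

Article.init()  # create the index and mapping in Elasticsearch
Article(title="Hello", published=datetime.now()).save()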

License

Copyright 2020 Elasticsearch B.V

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Build Status

Documentation build: https://readthedocs.org/projects/elasticsearch-py/badge/?version=latest&style=flat
CI: https://clients-ci.elastic.co/job/elastic+elasticsearch-py+master/badge/icon
Comments
  • urllib3 > 1.10 breaks connection

    When using the latest urllib3 (1.11 as of now), the HTTP connection breaks:

    AttributeError: 'module' object has no attribute 'HTTPMessage'

    WARNING:elasticsearch:GET http:/server:443/es/index/_search [status:N/A request:13.243s]
    Traceback (most recent call last):
      File "/Library/Python/2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 74, in perform_request
        response = self.pool.urlopen(method, url, body, retries=False, headers=self.headers, **kw)
      File "/Library/Python/2.7/site-packages/urllib3/connectionpool.py", line 557, in urlopen
        body=body, headers=headers)
      File "/Library/Python/2.7/site-packages/urllib3/connectionpool.py", line 388, in _make_request
        assert_header_parsing(httplib_response.msg)
      File "/Library/Python/2.7/site-packages/urllib3/util/response.py", line 49, in assert_header_parsing
        if not isinstance(headers, httplib.HTTPMessage):
    AttributeError: 'module' object has no attribute 'HTTPMessage'

    Reverting back to urllib3==1.10.4 fixes the problem

    setup.py specifies: install_requires = [ 'urllib3>=1.8, <2.0', ]

    Perhaps it should be changed to install_requires = [ 'urllib3>=1.8, <1.11', ]

    until this is fixed.

    opened by katrielt 28
  • Search Template Example

    Hi! Could someone put an example here of how to put and use a search template, please? I need one with a mustache conditional, but I can't make it work.

    Thanks!
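
    A sketch of registering and calling a search template with the low-level client (the template id, index name and parameters below are made up; a mustache conditional can be added inside the stored "source"):

    from elasticsearch import Elasticsearch

    es = Elasticsearch()

    # store the template as a "mustache" script
    es.put_script(id="my-template", body={
        "script": {
            "lang": "mustache",
            "source": {"query": {"match": {"title": "{{query_string}}"}}},
        }
    })

    # run a search through the stored template
    resp = es.search_template(index="my-index", body={
        "id": "my-template",
        "params": {"query_string": "hello"},
    })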

    opened by Garito 27
  • Proxy settings

    Hi there, I don't know if I am right here, but I have not found anything on the web about my problem. I have to use Elasticsearch in Python from behind a proxy server. How can I pass the proxy settings down to Elasticsearch? I tried something like the following, without success.

    es = Elasticsearch([es_url], _proxy = "http://proxyurl:port", _proxy_headers = { 'basic_auth': 'USERNAME:PASSWORD' })
    res = es.search(index=index, body=request, search_type="count")
    

    Any help would be very nice. Thanks!
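
    One commonly suggested approach (a sketch, not an official client option) is to use the requests-based connection class and hand the proxy settings to its session; the host, proxy URL and credentials below are placeholders:

    from elasticsearch import Elasticsearch, RequestsHttpConnection

    class ProxyConnection(RequestsHttpConnection):
        def __init__(self, *args, proxies=None, **kwargs):
            super().__init__(*args, **kwargs)
            if proxies:
                # requests reads proxy settings from the underlying session
                self.session.proxies = proxies

    es = Elasticsearch(
        ["http://localhost:9200"],
        connection_class=ProxyConnection,
        proxies={"http": "http://USERNAME:PASSWORD@proxyurl:8080"},
    )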

    opened by svamet 27
  • TransportError(406, 'Content-Type header [] is not supported') - where to find requirements.txt

    Hello, could you please help me with how to set this library to use the master version of the Python ES module? Where can I modify that requirements.txt file?

    (I'm now trying to learn how to use ES with Python to create some visualisations in Kibana, so I'm trying to import some data from an online Star Wars API :) ) I'm getting transport error 406. I have found this solution, but I don't know where that requirements.txt file is located.
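
    For reference, a 406 Content-Type error usually points at a client/server version mismatch, so a quick sketch for checking both sides (assuming the cluster is reachable):

    import elasticsearch
    from elasticsearch import Elasticsearch

    es = Elasticsearch()
    print("client:", elasticsearch.__versionstr__)    # library version, e.g. '7.17.0'
    print("server:", es.info()["version"]["number"])  # should share the same major version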

    Thank you in advance

    opened by WakeDown-M 26
  • ssl verification fails despite verify_certs=false

    In elasticsearch version 6.6.1 and elasticsearch-dsl version 6.1.0, SSL verification seems to ignore the verify_certs option. Even when it is set to False, the cert is still verified and fails on self-signed certs.

    In elasticsearch version 5.5.1 and elasticsearch-dsl version 5.4.0, the verify_certs option works as expected.

    client = Elasticsearch(
        hosts=['localhost'],
        verify_certs=False,
        timeout=60,
    )

    elasticsearch.exceptions.SSLError: ConnectionError([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)) caused by: SSLError([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777))
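
    For comparison, a minimal sketch of a client that talks HTTPS but skips certificate verification (the host is a placeholder):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(
        hosts=["localhost"],
        use_ssl=True,        # explicitly enable TLS
        verify_certs=False,  # skip certificate verification (self-signed certs)
        timeout=60,
    )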

    opened by gnarlyman 26
  • Deprecation warnings in 7.15.0 pre-releases

    If you're seeing this, you likely received a deprecation warning from a 7.15.0 pre-release. Thanks for trying out in-development software!

    The v8.0.0 roadmap includes a list of breaking changes that will be implemented to make the Python Elasticsearch client easier to use and more discoverable. To make the upgrade from the 7.x client to the 8.0.0 client as smooth as possible for developers we're deprecating options and usages that will be removed in either 8.0 or 9.0.

    This also means that you'll get an early preview for the great things to come in the 8.0.0 client starting in 7.x, which we're pretty excited about!

    Which APIs are affected?

    All APIs will emit deprecation warnings for positional argument use in 7.15.0.

    The following APIs will start emitting deprecation warnings regarding the body parameters. This list may change leading up to the 7.15.0 release.

    • search
    • index
    • create
    • update
    • scroll
    • clear_scroll
    • search_mvt
    • indices.create

    The following APIs will start emitting deprecation warnings regarding doc_type parameters.

    • nodes.hot_threads
    • license.post_start_trial

    What is being deprecated?

    Starting in 7.15.0 the following features will be deprecated and are scheduled for removal in 9.0.0:

    Positional arguments for APIs are deprecated

    Using keyword arguments has always been recommended when using the client, but starting in 7.15.0 using any positional arguments will emit a deprecation warning.

    # ✅ Supported usage:
    es.search(index="index", ...)
    
    # ❌ Deprecated usage:
    es.search("index", ...)
    

    The body parameter for APIs are deprecated

    For JSON requests, each field within the top-level JSON object will become its own parameter of the API, with full type hinting.

    # ✅ New usage:
    es.search(query={...})
    
    # ❌ Deprecated usage:
    es.search(body={"query": {...}})
    

    For non-JSON requests or requests where the JSON body is itself an addressable object (like a document) each API will have the parameter renamed to a more semantic name:

    # ✅ New usage:
    es.index(document={...})
    
    # ❌ Deprecated usage:
    es.index(body={...})
    

    The doc_type parameter for non-Index APIs

    Using doc_type for APIs that aren't related to indices or search is deprecated. Instead you should use the type parameter. See https://github.com/elastic/elasticsearch-py/pull/1713 for more context for this change.

    For APIs that are related to indices or search the doc_type parameter isn't deprecated client-side, however mapping types are deprecated in Elasticsearch and will be removed in 8.0.

    # ✅ New usage:
    es.nodes.hot_threads(type="cpu")
    
    # ❌ Deprecated usage:
    es.nodes.hot_threads(doc_type="cpu")
    
    opened by sethmlarson 24
  • helpers.scan: TypeError: search() got an unexpected keyword argument 'doc_type'

    I'm using the helpers.scan function to retrieve data. I passed doc_type='log' to it (following the online resource here), but I got this error:

    <ipython-input-53-dffeaecb48f3> in export(self, outputFiles, host, indexDbName, docType, queryString, queryJson, size, fields, delimiter)
         41             w.writeheader()
         42             try:
    ---> 43                 for row in scanResponse:
         44                     for key,value in row['_source'].iteritems():
         45                         row['_source'][key] = unicode(value)
    
    
    C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\helpers\actions.py in scan(client, query, scroll, raise_on_error, preserve_order, size, request_timeout, clear_scroll, scroll_kwargs, **kwargs)
        431     # initial search
        432     resp = client.search(
    --> 433         body=query, scroll=scroll, size=size, request_timeout=request_timeout, **kwargs
        434     )
        435     scroll_id = resp.get("_scroll_id")
    
    C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\client\utils.py in _wrapped(*args, **kwargs)
         82                 if p in kwargs:
         83                     params[p] = kwargs.pop(p)
    ---> 84             return func(*args, params=params, **kwargs)
         85 
         86         return _wrapped
    
    

    TypeError: search() got an unexpected keyword argument 'doc_type'

    I use 'log' as the doc type and I'm using Elasticsearch server 6.3.0. Did I set the doc type wrong? Thank you!
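
    For reference, a sketch of helpers.scan against a client that no longer accepts doc_type (the index name and query are placeholders):

    from elasticsearch import Elasticsearch, helpers

    es = Elasticsearch()

    # with newer clients, drop doc_type entirely and pass only index and query
    for hit in helpers.scan(
        client=es,
        index="my-index",
        query={"query": {"match_all": {}}},
        size=1000,
    ):
        print(hit["_source"])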

    opened by qilds123 19
  • fix python 3.x str.decode exception

    Python 3.x doesn't support str.decode(), causing the code to fail with an AttributeError exception. This code change allows the Python module to continue to work with older 2.x Python while being 3.x friendly as well. Any alternate suggestions?
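
    A sketch of the kind of 2/3-compatible pattern being discussed (the helper name is illustrative):

    def ensure_text(value, encoding="utf-8"):
        # only bytes need decoding; on Python 3, str has no .decode() method
        if isinstance(value, bytes):
            return value.decode(encoding)
        return value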

    opened by mmarshallgh 19
  • urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f9e5db5d208>: Failed to establish a new connection: [Errno 113] No route to host

    I often get this error, but the script works:

    GET http://x.x.x.x:9200/_nodes/_all/http [status:N/A request:2.992s]
    Traceback (most recent call last):
      File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 171, in _new_conn
        (self._dns_host, self.port), self.timeout, **extra_kw)
      File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 79, in create_connection
        raise err
      File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 69, in create_connection
        sock.connect(sa)
    OSError: [Errno 113] No route to host

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/lib/python3.6/site-packages/elasticsearch/connection/http_urllib3.py", line 172, in perform_request
        response = self.pool.urlopen(method, url, body, retries=Retry(False), headers=request_headers, **kw)
      File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
        _stacktrace=sys.exc_info()[2])
      File "/usr/lib/python3.6/site-packages/urllib3/util/retry.py", line 343, in increment
        raise six.reraise(type(error), error, _stacktrace)
      File "/usr/lib/python3.6/site-packages/urllib3/packages/six.py", line 686, in reraise
        raise value
      File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
        chunked=chunked)
      File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
        conn.request(method, url, **httplib_request_kw)
      File "/usr/lib64/python3.6/http/client.py", line 1239, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "/usr/lib64/python3.6/http/client.py", line 1285, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "/usr/lib64/python3.6/http/client.py", line 1234, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "/usr/lib64/python3.6/http/client.py", line 1026, in _send_output
        self.send(msg)
      File "/usr/lib64/python3.6/http/client.py", line 964, in send
        self.connect()
      File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 196, in connect
        conn = self._new_conn()
      File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 180, in _new_conn
        self, "Failed to establish a new connection: %s" % e)
    urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f9e5db5d208>: Failed to establish a new connection: [Errno 113] No route to host
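
    The /_nodes/_all/http request shown above is the client's node-sniffing call, so one sketch of a workaround is to disable or relax sniffing when some nodes aren't routable from the client (the host is a placeholder; options as on the 7.x client):

    from elasticsearch import Elasticsearch

    # either turn sniffing off entirely...
    es = Elasticsearch(["http://x.x.x.x:9200"], sniff_on_start=False, sniff_on_connection_fail=False)

    # ...or keep it but tolerate slow or unreachable nodes during the sniff
    es = Elasticsearch(
        ["http://x.x.x.x:9200"],
        sniff_on_start=True,
        sniff_timeout=1,        # seconds to wait for the sniff request
        retry_on_timeout=True,
    )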

    opened by laurentvv 19
  • Set the Content-Type header on requests

    Maybe I am misunderstanding the code, but it seems as if the actual HTTP request (either urllib3 or requests) does not set the Content-Type. I would expect that, at the very least, if the JSONSerializer was used, the Content-Type header would be set to "application/json", and to something like "text/plain" if the payload is not JSON.

    If my understanding is incorrect, then my apologies and feel free to close this ticket. If this is a valid concern and you are looking for a PR then I would be able to supply something in the coming weeks.

    As a side note, it looks like you have added support for arbitrary headers in the master branch; I think this could probably be used to set the Content-Type, but it seems to me like something more fundamental that you would always want set.

    opened by bgroff 19
  • Regression in 6.4: Scroll fails with large scroll_id

    Changes to the scroll method in 6.4 submit the scroll id as part of the URL. This causes:

    elasticsearch.exceptions.RequestError: RequestError(400, 'too_long_frame_exception', 'An HTTP line is larger than 4096 bytes.')

    This happens when a large number of shards is involved, creating a large scroll id.

    https://github.com/elastic/elasticsearch-py/blob/99effab913c29ce341b3199f042bcb45cf8291a2/elasticsearch/client/init.py#L1341
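
    A sketch of the usual workaround: send the scroll id in the request body instead of the URL (the index name and query are placeholders):

    from elasticsearch import Elasticsearch

    es = Elasticsearch()

    resp = es.search(index="my-index", scroll="2m", size=1000, body={"query": {"match_all": {}}})
    scroll_id = resp["_scroll_id"]

    # the (potentially very long) scroll id goes in the body, not the URL
    resp = es.scroll(body={"scroll_id": scroll_id, "scroll": "2m"})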

    opened by ChrisPortman 18
  • README copyright outdated

    Elasticsearch version (N/A):

    elasticsearch-py version (8.4.0):

    Please make sure the major version matches the Elasticsearch server you are running.

    Description of the problem including expected versus actual behavior:

    README copyright is outdated

    Steps to reproduce:

    Open the README

    opened by tyleraharrison 0
  • msearch_template fails on request

    Describe the feature:

    Elasticsearch version (bin/elasticsearch --version): 7.17.8

    elasticsearch-py version (elasticsearch.__versionstr__): 7.17.0

    Please make sure the major version matches the Elasticsearch server you are running.

    b = [{ "index": "some-index*" },
         { "id": "template_id", "params": { "keyword": "key"}},
         { "index": "some-other-index*" },
         { "id": "template_id2", "params": { "keyword": "key"}}]
    
    es.msearch_template(body=b)  # OR
    es.msearch_template(search_templates=b)

    # both result in the same error below:
    
    ---------------------------------------------------------------------------
    BadRequestError                           Traceback (most recent call last)
    <ipython-input-75-a87865ceb6d2> in <module>
          5 
          6 
    ----> 7 es.msearch_template(body=b)
    
    \python\python38\lib\site-packages\elasticsearch\_sync\client\utils.py in wrapped(*args, **kwargs)
        402                         pass
        403 
    --> 404             return api(*args, **kwargs)
        405 
        406         return wrapped  # type: ignore[return-value]
    
    \python\python38\lib\site-packages\elasticsearch\_sync\client\__init__.py in msearch_template(self, search_templates, index, ccs_minimize_roundtrips, error_trace, filter_path, human, max_concurrent_searches, pretty, rest_total_hits_as_int, search_type, typed_keys)
       2646             "content-type": "application/x-ndjson",
       2647         }
    -> 2648         return self.perform_request(  # type: ignore[return-value]
       2649             "POST", __path, params=__query, headers=__headers, body=__body
       2650         )
    
    \python\python38\lib\site-packages\elasticsearch\_sync\client\_base.py in perform_request(self, method, path, params, headers, body)
        319                     pass
        320 
    --> 321             raise HTTP_EXCEPTIONS.get(meta.status, ApiError)(
        322                 message=message, meta=meta, body=resp_body
        323             )
    
    BadRequestError: BadRequestError(400, 'action_request_validation_exception', 'Validation Failed: 1: no requests added;')
    
    
    
    opened by bashpound 0
  • Setting ignored statuses via Elasticsearch.options does not work.

    Elasticsearch version (bin/elasticsearch --version): 8.3.1

    elasticsearch-py version (elasticsearch.__versionstr__): 8.4.3

    Setting ignored statuses via Elasticsearch.options does not work.

    Steps to reproduce:

    from elasticsearch8 import Elasticsearch
    
    es = Elasticsearch(
        hosts=[{'scheme': 'http', 'host': '127.0.0.1', 'port': 9200}],
    )
    es.options(ignore_status=(400, 404))
    es.indices.delete_alias(index='tmp', name='tmp')  # does not work (raises an exception)
    es.indices.delete_alias(index='tmp', name='tmp', ignore=(400, 404))  # works, but emits a deprecation warning
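
    Note that .options() returns a new client instance rather than changing the existing one, so the ignored statuses only take effect when the API is called on the returned client; a sketch of that usage:

    es.options(ignore_status=(400, 404)).indices.delete_alias(index='tmp', name='tmp')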
    
    opened by vmarunov 0
  • ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

    I'm using Elasticsearch 7.16.3. From Python, when I do a bulk upload like:

        try:
            es.bulk(rec_to_actions(df, elastic_index))
        except Exception as e:
            raise ElasticUpdateError("Transport Error: Can't Push data Artifacts to elastic '{}'".format(e))
    

    using

    def rec_to_actions(df, Index):
        for record in df.to_dict(orient="records"):
            id = str(uuid.uuid4())
            record['id'] = id
            yield ('{ "index" : { "_index" : "%s", "_id" : "%s" }}' % (Index, id))
            yield (json.dumps(record, default=int))
    

    where the Elasticsearch client is already connected before this step, I get the following warning, but it's not breaking the flow:

    (screenshot of the warning omitted)

    May I know how to handle this? I don't want to suppress it.

    I looked at https://stackoverflow.com/questions/18832643/how-to-catch-this-python-exception-error-errno-10054-an-existing-connection

    and added the code below, but it's not working; I understand my code is not correct:

        import errno, socket, time  # needed for the WinError handling below

        try:
            es.bulk(rec_to_actions(feature_importances, FEIndex))
        except socket.error as error:
            # the specific handler must come before the generic Exception handler,
            # otherwise it can never run
            if error.errno == errno.WSAECONNRESET:  # Windows-specific errno
                time.sleep(2)
        except Exception as e:
            raise ElasticUpdateError("Transport Error: Can't Push data Artifacts to elastic '{}'".format(e))
    

    May I have some help?
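
    A sketch of catching the client-level exception instead of the raw socket error (rec_to_actions, df, elastic_index and ElasticUpdateError are from the snippets above; the retry options exist on the 7.x client):

    from elasticsearch import Elasticsearch
    from elasticsearch.exceptions import ConnectionError as ESConnectionError

    # the transport retries connection errors up to max_retries before raising
    es = Elasticsearch(max_retries=5, retry_on_timeout=True)

    try:
        es.bulk(rec_to_actions(df, elastic_index))
    except ESConnectionError as e:
        # socket-level resets (such as WinError 10054) typically surface as ConnectionError
        raise ElasticUpdateError("Can't push data artifacts to elastic: '{}'".format(e))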

    opened by hanzigs 0
Releases: v8.5.3