Asynchronous Python client for InfluxDB

Overview

aioinflux

Asynchronous Python client for InfluxDB. Built on top of aiohttp and asyncio. Aioinflux is an alternative to the official InfluxDB Python client.

Aioinflux supports interacting with InfluxDB in a non-blocking way by using aiohttp. It also supports writing and querying of Pandas dataframes, among other handy functionality.

Please refer to the documentation for more details.

Installation

Python 3.6+ is required. You also need to have access to a running instance of InfluxDB.

pip install aioinflux

Quick start

This sums up most of what you can do with aioinflux:

import asyncio
from aioinflux import InfluxDBClient

point = {
    'time': '2009-11-10T23:00:00Z',
    'measurement': 'cpu_load_short',
    'tags': {'host': 'server01',
             'region': 'us-west'},
    'fields': {'value': 0.64}
}

async def main():
    async with InfluxDBClient(db='testdb') as client:
        await client.create_database(db='testdb')
        await client.write(point)
        resp = await client.query('SELECT value FROM cpu_load_short')
        print(resp)


asyncio.get_event_loop().run_until_complete(main())

See the documentation for more detailed usage.

Comments
  • Optional pandas/numpy dependencies

    Issue:

    Pandas/NumPy are not required for Influx interaction, but are dependencies for aioinflux.

    When developing for a Raspberry Pi target this becomes an issue, as Pandas/NumPy do not provide compiled packages for ARMv7, and compiling them on a Raspberry Pi 3 takes roughly an hour. That's a bit much for an unused dependency.

    Desired behavior:

    No functional changes for clients that use dataframe serialization functionality. No functional changes for clients that don't - but they can drop the Pandas/Numpy packages from their dependency stack.

    Proposed solution:

    Pandas/NumPy dependencies in setup.py can move to an extras_require collection.
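
    A sketch of what this could look like in setup.py (illustrative only; the extras name "pandas" is a suggestion, not an existing API):

    from setuptools import setup

    setup(
        name='aioinflux',
        install_requires=['aiohttp'],
        extras_require={
            # hypothetical extras name; only needed for dataframe mode
            'pandas': ['pandas', 'numpy'],
        },
    )

    Users who want dataframe support would then install with pip install aioinflux[pandas].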

    client.py does not use NumPy, and only uses Pandas to define PointType.

    • The Pandas import can be contained inside a try/except block.
    • The PointType definition can equally easily be made conditional.

    serialization.py makes more extensive use of dependencies.

    • make_df() and parse_df() are Pandas-only functions, and can move to a conditional include.
    • The isinstance(data, pd.DataFrame) check can also be made conditional (e.g. if using_pd and isinstance(data, pd.DataFrame)).
    • Same for the type checks in _parse_fields(): the checks for np.integer and np.isnan() can be placed behind a using_np flag or a pd is None check (see the sketch below).
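
    A minimal sketch of the guarded import plus the conditional checks described above (using_pd is a name from this proposal, not existing aioinflux code):

    try:
        import pandas as pd
        import numpy as np
    except ImportError:
        pd = np = None

    using_pd = pd is not None

    def serialize(data):
        # Only take the dataframe path when pandas is actually installed
        if using_pd and isinstance(data, pd.DataFrame):
            ...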

    Required effort:

    Between 2 hours and 2 days.

    Practical consideration:

    We're actively using aioinflux (apart from the compile-time issue it works great), and I can make time to put together a PR. The bigger question is whether this is a desired feature for the main repository. If not, I can fork and implement it downstream.

    opened by steersbob 6
  • `path` support in constructor

    Hi, thanks again for this great package, been very helpful.

    The synchronous InfluxDB client has a path parameter in its constructor:

    https://influxdb-python.readthedocs.io/en/latest/api-documentation.html#influxdb.InfluxDBClient

    path (str) – path of InfluxDB on the server to connect, defaults to ‘’
    

    And the URL is built as follows:

    https://github.com/influxdata/influxdb-python/blob/d5d12499f3755199d5eedd8b363450f1cf4073bd/influxdb/client.py#L123

            self.__baseurl = "{0}://{1}:{2}{3}".format(
                self._scheme,
                self._host,
                self._port,
                self._path)
    

    In aioinflux, however, there is no path parameter, and the URL is built as follows:

    https://github.com/gusutabopb/aioinflux/blob/master/aioinflux/client.py#L163

            return f'{"https" if self.ssl else "http"}://{self.host}:{self.port}/{{endpoint}}'
    

    So it seems that I cannot connect with aioinflux to our Influx deployment, which, for reasons unknown to me, lives under a path.

    For now, I've created a quick monkey patch as follows:

    class MonkeyPatchedInfluxDBClient(InfluxDBClient):
        def __init__(self, *args, path='/', **kwargs):
            super().__init__(*args, **kwargs)
            self._path = path
    
        @property
        def path(self):
            return self._path
    
        @property
        def url(self):
            return '{protocol}://{host}:{port}{path}{{endpoint}}'.format(
                protocol='https' if self.ssl else 'http',
                host=self.host,
                port=self.port,
                path=self.path,
            )
    

    Thanks for placing the url in a property, that was useful.

    enhancement 
    opened by carlos-jenkins 4
  • GROUP BY with Dataframe output

    When the query has a GROUP BY clause on something other than time, for example:

    SELECT COUNT(*) FROM "db"."rp"."measurement" WHERE time > now() - 7d GROUP BY "category"
    

    The dataframe output mode returns a dictionary instead of a dataframe. The keys seem to be strings like "measurement_name, category=A", "measurement_name, category=B", ..., and the values of the dictionary are dataframes. Is this expected?
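
    (If it is, a usage sketch for combining the per-tag dataframes back into one frame, assuming resp is the returned dict:)

    import pandas as pd

    # pd.concat accepts a mapping; the dict keys become an outer index level
    combined = pd.concat(resp)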

    docs 
    opened by allisonwang 4
  • issue for query

    Hi, Gustavo,

    I tried to follow the demo in our project, but it doesn't work. Could you help me figure out the reason?

    here is my code

    async def read_influxdb(userid, starttime, endtime):
        #logger = logging.getLogger("influxDB read demo")
        async with InfluxDBClient(host='localhost', port=8086, username='admin', password='123456', db=db_name) as client:
            user_id = '\'' + str(userid) + '\''
            sql_ecg = 'SELECT point FROM wave WHERE (person_zid = {}) AND (time > {}s) AND (time < {}s)'.format(user_id, starttime, endtime)
            await client.query(sql_ecg, chunked=True)
    
    if __name__ ==  '__main__':
        user_id = 973097
        starttime = '2018-09-26 18:08:48'
        endtime = '2018-09-27 18:08:48'
        starttime_posix = utc_to_local(starttime)
        endtime_posix = utc_to_local(endtime)
        asyncio.get_event_loop().run_until_complete(read_influxdb(user_id, starttime_posix, endtime_posix))
    

    When I run this code, I get the errors below:

    sys:1: RuntimeWarning: coroutine 'query' was never awaited
    Unclosed client session
    client_session: <aiohttp.client.ClientSession object at 0x10f78f630>
    

    Best

    question 
    opened by peiyaoli 3
  • Remove caching functionality

    Aioinflux used to provide built-in local caching functionality using Redis. However, due to low perceived usage, vendor lock-in (Redis), and the extra complexity it added to Aioinflux, I have decided to remove it.

    Hopefully no one besides my past self uses this functionality. In case someone does, or is otherwise interested in caching InfluxDB query results, I will add a simple implementation of a caching layer using pickle. If this affects you, please let me know by commenting below.

    opened by gusutabopb 2
  • PEP 563 breaks user-defined class schema validation

    Background

    PEP 563 behavior is available from Python 3.7 (using from __future__ import annotations) and will become the default in Python 3.10.

    Description of the problem

    Among the changes introduced by PEP 563, the type annotations in the __annotations__ attribute of an object are stored in string form. This breaks the function below, because all the checks expect type objects. https://github.com/gusutabopb/aioinflux/blob/77f9d24f493365356298a1eb904a27ce046cec27/aioinflux/serialization/usertype.py#L57-L67

    Reproduction

    • Define a user-defined class, decorated with lineprotocol():
    from typing import NamedTuple
    
    import aioinflux
    
    
    @aioinflux.lineprotocol
    class Production(NamedTuple):
        total_line: aioinflux.INT
    
    # Works well as is
    
    • Add from __future__ import annotations at the top and you get: SchemaError: Must have one or more non-empty field-type attributes [~BOOL, ~INT, ~DECIMAL, ~FLOAT, ~STR, ~ENUM] at import time.

    Possible solution

    Using https://docs.python.org/3/library/typing.html#typing.get_type_hints gives the same behavior (returns a dict with type objects as values) with or without from __future__ import annotations. Furthermore, the author of PEP 563 advises using it.
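
    A sketch of the proposed change (illustrative only; the actual validation code in usertype.py differs):

    from typing import get_type_hints

    def resolve_annotations(cls):
        # Unlike reading cls.__annotations__ directly, get_type_hints()
        # resolves string annotations back to type objects, with or
        # without `from __future__ import annotations` in effect.
        return get_type_hints(cls)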

    opened by cailloumajor 2
  • iterpoints only returns the first group when processing GROUP BY queries

    Hi

    While processing this query:

    SELECT ROUND(LAST(Free_Megabytes) / 1024) AS free, ROUND(Free_Megabytes / 1024 / (Percent_Free_Space / 100)) AS total, ROUND(Free_Megabytes / 1024 * ((100 - Percent_Free_Space) / Percent_Free_Space)) AS used, (100 - Percent_Free_Space) as percent, instance as path FROM win_disk WHERE host = 'ais-pc-16003' GROUP BY instance
    

    This is the raw data that InfluxDBClient.query returned.

    {'results': [{'series': [{'columns': ['time',
                                          'free',
                                          'total',
                                          'used',
                                          'percent',
                                          'path'],
                              'name': 'win_disk',
                              'tags': {'instance': 'C:'},
                              'values': [[1577419571000000000,
                                          94,
                                          238,
                                          144,
                                          60.49140930175781,
                                          'C:']]},
                             {'columns': ['time',
                                          'free',
                                          'total',
                                          'used',
                                          'percent',
                                          'path'],
                              'name': 'win_disk',
                              'tags': {'instance': 'D:'},
                              'values': [[1577419571000000000,
                                          1727,
                                          1863,
                                          136,
                                          7.3103790283203125,
                                          'D:']]},
                             {'columns': ['time',
                                          'free',
                                          'total',
                                          'used',
                                          'percent',
                                          'path'],
                              'name': 'win_disk',
                              'tags': {'instance': 'HarddiskVolume1'},
                              'values': [[1577419330000000000,
                                          0,
                                          0,
                                          0,
                                          29.292930603027344,
                                          'HarddiskVolume1']]},
                             {'columns': ['time',
                                          'free',
                                          'total',
                                          'used',
                                          'percent',
                                          'path'],
                              'name': 'win_disk',
                              'tags': {'instance': '_Total'},
                              'values': [[1577419571000000000,
                                          1821,
                                          2101,
                                          280,
                                          13.345237731933594,
                                          '_Total']]}],
                  'statement_id': 0}]}
    

    And I want to use this code to get parsed dicts:

    def dict_parser(*x, meta):
        return dict(zip(meta['columns'], x))
    
    g = iterpoints(r, dict_parser)
    

    But I only got the first group ("instance": "C:"). Below is the source of iterpoints; as you can see, the for loop returns on the first iteration.

    def iterpoints(resp: dict, parser: Optional[Callable] = None) -> Iterator[Any]:
        for statement in resp['results']:
            if 'series' not in statement:
                continue
            for series in statement['series']:
                if parser is None:
                    return (x for x in series['values'])
                elif 'meta' in inspect.signature(parser).parameters:
                    meta = {k: series[k] for k in series if k != 'values'}
                    meta['statement_id'] = statement['statement_id']
                    return (parser(*x, meta=meta) for x in series['values'])
                else:
                    return (parser(*x) for x in series['values'])
        return iter([])
    

    I modified this function as a workaround:

    def fixed_iterpoints(resp: dict, parser: Optional[Callable] = None):
        for statement in resp['results']:
            if 'series' not in statement:
                continue
    
            gs = []
            for series in statement['series']:
                if parser is None:
                    part = (x for x in series['values'])
                elif 'meta' in inspect.signature(parser).parameters:
                    meta = {k: series[k] for k in series if k != 'values'}
                    meta['statement_id'] = statement['statement_id']
                part = (parser(*x, meta=meta) for x in series['values'])
            else:
                part = (parser(*x) for x in series['values'])
    
                if len(statement['series']) == 1:
                    return part
    
                gs.append(part)
    
            return gs
        return iter([])
    
    

    It worked for me, but it returns a nested generator, which might be weird. I want to know if you have a better idea.

    opened by Karmenzind 2
  • Properly escape extra_tags in user type

    When adding extra_tags to a user-defined object, the extra tags are not properly escaped to line protocol. This PR ensures the tag value is escaped, reusing the existing tag_escape implementation. I did not alter the tag name, but I'd be fine adding that as well if requested.
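
    For reference, a sketch of the escaping line protocol requires for tag values (illustrative; the actual tag_escape implementation in aioinflux may differ):

    def tag_escape(value: str) -> str:
        # Line protocol requires escaping commas, equals signs and spaces in tags
        return value.replace(',', '\\,').replace('=', '\\=').replace(' ', '\\ ')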

    opened by iwoloschin 2
  • Chunked response to DataFrame

    First off, thank you! Great repo with excellent documentation. I use it with a Starlette project I am working on.

    In the project I've implemented a simple way to parse a pandas.DataFrame from a chunked response. It works, I added it to my fork, and I am wondering if you would welcome such a feature.

    Here is the MVP implementation in my fork

    I'll clean the code, remove exceptions, move it to serialization/dataframe.py and add tests if you're OK with it.

    enhancement 
    opened by dasdachs 2
  • Jupyter and Python 3.7 compatibility

    Currently, the blocking mode won't work on Python 3.7 running on Jupyter. The code below:

    import aioinflux
    c = aioinflux.InfluxDBClient(db='mydb', mode='blocking')
    c.show_measurements()
    

    Raises RuntimeError: This event loop is already running

    This is caused by the fact that the latest versions of Tornado (which is used by Jupyter/ipykernel) run an asyncio loop on the main thread by default:

    # Python 3.7
    import asyncio
    asyncio.get_event_loop()
    # <_UnixSelectorEventLoop running=True closed=False debug=False>
    
    # Python 3.6 (w/ tornado < 5)
    import asyncio
    asyncio.get_event_loop()
    # <_UnixSelectorEventLoop running=False closed=False debug=False>
    

    This is being discussed on https://github.com/jupyter/notebook/issues/3397

    From an aioinflux perspective, a possible workaround would be to start a new event loop on a background thread and use asyncio.run_coroutine_threadsafe to run the coroutine, returning a concurrent.futures.Future object that wraps the result.
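
    A sketch of that workaround (assuming a single background loop shared by all blocking calls):

    import asyncio
    import threading

    _loop = asyncio.new_event_loop()
    threading.Thread(target=_loop.run_forever, daemon=True).start()

    def run_blocking(coro):
        # Schedule the coroutine on the background loop and block on the
        # resulting concurrent.futures.Future until the result is available.
        return asyncio.run_coroutine_threadsafe(coro, _loop).result()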

    opened by gusutabopb 2
  • UDP inserts?

    Thanks for creating this library! Does it support UDP inserts via asyncio?

    i.e.

    udp_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp_port_tuple = (host, udp_port)
    udp_socket.sendto(data_str, udp_port_tuple)
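
    For reference, a minimal sketch of sending line protocol over UDP with plain asyncio (independent of aioinflux; not a confirmed feature of this library):

    import asyncio

    async def send_udp(data: bytes, host: str, port: int):
        loop = asyncio.get_event_loop()
        # create_datagram_endpoint returns a (transport, protocol) pair
        transport, _ = await loop.create_datagram_endpoint(
            asyncio.DatagramProtocol, remote_addr=(host, port))
        transport.sendto(data)
        transport.close()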

    opened by vgoklani 2
  • Do not print warning if I don't want to use pandas

    I have no intention of using pandas, and I don't want to see the following warning when my application starts, or have to worry about suppressing it:

    "Pandas/Numpy is not available. Support for 'dataframe' mode is disabled."

    Please consider removing this warning on import.

    opened by jsonbrooks 1
  • Serialization of pd.NA

    When trying to write a nullable integer (Int64) field, I was getting an error due to the presence of missing values. The missing values were in the form of pd.NA rather than np.nan, and they were not being excluded during serialization.

    I made an attempt to fix this and it worked, though it might not be the most elegant solution. In the _replace function, I added a new replacement tuple to the list of replacements, very similar to the one that handles the nans:

    def _replace(df):
        obj_cols = {k for k, v in dict(df.dtypes).items() if v is np.dtype('O')}
        other_cols = set(df.columns) - obj_cols
        obj_nans = (f'{k}="nan"' for k in obj_cols)
        other_nans = (f'{k}=nani?' for k in other_cols)
        # New: patterns matching serialized pd.NA values, mirroring the nan handling above
        obj_nas = (f'{k}="<NA>"' for k in obj_cols)
        other_nas = (f'{k}=<NA>i?' for k in other_cols)
        replacements = [
            ('|'.join(chain(obj_nans, other_nans)), ''),
            # New: strip pd.NA fields
            ('|'.join(chain(obj_nas, other_nas)), ''),
            (',{2,}', ','),
            ('|'.join([', ,', ', ', ' ,']), ' '),
        ]
        return replacements
    

    Hope this ends up helping someone

    opened by goncas23 0
  • LICENSE missing in pypi

    Hi, if you find the time for some maintenance, could you include the LICENSE file in the next PyPI release? This simplifies integration of the package through Yocto/BitBake into embedded Linux applications. Best regards

    opened by HerrMuellerluedenscheid 0
  • serialisation.mapping - bugfix datetime objects

    datetime objects were handled incorrectly. This resulted in a time offset from UTC.

    The corrected implementation assumes UTC if no tzinfo object is attached to the datetime. Furthermore, the offset is now taken from the tzinfo object.
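
    A minimal sketch of the described behavior (not the actual PR diff):

    from datetime import datetime, timezone

    def to_ns_epoch(dt: datetime) -> int:
        if dt.tzinfo is None:
            # Naive datetimes are assumed to be UTC
            dt = dt.replace(tzinfo=timezone.utc)
        # timestamp() applies the tzinfo offset; microseconds keep full precision
        return int(dt.timestamp()) * 10**9 + dt.microsecond * 1000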

    opened by miili 0
  • Still maintained?

    Just wondering if this library is still actively maintained since it hasn't had a commit or merged PR since last summer. No judgment, just wondering since I like the idea of this client vs influx-python.

    opened by benlachman 3
Releases (latest: v0.9.0)
  • v0.9.0(Jul 11, 2019)

    Added

    • Add support for custom path to InfluxDB (#24)
    • Add support for Decimal serialization (812c1a8, 100d931)
    • Add chunk count on chunked response debugging message (b9e85ad)

    Changed

    • Refactor rm_none option implementation (5735b51, 13062ed, 89bae37)
    • Make enum typevars more strict (f177212)
  • v0.8.0(May 10, 2019)

  • v0.7.1(Apr 11, 2019)

    This version is backwards compatible with v0.7.0.

    Fixed

    • Don't cache error responses (be7b87c)

    Docs

    • Minor wording changes

    Internal

    • Minor internal changes
  • v0.7.0(Apr 11, 2019)

    This version is mostly backwards compatible with v0.6.x (with the exception of the query patterns functionality).

    Added

    • Redis-based caching functionality. See the docs for details.
    • Timeout functionality (#21 by @SuminAndrew)

    Changed

    • Move ClientSession creation logic outside __init__. It is now easier to use advanced aiohttp.ClientSession options. See the docs for details.

    Removed

    • Query patterns functionality

    Internal

    • Refactor test suite
    • Various other internal changes
  • v0.6.1(Feb 1, 2019)

    This version is backwards compatible with v0.6.0.

    Fixed

    • Type annotation error in Python 3.6 (febfe47)
    • Suppress "The object should be created from async function" warning from aiohttp 3.5 (da950e9)
  • v0.6.0(Feb 1, 2019)

    Added

    • Support serializing NaN integers in pandas 0.24+ (See blog post) (1c55217)
    • Support for using namedtuple with iterpoints (be93c53)

    Changed

    • [BREAKING] Changed signature of parser argument of iterpoints from (x, meta) to (*x, meta) (bd93c53)

    Removed

    • [BREAKING] Removed iterable mode and InfluxDBResult / InfluxDBChunkedResult. Use iterpoints instead. (592c5ed)
    • Deprecated set_query_pattern (1d36b07)

    Docs

    • Various improvements (8c6cbd3, ce46596, b7db169, ba3edae)
  • v0.5.1(Jan 24, 2019)

    This version is backwards compatible with v0.5.0.

    Fixed

    • Fix type annotations
    • Fix internal API inconsistencies

    Docs

    • Complete API section
    • Add proper Sphinx links
    • Update/fix various sections
  • v0.5.0(Jan 24, 2019)

    Changed

    • [BREAKING] Removed DataPoint functionality in favor of simpler and more flexible @lineprotocol decorator. See the docs for details.

    Docs

    • Added detailed @lineprotocol usage
  • v0.4.1(Nov 22, 2018)

    Fixed

    • Fixed bug when doing multi-statement queries when using dataframe mode

    Docs

    • Added note regarding handling of multi-statement/multi-series queries when using dataframe mode
  • v0.4.0(Oct 22, 2018)

    Added

    • Added ability to write datapoint objects. See the docs for details.
    • Added bytes output format. This is to facilitate the addition of a caching layer on top of InfluxDB. (cb4e3d1)

    Changed

    • Change write method signature to match the /write endpoint docs
      • Allow writing to non-default retention policy (#14)
      • (precision is not fully implemented yet)
    • Renamed raw output format to json. Most users should be unaffected by this. (cb4e3d1)

    Fixed

    • Improved docs

    Internal

    • Refactored serialization/parsing functionality into a subpackage
    • Fix test warnings (2e42d50)
  • v0.3.4(Sep 3, 2018)

    • Fixed output='dataframe' parsing bug (#15)
    • Removed tag column -> categorical dtype conversion functionality
    • Moved documentation to Read The Docs
    • Added two query patterns (671013b)
    • Added this CHANGELOG
  • v0.3.3(Jul 23, 2018)

  • v0.3.2(May 3, 2018)

    • Fix parsing bug for string ending in a backslash (db8846ec6037752fe4fff8d88aa8fa989bc69452)
    • Add InfluxDBWriteError exception class (d8d0a0181f3e05b6e754cd309015b73a4a0b1fb9)
    • Make InfluxDBClient.db attribute optional (039e0886f3b2469bc2d2edd8b3da34b08b31b1db)
  • v0.3.1(Apr 29, 2018)

    • Fix bug where timezone-unaware datetime input was assumed to be in local time (#11 / a8c81b788a16030a70c8f2a07ebc36b34924f8d5)
    • Minor improvement in dataframe parsing (1e33b92)
  • v0.3.0(Apr 24, 2018)

    Highlights:

    • Drop Pandas/Numpy requirement (#9)
    • Improved iteration support (816a722)
    • Implement tag/key value caching (9a65787)
    • Improve dataframe serialization
      • Speed improvements (ddc9ecc)
      • Memory usage improvements (a2b58bd)
      • Disable concatenating of dataframes of the same measurement when grouping by tag (331a0c9)
      • Queries now return tag columns with pd.Categorical dtype (efdea98)
      • Writes now automatically identify pd.Categorical dtype columns as tag columns (ddc9ecc)

    API changes:

    • mode attribute was "split" into mode and output. Default behavior remains the same (async / raw).
    • Iteration is now made easier through the iterable mode and InfluxDBResult and InfluxDBChunkedResult classes
  • v0.2.0(Mar 6, 2018)

    Highlights:

    • Documentation is now complete
    • Improved iteration support (via iter_resp) (cfffbf5)
    • Allow users to add custom query patterns
    • Add support for positional arguments in query patterns
    • Reimplement __del__ (40d0a69 / #7)
    • Improve/debug dataframe parsing (7beeb53 / 96d78a4)
    • Improve write error message (7972946) (by @miracle2k)

    API changes:

    • Rename AsyncInfluxDBClient to InfluxDBClient (54d98c9)
    • Change return format of chunked responses (related: cfffbf5 / #6)
    • Make some __init__ arguments keyword-only (5d2edf6)
  • v0.1.2(Feb 28, 2018)

    Bug fix release. Highlights:

    • Add __aenter__/__aexit__ support (5736446) (by @Kargathia)
    • Add HTTPS URL support (49b8e89) (by @miracle2k)
    • Add Unix socket support (8a8b069) (by @carlos-jenkins)
    • Fix bug where tags were not being added to DataFrames when querying (a9f1d82)
  • v0.1.1(Nov 10, 2017)

    First bug fix release. Highlights:

    • Add error handling for chunked responses (db93c2034d9f100f13cf08d4c96e88587f2dd9f1)
    • Fix DataFrame tag parsing bug (aa02faa6808d9cef751974943cb36e8d0c18cbf6)
    • Fix boolean field parsing bug (4c2bff966c7c640c5182c39a0316a5b22c9977ea)
    • Increase test coverage
  • v0.1.0(Oct 4, 2017)

Owner
Gustavo Bezerra