asyncpgsa

A Python library wrapper around asyncpg for use with SQLAlchemy.

Backwards incompatibility notice

Since this library is still pre-1.0, the API might change. I will do my best to minimize changes, and any changes that do get added, I will mention here. You should pin the version for production apps.

  1. 0.9.0 changed the dialect from psycopg2 to pypostgres. This should be mostly backwards compatible, but if you notice weird issues, this is why. You can now plug-in your own dialect using pg.init(..., dialect=my_dialect), or setting the dialect on the pool. See the top of the connection file for an example of creating a dialect. Please let me know if the change from psycopg2 to pypostgres broke you. If this happens enough, I might make psycopg2 the default.

  2. 0.18.0 Removes the Record proxy objects that would wrap asyncpg's records. Now asyncpgsa just returns whatever asyncpg would return. This is a HUGE backwards-incompatible change, but most people just used record._data to get the object directly anyway. This means dot notation for columns is no longer possible; you need to access columns by their exact names using dictionary notation.
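A minimal sketch of migrating across this change (a plain dict stands in for asyncpg's Record here, which supports the same mapping-style read access):

```python
# Before 0.18.0, asyncpgsa wrapped rows in a Record proxy:
#     row.type_id        # dot notation
#     row._data          # reaching into the wrapped asyncpg record
# From 0.18.0 on, rows are asyncpg Records, accessed like a mapping.

# A plain dict stands in for an asyncpg Record; Record supports the
# same read-only access: row['col'], row.keys(), row.values(), dict(row).
row = {'type_id': 1, 'name': 'widget'}

value = row['type_id']          # exact column name, dictionary notation
assert value == 1
assert sorted(row.keys()) == ['name', 'type_id']
```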

  3. 0.18.0 Removed the insert method. We found this method was confusing and unnecessary, as SQLAlchemy can do it for you by defining your table with a primary key.

sqlalchemy ORM

Currently this repo does not support SA ORM, only SA Core.

As we at Canopy do not use the ORM, if you would like ORM support, feel free to PR it. You would need to create an "engine" interface, and that should be it. Then you can bind your sessions to the engine.

sqlalchemy Core

This repo supports sqlalchemy core. Go here for examples.

Docs

Go here for docs.

Examples

Go here for examples.

Install

pip install asyncpgsa

Note: You should not have asyncpg in your requirements at all. This lib will pull down the correct version of asyncpg for you. If you have asyncpg in your requirements, you could get a version newer than this one supports.
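Putting the two notes above together, a requirements file would look roughly like this (the pin shown is illustrative; use whatever version you have tested against):

```
# requirements.txt
# Pin asyncpgsa; do NOT list asyncpg -- it is pulled in transitively
# at a version this library supports.
asyncpgsa==0.27.1
```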

Contributing

To contribute or build this locally see contributing.md

FAQ

Does SQLAlchemy integration defeat the point of using asyncpg as a backend (performance)?

I don't think so. asyncpgsa is written in a way where any query can be a string instead of an SA object; with a string you get near-asyncpg speeds, as no SA code is run.

However, when running SA queries and comparing this to aiopg, it still seems to be faster. Here is a very basic timeit test comparing the two: https://gist.github.com/nhumrich/3470f075ae1d868f663b162d01a07838

aiopg.sa: 9.541276566000306
asyncpgsa: 6.747777451004367

So it seems it's still faster using asyncpg; in other words, this library doesn't add any overhead that is not in aiopg.sa.

Versioning

This software follows Semantic Versioning.

Comments
  • Improved record type.

Adds support to the asyncpgsa Record type for all the methods allowed on the asyncpg Record type.

This change allows using the subscript operator, with SQLAlchemy Core table columns as indexes for the subscript.

    On top of:

    row.type_id
    1
    

    You can now:

    row['type_id']
    1
    row[models.t_table.c.type_id]
    1
    

It falls back to the asyncpg Record for every method call not implemented here, so you can use row.keys() or row.values(), or row_list = [ dict(p) for p in rows ], the last being useful for serialization, for example.
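The serialization pattern mentioned above can be sketched with the standard library alone (plain dicts stand in for asyncpg Record objects, which also support dict(r)):

```python
import json

# Plain dicts stand in for asyncpg Record rows; Record supports dict(r).
rows = [{'type_id': 1, 'name': 'a'}, {'type_id': 2, 'name': 'b'}]

row_list = [dict(p) for p in rows]   # the pattern quoted above
payload = json.dumps(row_list)       # now JSON-serializable

assert json.loads(payload)[0]['type_id'] == 1
```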

    Related to #32

    opened by skuda 8
  • Add an example involving range types

    I have a table like the following:

    CREATE TABLE users (
      id SERIAL,
      name VARCHAR(15),
      active DATERANGE,
    
      PRIMARY KEY (id)
    )
    

    that with plain SQLAlchemy+psycopg2 I can query with:

        today = date.today()
        q = select([users.c.id]) \
            .where(users.c.name == 'test') \
            .where(users.c.active.contains(today))
        r = session.execute(q)
    

    With the following asyncpgsa transliteration:

        today = date.today()
        q = sa.select([users.c.id]) \
            .where(users.c.name == 'test') \
            .where(users.c.active.contains(today))
        r = await pg.fetchrow(q)
    

    I get an error:

    /usr/local/lib/python3.5/site-packages/asyncpgsa/pgsingleton.py:65: in fetchrow
        return await conn.fetchrow(query, *args, timeout=timeout)
    /usr/local/lib/python3.5/site-packages/asyncpgsa/connection.py:91: in fetchrow
        result = await self.connection.fetchrow(query, *params, *args, **kwargs)
    /usr/local/lib/python3.5/site-packages/asyncpg/connection.py:259: in fetchrow
        False, timeout)
    asyncpg/protocol/protocol.pyx:157: in bind_execute (asyncpg/protocol/protocol.c:45856)
        ???
    asyncpg/protocol/prepared_stmt.pyx:122: in asyncpg.protocol.protocol.PreparedStatementState._encode_bind_msg (asyncpg/protocol/protocol.c:42239)
        ???
    asyncpg/protocol/codecs/base.pyx:123: in asyncpg.protocol.protocol.Codec.encode (asyncpg/protocol/protocol.c:12276)
        ???
    asyncpg/protocol/codecs/base.pyx:90: in asyncpg.protocol.protocol.Codec.encode_range (asyncpg/protocol/protocol.c:11868)
        ???
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    >   ???
    E   TypeError: list, tuple or Range object expected (got type <class 'datetime.date'>)
    
    asyncpg/protocol/codecs/range.pyx:53: TypeError
    

    which seems odd to me: what I want is to test whether a single date is within a period...

    I tried the following code, passing a one-item-tuple instead:

        today = date.today()
        q = sa.select([users.c.id]) \
            .where(users.c.name == 'test') \
            .where(users.c.active.contains((today,)))
        r = await pg.fetchrow(q)
    

    that works, but that seems to convert the argument to a [today, +infinity] range, something different from what I'm seeking.

    What am I missing?

    Hacktoberfest 
    opened by lelit 8
  • `ConnectionTransactionContextManager` slowly drains the pool

    We are using transaction as follows

    async with pool.transaction() as conn:
        ...
    

and from time to time (in our use case it ranges from once a day to once a week) our service freezes. When we investigated the issue, we noticed that all connections in the pool were marked as used and no free connection was available. We traced the problem to the ConnectionTransactionContextManager.

If a cancellation or timeout is raised during __aenter__ or __aexit__, there is no guarantee that the connection is returned to the pool, and the pool slowly drains. We confirmed that this is the issue, because we changed the code to this

    async with pool.acquire() as conn:
        async with conn.transaction():
            ....
    

and our problem stopped.

I can create a PR to fix this if you like.

    opened by villlem 7
  • None result should be None (fetchrow and fetch)

    query = ...  # some SA select query
    row = await conn.fetchrow(query)  # conn is SAConnection
    

When the underlying asyncpg connection returns None as the "empty" result, row should become None as well. Currently, row is not None but row.row is None. This would confuse users who expect the API semantics to be the same as asyncpg's.

The same applies to fetch and execute, which return a RecordGenerator, because it does not check for None when instantiating Record objects.

Q: What's the purpose of having the seemingly redundant Record and RecordGenerator classes? They look like a no-op proxy to asyncpg's return objects.

    opened by achimnol 7
  • Fixes #6: Adds a debug param for printing queries

    Fixes #6

    • Adds a debug param to various components to enable printing of some debug statements, currently only the query.
    • Makes the connection protected to maintain library standard

I intentionally left off the params. I could see a case for someone wanting that at some point, but there is often enough info in there that you may not want it shown.

As a start it just prints to stdout; I'm not sure if you would like to use the logger so users can customize it, or print to stderr instead. In either case, I can happily fix it.

    opened by mattrasband 6
  • Cannot compile DropTable and CreateTable queries

    Hello!

I tried to create a table via this code:

    import asyncio
    
    from asyncpgsa import pg
    import sqlalchemy as sa
    from sqlalchemy.sql.ddl import CreateTable, DropTable
    
    users = sa.Table(
        'users', sa.MetaData(),
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.VARCHAR(255)),
    )
    
    
    async def main():
        await pg.init('postgresql://localhost/test')
        await pg.query(DropTable(users))
        await pg.query(CreateTable(users))
    
    
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    

    and got an error from asyncpgsa

    Traceback (most recent call last):
      File "/project/main.py", line 20, in <module>
        loop.run_until_complete(main())
      File "/project/.venv/python3.6/asyncio/base_events.py", line 467, in run_until_complete
        return future.result()
      File "/project/main.py", line 15, in main
        await pg.query(DropTable(users))
      File "/project/.venv/lib/python3.6/site-packages/asyncpgsa/pgsingleton.py", line 65, in query
        compiled_q, compiled_args = compile_query(query)
      File "/project/.venv/lib/python3.6/site-packages/asyncpgsa/connection.py", line 60, in compile_query
        compiled_params = sorted(compiled.params.items())
    AttributeError: 'NoneType' object has no attribute 'items'
    

The same code using sqlalchemy works fine:

    import sqlalchemy as sa
    from sqlalchemy.sql.ddl import CreateTable, DropTable
    
    users = sa.Table(
        'users', sa.MetaData(),
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.VARCHAR(255)),
    )
    
    engine = sa.create_engine('postgresql://localhost/test')
    engine.execute(DropTable(users))
    engine.execute(CreateTable(users))
    

    Thanks!

    Hacktoberfest 
    opened by abogushov 5
  • `conn.execute(query)` is not working with params

    From version 0.14.0 conn.execute() is not working.

    Simple example:

    async with app.pool.acquire() as conn:
        query = table.update().values(some_column=123).where(table.c.id == 1)
        await conn.execute(query)
    

    Will go asyncpgsa/connection.py :

    
    async def execute(self, script, *args, **kwargs) -> str:
            # script, params = compile_query(script, dialect=self._dialect)
            result = await super().execute(script, *args, **kwargs)
            return RecordGenerator(result)
    
    

And there *args will be an empty tuple; hence it goes to super().execute(script, *args, **kwargs), which is asyncpg.Connection.execute():

    async def execute(self, query: str, *args, timeout: float=None) -> str:
        ...
        if not args:  # <- WILL ENTER HERE
                return await self._protocol.query(query, timeout)
    
         _, status, _ = await self._execute(query, args, 0, timeout, True)
        return status.decode()
    

But if I use version 0.13.0, everything is alright, because *params and *args are passed to asyncpg.Connection.execute(), which then skips the if not args clause in asyncpg.Connection.execute:

    async def execute(self, script, *args, **kwargs) -> str:
        script, params = compile_query(script, dialect=self._dialect)
        result = await self._connection.execute(script, *params, *args, **kwargs)
        return RecordGenerator(result)
    

I am certainly grateful for this awesome package, and I am willing to help, but I am not sure how, because the recent change to omit compile_query in the execute statement was made for a reason, and I don't really understand what that reason is.

For the moment I rolled back to 0.13.0 and everything works as expected.

    opened by vishes-shell 5
  • Asyncpg record objects

This removes the Record and RecordGenerator classes in favor of asyncpg's Records. This means dot notation must be replaced with dict notation when accessing values, making this a breaking change.

    It also fixes an issue where the _dialect property on the SAConnection was not getting set, causing issues when trying to use the query builder with table definitions containing JSON types. This is also potentially breaking, as it enables automatic type coercion into a JSON string, even if you have already passed the structure through json.dumps.

    It also re-enables the use of compile_query for several of the SAConnection types, so that SA query-builder queries will get properly converted.

    It also proxies the cursor method of the parent connection, allowing SA queries to be used with cursors directly.

    opened by rlittlefield 4
  • support asyncpg 0.22.0

    Hi!

    The new release of asyncpg 0.22.0 breaks compatibility with asyncpgsa and here is a simple fix for that. Please review and let me know if I need to fix something.

    Thanks!

    opened by Gr1N 3
  • Connect Error When using dsn and password contains '#'

When I use a pool and my DB password is "dY8*6fN6Z#xSOg$wG9zDATTe", pool = await pg.create_pool(dsn) raises an error:

    hostlist_ports.append(int(hostspec_port))
    ValueError: invalid literal for int() with base 10: 'dY8*6fN6Z'
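
A general workaround (not specific to this library, shown as a sketch) is to percent-encode the password before building the DSN, since an unescaped '#' is parsed as a URL fragment delimiter:

```python
from urllib.parse import quote

password = 'dY8*6fN6Z#xSOg$wG9zDATTe'
encoded = quote(password, safe='')          # '#' -> '%23', '$' -> '%24'

# Hypothetical host/database names, for illustration only.
dsn = f'postgresql://user:{encoded}@localhost:5432/db'

assert '#' not in encoded and '%23' in encoded
```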
    
    opened by wujunkui 3
  • Dependency on old version of asyncpg

When using asyncpgsa it's hard to use the latest version of asyncpg, because there's asyncpg~=0.12.0 in install_requires. What's more, I'd like to use asyncpgsa only as a SQL query compiler, without using its context managers (as shown here). In this case asyncpg is no longer an asyncpgsa dependency in practice; it's just a query compiler. How about moving asyncpg to extras_require? Or splitting the library into two separate packages (a compiler and an asyncpg adapter (any better name?)).

    opened by bitrut 3
  • Missing releases after 0.25 version on github

Today I caught an issue with the incompatibility of asyncpgsa==0.26.1 with asyncpg==0.22.0 (but found that the latest release works well!).

So I opened the releases page to see the diff between 0.26.1 and latest, but found the latest release described there is 0.25. Perhaps you would like to add the other releases to make it easier to see the diffs?

Of course it is possible to build a diff against the commits, but releases seem a little bit faster/easier (and it is a bit confusing that only part of the releases are on GitHub).

    opened by alvassin 0
  • compile_query fails with sqlalchemy 1.4

    This works fine with asyncpgsa version 0.27.1 and sqlalchemy: 1.3.23, but fails when I upgrade to sqlalchemy 1.4.4.

    import asyncpgsa
    import sqlalchemy as sa
    
    Thing = sa.Table("thing", sa.MetaData(), sa.Column("name", sa.Text))
    query = Thing.insert().values(name="name").returning(Thing)
    asyncpgsa.compile_query(query)
    
    

    This prevents me from running any code that uses an INSERT statement.

    The error seems to be here:

    https://github.com/CanopyTax/asyncpgsa/blob/master/asyncpgsa/connection.py#L36

        if isinstance(query.parameters, list):
    

    because sqlalchemy.sql.dml.Insert no longer has a parameters attribute.

    For now I'm dealing with this by pinning to an earlier sqlalchemy version. It's not clear to me what the fix should be, because the sqlalchemy internals seem to have changed considerably between versions.

    opened by trvrmcs 4
  • the right way to autoload/reflect

Please explain how to do Table(autoload=True) or metadata.reflect() with asyncpgsa properly.

I tried passing reflect(bind=asyncpgsa_connection) and it's an AttributeError game: it needs connect, then dialect; I feel like this is going in the wrong direction.

People use psycopg2 to create a sqlalchemy engine and then an async driver for normal work. That may work if reflection doesn't use driver specifics. I'll use that crutch if nothing else works.

    Can you create sqlalchemy Engine with asyncpgsa?

    opened by temoto 7
  • process_result_value callback for column type is not handled

I need a TypeDecorator for my data column; it is handled by SQLAlchemy + psycopg2 correctly. Is it possible to make it work with asyncpgsa?

    import asyncio
    from datetime import datetime
    
    from asyncpgsa import PG
    from pytz import timezone
    from sqlalchemy import (
        Column, DateTime, Integer, MetaData, Table, TypeDecorator, create_engine
    )
    
    
    DB_URL = 'postgresql://user:[email protected]/db'
    
    
    class DateTime_(TypeDecorator):
        impl = DateTime
    
        def __init__(self):
            TypeDecorator.__init__(self, timezone=True)
    
        def process_bind_param(self, value, dialect):
            if value is not None:
                return datetime.fromtimestamp(value, timezone('UTC'))
    
        def process_result_value(self, value, dialect):
            return int(value.timestamp())
    
    
    metadata = MetaData()
    example_table = Table('example', metadata,
                          Column('id', Integer, primary_key=True),
                          Column('some_date', DateTime_))
    
    engine = create_engine(DB_URL)
    
    # Create table & add row
    metadata.create_all(engine)
    engine.execute(example_table.insert().values({
        'some_date': int(datetime.now().timestamp())
    }))
    
    # psycopg2 with sqlalchemy handles process_result_value correctly
    rows = engine.execute(example_table.select()).fetchall()
    assert isinstance(rows[0]['some_date'], int)
    
    
    # asyncpgsa does not handle process_result_value callback
    async def main():
        db = PG()
        await db.init(DB_URL)
        rows = await db.fetch(example_table.select())
        assert isinstance(rows[0]['some_date'], datetime)  # True
        assert isinstance(rows[0]['some_date'], int)  # False!
    
    
    asyncio.run(main())
    

    Perhaps such callbacks should be called in SAConnection.execute?

    opened by alvassin 1
  • bindparam does not work

Perhaps you could advise a quick fix or workaround for this? I need to update many different rows with different values. Previously I used bindparam for that.

    import asyncio
    
    from asyncpgsa import PG
    from sqlalchemy import Table, MetaData, Column, Integer, String, bindparam, \
        create_engine
    
    metadata = MetaData()
    
    table = Table(
        'citizens',
        metadata,
        Column('test_id', Integer, primary_key=True),
        Column('name', String, nullable=False),
    )
    
    DB_URL = 'postgresql://user:[email protected]/db'
    
    
    async def main():
        # create table
        engine = create_engine(DB_URL)
        metadata.create_all(engine)
    
        # connect to db
        pg = PG()
        await pg.init(DB_URL)
        async with pg.transaction() as conn:
            # create
            query = table.insert().values([
                {'name': str(i)} for i in range(10)
            ]).returning(table)
            rows = await conn.fetch(query)
    
            # update
            query = table.update().values(name=bindparam('name'))
            await conn.execute(query, [
                {'test_id': row['test_id'], 'name': row['name'] + '_new'}
                for row in rows
            ])
    
            # check
            # asyncpg.exceptions.NotNullViolationError: null value in column "name" violates not-null constraint
            # DETAIL:  Failing row contains (31, null).
            results = await conn.execute(table.select())
            print(results)
    
    asyncio.run(main())
    
    opened by alvassin 4
Releases (0.25.0)
  • 0.25.0(Feb 11, 2019)

  • 0.24.0(Jun 30, 2018)

  • 0.23.0(May 16, 2018)

  • 0.21.0(Mar 14, 2018)

  • 0.20.0(Mar 12, 2018)

  • 0.19.2(Feb 13, 2018)

  • 0.19.1(Feb 13, 2018)

Using `with` in an async context now raises RuntimeError instead of SyntaxError, to be consistent with other aio libs. This is technically not backwards compatible, but since no one should be relying on this behavior, it is not being considered a minor release.

  • 0.19.0(Feb 13, 2018)

    This is mostly a bug fix update, but it also includes bumping asyncpg to 0.14 which could potentially break things.

    Other than fixing documentation, there are only two changes from 0.18.2

    • Bump asyncpg to 0.14
• Check the default value if it's callable in sqlalchemy parsing (see #73). Thanks to @kamikaze
  • 0.18.2(Dec 7, 2017)

  • 0.18.0(Oct 3, 2017)

This version will break everything. It removes the record proxy object, now returning the object returned by asyncpg, meaning that accessing columns now uses dictionary bracket notation, not dot notation. It also removes the insert function, and adds automatic json parsing.

    record.my_column becomes record['my_column']

    Full list of changes:

• Dropped asyncpgsa's Record and RecordGenerator in favor of asyncpg's Records and lists, causing dot notation to be replaced with dict notation when accessing properties (row['id'] instead of row.id)
• The connection cursor function now uses compile_query, so it can handle the same query objects as the other functions like fetchval
• Removed the insert function from SAConnection. SA query objects will need to use query.returning(sa.text('*')) or the like to get the values you want explicitly, and all inserts will have to move to one of the other methods like fetchval. Plain-text queries will need to add ' RETURNING id ' or something similar to the query itself instead of relying on it being added by SAConnection. It should be noted that sqlalchemy does this for you as long as your table definition has a primary key.
• The postgres SA dialect is loaded into the SAConnection class now. This will cause breaking changes when using behaviors that differ based on dialect, such as using JSON column types in SA table definitions. In that case, it will actually run json dumps and loads automatically for you, which will break if you did it manually in your own code.
  • 0.17.0(Aug 29, 2017)

Found a way to fix an issue with the tests without requiring dynamic subclassing. This fixes the regression in 0.16.0, so SAConnection is now referenceable again.

  • 0.16.0(Aug 28, 2017)

This version fixes the testing framework, which was broken in 0.14.x.

    Your tests might not be completely backwards compatible. If that is the case, please file an issue so I can make sure we handle all cases.

Major change: This also breaks asyncpgsa.connection.SAConnection, so any direct references to that class will fail. If you have a use case for referencing it, please let me know.

  • 0.15.0(Aug 14, 2017)

  • 0.14.2(Aug 10, 2017)

  • 0.14.1(Jul 21, 2017)

    Changes from 0.14.0:

    • bugfix, pg.execute() now maintains args when passing a string. See https://github.com/CanopyTax/asyncpgsa/pull/39. Shoutout to @fantix for the fix.
  • v0.9.0(Apr 21, 2017)

    changes

    1. Changed the dialect from psycopg2 to pypostgres. This should be mostly backwards compatible, but if you notice weird issues, this is why.
    2. You can now plug-in your own dialect using pg.init(..., dialect=my_dialect), or setting the dialect on the pool. See the top of the connection file for an example of creating a dialect. Please let me know if the change from psycopg2 to pypostgres broke you. If this happens enough, I might make psycopg2 the default.
Owner

Canopy