The Database Toolkit for Python

Overview

SQLAlchemy

The Python SQL Toolkit and Object Relational Mapper

Introduction

SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL. SQLAlchemy provides a full suite of well known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language.

Major SQLAlchemy features include:

  • An industrial strength ORM, built from the core on the identity map, unit of work, and data mapper patterns. These patterns allow transparent persistence of objects using a declarative configuration system. Domain models can be constructed and manipulated naturally, and changes are synchronized with the current transaction automatically.
  • A relationally-oriented query system, exposing the full range of SQL's capabilities explicitly, including joins, subqueries, correlation, and most everything else, in terms of the object model. Writing queries with the ORM uses the same techniques of relational composition you use when writing SQL. While you can drop into literal SQL at any time, it's virtually never needed.
  • A comprehensive and flexible system of eager loading for related collections and objects. Collections are cached within a session, and can be loaded on individual access, all at once using joins, or by query per collection across the full result set.
  • A Core SQL construction system and DBAPI interaction layer. The SQLAlchemy Core is separate from the ORM and is a full database abstraction layer in its own right, and includes an extensible Python-based SQL expression language, schema metadata, connection pooling, type coercion, and custom types.
  • All primary and foreign key constraints are assumed to be composite and natural. Surrogate integer primary keys are of course still the norm, but SQLAlchemy never assumes or hardcodes to this model.
  • Database introspection and generation. Database schemas can be "reflected" in one step into Python structures representing database metadata; those same structures can then generate CREATE statements right back out - all within the Core, independent of the ORM.
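The reflection round trip described above can be sketched in a few lines; this is a minimal example of the author's words using an in-memory SQLite database (the table name `user_account` is illustrative):

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
with engine.begin() as conn:
    conn.exec_driver_sql(
        "CREATE TABLE user_account (id INTEGER PRIMARY KEY, name VARCHAR)"
    )

# reflect the existing database schema into Python metadata structures
meta = sa.MetaData()
meta.reflect(bind=engine)
user = meta.tables["user_account"]
print([c.name for c in user.columns])  # ['id', 'name']

# the same structures can generate CREATE statements right back out
ddl = str(sa.schema.CreateTable(user))
```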

SQLAlchemy's philosophy:

  • SQL databases behave less and less like object collections the more size and performance start to matter; object collections behave less and less like tables and rows the more abstraction starts to matter. SQLAlchemy aims to accommodate both of these principles.
  • An ORM doesn't need to hide the "R". A relational database provides rich, set-based functionality that should be fully exposed. SQLAlchemy's ORM provides an open-ended set of patterns that allow a developer to construct a custom mediation layer between a domain model and a relational schema, turning the so-called "object relational impedance" issue into a distant memory.
  • The developer, in all cases, makes all decisions regarding the design, structure, and naming conventions of both the object model as well as the relational schema. SQLAlchemy only provides the means to automate the execution of these decisions.
  • With SQLAlchemy, there's no such thing as "the ORM generated a bad query" - you retain full control over the structure of queries, including how joins are organized, how subqueries and correlation are used, and what columns are requested. Everything SQLAlchemy does is ultimately the result of a developer-initiated decision.
  • Don't use an ORM if the problem doesn't need one. SQLAlchemy consists of a Core and separate ORM component. The Core offers a full SQL expression language that allows Pythonic construction of SQL constructs that render directly to SQL strings for a target database, returning result sets that are essentially enhanced DBAPI cursors.
  • Transactions should be the norm. With SQLAlchemy's ORM, nothing goes to permanent storage until commit() is called. SQLAlchemy encourages applications to create a consistent means of delineating the start and end of a series of operations.
  • Never render a literal value in a SQL statement. Bound parameters are used to the greatest degree possible, allowing query optimizers to cache query plans effectively and making SQL injection attacks a non-issue.
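The last point can be observed directly by compiling a Core statement; a small sketch (table and values are illustrative):

```python
import sqlalchemy as sa

meta = sa.MetaData()
user = sa.Table(
    "user_account", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String),
)

stmt = sa.select(user).where(user.c.name == "ed")
compiled = stmt.compile()

# the literal "ed" never appears in the SQL string; it is carried
# separately as a bound parameter
print(str(compiled))    # ... WHERE user_account.name = :name_1
print(compiled.params)  # {'name_1': 'ed'}
```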

Documentation

Latest documentation is at:

http://www.sqlalchemy.org/docs/

Installation / Requirements

Full documentation for installation is at Installation.

Getting Help / Development / Bug reporting

Please refer to the SQLAlchemy Community Guide.

Code of Conduct

Above all, SQLAlchemy places great emphasis on polite, thoughtful, and constructive communication between users and developers. Please see our current Code of Conduct at Code of Conduct.

License

SQLAlchemy is distributed under the MIT license.

Comments
  • Support for psycopg3

    Describe the use case

    SQLAlchemy will need to add support for psycopg3 at some point.

    From the article here, it seems that its behaviour will be similar to asyncpg with respect to execution, so it may require adding casts to the queries and/or rewriting some cases. I think care must be taken to avoid cases similar to https://github.com/sqlalchemy/sqlalchemy/issues/6833

    I'm not sure if it still supports execution without server-side parameterized queries.

    We may also need to decide what to call the dialect, since it seems that the project is called psycopg3 but the package is imported as psycopg.

    Databases / Backends / Drivers targeted

    PostgreSQL

    Example Use

    Not relevant

    Additional context

    Current wip patch: https://github.com/sqlalchemy/sqlalchemy/issues/6842#issuecomment-919531590

    postgresql use case dialects public api 
    opened by CaselIT 120
  • Query builder does not quote mixed case table and column names in Postgresql

    Migrated issue, originally created by Anonymous

    (original reporter: creiht) In Postgresql if your table has mixed case name, you have to quote them in the query.

    An example might be:

    select * from "Ticket" where "Ticket"."SpecialValue" = 12;
    

    SQLAlchemy currently does not quote the names, so the generated query would look like:

    select * from Ticket where Ticket.SpecialValue = 12;
    

    and Postgres will turn that into:

    select * from ticket where ticket.specialvalue = 12;
    

    I would expect that a similar problem could happen with other engines, but I don't know the mixed case naming conventions of the other databases.
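For reference, modern SQLAlchemy quotes mixed-case identifiers automatically; a quick check against the PostgreSQL dialect, requiring no database connection:

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

meta = sa.MetaData()
ticket = sa.Table("Ticket", meta, sa.Column("SpecialValue", sa.Integer))

stmt = sa.select(ticket).where(ticket.c.SpecialValue == 12)
# mixed-case names come out quoted for Postgres
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)  # SELECT "Ticket"."SpecialValue" FROM "Ticket" WHERE ...
```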


    Attachments: require_quotes1.diff | delimited_identifiers2.diff | firebird_quoted_reflection1.diff | firebird_quoted_reflection3.diff | ansisql.py.diff | postgres.py.diff | delimited_identifiers.diff | delimited_identifiers2.2.diff | delimited_identifiers3.diff | updated.diff | firebird_quoted_reflection2.diff | case_and_reflection2.diff | postgres.py.2.diff | case_and_reflection1.diff | delim_id_firebird.diff

    bug sql 
    opened by sqlalchemy-bot 77
  • Dictionary changed size during iteration during lambda generation

    Describe the bug

    Thank you for reading this, and thank you for sqlalchemy.

    While tracking this issue https://github.com/home-assistant/core/issues/72991 we received an exception in the lambda generation code.

    This appears to be a thread-safety issue, as the logs the user provided (attached) show multiple threads around the same area of the code.

    One of the side effects is that the bindparams appear to get confused with those of other queries that happen to be running at the same time.

    To Reproduce

    Reproduction is currently not possible on demand. As https://github.com/home-assistant/core/issues/72991 progresses we may end up with a small reproducer.
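For context, the lambda statement API involved can be sketched like this in single-threaded use (the table is illustrative; the bug above concerns concurrent access to the lambda analysis cache, which this does not reproduce):

```python
import sqlalchemy as sa
from sqlalchemy import lambda_stmt

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
states = sa.Table(
    "states", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("entity_id", sa.String),
)
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(
        states.insert(),
        [{"entity_id": "sensor.a"}, {"entity_id": "sensor.b"}],
    )

def lookup(entity):
    # build a cached lambda statement, then add criteria with +=,
    # the same pattern as in the traceback above; the closed-over
    # `entity` value is tracked as a bound parameter
    stmt = lambda_stmt(lambda: sa.select(states))
    stmt += lambda s: s.where(states.c.entity_id == entity)
    with engine.connect() as conn:
        return conn.execute(stmt).all()

rows = lookup("sensor.a")
print(len(rows))  # 1
```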

    Error

    Traceback (most recent call last):
      File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 515, in async_update_ha_state
        await self.async_device_update()
      File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 743, in async_device_update
        raise exc
      File "/config/custom_components/average/sensor.py", line 301, in async_update
        await self._async_update_state()
      File "/config/custom_components/average/sensor.py", line 479, in _async_update_state
        history_list = await get_instance(self.hass).async_add_executor_job(
      File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 58, in run
        result = self.fn(*self.args, **self.kwargs)
      File "/usr/src/homeassistant/homeassistant/components/recorder/history.py", line 399, in state_changes_during_period
        _sorted_states_to_dict(
      File "/usr/src/homeassistant/homeassistant/components/recorder/history.py", line 668, in _sorted_states_to_dict
        for row in _get_rows_with_session(
      File "/usr/src/homeassistant/homeassistant/components/recorder/history.py", line 568, in _get_rows_with_session
        _get_single_entity_states_stmt(
      File "/usr/src/homeassistant/homeassistant/components/recorder/history.py", line 614, in _get_single_entity_states_stmt
        stmt += lambda q: q.outerjoin(
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 451, in __add__
        return self.add_criteria(other)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 503, in add_criteria
        return LinkedLambdaElement(other, parent_lambda=self, opts=opts)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 593, in __init__
        self._retrieve_tracker_rec(fn, self, opts)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 178, in _retrieve_tracker_rec
        tracker = AnalyzedCode.get(
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 617, in get
        cls._fns[fn.__code__] = analyzed = AnalyzedCode(
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 659, in __init__
        self._setup_additional_closure_trackers(fn, lambda_element, opts)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 727, in _setup_additional_closure_trackers
        analyzed_function = AnalyzedFunction(
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 989, in __init__
        self._instrument_and_run_function(lambda_element)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 1003, in _instrument_and_run_function
        self.expr = lambda_element._invoke_user_fn(tracker_instrumented_fn)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 597, in _invoke_user_fn
        return fn(self.parent_lambda._resolved)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 1113, in __get__
        obj.__dict__[self.__name__] = result = self.fget(obj)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 327, in _resolved
        expr = self._setup_binds_for_tracked_expr(expr)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/lambdas.py", line 308, in _setup_binds_for_tracked_expr
        expr = visitors.replacement_traverse(expr, {}, replace)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/visitors.py", line 848, in replacement_traverse
        obj = clone(
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/visitors.py", line 844, in clone
        newelem._copy_internals(clone=clone, **kw)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py", line 5747, in _copy_internals
        super(Select, self)._copy_internals(
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/traversals.py", line 826, in _copy_internals
        result = meth(attrname, self, obj, **kw)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/traversals.py", line 838, in visit_clauseelement
        return clone(element, **kw)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/visitors.py", line 843, in clone
        cloned[id_elem] = newelem = elem._clone(**kw)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 1634, in _clone
        c = ClauseElement._clone(self, **kw)
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 244, in _clone
        c.__dict__ = {k: v for k, v in self.__dict__.items() if k not in skip}
      File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 244, in <dictcomp>
        c.__dict__ = {k: v for k, v in self.__dict__.items() if k not in skip}
    RuntimeError: dictionary changed size during iteration
    

    The actual code (with all the debug logging) is here https://github.com/home-assistant/core/blob/9d08bd2ce24c003d1b2df60757336e86c2bb32df/homeassistant/components/recorder/history.py#L609

    Versions

    • OS: Linux RPI 32bit
    • Python: 3.9.9
    • SQLAlchemy: 1.4.37
    • Database: MariaDB
    • DBAPI (eg: psycopg, cx_oracle, mysqlclient): mysqlclient

    Additional context

    User provided logs https://www.dropbox.com/s/ey4izab3jjde3l3/sqlalchemy_logs_02.7z?dl=0

    bug blocker lambda sql next release 
    opened by bdraco 74
  • None returned when accessing backref with non nullable foreign key, after upgrading to 1.4

    Describe the bug

    After upgrading to 1.4, some existing backrefs started returning None when accessed, for example: assert badge.user is not None. The user being None is impossible, because the foreign key for that relationship is not nullable. This must be a new bug or regression in 1.4.

    Expected behavior

    Accessing a backref where the foreign key user_id isn't nullable should always return a user row, never None.

    To Reproduce

    class User(Model):
        badges = relationship('Badge', lazy='dynamic', backref='user')
        ...
    
    class Badge(Model):
        user_id = Column(UUID(), ForeignKey('user.id'), nullable=False)
        ...
    
    user = User()
    session.add(user)
    session.flush()
    session.add(Badge(user_id=user.id))
    session.commit()
    
    badge = Badge.query.filter_by(user_id=user.id).first()
    assert badge.user is not None
    

    Error

    The error doesn't occur every time. I'm still looking for a pattern. The same code accessing the same backref sometimes incorrectly returns None and sometimes returns the correct row.

    Versions

    • OS: Ubuntu
    • Python: 3.6.7
    • SQLAlchemy: 1.4.0
    • Database: Postgres
    • DBAPI: psycopg2==2.8.6
    bug orm regression 
    opened by alanhamlett 69
  • postgresql / sqlite / mysql / oracle regular expression operators

    Migrated issue, originally created by Anonymous

    match operator not implemented

    in sqlalchemy/databases/postgres.py line 727, just add:

    sql_operators.match_op: lambda x, y, escape=None: '%s ~ %s' % (x, y) + (escape and " ESCAPE '%s'" % escape or ''),

    and it works for me. CAVEAT: the SQL might fail if improper regexp syntax is entered.

    • missing imatch / notmatch ....

    (these matching functions are still used with same syntax) http://www.postgresql.org/docs/7.4/interactive/functions-matching.html


    Attachments: 1390.patch


    Edit 2020-07-10

    Copying this recap from https://github.com/sqlalchemy/sqlalchemy/issues/5447#issue-654578670

    The call column.regex_match('[a-z]*') would then evaluate to:

    • Postgres: column ~ "[a-z]*"
    • SQLite: column REGEXP "[a-z]*"
    • MySQL, MariaDB: column REGEXP "[a-z]*"
    • Oracle: REGEXP_LIKE(column, "[a-z]*")
    • MSSQL: throw exception
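This was eventually implemented in SQLAlchemy 1.4 as ColumnOperators.regexp_match(); the per-dialect rendering can be checked without a connection:

```python
import sqlalchemy as sa
from sqlalchemy.dialects import mysql, postgresql, sqlite

col = sa.column("col")
expr = col.regexp_match("[a-z]*")

# each dialect renders its own regexp operator
pg = str(expr.compile(dialect=postgresql.dialect()))  # uses ~
my = str(expr.compile(dialect=mysql.dialect()))       # uses REGEXP
lite = str(expr.compile(dialect=sqlite.dialect()))    # uses REGEXP
print(pg, my, lite, sep="\n")
```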
    postgresql feature mysql sqlite oracle mariadb 
    opened by sqlalchemy-bot 59
  • Support for generated columns in postgres and mysql

    Add support for specifying a column as generated by the database with GENERATED ALWAYS

    Currently postgres and mysql support it (the list may not be complete) https://www.postgresql.org/docs/12/ddl-generated-columns.html https://dev.mysql.com/doc/refman/8.0/en/create-table-generated-columns.html

    I've tried to search the documentation and there does not seem to be a way to specify a generated column in sqlalchemy. I've found a stack overflow answer https://stackoverflow.com/questions/57596603/sqlalchemy-mysql-column-ddl-with-generated-always-as but it does not seem to be the cleanest way to deal with it.

    Maybe a Computed construct used in the column definition would be appropriate? Something like:

    sa.Table('foo', meta,
      sa.Column('quantity', sa.Integer, nullable=False),
      sa.Column('price', sa.Float, nullable=False),
      sa.Column('revenue', sa.Float, Computed('quantity * price', always=True))
    )
    

    edit: updated example following https://github.com/sqlalchemy/sqlalchemy/issues/4894#issuecomment-539050154

    Current support and syntax seems to be:

    • Postgres: supports only stored: GENERATED ALWAYS AS ( generation_expr ) STORED https://www.postgresql.org/docs/12/ddl-generated-columns.html https://www.postgresql.org/docs/12/sql-createtable.html

    • mysql / mariadb: supports both stored and virtual: col_name data_type [GENERATED ALWAYS] AS (expr) [VIRTUAL | STORED] (MariaDB also supports PERSISTENT, but it is an alias for STORED) https://dev.mysql.com/doc/refman/5.7/en/create-table-generated-columns.html https://mariadb.com/kb/en/library/generated-columns/

    • mssql: supports both stored and virtual: <computed_column_definition> ::= column_name AS computed_column_expression [ PERSISTED [ NOT NULL ] ] https://docs.microsoft.com/en-us/sql/t-sql/statements/create-table-transact-sql?view=sql-server-2017

    • oracle: supports only virtual: column_name [datatype] [GENERATED ALWAYS] AS (expression) [VIRTUAL] https://oracle-base.com/articles/11g/virtual-columns-11gr1
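This request was subsequently implemented as the Computed construct (SQLAlchemy 1.3.11+); a sketch of the DDL it emits for PostgreSQL, compiled without a connection:

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

meta = sa.MetaData()
foo = sa.Table(
    "foo", meta,
    sa.Column("quantity", sa.Integer, nullable=False),
    sa.Column("price", sa.Float, nullable=False),
    sa.Column("revenue", sa.Float, sa.Computed("quantity * price")),
)

# revenue renders as ... GENERATED ALWAYS AS (quantity * price) STORED
ddl = str(CreateTable(foo).compile(dialect=postgresql.dialect()))
print(ddl)
```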

    high priority postgresql schema mysql PRs welcome use case 
    opened by CaselIT 57
  • Async connections not returned to pool if task is cancelled

    Describe the bug

    The AsyncConnection context manager does not reliably close the connection when exiting due to a cancellation. This results in resources leaking, which is clearly surfaced by the warning during garbage collection. Garbage collection can't perform the cleanup itself, but tells us that something was missed earlier.

    To Reproduce

    import anyio
    from sqlalchemy.ext.asyncio import create_async_engine
    from sqlalchemy.sql import text
    
    async def main():
        engine = create_async_engine(
            "postgresql+asyncpg://username:[email protected]/db_name",
            echo=True,
            connect_args={"timeout": 5}
            )
    
        async def sleep_in_db():
            print(engine.pool.status()) # 0 connections
            async with engine.connect() as connection:
                print(engine.pool.status()) # 1 connections
                while True:
                    await connection.execute(text("select now();"))
                    await anyio.sleep(1)
    
        async with anyio.create_task_group() as db_tasks:
            db_tasks.start_soon(sleep_in_db)
    
            await anyio.sleep(2.5)
    
            db_tasks.cancel_scope.cancel() # Will raise a CancelledError under the hood
    
    
        print(engine.pool.status()) # 1 connection????
    
        await engine.dispose()
    
    anyio.run(main)
    

    Error

    The garbage collector is trying to clean up connection <AdaptedConnection <asyncpg.connection.Connection object at 0x7f1cf4760660>>. This feature is unsupported on async dbapi, since no IO can be performed at this stage to reset the connection. Please close out all connections when they are no longer used, calling ``close()`` or using a context manager to manage their lifetime.
    sys:1: SAWarning: The garbage collector is trying to clean up connection <AdaptedConnection <asyncpg.connection.Connection object at 0x7f1cf4760660>>. This feature is unsupported on async dbapi, since no IO can be performed at this stage to reset the connection. Please close out all connections when they are no longer used, calling ``close()`` or using a context manager to manage their lifetime.
    

    Versions

    • OS: Ubuntu on WSL2 in Windows 10
    • Python: 3.10.4
    • SQLAlchemy: 1.4.37
    • Database: Postgres 14
    • DBAPI (eg: psycopg, cx_oracle, mysqlclient): asyncpg 0.25.0

    Additional context

    I think the problem is in how cancellation works: the event loop (asyncio in this case) raises CancelledError at an await point, so if any await is needed while actually processing a cancellation (e.g. to clean up), that await needs to be "shielded" from the cancellation being processed. We can see that AsyncConnection's __aexit__() tries to await self.close(), but it's unable to (self.close() never runs) because of the cancellation: https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/ext/asyncio/engine.py#L700

        async def __aexit__(self, type_: Any, value: Any, traceback: Any) -> None:
            await self.close()
    

    Here's a good description of this concept in the trio docs: https://trio.readthedocs.io/en/stable/reference-core.html?highlight=shield#cancellation-semantics (it takes a bit to get to the relevant part, but read the whole block).

    I don't know asyncio well enough to code up a robust solution but I believe these are the relevant docs: https://docs.python.org/3/library/asyncio-task.html#shielding-from-cancellation
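A stdlib-only sketch of the shielding idea (not SQLAlchemy code; the cleanup is simulated): the cleanup coroutine is wrapped in asyncio.shield() so a cancellation delivered to the task cannot abort it mid-flight.

```python
import asyncio

closed = []

async def cleanup():
    # simulated async cleanup, e.g. returning a connection to the pool
    await asyncio.sleep(0)
    closed.append(True)

async def worker():
    try:
        await asyncio.sleep(10)  # long-running work; cancelled here
    finally:
        # shield() protects cleanup() from a cancellation arriving
        # while it is awaited; without it, the cleanup could be
        # aborted and the resource would leak
        await asyncio.shield(cleanup())

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.01)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return bool(closed)

ok = asyncio.run(main())
print(ok)  # True
```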

    Using anyio to achieve trio-like structured concurrency on top of asyncio, I am able to patch this __aexit__() method so that the connection is properly closed. I'm not sure if you'd consider using anyio in sqlalchemy. It could be used in just this one place, throughout the codebase in order to add compatibility with trio, or piecemeal to get the advantages of structured concurrency without a big refactor. Here are the anyio docs for shielding: https://anyio.readthedocs.io/en/stable/cancellation.html?highlight=shield#shielding

    import anyio
    
    class PatchedAsyncConnection(AsyncConnection):
        async def __aexit__(self, type_, value, traceback):
            with anyio.open_cancel_scope(shield=True):
                await self.close()
    
    third party integration issues asyncio next release 
    opened by camlee 55
  • [BUG] WeakSequence circular reference

    WeakSequence creates an unnecessary circular reference, requiring the gc module to clean it up.

    Python: CPython 2.7.15

    Code to reproduce issue:

    import gc
    gc.disable()
    from sqlalchemy.util import WeakSequence
    
    gc.collect()
    gc.set_debug(gc.DEBUG_SAVEALL)
    
    
    class X(object):
        pass
    
    
    w = WeakSequence([X])
    del w
    
    gc.collect()
    print(gc.garbage)
    

    Expected result:

    []
    

    Actual result:

    [<sqlalchemy.util._collections.WeakSequence object at 0x7f7221b30bd0>, [<weakref at 0x7f721f853208; to 'type' at 0x5602f6ab5410 (X)>], <bound method WeakSequence._remove of <sqlalchemy.util._collections.WeakSequence object at 0x7f7221b30bd0>>, <weakref at 0x7f721f853208; to 'type' at 0x5602f6ab5410 (X)>, {'_storage': [<weakref at 0x7f721f853208; to 'type' at 0x5602f6ab5410 (X)>]}]
    

    Visualization using objgraph: (attached image)

    Circular references are not good for CPython's refcounting garbage collector. Since the class is called "WeakSequence", it is expected that refcounting alone can clean up the garbage, without the gc module.

    A fix is coming up. I suppose fixing this would be good for performance and memory.
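The cycle and the usual fix can be demonstrated with the stdlib alone (the class names here are illustrative, not SQLAlchemy's actual patch):

```python
import gc
import weakref

class Target:
    pass  # something we can hold weak references to

class Leaky:
    # the weakref callback is a bound method, so each weakref holds a
    # strong reference back to self: self -> _storage -> weakref ->
    # bound method -> self, a cycle only the gc module can break
    def __init__(self, items):
        self._storage = [weakref.ref(o, self._remove) for o in items]

    def _remove(self, ref):
        self._storage.remove(ref)

class CycleFree:
    # the callback closes over a weakref to self instead, so plain
    # reference counting can reclaim the sequence
    def __init__(self, items):
        def _remove(ref, selfref=weakref.ref(self)):
            obj = selfref()
            if obj is not None:
                obj._storage.remove(ref)
        self._storage = [weakref.ref(o, _remove) for o in items]

def needs_gc(cls):
    # True if collecting an instance of cls required the cyclic collector
    gc.disable()
    gc.collect()
    gc.set_debug(gc.DEBUG_SAVEALL)
    keep = Target()
    w = cls([keep])
    del w
    found = gc.collect()
    gc.set_debug(0)
    gc.garbage.clear()
    gc.enable()
    return found > 0

print(needs_gc(Leaky))      # True
print(needs_gc(CycleFree))  # False
```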

    bug orm performance 
    opened by carsonip 53
  • Unable to use transactions with Azure Synapse Analytics

    Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/8223

    Originally posted by ww2406 July 7, 2022

    Describe the bug

    Hello!

    Problem: Unable to use transactions with Azure Synapse Analytics

    Description: ODBC Error 111214 is returned: An attempt to complete a transaction has failed. No corresponding transaction found.

    Other notes: Transactions function properly when using pyodbc directly [see MCVE]. I saw this post by gordthompson in response to a question on StackOverflow (since added to the documentation) stating that

    Azure SQL Data Warehouse does not support transactions

    but this is outdated. Azure Synapse Analytics does support transactions in most cases; see this Microsoft documentation:

    As you would expect, SQL pool supports transactions as part of the data warehouse workload. However, to ensure SQL pool is maintained at scale, some features are limited when compared to SQL Server. This article highlights the differences.

    To Reproduce

    import pyodbc 
    import sqlalchemy as sa 
    import os 
    
    p_con_string = os.getenv("synapse_conn_pyodbc")
    sa_eng_string = os.getenv("synapse_conn_sa")
    
    p_con = pyodbc.connect(p_con_string)
    engine = sa.create_engine(sa_eng_string)
    
    p_cur = p_con.cursor()
    p_cur.execute("INSERT INTO test VALUES ('test')")
    p_con.commit()
    p_con.close()
    # No error
    
    with engine.begin() as s_con:
        stmt = sa.text("INSERT INTO test VALUES ('test')")
        s_con.execute(stmt)
    
    # Traceback (most recent call last):
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3280, in _wrap_pool_connect
    #     return fn()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 310, in connect
    #     return _ConnectionFairy._checkout(self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 868, in _checkout
    #     fairy = _ConnectionRecord.checkout(pool)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 476, in checkout
    #     rec = pool._do_get()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\impl.py", line 145, in _do_get
    #     with util.safe_reraise():
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\langhelpers.py", line 70, in __exit__
    #     compat.raise_(
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\compat.py", line 208, in raise_
    #     raise exception
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\impl.py", line 143, in _do_get
    #     return self._create_connection()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 256, in _create_connection
    #     return _ConnectionRecord(self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 371, in __init__
    #     self.__connect()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 679, in __connect
    #     )._exec_w_sync_on_first_run(self.dbapi_connection, self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\event\attr.py", line 329, in _exec_w_sync_on_first_run
    #     self(*args, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\event\attr.py", line 343, in __call__
    #     fn(*args, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\langhelpers.py", line 1695, in go
    #     return once_fn(*arg, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\create.py", line 694, in first_connect
    #     dialect.do_rollback(c.connection)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\default.py", line 682, in do_rollback
    #     dbapi_connection.rollback()
    # pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]111214;An attempt to complete a transaction has failed. No corresponding transaction found. (111214) (SQLEndTran)')
    
    # The above exception was the direct cause of the following exception:
    
    # Traceback (most recent call last):
    #   File "synapse_issue.py", line 15, in <module>
    #     with engine.begin() as s_con:
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3062, in begin
    #     conn = self.connect(close_with_result=close_with_result)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3234, in connect
    #     return self._connection_cls(self, close_with_result=close_with_result)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 96, in __init__
    #     else engine.raw_connection()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3313, in raw_connection
    #     return self._wrap_pool_connect(self.pool.connect, _connection)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3283, in _wrap_pool_connect
    #     Connection._handle_dbapi_exception_noconnection(
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 2117, in _handle_dbapi_exception_noconnection
    #     util.raise_(
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\compat.py", line 208, in raise_
    #     raise exception
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3280, in _wrap_pool_connect
    #     return fn()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 310, in connect
    #     return _ConnectionFairy._checkout(self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 868, in _checkout
    #     fairy = _ConnectionRecord.checkout(pool)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 476, in checkout
    #     rec = pool._do_get()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\impl.py", line 145, in _do_get
    #     with util.safe_reraise():
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\langhelpers.py", line 70, in __exit__
    #     compat.raise_(
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\compat.py", line 208, in raise_
    #     raise exception
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\impl.py", line 143, in _do_get
    #     return self._create_connection()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 256, in _create_connection
    #     return _ConnectionRecord(self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 371, in __init__
    #     self.__connect()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 679, in __connect
    #     )._exec_w_sync_on_first_run(self.dbapi_connection, self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\event\attr.py", line 329, in _exec_w_sync_on_first_run
    #     self(*args, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\event\attr.py", line 343, in __call__
    #     fn(*args, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\langhelpers.py", line 1695, in go
    #     return once_fn(*arg, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\create.py", line 694, in first_connect
    #     dialect.do_rollback(c.connection)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\default.py", line 682, in do_rollback
    #     dbapi_connection.rollback()
    # sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]111214;An attempt to complete a transaction has failed. No corresponding transaction found. (111214) (SQLEndTran)')
    # (Background on this error at: https://sqlalche.me/e/14/f405)
    

    Error

    # Traceback (most recent call last):
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3280, in _wrap_pool_connect
    #     return fn()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 310, in connect
    #     return _ConnectionFairy._checkout(self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 868, in _checkout
    #     fairy = _ConnectionRecord.checkout(pool)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 476, in checkout
    #     rec = pool._do_get()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\impl.py", line 145, in _do_get
    #     with util.safe_reraise():
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\langhelpers.py", line 70, in __exit__
    #     compat.raise_(
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\compat.py", line 208, in raise_
    #     raise exception
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\impl.py", line 143, in _do_get
    #     return self._create_connection()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 256, in _create_connection
    #     return _ConnectionRecord(self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 371, in __init__
    #     self.__connect()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 679, in __connect
    #     )._exec_w_sync_on_first_run(self.dbapi_connection, self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\event\attr.py", line 329, in _exec_w_sync_on_first_run
    #     self(*args, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\event\attr.py", line 343, in __call__
    #     fn(*args, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\langhelpers.py", line 1695, in go
    #     return once_fn(*arg, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\create.py", line 694, in first_connect
    #     dialect.do_rollback(c.connection)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\default.py", line 682, in do_rollback
    #     dbapi_connection.rollback()
    # pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]111214;An attempt to complete a transaction has failed. No corresponding transaction found. (111214) (SQLEndTran)')
    
    # The above exception was the direct cause of the following exception:
    
    # Traceback (most recent call last):
    #   File "synapse_issue.py", line 15, in <module>
    #     with engine.begin() as s_con:
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3062, in begin
    #     conn = self.connect(close_with_result=close_with_result)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3234, in connect
    #     return self._connection_cls(self, close_with_result=close_with_result)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 96, in __init__
    #     else engine.raw_connection()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3313, in raw_connection
    #     return self._wrap_pool_connect(self.pool.connect, _connection)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3283, in _wrap_pool_connect
    #     Connection._handle_dbapi_exception_noconnection(
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 2117, in _handle_dbapi_exception_noconnection
    #     util.raise_(
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\compat.py", line 208, in raise_
    #     raise exception
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\base.py", line 3280, in _wrap_pool_connect
    #     return fn()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 310, in connect
    #     return _ConnectionFairy._checkout(self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 868, in _checkout
    #     fairy = _ConnectionRecord.checkout(pool)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 476, in checkout
    #     rec = pool._do_get()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\impl.py", line 145, in _do_get
    #     with util.safe_reraise():
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\langhelpers.py", line 70, in __exit__
    #     compat.raise_(
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\compat.py", line 208, in raise_
    #     raise exception
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\impl.py", line 143, in _do_get
    #     return self._create_connection()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 256, in _create_connection
    #     return _ConnectionRecord(self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 371, in __init__
    #     self.__connect()
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\pool\base.py", line 679, in __connect
    #     )._exec_w_sync_on_first_run(self.dbapi_connection, self)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\event\attr.py", line 329, in _exec_w_sync_on_first_run
    #     self(*args, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\event\attr.py", line 343, in __call__
    #     fn(*args, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\util\langhelpers.py", line 1695, in go
    #     return once_fn(*arg, **kw)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\create.py", line 694, in first_connect
    #     dialect.do_rollback(c.connection)
    #   File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\sqlalchemy\engine\default.py", line 682, in do_rollback
    #     dbapi_connection.rollback()
    # sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]111214;An attempt to complete a transaction has failed. No corresponding transaction found. (111214) (SQLEndTran)')
    # (Background on this error at: https://sqlalche.me/e/14/f405)
    

    Versions

    • OS: Windows 10
    • Python: 3.10
    • SQLAlchemy: 1.4.37
    • Database: Azure Synapse Analytics Dedicated Pool
    • DBAPI (eg: psycopg, cx_oracle, mysqlclient): pyodbc

    Additional context

    No response

    SQL Server external driver issues 
    opened by gordthompson 50
  • Support comments on SQL Server

    Support comments on SQL Server

    Describe the use case

    It appears that support for comments was implemented in https://github.com/sqlalchemy/sqlalchemy/commit/fadb8d61babb76ef7bdbc98279096a8900c7328d; however, it wasn't implemented for SQL Server - https://github.com/sqlalchemy/sqlalchemy/issues/1546#issuecomment-441906255

     In[14]: engine.dialect
    Out[14]: <sqlalchemy.dialects.mssql.pyodbc.MSDialect_pyodbc at 0x1db4d5289c8>
    
     In[15]: engine.dialect.supports_comments
    Out[15]: False
    

    It would be great if comments were also supported for SQL Server.

    Databases / Backends / Drivers targeted

    MSSQL

    Example Use

    NA

    Additional context

    Just opening an issue to document and track this feature request as it took me a while to understand why the comments weren't working on my Azure SQL Database.
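
    On SQL Server, table and column comments are conventionally stored as extended properties via the sp_addextendedproperty stored procedure, which is what dialect-level comment support would need to emit. As a rough illustration only (the helper name and argument layout below are my own, not SQLAlchemy API), the generated T-SQL might look like:

```python
from typing import Optional


def comment_ddl(schema: str, table: str, comment: str,
                column: Optional[str] = None) -> str:
    """Build T-SQL attaching a comment as an MS_Description extended property."""

    def q(s: str) -> str:
        # escape embedded single quotes for a T-SQL N'...' string literal
        return "N'" + s.replace("'", "''") + "'"

    args = [
        "@name = N'MS_Description'",
        "@value = " + q(comment),
        "@level0type = N'SCHEMA'",
        "@level0name = " + q(schema),
        "@level1type = N'TABLE'",
        "@level1name = " + q(table),
    ]
    if column is not None:
        # column comments add a third level to the property path
        args += ["@level2type = N'COLUMN'", "@level2name = " + q(column)]
    return "EXEC sp_addextendedproperty " + ", ".join(args)


print(comment_ddl("dbo", "users", "primary user table"))
```

    Until the dialect supports this natively, such statements can be issued manually after table creation, e.g. via a DDL event listener.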

    SQL Server fairly easy use case 
    opened by dhirschfeld 50
  • Lost params when using `join` with `select_from`

    Lost params when using `join` with `select_from`

    Describe the bug

    The params set via .params() are lost when executing a select whose select_from() target uses a join

    To Reproduce

    import sqlalchemy as sa
    from sqlalchemy import event
    
    engine = sa.create_engine("sqlite://", echo=True)
    meta = sa.MetaData()
    
    X = sa.Table("x", meta, sa.Column("a", sa.Integer), sa.Column("b", sa.Integer))
    Y = sa.Table("y", meta, sa.Column("a", sa.Integer), sa.Column("b", sa.Integer))
    meta.create_all(engine)
    
    s1 = sa.select(X.c.a).where(X.c.b == sa.bindparam("xb")).alias("s1")
    jj = sa.select(Y).join(s1, Y.c.a == s1.c.a).where(Y.c.b == sa.bindparam("yb")).alias("s2")
    # jj = sa.select(Y).where(Y.c.b == sa.bindparam("yb")).alias('s2')
    
    params = {"xb": 42, "yb": 33}
    sel = sa.select(Y).select_from(jj).params(params)
    assert sel.compile().params == params
    
    
    @event.listens_for(engine, "before_cursor_execute")
    def go(conn, cursor, statement, parameters, context, executemany):
        assert parameters == tuple(params.values()), (parameters, tuple(params.values()))
    
    
    with engine.connect() as conn:
        conn.execute(sel)
    

    Error

    The above script produces the error:

    .\go.py:26: SAWarning: SELECT statement has a cartesian product between FROM element(s) "s2" and FROM element "y".  Apply join condition(s) between each element to resolve.
      conn.execute(sel)
    Traceback (most recent call last):
      File ".\go.py", line 26, in <module>
        conn.execute(sel)
      File "C:\lib\site-packages\sqlalchemy\engine\base.py", line 1262, in execute
        return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
      File "C:\lib\site-packages\sqlalchemy\sql\elements.py", line 324, in _execute_on_connection
        return connection._execute_clauseelement(
      File "C:\lib\site-packages\sqlalchemy\engine\base.py", line 1451, in _execute_clauseelement
        ret = self._execute_context(
      File "C:\\lib\site-packages\sqlalchemy\engine\base.py", line 1712, in _execute_context
        statement, parameters = fn(
      File "C:\\lib\site-packages\sqlalchemy\engine\events.py", line 128, in wrap_before_cursor_execute
        orig_fn(
      File ".\go.py", line 22, in go
        assert parameters == tuple(params.values()), (parameters, tuple(params.values()))
    AssertionError: ((None, 33), (42, 33))
    

    Using the commented-out jj (the version without the join) does not cause issues
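
    As a possible workaround (not from the report), one can avoid relying on .params() propagation altogether: select from the aliased join directly and supply the values at execution time. A sketch on SQLite, assuming a recent SQLAlchemy where this behaves as shown:

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()

X = sa.Table("x", meta, sa.Column("a", sa.Integer), sa.Column("b", sa.Integer))
Y = sa.Table("y", meta, sa.Column("a", sa.Integer), sa.Column("b", sa.Integer))
meta.create_all(engine)

s1 = sa.select(X.c.a).where(X.c.b == sa.bindparam("xb")).alias("s1")
jj = sa.select(Y).join(s1, Y.c.a == s1.c.a).where(Y.c.b == sa.bindparam("yb")).alias("s2")

# Select from the aliased join itself (instead of select(Y).select_from(jj))
# and bind the values at execute() time rather than baking them in via .params().
sel = sa.select(jj)

with engine.begin() as conn:
    conn.execute(sa.insert(X), [{"a": 1, "b": 42}])
    conn.execute(sa.insert(Y), [{"a": 1, "b": 33}])
    rows = conn.execute(sel, {"xb": 42, "yb": 33}).fetchall()

print(rows)
```

    This also sidesteps the cartesian-product warning, since the outer select no longer repeats "y" as a FROM element.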

    Versions

    • OS: -
    • Python: 3.8
    • SQLAlchemy: master
    • Database: any
    • DBAPI: any

    Additional context

    I think that the cartesian product warning is also wrong:

    C:\folder\go.py:26: SAWarning: SELECT statement has a cartesian product between FROM element(s) "s2" and FROM element "y". Apply join condition(s) between each element to resolve.

    bug sql great mcve 
    opened by CaselIT 49
  • declarative_base() fails to provide editor hints in 2.0.0rc1

    declarative_base() fails to provide editor hints in 2.0.0rc1

    Describe the bug

    This is how the code looks in vscode with sqlalchemy 1.4.46 (screenshot omitted)

    This is how the code looks in vscode with sqlalchemy 2.0.0rc1 (screenshot omitted)

    To Reproduce

    Tried to recreate in an empty file; these three lines give the same effect: "metadata" and "create_all" do not carry type hint info for the editor.
    
    from sqlalchemy.orm import declarative_base
    base = declarative_base()
    base.metadata.create_all()
    

    Error

    "metadata: Any", "create_all: Any" -> this is all vscode recognizes.

    Versions

    • OS: osx
    • Python: tested on 3.10 and 3.11
    • SQLAlchemy: 2.0.0rc1
    • Database: -
    • DBAPI (eg: psycopg, cx_oracle, mysqlclient): -

    Additional context

    No response

    requires triage 
    opened by toinbis 0
  • Wrong dataclass_transform parameter name used for DCTransformDeclarative

    Wrong dataclass_transform parameter name used for DCTransformDeclarative

    Describe the bug

    This is basically the same issue that happened to pydantic here. Previous versions of PEP 681 had dataclass_transform take a parameter called field_descriptors, but this name was changed to field_specifiers. SQLAlchemy still uses the old name for MappedAsDataclass, here.

    To Reproduce

    from typing import reveal_type
    from sqlalchemy.orm import DeclarativeBase, mapped_column, Mapped, MappedAsDataclass
    
    class Model(MappedAsDataclass, DeclarativeBase):
        a: Mapped[int] = mapped_column(default=1)
    
    reveal_type(Model.__init__)
    

    Error

    Type of "Model.__init__" is "(self: Model, a: SQLCoreOperations[int] | int = mapped_column(default=1)) -> None"
    

    Versions

    • Python: 3.11
    • SQLAlchemy: 2.0.0rc1

    Additional context

    The type of the init method should be (self: Model, a: SQLCoreOperations[int] | int = 1) -> None, which it is if the dataclass_transform call on DCTransformDeclarative uses the correct parameter name.
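
    The renamed parameter can be seen in the typing module itself. A small illustration, with a hypothetical model_field specifier standing in for mapped_column(), and a minimal shim for interpreters older than 3.11 (where typing.dataclass_transform does not exist):

```python
import sys
from typing import Any

if sys.version_info >= (3, 11):
    from typing import dataclass_transform
else:
    # Minimal stand-in for typing.dataclass_transform on older interpreters:
    # record the keyword arguments on the decorated object, as the real one does.
    def dataclass_transform(**kwargs):
        def decorate(obj):
            obj.__dataclass_transform__ = kwargs
            return obj
        return decorate


def model_field(*, default: Any = None) -> Any:
    """Hypothetical field specifier, standing in for mapped_column()."""
    return default


@dataclass_transform(field_specifiers=(model_field,))  # PEP 681's final name
class ModelBase:
    pass


print(ModelBase.__dataclass_transform__["field_specifiers"])
```

    Type checkers read the field_specifiers entry to learn which callables produce field defaults; under the obsolete field_descriptors spelling the specifier is ignored, which is why the default leaks into the synthesized __init__ signature.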

    bug orm regression dataclasses typing 
    opened by ImogenBits 3
  • Allow __abstract__ classes to be used as relationship target

    Allow __abstract__ classes to be used as relationship target

    Describe the use case

    See the full discussion here: https://github.com/sqlalchemy/sqlalchemy/discussions/8959

    As an outcome of that discussion, please redefine the __abstract__ attribute and allow it to be set to either:

    • True, or full: a fully abstract class (the existing meaning of the attribute; backwards-compatible, while suggesting in the docs not to use a boolean for it in new code)
    • virtual, or polymorphic: an abstract class that is allowed to be used as a relational target in relationship()

    The actual naming of the flag values is of course changeable at your discretion.

    Setting it to virtual would make the class purely abstract, but allow it to be used on the left side of a relationship() foreign-key relation. The intent of the change is to allow relationships into sub-classes of a polymorphic type, where only a subset of the classes supports a certain foreign key. Here's an example:

    class TeamMixIn:
        team_id: Mapped[int|None] = mapped_column(ForeignKey("teams.id"), init=False, use_existing_column=True, repr=False)
    
        @declared_attr.cascading
        def team(_cls) -> Mapped["Team"]:
            return relationship(back_populates="user", init=False, repr=True)
    
    
    class User(BaseModel):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]
        user_type: Mapped[str] = mapped_column(init=False, repr=False)
    
        __mapper_args__ = {
            "polymorphic_identity": "user",
            "polymorphic_on": "user_type",
        }
    
    class HasTeams(User, TeamMixIn):
       __abstract__ = 'virtual'
    
    class Engineer(HasTeams):
        __mapper_args__ = {
            "polymorphic_identity": "engineer",
        }
    
    class Accounting(HasTeams):
        __mapper_args__ = {
            "polymorphic_identity": "accounting",
        }
    
    class Manager(User):
        __mapper_args__ = {
            "polymorphic_identity": "hr",
        }
    
    
    class Team(BaseModel):
        __tablename__ = "teams"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]
        user: Mapped[HasTeams] = relationship(back_populates="team", init=False)
    

    Now, the user related to a Team can be either an Engineer instance or an Accounting instance, but never a Manager instance.

    Databases / Backends / Drivers targeted

    Purely declarative; no actual database backend

    Example Use

    No changes to existing SQL; relationship() already builds the correct SQL queries

    Additional context

    No response

    orm inheritance use case 
    opened by daemonkeeper 1
  • figure out some way to gracefully handle fks that are placed on

    figure out some way to gracefully handle fks that are placed on "invisible" columns

    Describe the bug

    When trying to reflect metadata for a table that contains a column that is both invisible and part of a foreign key, reflection fails while creating the ForeignKeyConstraint because it can't find the invisible column.

    To Reproduce

    from sqlalchemy import create_engine, MetaData
    
    engine = create_engine('oracle://...')
    
    sql = 'create table table_b (id number(3) primary key)'
    engine.execute(sql)
    
    sql = 'create table table_a (id number(3) primary key, a_col1 number(3))'
    engine.execute(sql)
    
    sql = 'alter table table_a modify (a_col1 invisible)'
    engine.execute(sql)
    
    sql = 'alter table table_a add constraint FK$table_a$a_col1 foreign key(a_col1) references table_b'
    engine.execute(sql)
    
    metadata = MetaData()
    metadata.reflect(bind=engine)  # MetaData.reflect() mutates in place and returns None
    

    Error

    Traceback (most recent call last):
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 3887, in _set_parent
        ColumnCollectionConstraint._set_parent(self, table)
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 3476, in _set_parent
        ColumnCollectionMixin._set_parent(self, table)
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 3433, in _set_parent
        for col in self._col_expressions(table):
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 3427, in _col_expressions
        return [
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 3428, in <listcomp>
        table.c[col] if isinstance(col, util.string_types) else col
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\base.py", line 1214, in __getitem__
        return self._index[key]
    KeyError: 'a_col1'
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "C:\Users\Josef.ondrej\projects\ts-ml-oracle-to-postgres\ts_ml_oracle_to_postgres\example.py", line 27, in <module>
        metadata = metadata.reflect(bind=engine)
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 4901, in reflect
        Table(name, self, **reflect_opts)
      File "<string>", line 2, in __new__
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\util\deprecations.py", line 309, in warned
        return fn(*args, **kwargs)
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 619, in __new__
        metadata._remove_table(name, schema)
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\util\langhelpers.py", line 70, in __exit__
        compat.raise_(
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\util\compat.py", line 211, in raise_
        raise exception
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 614, in __new__
        table._init(name, metadata, *args, **kw)
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 689, in _init
        self._autoload(
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 724, in _autoload
        conn_insp.reflect_table(
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\engine\reflection.py", line 795, in reflect_table
        self._reflect_fk(
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\engine\reflection.py", line 1007, in _reflect_fk
        table.append_constraint(
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 925, in append_constraint
        constraint._set_parent_with_dispatch(self)
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\base.py", line 1047, in _set_parent_with_dispatch
        self._set_parent(parent, **kw)
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\sql\schema.py", line 3889, in _set_parent
        util.raise_(
      File "C:\Users\Josef.ondrej\Anaconda3\envs\ts-ml-oracle-to-postgres\lib\site-packages\sqlalchemy\util\compat.py", line 211, in raise_
        raise exception
    sqlalchemy.exc.ArgumentError: Can't create ForeignKeyConstraint on table 'table_a': no column named 'a_col1' is present.
    
    Process finished with exit code 1
    

    Versions

    • OS: Windows
    • Python: 3.8
    • SQLAlchemy: 1.4.45
    • Database: Oracle
    • DBAPI (eg: psycopg, cx_oracle, mysqlclient): oracledb

    Additional context

    No response

    oracle reflection use case 
    opened by josefondrej 6
  • Unify operator definition in postgresql

    Unify operator definition in postgresql

    In the postgresql dialect there are some operators that are defined multiple times, like @> for the range, array, and hstore types. We could probably clean things up a bit by having them defined only once.
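
    A sketch of what a single shared definition might look like, using SQLAlchemy's public custom_op machinery (illustrative only; this is not how the dialect is laid out internally):

```python
from sqlalchemy import column
from sqlalchemy.sql.operators import custom_op

# One operator object that range, array, and hstore comparators could all share,
# instead of each type module defining its own "@>".
contains = custom_op("@>", precedence=5)

# custom_op instances are callable as binary operators on column expressions.
expr = contains(column("lhs"), column("rhs"))
print(expr)
```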

    postgresql 
    opened by CaselIT 0
  • Type annotations for sqlalchemy.orm.dynamic

    Type annotations for sqlalchemy.orm.dynamic

    Description

    An attempt to annotate lib/sqlalchemy/orm/dynamic.py with type hints (issue https://github.com/sqlalchemy/sqlalchemy/issues/6810)

    Checklist

    This pull request is:

    • [ ] A documentation / typographical error fix
      • Good to go, no issue or tests are needed
    • [ ] A short code fix
      • please include the issue number, and create an issue if none exists, which must include a complete example of the issue. one line code fixes without an issue and demonstration will not be accepted.
      • Please include: Fixes: #<issue number> in the commit message
      • please include tests. one line code fixes without tests will not be accepted.
    • [ ] A new feature implementation
      • please include the issue number, and create an issue if none exists, which must include a complete example of how the feature would look.
      • Please include: Fixes: #<issue number> in the commit message
      • please include tests.
    opened by mlatysh 10