Beanie is an asynchronous Python object-document mapper (ODM) for MongoDB

Overview

Beanie

Beanie is an asynchronous Python object-document mapper (ODM) for MongoDB, based on Motor and Pydantic.

When using Beanie, each database collection has a corresponding Document class that is used to interact with that collection. In addition to retrieving data, Beanie lets you add, update, and delete documents in the collection.

Beanie saves you time by removing boilerplate code, helping you focus on the parts of your app that actually matter.

Data and schema migrations are supported by Beanie out of the box.
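
As an illustration of what a schema migration can look like, here is a minimal sketch in the spirit of the migration tutorial (it assumes the iterative_migration decorator and the Forward/Backward class convention; check the Migrations docs for the exact interface):

from beanie import Document, iterative_migration


class OldNote(Document):
    name: str

    class Settings:
        name = "notes"


class Note(Document):
    title: str

    class Settings:
        name = "notes"


class Forward:
    # Renames the "name" field to "title" for every document in "notes"
    @iterative_migration()
    async def name_to_title(self, input_document: OldNote, output_document: Note):
        output_document.title = input_document.name


class Backward:
    @iterative_migration()
    async def title_to_name(self, input_document: Note, output_document: OldNote):
        output_document.name = input_document.title

A migration like this is then run with the beanie CLI (roughly beanie migrate -uri <connection string> -db <database> -p <path to migrations>; see the Migrations docs for the exact flags).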

Installation

PIP

pip install beanie

Poetry

poetry add beanie

Quick Start

from typing import Optional, List

import motor.motor_asyncio
from beanie import Document, init_beanie
from pydantic import BaseModel


class Tag(BaseModel):
    name: str
    color: str


class Note(Document):
    title: str
    text: Optional[str]
    tag_list: List[Tag] = []


async def main():
    # Create the Motor client
    client = motor.motor_asyncio.AsyncIOMotorClient(
        "mongodb://user:[email protected]:27017"
    )
    
    # Init beanie with the Note document class
    await init_beanie(database=client.db_name, document_models=[Note])

    # Get all the notes
    all_notes = await Note.find_all().to_list()
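
Writes work through the same Document class; here is a minimal sketch that could be appended to main() above (insert, save, and delete are regular Beanie Document methods, the field values are just examples):

    # Create and insert a new note into the collection
    note = Note(
        title="Shopping",
        text="Milk, eggs",
        tag_list=[Tag(name="home", color="green")],
    )
    await note.insert()

    # Change a field and persist the whole document
    note.text = "Milk, eggs, bread"
    await note.save()

    # Delete the document from the collection
    await note.delete()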

Documentation

ODM

  • Tutorial - ODM usage examples
  • API - Full list of the ODM classes and methods with descriptions

Migrations

Example Projects

  • FastAPI Demo - Beanie and FastAPI collaboration demonstration. CRUD and Aggregation.
  • Indexes Demo - Regular and Geo Indexes usage example wrapped to a microservice.

Articles

Resources

  • GitHub - GitHub page of the project
  • Changelog - list of all the valuable changes
  • Discord - ask your questions, share ideas or just say Hello!!

Supported by JetBrains


Comments
  • Error with type checking on sort in PyCharm

    Hi. Using the documented style of sorting, sort(Class.field), results in a warning in PyCharm. Is something off in the type definition? It seems to work fine.

    (screenshot attached)
    opened by mikeckennedy 18
  • Motor 3.0.0 support

    Motor 3.0.0 was released a few days ago: https://www.mongodb.com/community/forums/t/mongodb-motor-3-0-0-released/160708. Do you plan to support the new version in the near future?

    opened by fs86 10
  • [feature] Relations

    If the field type is a Document subclass, then only the id should be stored in that field, and the whole subdocument must be stored in a separate collection.

    Example:

    class Window(Document):
        width: int
        height: int

    class House(Document):
        address: str
        windows: List[Window]
        favorite_window: Window
    

    Problems:

    • fetching/ lazy fetching
    • find by subfield. Example: House.find(House.favorite_window.width == 1)
    • updates of the subdocument. Example: house.set({House.favorite_window.width: 1})
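
    For reference, relations later landed in Beanie as Link fields; a minimal sketch of that API applied to the example above (fetch_links=True resolves links while querying; the exact feature set is covered in the relations docs):

    from typing import List

    from beanie import Document, Link


    class Window(Document):
        width: int
        height: int


    class House(Document):
        address: str
        windows: List[Link[Window]]
        favorite_window: Link[Window]


    async def find_houses(address: str) -> List[House]:
        # fetch_links=True resolves the linked Window documents as part of the query
        return await House.find(House.address == address, fetch_links=True).to_list()
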
    opened by roman-right 10
  • [BUG] All documents get dumped into "Documents" collection since 1.14.0

    Describe the bug Since 1.14.0, all my documents get inserted into one collection named "Documents". All documents also got a _class_id value of .UserEntry.

    To Reproduce

    class UserEntry(Document, ABC):
        id: int
        value: Optional[str] = None
        ...
    
        class Settings:
            name = 'user_data'
            use_state_management = True
    
    user_data = UserEntry(...)
    user_data.value = 'Test'
    await user_data.insert()
    

    Expected behavior The document gets inserted into the "user_data" collection.

    Additional context https://github.com/roman-right/beanie/compare/1.13.1...1.14.0

    opened by Luc1412 9
  • [BUG] Problem in save method

    Bug in the behaviour of the save method. I want to save my document to the database and use it after insert, but I have a problem with this situation:

    This is my code:

    class Child(BaseModel):
        child_field: str


    class Sample(Document):
        field: Dict[str, Child]


    instance1 = Sample(field={"Bar": Child(child_field="Foo")})
    print(instance1)
    await instance1.save()
    print(instance1)
    
    

    Expected behavior

    # first print :
    id=None revision_id=None field={'Bar': Child(child_field='Foo')}
    # second print:
    id=ObjectId('636b9d2997bb72433b944ef4') revision_id=None field={'Bar': Child(child_field='Foo')}
    
    

    But I got this:

    # first print :
    id=None revision_id=None field={'Bar': Child(child_field='Foo')}
    # second print:
    id=ObjectId('636b9d2997bb72433b944ef4') revision_id=None field={'Bar': {'child_field': 'Foo'}}
    
    field={'Bar': {'child_field': 'Foo'}} != field={'Bar': Child(child_field='Foo')}
    

    It's okay when I fetch from the db and my field is filled with the Child model, but when I want to save to the db, this situation happens.

    opened by miladvayani 9
  • Creating Document with Relation Requires All Parent Fields Instead of Just ID

    When used with FastAPI, all parent required fields become child required fields when creating a new document. For example:

    Models

    from beanie import Document, Indexed, Link
    
    
    class Organization(Document):
        slug: Indexed(str, unique=True)
    
        class Settings:
            name = "github-organization"
            use_revision = True
    
    
    class Repository(Document):
        organization: Link[Organization]
        name: Indexed(str, unique=True)
    
        class Settings:
            name = "github-repository"
            use_revision = True
    

    Routers

    from fastapi import APIRouter
    
    from .models import Organization, Repository
    
    router = APIRouter()
    
    
    @router.post(
        "/organization",
        response_description="Add a new GitHub Organization",
        response_model=Organization,
    )
    async def create_organization(organization: Organization):
        await organization.create()
        return organization
    
    
    @router.post(
        "/repository",
        response_description="Add a new GitHub Repository",
        response_model=Repository,
    )
    async def create_repository(repository: Repository):
        await repository.create()
        return repository
    

    I can create an Organization by POSTing the following:

    {
      "slug": "roman-right"
    }
    

    Let's say that generated a Mongo object ID of 62de9ecf2fa3d30007e3b5ce.

    However, I cannot create a Repository with just the following:

    {
      "organization": {
        "id": "62de9ecf2fa3d30007e3b5ce"
      },
      "name": "beanie"
    }
    

    That results in a 422 Unprocessable Entity, with details that the Organization slug is required. I have tried a few variations - id, _id, and also specifying organization as a string (the ID) vs an object (dictionary) - all with the same 422 error and similar messages.

    Specifying any string for slug seems to work. For example:

    {
      "organization": {
        "id": "62de9ecf2fa3d30007e3b5ce",
        "slug": ""
      },
      "name": "beanie"
    }
    

    (Repository document created as expected with link to correct Organization document)

    It would seem to be just a parameter validation issue (required parent fields are required request parameters even though only the id is used).
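
    A common way to sidestep this while it is a validation-only issue (a sketch, not an official fix; RepositoryCreate is a hypothetical input schema) is to accept just the organization id in the request body and resolve the link server-side:

    from beanie import PydanticObjectId
    from fastapi import APIRouter, HTTPException
    from pydantic import BaseModel

    from .models import Organization, Repository

    router = APIRouter()


    class RepositoryCreate(BaseModel):
        # Only the parent's id is required from the client
        organization_id: PydanticObjectId
        name: str


    @router.post("/repository", response_model=Repository)
    async def create_repository(payload: RepositoryCreate):
        organization = await Organization.get(payload.organization_id)
        if organization is None:
            raise HTTPException(status_code=404, detail="Organization not found")
        repository = Repository(organization=organization, name=payload.name)
        await repository.create()
        return repository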

    opened by rgajason 8
  • [Query] Common ORM like functionalities

    There is some common ORM functionality that I don't see in Beanie, listed below:

    1. How to add default fields to the model that are updated whenever records are added, e.g. created and updated fields.
    2. I want certain fields to be set only at insertion and certain fields updated whenever a record is updated, e.g. created must be set only when a record is inserted, while updated must be refreshed on every update.

    Is the above feature already present? If yes, can you point me to the documentation?
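
    If it helps, one way to get created/updated behaviour is Beanie's event-based actions; a minimal sketch (Article and the field names are just examples, and only the Insert and Replace events are used here):

    from datetime import datetime
    from typing import Optional

    from beanie import Document, Insert, Replace, before_event


    class Article(Document):
        title: str
        created: Optional[datetime] = None
        updated: Optional[datetime] = None

        @before_event([Insert])
        def set_created(self) -> None:
            # Runs once, right before the document is first inserted
            self.created = datetime.utcnow()
            self.updated = self.created

        @before_event([Replace])
        def set_updated(self) -> None:
            # Runs right before the document is replaced
            self.updated = datetime.utcnow()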

    opened by sushilkjaiswar 8
  • Support for fetching deep-nested Links

    Hi @roman-right! Our team is trying to get Beanie to fetch deep-nested Links when fetch_links=True. To accomplish this, I added additional queries to the MongoDB aggregation. However, I saw your comments in the open issues related to nested Links, and I'm not sure this solution gets around the challenges you had mentioned. Would super appreciate your feedback - thank you! :-)

    opened by csanders-rga 7
  • [BUG] AWS DocumentDB does not work with 1.14.0 - Not found for _id: ...

    Describe the bug I noticed that since I updated to beanie 1.14.0, my program does not work with AWS DocumentDB anymore. This was not a problem before, and the same code works perfectly with 1.13.1.

    Additionally, the code works perfectly fine with 1.14.0 against the local MongoDB test database in version 5.0.10.

    The error message is not very helpful; the requested resources simply cannot be found (although they are there):

    NotFound '<some_OID>' for '<class 'mongodb.model.user.odm.User'>' 
    not found in database 'User' with id '<some_OID>' not found in database
    

    To verify that the resource is there, I use a tool like NoSQLBooster or Robo3T:

    db.user.find( {"_id" : ObjectId("<some_OID>")}  )
       .projection({})
       .sort({_id:-1})
       .limit(100)
    

    To Reproduce

    # Nothing special, just a simple find command
    result = await model.find_one(model.id == oid)
    

    Expected behavior I expected beanie 1.14.0 to work with AWS DocumentDB the same way as 1.13.1

    Additional context I am glad to provide further information, or I can make some tests against DocumentDB if someone can give me hints what to do.

    opened by micktg 7
  • Error: Cannot insert datetime.date - 'datetime.date' object is not iterable

    So I'm trying to save an entry that has a datetime.date field. I keep getting two errors:

    ValueError: [
      TypeError("'datetime.date' object is not iterable"), 
      TypeError('vars() argument must have __dict__ attribute')
    ]
    

    This is very easy to recreate. Just try to insert this model:

    class TestInsert(beanie.Document):
        name: str
        recorded_date: datetime.date
        items: List[object] = pydantic.Field(default_factory=list)
    
        class Collection:
            name = "test_remove_after"
    

    With these values, like this:

    async def test_date_insert_async():
        ti = TestInsert(
            name="Test",
            recorded_date=datetime.date.today()
        )
        return await ti.insert()
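
    A possible workaround (a sketch, not a Beanie feature): BSON only has a datetime type, not a plain date, so the field can be declared as datetime and widened with a validator:

    import datetime

    import beanie
    import pydantic


    class TestInsert(beanie.Document):
        name: str
        # Stored as a BSON datetime, since BSON has no plain date type
        recorded_date: datetime.datetime

        @pydantic.validator("recorded_date", pre=True)
        def coerce_date(cls, value):
            # Accept a plain date and widen it to midnight of that day
            if isinstance(value, datetime.date) and not isinstance(value, datetime.datetime):
                return datetime.datetime(value.year, value.month, value.day)
            return value

        class Collection:
            name = "test_remove_after"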
    
    opened by mikeckennedy 7
  • [BUG] ElemMatch on Document property of Type List[Link] fails with IndexError in relations.py convert_ids() beanie==1.15.4

    First of all, thank you so much for the great package!!!

    Describe the bug beanie fails to properly convert an ElemMatch Query on a Document property of Type Optional[List[Link]] and raises 'beanie\odm\utils\relations.py", line 65, in convert_ids and k.split(".")[1] == "id" IndexError: list index out of range'

    To Reproduce Python 3.9

    requirements.txt beanie==1.15.4 pydantic==1.9.2 pymongo[srv]==4.1.1

    
    from typing import Optional
    from beanie import Document, Link, init_beanie
    from beanie.operators import ElemMatch
    
    import asyncio
    
    
    class DocToLink(Document):
    
        class Settings:
            name = 'LinkExample'
    
        child_name: str
    
    
    class DocWithLinkAttribute(Document):
    
        class Settings:
            name = 'ParentExample'
    
        parent_name: str
        linked_docs: Optional[list[Link[DocToLink]]]
    
    
    async def add_and_query(child_name: str, parent_name: str) -> Optional[list[DocWithLinkAttribute]]:
        """"""
    
        child: DocToLink = DocToLink(child_name=child_name)
        await child.insert()
    
        parent: DocWithLinkAttribute = DocWithLinkAttribute(parent_name=parent_name, linked_docs=[child])
        await parent.insert()
    
        queried_doc:  list[DocWithLinkAttribute] = await DocWithLinkAttribute.find(
                ElemMatch(DocWithLinkAttribute.linked_docs, DocToLink.child_name == child_name), fetch_links=True
                ).to_list()
        return queried_doc
    
    
    async def init_mongo_client(models: list[Document]):
        """"""
        mongoClient = mdb(secretName=secretMONGO)
        await init_beanie(database=mongoClient.client[mongoClient.databaseInfo["database"]],
                          document_models=models)
        mongoClient.client.get_io_loop = asyncio.get_running_loop
    
    
    models: list = [DocWithLinkAttribute, DocToLink]
    asyncio.run(init_mongo_client(models))
    
    docs = asyncio.run(add_and_query("I'm a child", "I'm the parent"))
    
    

    Traceback (most recent call last):
      File "", line 55, in <module>
      File "Python\Python39\lib\asyncio\runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
        return future.result()
      File "", line 35, in add_and_query
      File "\venv\lib\site-packages\beanie\odm\queries\cursor.py", line 71, in to_list
        cursor = self.motor_cursor
      File "\venv\lib\site-packages\beanie\odm\queries\find.py", line 609, in motor_cursor
        aggregation_pipeline.append({"$match": self.get_filter_query()})
      File "\venv\lib\site-packages\beanie\odm\queries\find.py", line 105, in get_filter_query
        self.prepare_find_expressions()
      File "\venv\lib\site-packages\beanie\odm\queries\find.py", line 93, in prepare_find_expressions
        self.find_expressions[i] = convert_ids(
      File "\venv\lib\site-packages\beanie\odm\utils\relations.py", line 65, in convert_ids
        and k.split(".")[1] == "id"
    IndexError: list index out of range

    Expected behavior The query (debugger shows: {'linked_docs': {'$elemMatch': {'child_name': "I'm a child"}}}) should return the correct (list of) documents. In fact, if 'and k.split(".")[1] == "id"' is commented out and new_k is set to k as in line 72 of relations.py, the output is given correctly. E.g.

      for k, v in query.items():
          if (
              isinstance(k, ExpressionField)
              and doc.get_link_fields() is not None
              and k.split(".")[0] in doc.get_link_fields().keys()  # type: ignore
              # and k.split(".")[1] == "id"
          ):
              if fetch_links:
                  new_k = k
                  # new_k = f"{k.split('.')[0]}._id"
              else:
                  new_k = f"{k.split('.')[0]}.$id"
          else:
              new_k = k
    

    Unfortunately, however, the error prevents this from happening. I am not sure where the "id" should come from or how to fix it properly as of now.

    I have seen the "# TODO add all the cases" in convert_ids(), but I am not sure whether this case has already come to your mind.

    Additional context Thank you very much! I really appreciate the package. Let me know if I can help you on this:) Best, Thilo

    bug 
    opened by TLeitzbach 6
  • [BUG] `save_changes()` doesn't throw an error when document doesn't exist

    Describe the bug In 1.11.9 the state behaviour was changed so that it no longer defaults to None; instead it gets a dict with the default values. This causes check_if_state_saved to never throw an error.

    save_changes on a newly created document that isn't saved in the database silently does nothing.

    To Reproduce

    user_data = UserEntry(...)
    user_data.x = ...
    await user_data.save_changes()
    

    Expected behavior StateNotSaved("No state was saved") Additional context https://github.com/roman-right/beanie/compare/1.11.8...1.11.9 https://canary.discord.com/channels/822196934973456394/822196935435747332/1042243293662158970

    opened by Luc1412 0
  • [BUG] Sort by multiple fields does not work with ExpressionField

    Describe the bug When using multiple ExpressionFields, only the first sorting argument reaches Mongo.

    To Reproduce

    import asyncio
    
    import structlog
    from beanie import Document, init_beanie
    from beanie.odm.enums import SortDirection
    from motor.motor_asyncio import AsyncIOMotorClient
    from pymongo import monitoring
    from pymongo.monitoring import CommandStartedEvent, CommandSucceededEvent, CommandFailedEvent
    
    logger = structlog.get_logger("beanie test")
    
    
    class CommandLogger(monitoring.CommandListener):
    
        def started(self, event: CommandStartedEvent):
            if event.command_name == "find":
                logger.debug(
                    f"mongo {event.command_name} started",
                    command=event.command,
                )
    
        def succeeded(self, event: CommandSucceededEvent):
            pass
    
        def failed(self, event: CommandFailedEvent):
            pass
    
    
    class MyModel(Document):
        int_number: int
        is_boolean: bool
    
        class Settings:
            name = "my_model"
    
    
    async def main():
        await init_beanie(AsyncIOMotorClient(event_listeners=[CommandLogger()])["test_hs_das"], document_models=[MyModel])
        sort1 = [+MyModel.int_number, +MyModel.is_boolean]
        sort2 = [("int_number", SortDirection.ASCENDING), "-is_boolean"]
        logger.info(sort1)
        logger.info(sort2)
        await MyModel.find().sort(sort1).to_list()
        await MyModel.find().sort(sort2).to_list()
    
    
    if __name__ == '__main__':
        asyncio.run(main())
    
    

    Expected behavior I expect that when using

    await MyModel.find().sort([+MyModel.int_number, +MyModel.is_boolean]).to_list()
    

    the logger output will be

    2023-01-02 16:19:22 [debug ] mongo find started command=SON([('find', 'my_model'), ('filter', {}), ('sort', SON([('int_number', <SortDirection.ASCENDING: 1>), ('is_boolean', <SortDirection.DESCENDING: -1>)])), ...

    but is

    2023-01-02 16:19:22 [debug ] mongo find started command=SON([('find', 'my_model'), ('filter', {}), ('sort', SON([('int_number', <SortDirection.ASCENDING: 1>)]))

    It does work when NOT using ExpressionField:

    await MyModel.find().sort([("int_number", SortDirection.ASCENDING), "-is_boolean"]).to_list()
    
    bug 
    opened by nadir-albajari-hs 1
  • Allow change class_id and use name settings in UnionDoc

    Beanie is all sound and good until there is a need to connect to an existing DB where Beanie is/was not used. For instance, one might have SNS -> SQS -> lambda (that uses any client other than Beanie) -> DocDB / Mongo. In a case like this, the existing data would not have Beanie's internal data field such as '_class_id'.

    This makes switching to Beanie difficult, as it requires a full DB migration, especially if the DB is used for event sourcing.

    This PR allows (1) a custom internal data field name to be set and (2) a custom name to be set in UnionDoc children classes.

    Usage:

    class Parent(UnionDoc):
        class Settings:
            class_id = "event_type"  # <- custom internal field name
            name = "eventsource"

    class ChildCreated(Document):
        event_type: Literal["created"] = "created"  # <- event_type is used instead of _class_id
        sent_at: datetime

        class Settings:
            name = "created"  # <- this option was ignored before
            union_doc = Parent

    class ChildDeleted(Document):
        event_type: Literal["deleted"] = "deleted"  # <- event_type is used instead of _class_id
        sent_at: datetime

        class Settings:
            name = "deleted"  # <- this option was ignored before
            union_doc = Parent
    

    Example:

    Schema

    class Parent(UnionDoc):
        class Settings:
            class_id = "event_type"
            name = "collection"
    
    
    class One(Document):
        test: int
    
        class Settings:
            name = "two"
            class_id = "event_type"
            union_doc = Parent
            
    class Two(Document):  # <- without union_doc
        test: int
    
        class Settings:
            name = "three"
            class_id = "event_type"
    
    

    Test

    In [1]: await db.One(test=1).insert()
    Out[1]: One(id=ObjectId('63ae1890edab780d45df0b07'), revision_id=None, test=1)
    
    pymongo
    In [17]: list(c["db"]["collection"].find({'test': 1}))
    Out[17]: [{'_id': ObjectId('63ae1890edab780d45df0b07'), 'event_type': 'two', 'test': 1}]
    
    In [1]: await db.Two(test=3).insert()
    Out[1]: Two(id=ObjectId('63ae19cd8e684a4cd769fb1f'), revision_id=None, test=3)
    
    pymongo
    In [23]: list(c["db"]["three"].find({'test': 3}))
    Out[23]: [{'_id': ObjectId('63ae19cd8e684a4cd769fb1f'), 'test': 3}]
    
    

    FYI, this PR (https://github.com/roman-right/beanie/pull/206) was not reviewed for a year, so I closed it.

    opened by wonjoonSeol-WS 0
  • [BUG] PydanticObjectId Serialization Issue When Beanie is Used With Starlite

    Describe the bug

    Starlite raises an HTTP 500 error when trying to return a Beanie Document. It seems to be due to the PydanticObjectId type not being JSON serializable. The issue was discussed here on the Starlite repo. Is this an issue that can be fixed within Beanie, or should it be addressed within Starlite?
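
    As a stopgap on the application side (a sketch; dump_document is just an illustrative helper, not a Starlite or Beanie API), the document can be serialized with a str fallback for ObjectId values:

    import json


    def dump_document(document) -> str:
        # .dict() keeps ObjectId/PydanticObjectId values as-is;
        # default=str renders them as their hex string form
        return json.dumps(document.dict(), default=str)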

    bug 
    opened by bwhli 5
  • [BUG] get_motor_collection() returning `None`

    Thank you for the great work done in Beanie; it has simplified my life significantly.

    Describe the bug

    This is a DB Client I wrote:

    class MongoClient:
        def __init__(self):
            client_options = {'appname': self.__class__.__name__}
            self.client = AsyncIOMotorClient(MONGO_URL, **client_options)
            self.client.get_io_loop = asyncio.get_running_loop
    
            self.is_testing = TESTING == Environments.Testing.value
    
        @property
        def db_name(self) -> str:
            db_postfix = ENV
    
            return f'{svc}-{db_postfix}'
    
        @abstractmethod
        async def initialize(self):
            raise NotImplementedError('MongoClient.initialize() must be implemented')
    
        @asynccontextmanager
        async def transaction(self) -> AsyncIOMotorClientSession:
            async with await self.client.start_session() as session:
                async with session.start_transaction():
                    yield session
    

    The initialize() method is implemented like below:

    class MongoClientNumberOne(MongoClient):
        async def initialize(self):
            collections = [
                Service,
                ...
            ]
            await init_beanie(database=self.client[self.db_name], document_models=collections)
    

    The way I use the Mongo client is to declare it as a dependency in FastAPI routes like below:

    class MongoClientDependency:
        def __init__(self, db_type: Type[T]):
            if db_type == MongoClientNumberOne:
                self.client = MongoClientNumberOne()
            elif db_type == MongoClientNumberTwo:
                self.client = MongoClientNumberTwo()
            else:
                raise ValueError("Invalid DB Type given")
    
        async def __call__(self, request: Request) -> MongoClientNumberOne | MongoClientNumberTwo:
            await self.client.initialize()
            return self.client
    
    class Context:
        def __init__(
            self,
            access_token: str | None = None,
            permissions: List[Permissions] | None = None,
            mongo_client: MongoClientNumberOne | None = None,
            mongo_session: AsyncIOMotorClientSession | None = None,
        ):
            self.access_token: str = access_token
            self.permissions: List[Permissions] = permissions
    
            self.mongo_client: AsyncIOMotorClient = mongo_client
            self.mongo_session: AsyncIOMotorClientSession = mongo_session
            if self.mongo_client:
                self.db_name: str = self.mongo_client.db_name
    
            self.current_user: User | None = None
            self.current_service: Service | None = None
    
        @classmethod
        @asynccontextmanager
        async def plain(
            cls, mongo_session: AsyncIOMotorClientSession | None = None, mongo_client: MongoClientNumberOne | None = None
        ) -> Context:
            yield cls(mongo_session=mongo_session, mongo_client=mongo_client)
    
    @router.get(
        '/blog/posts',
        summary='Get latest blog posts',
        tags=['Blog'],
        response_model=List[BlogPost],
        responses={503: {'model': ErrorResponse}},
    )
    async def get_latest_blog_posts(mongo_client: MongoClientNumberOne = Depends(MongoClientDependency(MongoClientNumberOne))):
        async with mongo_client.transaction():
            async with Context.plain():
                return await Blog.get_latest_posts()
    

    The problem that I see in Sentry is that at times, read/write queries find that the motor_collection in ItemSettings is None. A screenshot of this is below:

    (screenshot attached)

    Unfortunately this happens randomly and intermittently, usually when there has been no traffic and new traffic comes in; that's when these errors pop up.

    To Reproduce

    At the start, we almost succeeded in replicating this by creating this test: not so many requests, but very high concurrency. Before the code I wrote above, we had an async context manager that was buggy; it is now fixed and the test below passes.

    @pytest.mark.asyncio
    async def test_concurrency(client: TestClient, individual_user: (RegisterRequest, str, str)):
        token = individual_user[2]
    
        async def get_profile():
            headers = {'Authorization': f'Bearer {token}'}
            response = await asyncify(client.get)(headers=headers, url='/me')
            assert response.status_code == 200
    
        async def get_wishlist():
            headers = {'Authorization': f'Bearer {token}'}
            response = await asyncify(client.get)(headers=headers, url='/wishlist')
            assert response.status_code == 200
    
        async def get_txn():
            headers = {'Authorization': f'Bearer {token}'}
            url = '/transactions?limit=10&skip=0&statuses=submitted,accepted,waitlisted,pending-signature,pending-funding,funds-received,in-execution,completed,unsuccessful'
            response = await asyncify(client.get)(headers=headers, url=url)
            assert response.status_code == 200
    
        for i in range(10):
            print(f'Iteration {i} - Enter')
            await asyncio.gather(get_profile(), get_wishlist(), get_txn(), get_profile(), get_wishlist(), get_txn())
            print(f'Iteration {i} - Exit')
    

    The bug is still happening, though; motor_collection sometimes still returns None.

    Expected behavior

    self.document_model.get_motor_collection() should return the collection.

    Additional context

    I'm happy to elaborate more with code examples.
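
    One thing worth ruling out (a sketch of the usual pattern, not a confirmed fix): running init_beanie once at application startup instead of inside the per-request dependency, so no query can execute before initialization has completed:

    from fastapi import FastAPI

    app = FastAPI()
    mongo_client = MongoClientNumberOne()


    @app.on_event("startup")
    async def init_db() -> None:
        # init_beanie runs exactly once, before the app starts serving requests
        await mongo_client.initialize()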

    opened by tistaharahap 10
  • [BUG] insert/replace_many does not trigger actions

    Describe the bug Inserting/replacing documents using insert_many or replace_many does not trigger any of the event-based actions.

    To Reproduce

    import asyncio
    from typing import cast
    
    from beanie import Document, Insert, Replace, before_event, init_beanie
    from motor.motor_asyncio import AsyncIOMotorClient
    
    
    class Test(Document):
        name: str
    
        @before_event([Insert, Replace])
        def name_force_upper(self) -> None:
            self.name = self.name.upper()
    
    
    async def test():
        client = AsyncIOMotorClient("mongodb://root:[email protected]:27017")
    
        # drop db...
        await client.drop_database("test_db")
    
        await init_beanie(database=client.test_db, document_models=[Test])
    
        test_doc = Test(name="lowcase_string")
        await test_doc.insert_many([test_doc])
    
        test_from_db = cast(Test, await Test.find_one())
        assert test_doc.name.upper() == test_from_db.name, f"{test_doc.name = }, {test_from_db.name = }"
    
    
    if __name__ == "__main__":
        asyncio.run(test())
    

    Expected behavior test_from_db.name is in uppercase, which means that name_force_upper was called.

    Additional context This behaviour is not described in the docs, and there are no issues about it, so I think this is a bug.
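
    Until that is addressed, a possible workaround (a sketch; insert_with_actions is just an illustrative helper) is to insert documents one at a time, since single-document insert() does run the registered actions:

    async def insert_with_actions(docs: list[Test]) -> None:
        # insert() does fire @before_event(Insert) for each document,
        # at the cost of one round trip per document
        for doc in docs:
            await doc.insert()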

    opened by Rubikoid 1
Latest release: 1.16.8