Micro ODM for MongoDB

Overview

Beanie is an asynchronous ODM for MongoDB, based on Motor and Pydantic.

It works with the database through an abstraction over Pydantic models and Motor collections. The Document class lets you create, replace, update, get, find, and aggregate documents.

The simple examples below show how to use Beanie:

Installation

PIP

pip install beanie

Poetry

poetry add beanie

Usage

Init

from typing import List

import motor
from beanie import Document, init_beanie
from pydantic import BaseModel


# CREATE BEANIE DOCUMENT STRUCTURE

class SubDocument(BaseModel):
    test_str: str


class DocumentTestModel(Document):
    test_int: int
    test_list: List[SubDocument]
    test_str: str


# CREATE MOTOR CLIENT AND DB

client = motor.motor_asyncio.AsyncIOMotorClient(
    "mongodb://user:[email protected]:27017/db",
    serverSelectionTimeoutMS=100
)
db = client.beanie_db

# INIT BEANIE

init_beanie(database=db, document_models=[DocumentTestModel])

Create

Create a document (insert it)

document = DocumentTestModel(
    test_int=42,
    test_list=[SubDocument(test_str="foo"), SubDocument(test_str="bar")],
    test_str="kipasa",
)

await document.create()

Insert one document

document = DocumentTestModel(
    test_int=42,
    test_list=[SubDocument(test_str="foo"), SubDocument(test_str="bar")],
    test_str="kipasa",
)

await DocumentTestModel.insert_one(document)

Insert many documents

document_1 = DocumentTestModel(
    test_int=42,
    test_list=[SubDocument(test_str="foo"), SubDocument(test_str="bar")],
    test_str="kipasa",
)
document_2 = DocumentTestModel(
    test_int=42,
    test_list=[SubDocument(test_str="foo"), SubDocument(test_str="bar")],
    test_str="kipasa",
)

await DocumentTestModel.insert_many([document_1, document_2])

Find

Get the document

document = await DocumentTestModel.get(DOCUMENT_ID)

Find one document

document = await DocumentTestModel.find_one({"test_str": "kipasa"})

Find many documents

async for document in DocumentTestModel.find_many({"test_str": "uno"}):
    print(document)

OR

documents = await DocumentTestModel.find_many({"test_str": "uno"}).to_list()

Find all the documents

async for document in DocumentTestModel.find_all():
    print(document)

OR

documents = await DocumentTestModel.find_all().to_list()
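The filters passed to find_one and find_many are standard PyMongo query dictionaries, so Mongo query operators can be used as well. A small illustration (the field names mirror DocumentTestModel; no database is needed to build the dicts):

```python
# Plain equality filter, as used in the examples above
by_value = {"test_str": "uno"}

# Mongo query operators work too, e.g. a range condition
by_range = {"test_int": {"$gte": 10}}

# Conditions can be combined before passing them to find_many
combined = {"$and": [by_value, by_range]}
print(combined)  # {'$and': [{'test_str': 'uno'}, {'test_int': {'$gte': 10}}]}
```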

Update

Replace the document (full update)

document.test_str = "REPLACED_VALUE"
await document.replace()

Replace one document

Replace one document's data with another

new_doc = DocumentTestModel(
    test_int=0,
    test_str='REPLACED_VALUE',
    test_list=[]
)
await DocumentTestModel.replace_one({"_id": document.id}, new_doc)

Update the document (partial update)

In this example, I'll add an item to the document's "test_list" field:

to_insert = SubDocument(test_str="test")
await document.update(update_query={"$push": {"test_list": to_insert.dict()}})
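The $push operator receives the plain-dict form of the sub-document, which is why the example calls .dict() first. As a quick check of what actually lands in the update query (pure Pydantic, no database needed; the model mirrors SubDocument from the Init section):

```python
from pydantic import BaseModel


class SubDocument(BaseModel):
    test_str: str


to_insert = SubDocument(test_str="test")

# .dict() converts the Pydantic model into the plain dict MongoDB expects
update_query = {"$push": {"test_list": to_insert.dict()}}
print(update_query)  # {'$push': {'test_list': {'test_str': 'test'}}}
```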

Update one document

await DocumentTestModel.update_one(
    update_query={"$set": {"test_list.$.test_str": "foo_foo"}},
    filter_query={"_id": document.id, "test_list.test_str": "foo"},
)

Update many documents

await DocumentTestModel.update_many(
    update_query={"$set": {"test_str": "bar"}},
    filter_query={"test_str": "foo"},
)

Update all the documents

await DocumentTestModel.update_all(
    update_query={"$set": {"test_str": "bar"}}
)

Delete

Delete the document

await document.delete()

Delete one document

await DocumentTestModel.delete_one({"test_str": "uno"})

Delete many documents

await DocumentTestModel.delete_many({"test_str": "dos"})

Delete all the documents

await DocumentTestModel.delete_all()

Aggregate

async for item in DocumentTestModel.aggregate(
    [{"$group": {"_id": "$test_str", "total": {"$sum": "$test_int"}}}]
):
    print(item)

OR

from pydantic import Field

class OutputItem(BaseModel):
    id: str = Field(None, alias="_id")
    total: int

async for item in DocumentTestModel.aggregate(
    [{"$group": {"_id": "$test_str", "total": {"$sum": "$test_int"}}}],
    item_model=OutputItem
):
    print(item)

OR

results = await DocumentTestModel.aggregate(
    [{"$group": {"_id": "$test_str", "total": {"$sum": "$test_int"}}}],
    item_model=OutputItem
).to_list()
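The $group stage above sums test_int over documents sharing the same test_str. As a way to read the pipeline, here is the same reduction in plain Python (the sample documents are made up for illustration):

```python
from collections import defaultdict

# Hypothetical documents mirroring DocumentTestModel's fields
docs = [
    {"test_str": "foo", "test_int": 1},
    {"test_str": "foo", "test_int": 2},
    {"test_str": "bar", "test_int": 3},
]

# Equivalent of {"$group": {"_id": "$test_str", "total": {"$sum": "$test_int"}}}
totals = defaultdict(int)
for doc in docs:
    totals[doc["test_str"]] += doc["test_int"]

results = [{"_id": key, "total": total} for key, total in totals.items()]
print(results)  # [{'_id': 'foo', 'total': 3}, {'_id': 'bar', 'total': 3}]
```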
Comments
  • Error with type checking on sort in PyCharm

    Hi. Using the documented mode of sorting, sort(Class.field), results in a warning in PyCharm. Is something off in the type definition? It seems to work fine.

    opened by mikeckennedy 18
  • Motor 3.0.0 support

    Motor 3.0.0 was released a few days ago: https://www.mongodb.com/community/forums/t/mongodb-motor-3-0-0-released/160708. Do you plan to support the new version in the near future?

    opened by fs86 10
  • [feature] Relations

    If the field type is a Document subclass, then only the id should be stored in the field, and the whole subdocument should be stored in a separate collection.

    Example:

    class Window(Document):
      width: int
      height: int
    
    class House(Document):
      address : str
      windows: List[Window]
      favorite_window: Window
    

    Problems:

    • fetching/ lazy fetching
    • find by subfield. Example: House.find(House.favorite_window.width == 1)
    • updates of the subdocument. Example: house.set({House.favorite_window.width: 1})
    opened by roman-right 10
  • [BUG] All documents get dumped into "Documents" collection since 1.14.0

    Describe the bug Since 1.14.0 all my documents get inserted into one collection named "Documents". All documents also got a _class_id value with .UserEntry

    To Reproduce

    class UserEntry(Document, ABC):
        id: int
        value: Optional[str] = None
        ...
    
        class Settings:
            name = 'user_data'
            use_state_management = True
    
    user_data = await UserEntry(...)
    user_data.value = 'Test'
    await user_data.insert()
    

    Expected behavior The document gets inserted into a "user_data" collection.

    Additional context https://github.com/roman-right/beanie/compare/1.13.1...1.14.0

    opened by Luc1412 9
  • [BUG] Problem in save method

    Bug in the action of the save method: I want to save my document to the database and use it after insert, but I have a problem with this situation:

    This is My code

    class Child(BaseModel):
        child_field: str
    
    
    class Sample(Document):
        field: Dict[str, Child]
    
      instance1 = Sample(field={"Bar": Child(child_field="Foo")})
      print(instance1)
      await instance1.save()
      print(instance1)
    
    

    Expected behavior

    # first print :
    id=None revision_id=None field={'Bar': Child(child_field='Foo')}
    # second print:
    id=ObjectId('636b9d2997bb72433b944ef4') revision_id=None field={'Bar': Child(child_field='Foo')}
    
    

    But I got this:

    # first print :
    id=None revision_id=None field={'Bar': Child(child_field='Foo')}
    # second print:
    id=ObjectId('636b9d2997bb72433b944ef4') revision_id=None field={'Bar': {'child_field': 'Foo'}}
    
    field={'Bar': {'child_field': 'Foo'}} != field={'Bar': Child(child_field='Foo')}
    

    It's okay when I fetch from the db and my field is filled with the Child model, but when I save to the db, this happens.

    opened by miladvayani 9
  • Creating Document with Relation Requires All Parent Fields Instead of Just ID

    When used with FastAPI, all parent required fields become child required fields when creating a new document. For example:

    Models

    from beanie import Document, Indexed, Link
    
    
    class Organization(Document):
        slug: Indexed(str, unique=True)
    
        class Settings:
            name = "github-organization"
            use_revision = True
    
    
    class Repository(Document):
        organization: Link[Organization]
        name: Indexed(str, unique=True)
    
        class Settings:
            name = "github-repository"
            use_revision = True
    

    Routers

    from fastapi import APIRouter
    
    from .models import Organization, Repository
    
    router = APIRouter()
    
    
    @router.post(
        "/organization",
        response_description="Add a new GitHub Organization",
        response_model=Organization,
    )
    async def create_organization(organization: Organization):
        await organization.create()
        return organization
    
    
    @router.post(
        "/repository",
        response_description="Add a new GitHub Repository",
        response_model=Repository,
    )
    async def create_repository(repository: Repository):
        await repository.create()
        return repository
    

    I can create an Organization by POSTing the following:

    {
      "slug": "roman-right"
    }
    

    Let's say that generated a Mongo object ID of 62de9ecf2fa3d30007e3b5ce.

    However, I cannot create a Repository with just the following:

    {
      "organization": {
        "id": "62de9ecf2fa3d30007e3b5ce"
      },
      "name": "beanie"
    }
    

    That results in 422, unprocessable entity, with details that the Organization SLUG is required. I have tried a few variations - id, _id, and also specifying organization as a string (the ID) vs an object (dictionary) - all with the same 422 error and similar messages.

    Specifying any string for slug seems to work. For example:

    {
      "organization": {
        "id": "62de9ecf2fa3d30007e3b5ce",
        "slug": ""
      },
      "name": "beanie"
    }
    

    (Repository document created as expected with link to correct Organization document)

    It would seem to be just a parameter validation issue (required parent fields are required parameters even though only the id is used).

    opened by rgajason 8
  • [Query] Common ORM like functionalities

    There is some ORM functionality that I don't see in Beanie, listed below:

    1. How to add default fields to the model that are set whenever records are added, e.g. created and updated fields.
    2. I want certain fields to be set only at insertion and certain fields whenever a record is updated, e.g. created must be set only when the record is inserted, while updated must be refreshed every time the record is updated.

    Is this feature already present? If yes, can you point me to the documentation?

    opened by sushilkjaiswar 8
  • Support for fetching deep-nested Links

    Hi @roman-right! Our team is trying to get Beanie to fetch deep-nested Links when fetch_links=True. To accomplish this, I added additional queries to the MongoDB aggregation. However, I saw your comments in the open issues related to nested Links, and I'm not sure this solution gets around the challenges you had mentioned. Would super appreciate your feedback - thank you! :-)

    opened by csanders-rga 7
  • [BUG] AWS DocumentDB does not work with 1.14.0 - Not found for _id: ...

    Describe the bug I noticed that since I updated to beanie 1.14.0, my program does not work with AWS DocumentDB anymore. This was not a problem before, and the same code works perfectly with 1.13.1.

    Additionally, the code works perfectly fine with 1.14.0 against the local MongoDB test database in version 5.0.10.

    The error message is not very helpful; the requested resources simply cannot be found (although they are there):

    NotFound '<some_OID>' for '<class 'mongodb.model.user.odm.User'>' 
    not found in database 'User' with id '<some_OID>' not found in database
    

    To verify that the resource is there, I use a tool like NoSQLBooster or Robo3T:

    db.user.find( {"_id" : ObjectId("<some_OID>")}  )
       .projection({})
       .sort({_id:-1})
       .limit(100)
    

    To Reproduce

    # Nothing special, just a simple find command
    result = await model.find_one(model.id == oid)
    

    Expected behavior I expected beanie 1.14.0 to work with AWS DocumentDB the same way as 1.13.1

    Additional context I am glad to provide further information, or I can make some tests against DocumentDB if someone can give me hints what to do.

    opened by micktg 7
  • Error: Cannot insert datetime.date - 'datetime.date' object is not iterable

    So I'm trying to save an entry that has a datetime.date field. I keep getting two errors:

    ValueError: [
      TypeError("'datetime.date' object is not iterable"), 
      TypeError('vars() argument must have __dict__ attribute')
    ]
    

    This is very easy to recreate. Just try to insert this model:

    class TestInsert(beanie.Document):
        name: str
        recorded_date: datetime.date
        items: List[object] = pydantic.Field(default_factory=list)
    
        class Collection:
            name = "test_remove_after"
    

    With these values, like this:

    async def test_date_insert_async():
        ti = TestInsert(
            name="Test",
            recorded_date=datetime.date.today()
        )
        return await ti.insert()
    
    opened by mikeckennedy 7
  • [BUG] ElemMatch on Document property of Type List[Link] fails with IndexError in relations.py convert_ids() beanie==1.15.4

    First of all, thank you so much for the great package!!!

    Describe the bug beanie fails to properly convert an ElemMatch Query on a Document property of Type Optional[List[Link]] and raises 'beanie\odm\utils\relations.py", line 65, in convert_ids and k.split(".")[1] == "id" IndexError: list index out of range'

    To Reproduce Python 3.9

    requirements.txt beanie==1.15.4 pydantic==1.9.2 pymongo[srv]==4.1.1

    
    from typing import Optional
    from beanie import Document, Link
    from beanie.operators import ElemMatch
    
    import asyncio
    
    
    class DocToLink(Document):
    
        class Settings:
            name = 'LinkExample'
    
        child_name: str
    
    
    class DocWithLinkAttribute(Document):
    
        class Settings:
            name = 'ParentExample'
    
        parent_name: str
        linked_docs: Optional[list[Link[DocToLink]]]
    
    
    async def add_and_query(child_name: str, parent_name: str) -> Optional[list[DocWithLinkAttribute]]:
        """"""
    
        child: DocToLink = DocToLink(child_name=child_name)
        await child.insert()
    
        parent: DocWithLinkAttribute = DocWithLinkAttribute(parent_name=parent_name, linked_docs=[child])
        await parent.insert()
    
        queried_doc:  list[DocWithLinkAttribute] = await DocWithLinkAttribute.find(
                ElemMatch(DocWithLinkAttribute.linked_docs, DocToLink.child_name == child_name), fetch_links=True
                ).to_list()
        return queried_doc
    
    
    async def init_mongo_client(models: list[Document]):
        """"""
        mongoClient = mdb(secretName=secretMONGO)
        await init_beanie(database=mongoClient.client[mongoClient.databaseInfo["database"]],
                          document_models=models)
        mongoClient.client.get_io_loop = asyncio.get_running_loop
    
    
    models: list = [DocWithLinkAttribute, DocToLink]
    asyncio.run(init_mongo_client(models))
    
    docs = asyncio.run(add_and_query("I'm a child", "I'm the parent"))
    
    

    Traceback (most recent call last):
      File "", line 55, in
      File "Python\Python39\lib\asyncio\runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
        return future.result()
      File "", line 35, in add_and_query
      File "\venv\lib\site-packages\beanie\odm\queries\cursor.py", line 71, in to_list
        cursor = self.motor_cursor
      File "\venv\lib\site-packages\beanie\odm\queries\find.py", line 609, in motor_cursor
        aggregation_pipeline.append({"$match": self.get_filter_query()})
      File "\venv\lib\site-packages\beanie\odm\queries\find.py", line 105, in get_filter_query
        self.prepare_find_expressions()
      File "\venv\lib\site-packages\beanie\odm\queries\find.py", line 93, in prepare_find_expressions
        self.find_expressions[i] = convert_ids(
      File "\venv\lib\site-packages\beanie\odm\utils\relations.py", line 65, in convert_ids
        and k.split(".")[1] == "id"
    IndexError: list index out of range

    Expected behavior The query (debugger shows: {'linked_docs': {'$elemMatch': {'child_name': "I'm a child"}}}) should return the correct (list of) documents. In fact, if 'and k.split(".")[1] == "id"' is commented out and new_k is set to k as in line 72 of relations.py, the output is given correctly. E.g.

      for k, v in query.items():
          if (
              isinstance(k, ExpressionField)
              and doc.get_link_fields() is not None
              and k.split(".")[0] in doc.get_link_fields().keys()  # type: ignore
              # and k.split(".")[1] == "id"
          ):
              if fetch_links:
                  new_k = k
                  # new_k = f"{k.split('.')[0]}._id"
              else:
                  new_k = f"{k.split('.')[0]}.$id"
          else:
              new_k = k
    

    Unfortunately, however, the error prevents this from happening. I am not sure where the "id" should come from or how to fix it properly as of now.

    I have seen the "# TODO add all the cases" in "convert_ids()" but I was not aware whether this came to your mind already.

    Additional context Thank you very much! I really appreciate the package. Let me know if I can help you on this:) Best, Thilo

    bug 
    opened by TLeitzbach 6
  • [BUG] `save_changes()` doesn't throw an error when document doesn't exists

    Describe the bug In 1.11.9 the saved state was changed: it now always defaults to a dict with the default values instead of None. This causes check_if_state_saved to never throw an error.

    Calling save_changes on a newly created document that isn't saved in the database silently does nothing.

    To Reproduce

    user_data = UserEntry(...)
    user_data.x = ...
    await user_data.save_changes()
    

    Expected behavior StateNotSaved("No state was saved") Additional context https://github.com/roman-right/beanie/compare/1.11.8...1.11.9 https://canary.discord.com/channels/822196934973456394/822196935435747332/1042243293662158970

    opened by Luc1412 0
  • [BUG] Sort by multiple fields does not work with ExpressionField

    Describe the bug When using multiple ExpressionFields, only the first sorting argument reaches Mongo.

    To Reproduce

    import asyncio
    
    import structlog
    from beanie import Document, init_beanie
    from beanie.odm.enums import SortDirection
    from motor.motor_asyncio import AsyncIOMotorClient
    from pymongo import monitoring
    from pymongo.monitoring import CommandStartedEvent, CommandSucceededEvent, CommandFailedEvent
    
    logger = structlog.get_logger("beanie test")
    
    
    class CommandLogger(monitoring.CommandListener):
    
        def started(self, event: CommandStartedEvent):
            if event.command_name == "find":
                logger.debug(
                    f"mongo {event.command_name} started",
                    command=event.command,
                )
    
        def succeeded(self, event: CommandSucceededEvent):
            pass
    
        def failed(self, event: CommandFailedEvent):
            pass
    
    
    class MyModel(Document):
        int_number: int
        is_boolean: bool
    
        class Settings:
            name = "my_model"
    
    
    async def main():
        await init_beanie(AsyncIOMotorClient(event_listeners=[CommandLogger()])["test_hs_das"], document_models=[MyModel])
        sort1 = [+MyModel.int_number, +MyModel.is_boolean]
        sort2 = [("int_number", SortDirection.ASCENDING), "-is_boolean"]
        logger.info(sort1)
        logger.info(sort2)
        await MyModel.find().sort(sort1).to_list()
        await MyModel.find().sort(sort2).to_list()
    
    
    if __name__ == '__main__':
        asyncio.run(main())
    
    

    Expected behavior I expect that when using

    await MyModel.find().sort( [+MyModel.int_number,+MyModel.is_boolean]).to_list()
    

    the logger output will be

    2023-01-02 16:19:22 [debug ] mongo find started command=SON([('find', 'my_model'), ('filter', {}), ('sort', SON([('int_number', <SortDirection.ASCENDING: 1>), ('is_boolean', <SortDirection.DESCENDING: -1>)])), ...

    but is

    2023-01-02 16:19:22 [debug ] mongo find started command=SON([('find', 'my_model'), ('filter', {}), ('sort', SON([('int_number', <SortDirection.ASCENDING: 1>)]))

    it does work when NOT using ExpressionField

    await MyModel.find().sort([("int_number", SortDirection.ASCENDING), "-is_boolean"]).to_list()
    
    bug 
    opened by nadir-albajari-hs 1
  • Allow change class_id and use name settings in UnionDoc

    Beanie is all sound and good until there is a need to connect to an existing DB where Beanie is/was not used. For instance, one might have SNS -> SQS -> lambda (that uses any client other than Beanie) -> DocDB / Mongo. In a case like this, the existing data would not have Beanie's internal data field such as '_class_id'.

    This makes switching to Beanie difficult as it requires full DB migration, especially if DB is used for event-sourcing.

    This PR allows (1) custom internal data field name to be set and (2) custom name to be set in Union doc children classes.

    Usage:

    class Parent(UnionDoc):
        class Settings:
            class_id = "event_type" <-
            name = "eventsource"
    
    class ChildCreated(Document):
        event_type: Literal["created"] = "created" <- event_type is used instead of _class_id 
        sent_at: datetime
    
        class Settings:
        name = "created" <- This option was ignored before
            union_doc = Parent
    
    class ChildDeleted(Document):
    event_type: Literal["deleted"] = "deleted" <- event_type is used instead of _class_id 
        sent_at: datetime
    
        class Settings:
        name = "deleted" <- This option was ignored before
            union_doc = Parent
    

    Example:

    Schema

    class Parent(UnionDoc):
        class Settings:
            class_id = "event_type"
            name = "collection"
    
    
    class One(Document):
        test: int
    
        class Settings:
            name = "two"
            class_id = "event_type"
            union_doc = Parent
            
     class Two(Document): <- without union_doc
        test: int
    
        class Settings:
            name = "three"
            class_id = "event_type"
    
    

    Test

    In [1]: await db.One(test=1).insert()
    Out[1]: One(id=ObjectId('63ae1890edab780d45df0b07'), revision_id=None, test=1)
    
    pymongo
    In [17]: list(c["db"]["collection"].find({'test': 1}))
    Out[17]: [{'_id': ObjectId('63ae1890edab780d45df0b07'), 'event_type': 'two', 'test': 1}]
    
    In [1]: await db.Two(test=3).insert()
    Out[1]: Two(id=ObjectId('63ae19cd8e684a4cd769fb1f'), revision_id=None, test=3)
    
    pymongo
    In [23]: list(c["db"]["three"].find({'test': 3}))
    Out[23]: [{'_id': ObjectId('63ae19cd8e684a4cd769fb1f'), 'test': 3}]
    
    

    FYI, https://github.com/roman-right/beanie/pull/206 - this PR was not reviewed for a year, so I closed it.

    opened by wonjoonSeol-WS 0
  • [BUG] PydanticObjectId Serialization Issue When Beanie is Used With Starlite

    Describe the bug

    Starlite raises an HTTP 500 error when trying to return a Beanie Document. It seems to be due to the PydanticObjectId type not being JSON serializable. The issue was discussed here on the Starlite repo. Is this an issue that can be fixed within Beanie, or should it be addressed within Starlite?

    bug 
    opened by bwhli 5
  • [BUG] get_motor_collection() returning `None`

    Thank you for the great work done in Beanie, it simplified my life significantly.

    Describe the bug

    This is a DB Client I wrote:

    class MongoClient:
        def __init__(self):
            client_options = {'appname': self.__class__.__name__}
            self.client = AsyncIOMotorClient(MONGO_URL, **client_options)
            self.client.get_io_loop = asyncio.get_running_loop
    
            self.is_testing = TESTING == Environments.Testing.value
    
        @property
        def db_name(self) -> str:
            db_postfix = ENV
    
            return f'{svc}-{db_postfix}'
    
        @abstractmethod
        async def initialize(self):
            raise NotImplementedError('MongoClient.initialize() must be implemented')
    
        @asynccontextmanager
        async def transaction(self) -> AsyncIOMotorClientSession:
            async with await self.client.start_session() as session:
                async with session.start_transaction():
                    yield session
    

    The initialize() method is implemented like below:

    class MongoClientNumberOne(MongoClient):
        async def initialize(self):
            collections = [
                Service,
                ...
            ]
            await init_beanie(database=self.client[self.db_name], document_models=collections)
    

    The way I use the Mongo client is to declare it as a dependency in FastAPI routes like below:

    class MongoClientDependency:
        def __init__(self, db_type: Type[T]):
            if db_type == MongoClientNumberOne:
                self.client = MongoClientNumberOne()
            elif db_type == MongoClientNumberTwo:
                self.client = MongoClientNumberTwo()
            else:
                raise ValueError("Invalid DB Type given")
    
        async def __call__(self, request: Request) -> MongoClientNumberOne | MongoClientNumberTwo:
            await self.client.initialize()
            return self.client
    
    class Context:
        def __init__(
            self,
            access_token: str | None = None,
            permissions: List[Permissions] | None = None,
            mongo_client: MongoClientNumberOne | None = None,
            mongo_session: AsyncIOMotorClientSession | None = None,
        ):
            self.access_token: str = access_token
            self.permissions: List[Permissions] = permissions
    
            self.mongo_client: AsyncIOMotorClient = mongo_client
            self.mongo_session: AsyncIOMotorClientSession = mongo_session
            if self.mongo_client:
                self.db_name: str = self.mongo_client.db_name
    
            self.current_user: User | None = None
            self.current_service: Service | None = None
    
        @classmethod
        @asynccontextmanager
        async def plain(
            cls, mongo_session: AsyncIOMotorClientSession | None = None, mongo_client: MongoClientNumberOne | None = None
        ) -> Context:
            yield cls(mongo_session=mongo_session, mongo_client=mongo_client)
    
    @router.get(
        '/blog/posts',
        summary='Get latest blog posts',
        tags=['Blog'],
        response_model=List[BlogPost],
        responses={503: {'model': ErrorResponse}},
    )
    async def get_latest_blog_posts(mongo_client: MongoClientNumberOne = Depends(MongoClientDependency(MongoClientNumberOne))):
        async with mongo_client.transaction():
            async with Context.plain():
                return await Blog.get_latest_posts()
    

    The problem that I see in Sentry is that at times, read/write queries find that the motor_collection in ItemSettings is None. A screenshot of this is below:

    (screenshot omitted)

    Unfortunately this happens randomly and intermittent. Usually when there's no traffic and new traffic comes, these errors popped up.

    To Reproduce

    At the start, we almost succeeded in replicating this with the test below: not so many requests, but very high concurrency. Before the code I wrote above, we had an async context manager that was buggy; it is now fixed, and the test below passes.

    @pytest.mark.asyncio
    async def test_concurrency(client: TestClient, individual_user: (RegisterRequest, str, str)):
        token = individual_user[2]
    
        async def get_profile():
            headers = {'Authorization': f'Bearer {token}'}
            response = await asyncify(client.get)(headers=headers, url='/me')
            assert response.status_code == 200
    
        async def get_wishlist():
            headers = {'Authorization': f'Bearer {token}'}
            response = await asyncify(client.get)(headers=headers, url='/wishlist')
            assert response.status_code == 200
    
        async def get_txn():
            headers = {'Authorization': f'Bearer {token}'}
            url = '/transactions?limit=10&skip=0&statuses=submitted,accepted,waitlisted,pending-signature,pending-funding,funds-received,in-execution,completed,unsuccessful'
            response = await asyncify(client.get)(headers=headers, url=url)
            assert response.status_code == 200
    
        for i in range(10):
            print(f'Iteration {i} - Enter')
            await asyncio.gather(get_profile(), get_wishlist(), get_txn(), get_profile(), get_wishlist(), get_txn())
            print(f'Iteration {i} - Exit')
    

    The bug is still happening though, motor_collection sometimes still returns None.

    Expected behavior

    self.document_model.get_motor_collection() should return the collection.

    Additional context

    I'm happy to elaborate more with code examples.

    opened by tistaharahap 10
  • [BUG] insert/replace_many not triggers actions

    Describe the bug Inserting/replacing documents using insert_many or replace_many does not trigger any of the event-based actions.

    To Reproduce

    import asyncio
    from typing import cast
    
    from beanie import Document, Insert, Replace, before_event, init_beanie
    from motor.motor_asyncio import AsyncIOMotorClient
    
    
    class Test(Document):
        name: str
    
        @before_event([Insert, Replace])
        def name_force_upper(self) -> None:
            self.name = self.name.upper()
    
    
    async def test():
        client = AsyncIOMotorClient("mongodb://root:[email protected]:27017")
    
        # drop db...
        await client.drop_database("test_db")
    
        await init_beanie(database=client.test_db, document_models=[Test])
    
        test_doc = Test(name="lowcase_string")
        await test_doc.insert_many([test_doc])
    
        test_from_db = cast(Test, await Test.find_one())
        assert test_doc.name.upper() == test_from_db.name, f"{test_doc.name = }, {test_from_db.name = }"
    
    
    if __name__ == "__main__":
        asyncio.run(test())
    

    Expected behavior test_from_db.name is in uppercase, which means that name_force_upper was called.

    Additional context This behaviour is not described in the docs, and there are no issues about it, so I think this is a bug.

    opened by Rubikoid 1
Releases: 1.16.8

Dana Powers 5.1k Jan 08, 2023
Example Python codes that works with MySQL and Excel files (.xlsx)

Python x MySQL x Excel by Zinglecode Example Python codes that do the processes between MySQL database and Excel spreadsheet files. YouTube videos MyS

Potchara Puttawanchai 1 Feb 07, 2022
Estoult - a Python toolkit for data mapping with an integrated query builder for SQL databases

Estoult Estoult is a Python toolkit for data mapping with an integrated query builder for SQL databases. It currently supports MySQL, PostgreSQL, and

halcyon[nouveau] 15 Dec 29, 2022
Implementing basic MySQL CRUD (Create, Read, Update, Delete) queries, using Python.

MySQL with Python Implementing basic MySQL CRUD (Create, Read, Update, Delete) queries, using Python. We can connect to a MySQL database hosted locall

MousamSingh 5 Dec 01, 2021
A selection of SQLite3 databases to practice querying from.

Dummy SQL Databases This is a collection of dummy SQLite3 databases, for learning and practicing SQL querying, generated with the VS Code extension Ge

1 Feb 26, 2022
PostgreSQL database access simplified

Queries: PostgreSQL Simplified Queries is a BSD licensed opinionated wrapper of the psycopg2 library for interacting with PostgreSQL. The popular psyc

Gavin M. Roy 251 Oct 25, 2022
The JavaScript Database, for Node.js, nw.js, electron and the browser

The JavaScript Database Embedded persistent or in memory database for Node.js, nw.js, Electron and browsers, 100% JavaScript, no binary dependency. AP

Louis Chatriot 13.2k Jan 02, 2023
Asynchronous interface for peewee ORM powered by asyncio

peewee-async Asynchronous interface for peewee ORM powered by asyncio. Important notes Since version 0.6.0a only peewee 3.5+ is supported If you still

05Bit 666 Dec 30, 2022
Pandas Google BigQuery

pandas-gbq pandas-gbq is a package providing an interface to the Google BigQuery API from pandas Installation Install latest release version via conda

Python for Data 345 Dec 28, 2022
A simple Python tool to transfer data from MySQL to SQLite 3.

MySQL to SQLite3 A simple Python tool to transfer data from MySQL to SQLite 3. This is the long overdue complimentary tool to my SQLite3 to MySQL. It

Klemen Tusar 126 Jan 03, 2023
Script em python para carregar os arquivos de cnpj dos dados públicos da Receita Federal em MYSQL.

cnpj-mysql Script em python para carregar os arquivos de cnpj dos dados públicos da Receita Federal em MYSQL. Dados públicos de cnpj no site da Receit

17 Dec 25, 2022