The new Python SDK for Sentry.io

Overview

sentry-python - Sentry SDK for Python


This is the next generation of the official Python SDK for Sentry, intended to replace the raven package on PyPI.

from sentry_sdk import init, capture_message

init("https://[email protected]/123")

capture_message("Hello World")  # Will create an event.

raise ValueError()  # Will also create an event.

Contributing to the SDK

Please refer to CONTRIBUTING.md.

License

Licensed under the BSD license, see LICENSE

Comments
  • Azure Function doesn't capture exceptions on Azure


    I have a Queue Trigger Azure Function in Python with the following code:

    import logging
    import os
    
    import azure.functions as func
    import sentry_sdk
    from sentry_sdk.api import capture_exception, flush, push_scope
    from sentry_sdk.integrations.serverless import serverless_function
    
    # Sentry configuration
    sentry_dsn = "my-dsn"
    environment = "DEV"
    logger = logging.getLogger(__name__)
    sentry_sdk.init(
        sentry_dsn,
        environment=environment,
        send_default_pii=True,
        request_bodies="always",
        with_locals=True,
    )
    sentry_sdk.utils.MAX_STRING_LENGTH = 2048
    
    
    @serverless_function
    def main(msg: func.QueueMessage) -> None:
    
        with push_scope() as scope:
            scope.set_tag("function.name", "ProcessHeadersFile")
            scope.set_context(
                "Queue Message",
                {
                    "id": msg.id,
                    "dequeue_count": msg.dequeue_count,
                    "insertion_time": msg.insertion_time,
                    "expiration_time": msg.expiration_time,
                    "pop_receipt": msg.pop_receipt,
                },
            )
    
            try:
                # code that might raise exceptions here
                function_that_raise()
            except Exception as ex:
                print(ex)
                # Rethrow to fail the execution
                raise
    
    def function_that_raise():
        return 5 / 0
    

This works locally, but not on Azure. I run multiple invocations and get all the failures, but nothing shows up in Sentry.

I have also tried capturing and flushing manually, but that doesn't work either:

    try:
        # code that might raise exceptions here
        function_that_raise()
    except Exception as ex:
        capture_exception(ex)
        flush(2)
        raise
    

    Using sentry-sdk==0.19.5.

How can I troubleshoot what's happening? What could cause this to work when running the function locally, but not on Azure?
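For troubleshooting, one sketch using documented init options (the DSN is a placeholder; behavior may vary across SDK versions): enable SDK debug logging and flush explicitly, since the Functions host can freeze the worker as soon as main() returns.

```python
import sentry_sdk

sentry_sdk.init(
    "my-dsn",              # placeholder
    debug=True,            # print transport/event activity to stderr
    shutdown_timeout=5,    # give the background worker longer to drain
)

# ... inside the function, after capturing:
sentry_sdk.flush(timeout=5)  # block until queued events are actually sent
```

With debug=True the SDK logs whether events are queued and sent, which distinguishes "event never captured" from "event captured but never delivered".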

    needs-information Integration: Serverless Status: Stale 
    opened by empz 36
  • SentryAsgiMiddleware not compatible with Uvicorn 0.13.0


On December 8th Uvicorn updated from 0.12.3 to 0.13.0. This results in an error at startup; see the output of the minimal example below. When downgrading Uvicorn to 0.12.3 the example runs fine.

I have no clue why this error is thrown or which change caused it. Could you help me with this?

    app.py

    from sanic import Sanic
    from sentry_sdk.integrations.asgi import SentryAsgiMiddleware
    
    app = SentryAsgiMiddleware(Sanic(__name__))
    

    requirements.txt

    sanic==20.9.1
    sentry-sdk==0.19.4
    uvicorn==0.13.0
    

    Command to run:

    uvicorn app:app --port 5000 --workers=1 --debug --reload
    

    Output:

    INFO:     Uvicorn running on http://127.0.0.1:5000 (Press CTRL+C to quit)
    INFO:     Started reloader process [614068] using statreload
    Process SpawnProcess-1:
    Traceback (most recent call last):
      File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
        self.run()
      File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "/home/lander/.local/share/virtualenvs/san-iJ5wdX60/lib/python3.9/site-packages/uvicorn/subprocess.py", line 61, in subprocess_started
        target(sockets=sockets)
      File "/home/lander/.local/share/virtualenvs/san-iJ5wdX60/lib/python3.9/site-packages/uvicorn/server.py", line 48, in run
        loop.run_until_complete(self.serve(sockets=sockets))
      File "uvloop/loop.pyx", line 1456, in uvloop.loop.Loop.run_until_complete
      File "/home/lander/.local/share/virtualenvs/san-iJ5wdX60/lib/python3.9/site-packages/uvicorn/server.py", line 55, in serve
        config.load()
      File "/home/lander/.local/share/virtualenvs/san-iJ5wdX60/lib/python3.9/site-packages/uvicorn/config.py", line 319, in load
        elif not inspect.signature(self.loaded_app).parameters:
      File "/usr/lib/python3.9/inspect.py", line 3118, in signature
        return Signature.from_callable(obj, follow_wrapped=follow_wrapped)
      File "/usr/lib/python3.9/inspect.py", line 2867, in from_callable
        return _signature_from_callable(obj, sigcls=cls,
      File "/usr/lib/python3.9/inspect.py", line 2409, in _signature_from_callable
        sig = _signature_from_callable(
      File "/usr/lib/python3.9/inspect.py", line 2242, in _signature_from_callable
        raise TypeError('{!r} is not a callable object'.format(obj))
    TypeError: <member '__call__' of 'SentryAsgiMiddleware' objects> is not a callable object
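The failure is reproducible without Uvicorn or Sentry: when a class declares __call__ in __slots__, __call__ on the class is a member descriptor rather than a method, so instances remain callable but inspect.signature (which uvicorn's config.load() uses) cannot follow it. A minimal sketch with a hypothetical class name:

```python
import inspect

class Middleware:
    # __call__ in __slots__ is a member descriptor on the class;
    # each instance assigns a real callable into it in __init__.
    __slots__ = ("app", "__call__")

    def __init__(self, app):
        self.app = app
        self.__call__ = self._run

    def _run(self, scope):
        return ("called", scope)

m = Middleware("app")
print(m("scope"))  # instance calls still work: ('called', 'scope')

try:
    # uvicorn 0.13's config.load() introspects the app like this
    inspect.signature(m)
except TypeError as err:
    print(err)  # ... is not a callable object
```

inspect.signature resolves type(m).__call__, gets the member descriptor (which is itself not callable), and raises the exact TypeError seen in the traceback above.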
    
    bug 
    opened by LanderMoerkerke 33
  • Version 0.17.8 broke Celery tasks with custom Task class


I am using the tenant-schemas-celery package, which generates its own TenantTask class (extending celery.app.task.Task) in its own CeleryApp (extending celery.Celery). After updating sentry-sdk from 0.17.7 to 0.17.8, the TenantTask's tenant context switching stopped working. I suspect this is because of the following change in 0.17.8:

    diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py
    index 1a11d4a..2b51fe1 100644
    --- a/sentry_sdk/integrations/celery.py
    +++ b/sentry_sdk/integrations/celery.py
    @@ -60,9 +60,8 @@ class CeleryIntegration(Integration):
                     # Need to patch both methods because older celery sometimes
                     # short-circuits to task.run if it thinks it's safe.
                     task.__call__ = _wrap_task_call(task, task.__call__)
                     task.run = _wrap_task_call(task, task.run)
    -                task.apply_async = _wrap_apply_async(task, task.apply_async)
    
                     # `build_tracer` is apparently called for every task
                     # invocation. Can't wrap every celery task for every invocation
                     # or we will get infinitely nested wrapper functions.
    @@ -71,8 +70,12 @@ class CeleryIntegration(Integration):
                 return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))
    
             trace.build_tracer = sentry_build_tracer
    
    +        from celery.app.task import Task  # type: ignore
    +
    +        Task.apply_async = _wrap_apply_async(Task.apply_async)
    +
             _patch_worker_exit()
    
             # This logger logs every status of every task that ran on the worker.
             # Meaning that every task's breadcrumbs are full of stuff like "Task
    

I think this problem has to do with the new way of importing celery.app.task.Task. Even though TenantTask extends celery.app.task.Task, this change broke the TenantTask logic for some reason.

I'm not sure which package this should be fixed in, but I'm a bit skeptical about this new import in sentry-sdk, so I'm reporting it here.

    Here is my celery.py:

    from tenant_schemas_celery.app import CeleryApp as TenantAwareCeleryApp
    
    
    app = TenantAwareCeleryApp()
    app.config_from_object('django.conf:settings')
    app.autodiscover_tasks()
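The interaction can be sketched without Celery (all names here are hypothetical stand-ins). Patching apply_async on the base Task class is visible to subclasses through the MRO, but it changes where the wrapper runs relative to a subclass override: the override now executes before the wrapper instead of after it, which can upset context the subclass sets up:

```python
class Task:
    def apply_async(self, headers=None):
        return ("sent", headers)

class TenantTask(Task):
    # subclass injects its own context, then defers to the base class
    def apply_async(self, headers=None):
        headers = dict(headers or {}, tenant="acme")
        return super().apply_async(headers=headers)

def _wrap_apply_async(f):
    # stand-in for the SDK's wrapper
    def inner(self, headers=None):
        headers = dict(headers or {}, sentry_trace="abc")
        return f(self, headers=headers)
    return inner

# 0.17.8-style: patch once on the base class ...
Task.apply_async = _wrap_apply_async(Task.apply_async)

# ... so the subclass override now runs *before* the wrapper, because
# super().apply_async resolves to the patched base method at call time:
print(TenantTask().apply_async())
```

Under the previous per-instance patching (done in build_tracer), the wrapper sat outermost around the fully-overridden method instead.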
    
    bug Integration: Celery Status: Stale needs-repro 
    opened by akifd 29
  • document ignore_errors


    The old raven library had this functionality, implemented for Django by declaring a list of names of errors in the settings file.

    This kind of functionality is necessary to control error reporting spam.
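For reference, the closest current equivalent to raven's ignore_errors is a before_send hook that drops events by exception type; the ignore list below is illustrative:

```python
IGNORED_EXCEPTIONS = (KeyboardInterrupt, ConnectionResetError)

def before_send(event, hint):
    """Return None to drop the event, mirroring raven's ignore_errors."""
    if "exc_info" in hint:
        _exc_type, exc_value, _tb = hint["exc_info"]
        if isinstance(exc_value, IGNORED_EXCEPTIONS):
            return None
    return event

# Wire it up at init time:
# sentry_sdk.init(dsn=..., before_send=before_send)
```

Unlike a settings list of error names, the hook also lets you match on message contents or event metadata.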

    enhancement doc stabilization Status: Backlog 
    opened by biblicabeebli 29
  • Integration for FastAPI


    FastAPI is just Starlette which is just ASGI. We have an ASGI integration that should work fine. However, it would be nice to have deeper integration than we currently provide, particularly around Performance monitoring.

    Please vote with 👍 on this post if you're interested in this. I would also like to hear what currently doesn't work out well with the ASGI middleware applied to your FastAPI app.

    enhancement new-integration Status: Backlog Jira 
    opened by untitaker 25
  • Provide a means for capturing stack trace for messages that are not errors


    ... without relying on the logging integration.

    SDK v0.7.10

    Currently, the only way to do this is via the logging integration and issuing a log at the configured level or higher with exc_info=True. I don't think this is a good substitute because:

    • I don't want to classify these events as errors (the end goal being errors should drive alerts)
    • Sometimes there are interesting or unexpected events that I want to debug more and leveraging Sentry's stack traces with locals helps in this regard
    • I don't want to lower our logging integration level to warning or similar, as that may spam our project with a lot of warnings/events that are not interesting or useful

Ideally, the Sentry SDK could add an additional argument to the capture_message API to make this simple from a user's perspective:

    sentry_sdk.capture_message("Unexpected event", level="warning", stack_trace=True)
    

I'm trying to get this working locally with the SDK, and it's leading to incomplete event data (see picture) and also pretty hairy code.

    sentry_sdk.capture_event({
        "message": "oops 2",
        "level": "warning",
        "threads": [{
            "stacktrace": sentry_sdk.utils.current_stacktrace(with_locals=True),
            "crashed": False,
            "current": True,
        }],
    })
    

    This was more or less copied from the code I see for adding stack traces in Client._prepare_event.
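For later readers: newer SDK versions expose an attach_stacktrace init option that attaches a stack trace to plain messages, which covers this request without hand-building a threads payload (worth verifying against your SDK version; the DSN below is a placeholder):

```python
import sentry_sdk

sentry_sdk.init(
    dsn="my-dsn",            # placeholder
    attach_stacktrace=True,  # attach a stacktrace to capture_message events
)
sentry_sdk.capture_message("Unexpected event", level="warning")
```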

(screenshot attached: Screenshot from 2019-04-13 14-39-24)

    opened by goodspark 25
  • Celery integration not capturing error with max_tasks_per_child = 1


    The celery integration is failing to capture the exception when I use a celery factory pattern which patches the celery task with Flask's context.

    This is web/celery_factory.py

    # Source: https://stackoverflow.com/questions/12044776/how-to-use-flask-sqlalchemy-in-a-celery-task
    
    from celery import Celery
    import flask
    
    
    class FlaskCelery(Celery):
    
        def __init__(self, *args, **kwargs):
            super(FlaskCelery, self).__init__(*args, **kwargs)
            self.patch_task()
    
            if 'app' in kwargs:
                self.init_app(kwargs['app'])
    
        def patch_task(self):
            TaskBase = self.Task
            _celery = self
    
            class ContextTask(TaskBase):
                abstract = True
    
                def __call__(self, *args, **kwargs):
                    if flask.has_app_context():
                        return TaskBase.__call__(self, *args, **kwargs)
                    else:
                        with _celery.app.app_context():
                            return TaskBase.__call__(self, *args, **kwargs)
    
            self.Task = ContextTask
    
        def init_app(self, app):
            self.app = app
            self.config_from_object(app.config)
    
    
    celery_app = FlaskCelery()
    

    I am adding a random raise inside a simple task

    from celery_factory import celery_app

    @celery_app.task
    def simple_task():
        raise Exception("Testing Celery exception")
    

    The error I get printed is:

    [2019-03-08 21:24:21,117: ERROR/ForkPoolWorker-31] Task simple_task[d6e959b1-7253-4e55-861d-c1968ae14e1c] raised unexpected: RuntimeError('No active exception to reraise')
    Traceback (most recent call last):
      File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/celery/app/trace.py", line 382, in trace_task
        R = retval = fun(*args, **kwargs)
      File "/Users/okomarov/Documents/repos/myproject/web/celery_factory.py", line 28, in __call__
        return TaskBase.__call__(self, *args, **kwargs)
      File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/celery/app/trace.py", line 641, in __protected_call__
        return self.run(*args, **kwargs)
      File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py", line 66, in _inner
        reraise(*_capture_exception())
      File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/_compat.py", line 52, in reraise
        raise value
      File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py", line 64, in _inner
        return f(*args, **kwargs)
      File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py", line 66, in _inner
        reraise(*_capture_exception())
      File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/_compat.py", line 52, in reraise
        raise value
      File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py", line 64, in _inner
        return f(*args, **kwargs)
      File "/Users/okomarov/Documents/repos/myproject/web/simple_task.py", line 4, in simple_task
        raise Exception("Testing Celery exception")
    RuntimeError: No active exception to reraise
    

    Relevant pip packages:

    Celery==4.2.1
    Flask==1.0.2
    sentry-sdk==0.7.4
    

    The integration is set up as follows (the Flask integration works as expected):

    from flask import Flask
    from celery_factory import celery_app
    from config import config_to_use
    
    
    def create_app():
        app = Flask(__name__)
        app.config.from_object(config_to_use)
    
        init_logging(app)
    
        register_extensions(app)
        register_blueprints(app)
        register_jinja_extras(app)
    
        return app
    
    
    def init_logging(app):
        import sentry_sdk
        from sentry_sdk.integrations.flask import FlaskIntegration
        from sentry_sdk.integrations.celery import CeleryIntegration
    
        sentry_sdk.init(
            dsn=app.config.get('FLASK_SENTRY_DSN'),
            integrations=[FlaskIntegration(), CeleryIntegration()]
            )
    
    ...
    
    bug 
    opened by okomarov 23
  • No IPv4 fallback and no error reporting on failed connection


    If you self-host a sentry instance and have a broken IPv6 setup on the server (i.e., you have an AAAA record set up for the domain, but it points at the wrong IP), but a working IPv4 setup, sentry-python will attempt to deliver the issue via IPv6 (if the network supports it), fail, and not retry with IPv4.

    If this isn't intended behavior, I would recommend implementing an IPv4 fallback. At a minimum, it would be nice if the system would report an error if the connection failed - even with debug enabled, the log does not indicate that the connection failed, and is more or less indistinguishable from a log where the reporting worked.

    In my eyes, an IPv4 fallback would be desirable either way, because some places have broken IPv6 networks (they provide v6 IPs, but do not let them access the internet). This is another scenario where the robustness of reporting would be improved with a v4 fallback.

    Steps to reproduce:

    • Set up Sentry with working A record, but AAAA record pointing at an incorrect IP
    • Run the following code:
    import sentry_sdk
    
    sentry_sdk.init(YOUR_DSN, debug=True)
    
    # Produce an exception:
    1/0
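The fallback being requested boils down to iterating over every getaddrinfo result (AAAA and A records) instead of giving up after the first connect failure. A generic sketch of that pattern, not the SDK's actual transport code:

```python
import socket

def connect_any(host, port, timeout=5.0):
    """Try each resolved address (IPv6 first, then IPv4) until one
    connects; only raise if every address family fails."""
    last_err = None
    for family, type_, proto, _name, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, type_, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
            sock.close()
    raise last_err if last_err else OSError(f"no addresses for {host}")
```

With this shape, a broken AAAA record merely costs one timeout before the A record is tried, and the final exception (if any) is surfaced instead of being swallowed silently.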
    
    bug help wanted Status: Backlog 
    opened by malexmave 23
  • Django channels + ASGI leaks memory


I spent around 3 days trying to figure out what was leaking in my Django app, and I was only able to fix it by disabling the Sentry Django integration (in a very isolated test using memory-profiler, tracemalloc and Docker). To give more context before the profiling information, this is how my memory usage graph looked on a production server (killing the app and/or a worker after a certain threshold): (screenshot)

    Now the data I gathered:

    By performing 100,000 requests on this endpoint:

    from rest_framework import status
    from rest_framework.response import Response
    from rest_framework.views import APIView

    class SimpleView(APIView):
        def get(self, request):
            return Response(status=status.HTTP_204_NO_CONTENT)
    

A tracemalloc snapshot, grouped by filename, showed the Sentry Django integration using 9 MB of memory after a 217-second test at 459 requests per second (using NGINX and Hypercorn with 3 workers):

    /usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/django/__init__.py:0: size=8845 KiB (+8845 KiB), count=102930 (+102930), average=88 B
    /usr/local/lib/python3.7/site-packages/django/urls/resolvers.py:0: size=630 KiB (+630 KiB), count=5840 (+5840), average=110 B
    /usr/local/lib/python3.7/linecache.py:0: size=503 KiB (+503 KiB), count=5311 (+5311), average=97 B
    /usr/local/lib/python3.7/asyncio/selector_events.py:0: size=465 KiB (+465 KiB), count=6498 (+6498), average=73 B
    /usr/local/lib/python3.7/site-packages/sentry_sdk/scope.py:0: size=325 KiB (+325 KiB), count=373 (+373), average=892 B
    

    tracemalloc probe endpoint:

    import tracemalloc

    from rest_framework import status
    from rest_framework.decorators import api_view
    from rest_framework.response import Response

    tracemalloc.start()

    start = tracemalloc.take_snapshot()

    @api_view(['GET'])
    def PrintMemoryInformation(request):
        current = tracemalloc.take_snapshot()

        top_stats = current.compare_to(start, 'filename')
        for stat in top_stats[:5]:
            print(stat)

        return Response(status=status.HTTP_204_NO_CONTENT)
    

I have performed longer tests and the Sentry Django integration's memory usage only grows and is never released; this is just a scaled-down version of the tests I've been running to identify this leak.

This is how my Sentry settings look in settings.py: (screenshot)

    Memory profile after disabling the Django Integration (same test and endpoint), no sentry sdk at top 5 most consuming files:

    /usr/local/lib/python3.7/site-packages/django/urls/resolvers.py:0: size=1450 KiB (+1450 KiB), count=15123 (+15123), average=98 B
    /usr/local/lib/python3.7/site-packages/hypercorn/protocol/h11.py:0: size=1425 KiB (+1425 KiB), count=8868 (+8868), average=165 B
    /usr/local/lib/python3.7/site-packages/channels/http.py:0: size=1398 KiB (+1398 KiB), count=14848 (+14848), average=96 B
    /usr/local/lib/python3.7/site-packages/h11/_state.py:0: size=1242 KiB (+1242 KiB), count=13998 (+13998), average=91 B
    /usr/local/lib/python3.7/site-packages/h11/_connection.py:0: size=1226 KiB (+1226 KiB), count=15957 (+15957), average=79 B
    

settings.py for the above profile: (screenshot)

    Memory profile grouped by line number (more verbose):

    /usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/django/__init__.py:272: size=4512 KiB (+4512 KiB), count=33972 (+33972), average=136 B
    /usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/django/__init__.py:134: size=4247 KiB (+4247 KiB), count=67945 (+67945), average=64 B
    /usr/local/lib/python3.7/linecache.py:137: size=492 KiB (+492 KiB), count=4850 (+4850), average=104 B
    /usr/local/lib/python3.7/asyncio/selector_events.py:716: size=415 KiB (+415 KiB), count=2530 (+2530), average=168 B
    /usr/local/lib/python3.7/site-packages/sentry_sdk/scope.py:198: size=279 KiB (+279 KiB), count=1 (+1), average=279 KiB
    /usr/local/lib/python3.7/site-packages/django/views/generic/base.py:65: size=262 KiB (+262 KiB), count=4783 (+4783), average=56 B
    /usr/local/lib/python3.7/socket.py:213: size=237 KiB (+237 KiB), count=2530 (+2530), average=96 B
    /usr/local/lib/python3.7/site-packages/ddtrace/span.py:149: size=229 KiB (+229 KiB), count=1765 (+1765), average=133 B
    /usr/local/lib/python3.7/site-packages/django/urls/resolvers.py:537: size=229 KiB (+229 KiB), count=390 (+390), average=600 B
    /usr/local/lib/python3.7/site-packages/h11/_state.py:261: size=224 KiB (+224 KiB), count=3170 (+3170), average=72 B
    /usr/local/lib/python3.7/site-packages/django/contrib/messages/storage/session.py:21: size=211 KiB (+211 KiB), count=3863 (+3863), average=56 B
    /usr/local/lib/python3.7/site-packages/rest_framework/request.py:414: size=195 KiB (+195 KiB), count=3565 (+3565), average=56 B
    /usr/local/lib/python3.7/functools.py:60: size=194 KiB (+194 KiB), count=1611 (+1611), average=124 B
    /usr/local/lib/python3.7/site-packages/ddtrace/vendor/msgpack/fallback.py:847: size=192 KiB (+192 KiB), count=542 (+542), average=363 B
    /usr/local/lib/python3.7/site-packages/django/http/request.py:427: size=183 KiB (+183 KiB), count=3335 (+3335), average=56 B
    /usr/local/lib/python3.7/site-packages/ddtrace/encoding.py:114: size=171 KiB (+171 KiB), count=6 (+6), average=28.5 KiB
    /usr/local/lib/python3.7/site-packages/rest_framework/views.py:478: size=166 KiB (+166 KiB), count=3002 (+3002), average=57 B
    /usr/local/lib/python3.7/site-packages/django/utils/datastructures.py:67: size=164 KiB (+164 KiB), count=3006 (+3006), average=56 B
    /usr/local/lib/python3.7/asyncio/selector_events.py:581: size=163 KiB (+163 KiB), count=2530 (+2530), average=66 B
    /usr/local/lib/python3.7/site-packages/h11/_connection.py:233: size=159 KiB (+159 KiB), count=2263 (+2263), average=72 B
    

    my pip freeze output:

    aioredis==1.2.0
    amqp==2.5.0
    appdirs==1.4.3
    asgiref==3.1.4
    asn1crypto==0.24.0
    astroid==2.2.5
    async-timeout==3.0.1
    atomicwrites==1.3.0
    attrs==19.1.0
    autobahn==19.7.1
    Automat==0.7.0
    autopep8==1.4.4
    Babel==2.7.0
    billiard==3.6.0.0
    boto3==1.9.185
    botocore==1.12.185
    celery==4.3.0
    certifi==2019.6.16
    cffi==1.12.3
    channels==2.2.0
    channels-redis==2.4.0
    chardet==3.0.4
    Click==7.0
    colorama==0.4.1
    constantly==15.1.0
    coverage==4.5.3
    cryptography==2.7
    daphne==2.3.0
    ddtrace==0.26.0
    dj-database-url==0.5.0
    Django==2.2.3
    django-anymail==6.1.0
    django-cors-headers==3.0.2
    django-filter==2.1.0
    django-ipware==2.1.0
    django-money==0.15
    django-nose==1.4.6
    django-redis==4.10.0
    django-storages==1.7.1
    django-templated-mail==1.1.1
    djangorestframework==3.9.4
    djoser==1.7.0
    docopt==0.6.2
    docutils==0.14
    factory-boy==2.12.0
    Faker==1.0.7
    flower==0.9.3
    geoip2==2.9.0
    gprof2dot==2017.9.19
    graphviz==0.11
    green==2.16.1
    gunicorn==19.9.0
    h11==0.9.0
    h2==3.1.0
    hiredis==1.0.0
    hpack==3.0.0
    httptools==0.0.13
    Hypercorn==0.7.0
    hyperframe==5.2.0
    hyperlink==19.0.0
    idna==2.8
    importlib-metadata==0.18
    incremental==17.5.0
    isort==4.3.21
    jedi==0.14.0
    Jinja2==2.10.1
    jmespath==0.9.4
    kombu==4.6.3
    lazy-object-proxy==1.4.1
    lxml==4.3.4
    MarkupSafe==1.1.1
    maxminddb==1.4.1
    mccabe==0.6.1
    more-itertools==7.1.0
    msgpack==0.6.1
    nose==1.3.7
    objgraph==3.4.1
    packaging==19.0
    parso==0.5.0
    pendulum==2.0.5
    Pillow==6.1.0
    pipdate==0.3.2
    pluggy==0.12.0
    prompt-toolkit==2.0.9
    psutil==5.6.3
    psycopg2-binary==2.8.3
    ptpython==2.0.4
    py==1.8.0
    py-moneyed==0.8.0
    pycodestyle==2.5.0
    pycparser==2.19
    Pygments==2.4.2
    PyHamcrest==1.9.0
    PyJWT==1.7.1
    pylint==2.3.1
    pylint-django==2.0.10
    pylint-plugin-utils==0.5
    pyparsing==2.4.0
    python-dateutil==2.8.0
    pytoml==0.1.20
    pytz==2019.1
    pytzdata==2019.2
    redis==3.2.1
    requests==2.22.0
    s3transfer==0.2.1
    sentry-sdk==0.10.1
    six==1.12.0
    sqlparse==0.3.0
    text-unidecode==1.2
    toml==0.10.0
    tornado==5.1.1
    Twisted==19.2.1
    txaio==18.8.1
    typed-ast==1.4.0
    typing-extensions==3.7.4
    Unidecode==1.1.1
    urllib3==1.25.3
    uvloop==0.12.2
    vine==1.3.0
    wcwidth==0.1.7
    websockets==7.0
    whitenoise==4.1.2
    wrapt==1.11.2
    wsproto==0.14.1
    zipp==0.5.2
    zope.interface==4.6.0
    

I used the official Python Docker image with the tag 3.7, i.e. the latest 3.7.x version.

I hope you can figure out the problem with this data; I'm not sure I'll have the time to contribute a fix myself!

Bonus: memory profile after 1,000,000 requests (the Django integration using 44 MB):

    /usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/django/__init__.py:272: size=43.9 MiB (+43.9 MiB), count=338647 (+338647), average=136 B
    /usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/django/__init__.py:134: size=41.3 MiB (+41.3 MiB), count=677294 (+677294), average=64 B
    /usr/local/lib/python3.7/site-packages/sentry_sdk/scope.py:198: size=2942 KiB (+2942 KiB), count=1 (+1), average=2942 KiB
    /usr/local/lib/python3.7/site-packages/django/views/generic/base.py:65: size=2584 KiB (+2584 KiB), count=47252 (+47252), average=56 B
    /usr/local/lib/python3.7/site-packages/django/contrib/messages/storage/session.py:21: size=2079 KiB (+2079 KiB), count=38013 (+38013), average=56 B
    /usr/local/lib/python3.7/site-packages/rest_framework/request.py:414: size=2006 KiB (+2006 KiB), count=36684 (+36684), average=56 B
    /usr/local/lib/python3.7/site-packages/django/http/request.py:427: size=1857 KiB (+1857 KiB), count=33946 (+33946), average=56 B
    /usr/local/lib/python3.7/site-packages/django/utils/datastructures.py:67: size=1670 KiB (+1670 KiB), count=30546 (+30546), average=56 B
    /usr/local/lib/python3.7/site-packages/rest_framework/views.py:478: size=1547 KiB (+1547 KiB), count=28237 (+28237), average=56 B
    /usr/local/lib/python3.7/site-packages/django/contrib/auth/middleware.py:24: size=1518 KiB (+1518 KiB), count=27752 (+27752), average=56 B
    /usr/local/lib/python3.7/importlib/__init__.py:118: size=1398 KiB (+1398 KiB), count=25571 (+25571), average=56 B
    /usr/local/lib/python3.7/site-packages/django/contrib/messages/storage/__init__.py:12: size=930 KiB (+930 KiB), count=17000 (+17000), average=56 B
    /usr/local/lib/python3.7/site-packages/sentry_sdk/tracing.py:123: size=885 KiB (+885 KiB), count=5985 (+5985), average=151 B
    /usr/local/lib/python3.7/asyncio/selector_events.py:716: size=664 KiB (+664 KiB), count=4049 (+4049), average=168 B
    /usr/local/lib/python3.7/site-packages/django/urls/resolvers.py:541: size=662 KiB (+662 KiB), count=12107 (+12107), average=56 B
    /usr/local/lib/python3.7/site-packages/django/http/request.py:584: size=601 KiB (+601 KiB), count=10986 (+10986), average=56 B
    /usr/local/lib/python3.7/site-packages/django/core/handlers/exception.py:34: size=592 KiB (+592 KiB), count=10618 (+10618), average=57 B
    /usr/local/lib/python3.7/linecache.py:137: size=493 KiB (+493 KiB), count=4875 (+4875), average=104 B
    /usr/local/lib/python3.7/site-packages/h11/_state.py:261: size=434 KiB (+434 KiB), count=6142 (+6142), average=72 B
    /usr/local/lib/python3.7/site-packages/ddtrace/span.py:149: size=406 KiB (+406 KiB), count=3124 (+3124), average=133 B
    
    bug 
    opened by astutejoe 22
  • celery integration RecursionError


Hi there, I upgraded sentry_sdk to 0.7.0 and started getting a RecursionError whenever there's an issue with a Celery task. The Sentry record doesn't contain any stack trace, but I found the error in my APM system (I can only attach a screenshot; the text data is a real mess there). I'm running Celery 4.2.1 on Ubuntu 18.

(screenshot attached: 2019-02-14 15 04 54)
bug needs-information 
    opened by chemiron 22
  • Using multiple DSNs, choosing based on the logger


I'm looking to upgrade from raven-python[flask] to sentry-sdk[flask]. We previously had two DSNs for our backend: one for errors and one for logging performance issues (e.g. slow requests / high DB query count).

    We were previously able to configure this via:

    import logging

    from raven.contrib.flask import Sentry
    from raven.handlers.logging import SentryHandler
    
    performance_logger = logging.getLogger("benchling.performance")
    performance_logger.setLevel(logging.WARNING)
    sentry = Sentry(logging=True, level=logging.WARNING)
    
    def init_sentry(app):
        sentry.init_app(app, dsn=app.config["SENTRY_DSN_SERVER"])
        performance_handler = SentryHandler(dsn=app.config["SENTRY_DSN_BACKEND_PERFORMANCE"])
        performance_logger.addHandler(performance_handler)
    

With the new architecture this seems hard to do, since the DSN is configured once via sentry_sdk.init and the LoggingIntegration simply listens to the root logger. I was able to hack around this by monkey-patching the logging integration's handler as follows:

    import logging
    
    import sentry_sdk
    from sentry_sdk.client import Client
    from sentry_sdk.hub import Hub
    from sentry_sdk.integrations.celery import CeleryIntegration
    from sentry_sdk.integrations.flask import FlaskIntegration
    from sentry_sdk.integrations.logging import LoggingIntegration
    
    
    def register_clients_for_loggers(logger_name_to_client):
        """Monkeypatch LoggingIntegration's EventHandler to override the Client based on the record's logger"""
        hub = Hub.current
        logging_integration = hub.get_integration(LoggingIntegration)
        if not logging_integration:
            return
        handler = logging_integration._handler
        old_emit = handler.emit
    
        def new_emit(record):
            new_client = logger_name_to_client.get(record.name)
            previous_client = hub.client
            should_bind = new_client is not None
            try:
                if should_bind:
                    hub.bind_client(new_client)
                old_emit(record)
            finally:
                if should_bind:
                    hub.bind_client(previous_client)
    
        handler.emit = new_emit
    
    def init_sentry(app):
        sentry_sdk.init(
            dsn=app.config["SENTRY_DSN_SERVER"],
            release=app.config["SENTRY_RELEASE"],
            environment=app.config["SENTRY_ENVIRONMENT"],
            integrations=[
                LoggingIntegration(
                    level=logging.WARNING,  # Capture info and above as breadcrumbs
                    event_level=logging.WARNING,  # Send warnings and errors as events
                ),
                CeleryIntegration(),
                FlaskIntegration(),
            ],
        )
    
        performance_logger = logging.getLogger("benchling.performance")
        performance_logger.setLevel(logging.WARNING)
    
        perf_client = Client(
            dsn=app.config["SENTRY_DSN_BACKEND_PERFORMANCE"],
            release=app.config["SENTRY_RELEASE"],
            environment=app.config["SENTRY_ENVIRONMENT"],
        )
        register_clients_for_loggers({performance_logger.name: perf_client})
    

    Two questions:

    1. Does this approach seem reasonable, or is there a better way to handle this?
    2. If there isn't a better way, would it be possible to have sentry_sdk.init let you specify this mapping?
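An alternative that avoids monkey-patching the integration is a small routing logging.Handler that dispatches records to a per-logger sink; the sinks below are stubs standing in for per-DSN Sentry clients (all names illustrative):

```python
import logging

class RoutingHandler(logging.Handler):
    """Dispatch each record to a sink chosen by logger name."""

    def __init__(self, sinks, default_sink):
        super().__init__()
        self.sinks = sinks            # {logger_name: callable}
        self.default_sink = default_sink

    def emit(self, record):
        sink = self.sinks.get(record.name, self.default_sink)
        sink(self.format(record))

# Stub sinks; in real code these would call capture on Clients built
# from SENTRY_DSN_SERVER / SENTRY_DSN_BACKEND_PERFORMANCE.
server_events, perf_events = [], []
handler = RoutingHandler({"benchling.performance": perf_events.append},
                         server_events.append)

perf_logger = logging.getLogger("benchling.performance")
perf_logger.setLevel(logging.WARNING)
perf_logger.addHandler(handler)
perf_logger.warning("slow request: 2.3s")
```

The routing decision lives in one handler instead of inside the SDK's patched emit, so it survives SDK upgrades.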
    question 
    opened by saifelse 22
  • Possible performance regression related to django signals instrumentation


    How do you use Sentry?

    Self-hosted/on-premise

    Version

    1.12.1

    Steps to Reproduce

    Updated sentry-sdk from 1.9.8 to 1.12.1.

    Expected Result

    Overall project performance not changed

    Actual Result

    Performance decreased by about 25%.

    It looks like the instrumentation of django signals (https://github.com/getsentry/sentry-python/pull/1526/) introduced some overhead. In my specific case, I use https://github.com/matthewwithanm/django-imagekit for thumbnail generation. That library uses the post_init signal to track changes to ImageField values. When a DRF API response contains about 1000 instances, the post_init handler is called many times (even for models that do not actually have any ImageField). imagekit itself does a fast check to decide whether the handler should run, but the instrumentation of each signal dispatch still slows down the overall response time.

    See screenshots of an example request. The first one is with the patch_signals call commented out:

    [screenshot]

    The second one is with signals instrumentation enabled:

    [screenshot]

    I think a sentry option that allows disabling instrumentation of specific signals/receivers would solve this problem. Or maybe an option to disable a specific instrumentation entirely.

    P.S. I used following monkey-patch to disable signals instrumentation and restore previous performance:

    import sentry_sdk
    from sentry_sdk.integrations import django

    # Disable signal instrumentation before init patches anything.
    django.patch_signals = lambda: None

    sentry_sdk.init(
        dsn="dsn",
        integrations=[django.DjangoIntegration()],
    )
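For reference, newer SDK releases grew a first-class switch for this: `DjangoIntegration` accepts a `signals_spans` flag (added after 1.12, so treat the flag name as an assumption if you are pinned to an older SDK), which avoids the monkey-patch entirely:

```python
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="dsn",
    # signals_spans is assumed to exist on newer SDKs; setting it to False
    # disables the django signal instrumentation that introduced the overhead.
    integrations=[DjangoIntegration(signals_spans=False)],
)
```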
    
    Status: Untriaged 
    opened by ron8mcr 0
  • feat(profiling): Enable profiling for ASGI frameworks

    feat(profiling): Enable profiling for ASGI frameworks

    This enables profiling for ASGI frameworks. When a sync view runs under ASGI, the transaction is started in the main thread and the request is then dispatched to a handler thread. We want to set the handler thread as the active thread id to ensure that profiles show it on first render.
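The thread-identity point is easy to see with the stdlib alone: work dispatched to a worker thread reports a different ident than the main thread that started the transaction, which is why the profiler must be told which id is "active" (a pure-Python sketch, no Sentry involved):

```python
import threading

ids = {}
ids["main"] = threading.get_ident()

def handler():
    # The sync view body runs here, on a different thread than the
    # one that started the transaction.
    ids["handler"] = threading.get_ident()

t = threading.Thread(target=handler)
t.start()
t.join()

# Profiles keyed to the main thread's id would miss the handler's samples.
assert ids["main"] != ids["handler"]
```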

    opened by Zylphrex 0
  • Add cloud hosting context

    Add cloud hosting context

    Problem Statement

    When a user gets an error or transaction (performance event) in Sentry, they may not know the specifics of where that particular service is hosted. Where possible, it would be great to have some contextual information about where the service is hosted, to speed up problem solving and, eventually, deploying a fix.

    example data

    cloud:
      provider: aws|gcp|azure
      region: ...
      [account|project]_id: ...
      service: ec2, cloud-functions, ...
      resource: <ARN>
    

    Solution Brainstorm

    Add this data using the Contexts Interface of the event payload

    • [ ] Define a standard set of contextual data:
      • example above
    • [ ] determine how to fetch that data from top hosting platforms
      • [ ] AWS
      • [ ] GCP
      • [ ] Azure
      • [ ] Vercel
      • [ ] Netlify
      • [ ] others?
    • [ ] Python SDK should check for this context and populate it in the event when possible
    • [ ] relay changes needed prior to

    Future considerations:

    • contexts should be searchable in product, changes will be needed in sentry backend so this data can be used to filter issues
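Until the SDK populates this automatically, the proposed shape can already be attached by hand through the Contexts interface. A hedged sketch (the region, account id, and ARN are hypothetical placeholders):

```python
import sentry_sdk

sentry_sdk.init()  # no DSN here, so nothing is actually sent

# Attach the proposed "cloud" context to subsequent events.
sentry_sdk.set_context("cloud", {
    "provider": "aws",             # aws | gcp | azure
    "region": "us-east-1",         # hypothetical placeholder
    "account_id": "123456789012",  # hypothetical placeholder
    "service": "ec2",              # ec2, cloud-functions, ...
    "resource": "arn:...",         # the resource ARN
})
```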
    enhancement Status: Backlog Component: SDK 
    opened by smeubank 3
  • build(deps): bump pyrsistent from 0.16.0 to 0.19.3

    build(deps): bump pyrsistent from 0.16.0 to 0.19.3

    Bumps pyrsistent from 0.16.0 to 0.19.3.

    Changelog

    Sourced from pyrsistent's changelog.

    0.19.3, 2022-12-29

    • Fix #264, add wheels and official support for Python 3.11. Thanks @​hugovk for this!

    0.19.2, 2022-11-03

    • Fix #263, pmap regression in 0.19.1. Element access sometimes unreliable after insert. Thanks @​mwchase for reporting this!

    0.19.1, 2022-10-30

    • Fix #159 (through PR #243). Pmap keys/values/items now behave more like the corresponding Python 3 methods on dicts. Previously they returned a materialized PVector holding the items, now they return views instead. This is a slight backwards incompatibility compared to previous behaviour, hence stepping version to 0.19. Thanks @​noahbenson for this!
    • Fix #244, type for argument to PVector.delete missing. Thanks @dscrofts for this!
    • Fix #249, rename perf test directory to avoid tripping up automatic discovery in more recent setuptools versions
    • Fix #247, performance bug when setting elements in maps and adding elements to sets
    • Fix #248, build pure Python wheels. This is used by some installers. Thanks @​andyreagan for this!
    • Fix #254, #258, support manylinux_2014_aarch64 wheels. Thanks @​Aaron-Durant for this!

    0.19.0 - Never released due to issues found on test-PyPI

    0.18.1, 2022-01-14

    • Add universal wheels for MacOS, thanks @​ntamas for this!
    • Add support for Python 3.10, thanks @​hugovk for this!
    • Fix #236 compilation errors under Python 3.10.
    • Drop official support for Python 3.6 since it's EOL since 2021-12-23.
    • Fix #238, failing doc tests on Python 3.11, thanks @​musicinmybrain for this!

    0.18.0, 2021-06-28

    • Fix #209 Update freeze recurse into pyrsistent data structures and thaw to recurse into lists and dicts, Thanks @​phil-arh for this! NB! This is a backwards incompatible change! To keep the old behaviour pass strict=False to freeze and thaw.
    • Fix #226, stop using deprecated exception.message. Thanks @​hexagonrecursion for this!
    • Fix #211, add union operator to persistent maps. Thanks @​bowbahdoe for this!
    • Fix #194, declare build dependencies through pyproject.toml. Thanks @​jaraco for this!
    • Officially drop Python 3.5 support.
    • Fix #223, release wheels for all major platforms. Thanks @​johnthagen for helping out with this!
    • Fix #221, KeyError obscured by TypeError if key is a tuple. Thanks @​ganwell for this!
    • Fix LICENSE file name spelling. Thanks @​ndowens and @​barentsen for this!
    • Fix #216, add abstractmethod decorator for CheckedType and ABCMeta for _CheckedTypeMeta. Thanks @​ss18 for this!
    • Fix #228, rename example classes in tests to avoid name clashes with pytest.

    0.17.3, 2020-09-13

    • Fix #208, release v0.17.3 with proper meta data requiring Python >= 3.5.

    0.16.1, 2020-09-13

    • Add "python_requires >= 2.7" to setup.py in preparation for Python 2.7 incompatible updates in 0.17. This is the last version of pyrsistent that can be used with Python 2.7.

    0.17.2 (yanked awaiting proper fix for Python 3 req), 2020-09-09

    • Same as 0.17.1 released with more recent version of setuptools to get proper meta data for in place.

    ... (truncated)

    Commits
    • cc90f3e Prepare version v0.19.3
    • f030e5e Merge pull request #267 from Julian/typing
    • d98a20d Add pyrsistent.typing to the API documentation.
    • e3fbc39 Make push of code and tags one command when releasing
    • 7ca853c Add reference to pyrsistent extras in README
    • 1370783 Merge pull request #264 from hugovk/add-3.11
    • c2fd1f0 Remove redundant Python < 3.7 code
    • 1bbe757 Add support for Python 3.11
    • 29f5ac9 Prepare version v0.19.2
    • 226ebb6 Fix #263 pmap regression when underlying buckets are reallocated
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies python 
    opened by dependabot[bot] 0
  • build(deps): bump sphinx from 5.3.0 to 6.0.0

    build(deps): bump sphinx from 5.3.0 to 6.0.0

    Bumps sphinx from 5.3.0 to 6.0.0.

    Release notes

    Sourced from sphinx's releases.

    v6.0.0

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0b2

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0b1

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    Changelog

    Sourced from sphinx's changelog.

    Release 6.0.0 (released Dec 29, 2022)

    Dependencies

    • #10468: Drop Python 3.6 support
    • #10470: Drop Python 3.7, Docutils 0.14, Docutils 0.15, Docutils 0.16, and Docutils 0.17 support. Patch by Adam Turner

    Incompatible changes

    • #7405: Removed the jQuery and underscore.js JavaScript frameworks.

      These frameworks are no longer automatically injected into themes from Sphinx 6.0. If you develop a theme or extension that uses the jQuery, $, or $u global objects, you need to update your JavaScript to modern standards, or use one of the mitigations below.

      The first option is to use the sphinxcontrib.jquery_ extension, which has been developed by the Sphinx team and contributors. To use this, add sphinxcontrib.jquery to the extensions list in conf.py, or call app.setup_extension("sphinxcontrib.jquery") if you develop a Sphinx theme or extension.

      The second option is to manually ensure that the frameworks are present. To re-add jQuery and underscore.js, you will need to copy jquery.js and underscore.js from the Sphinx repository_ to your static directory, and add the following to your layout.html:

      .. code-block:: html+jinja

      {%- block scripts %}
          <script src="{{ pathto('_static/jquery.js', resource=True) }}"></script>
          <script src="{{ pathto('_static/underscore.js', resource=True) }}"></script>
          {{ super() }}
      {%- endblock %}

      .. _sphinxcontrib.jquery: https://github.com/sphinx-contrib/jquery/

      Patch by Adam Turner.

    • #10471, #10565: Removed deprecated APIs scheduled for removal in Sphinx 6.0. See :ref:dev-deprecated-apis for details. Patch by Adam Turner.

    • #10901: C Domain: Remove support for parsing pre-v3 style type directives and roles. Also remove associated configuration variables c_allow_pre_v3 and c_warn_on_allowed_pre_v3. Patch by Adam Turner.

    Features added

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies python 
    opened by dependabot[bot] 0