Python logging made (stupidly) simple

Overview

Loguru logo

Loguru is a library which aims to bring enjoyable logging to Python.

Have you ever felt lazy about configuring a logger and used print() instead? I did, yet logging is fundamental to every application and eases the process of debugging. With Loguru you have no excuse not to use logging from the start; it is as simple as from loguru import logger.

This library is also intended to make Python logging less painful by adding a bunch of useful functionalities that address caveats of the standard loggers. Logging in your application should be automatic; Loguru tries to make it both pleasant and powerful.

Installation

pip install loguru

Features

Take the tour

Ready to use out of the box without boilerplate

The main concept of Loguru is that there is one and only one logger.

For convenience, it is pre-configured and outputs to stderr to begin with (but that's entirely configurable).

from loguru import logger

logger.debug("That's it, beautiful and simple logging!")

The logger is just an interface which dispatches log messages to configured handlers. Simple, right?

No Handler, no Formatter, no Filter: one function to rule them all

How to add a handler? How to set up log formatting? How to filter messages? How to set the level?

One answer: the add() function.

logger.add(sys.stderr, format="{time} {level} {message}", filter="my_module", level="INFO")

This function should be used to register sinks which are responsible for managing log messages contextualized with a record dict. A sink can take many forms: a simple function, a string path, a file-like object, a coroutine function or a built-in Handler.
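
For instance, a sink can be a plain callable receiving the formatted message. A minimal sketch (the function name is arbitrary):

def print_sink(message):
    # Each call receives one fully formatted log line (trailing newline included)
    print(message, end="")

logger.add(print_sink, level="WARNING")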

Note that you may also remove() a previously added handler by using the identifier returned while adding it. This is particularly useful if you want to supersede the default stderr handler: just call logger.remove() to make a fresh start.
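
A minimal sketch of superseding the default handler (the chosen level is arbitrary):

import sys
from loguru import logger

logger.remove()  # Remove the pre-configured stderr handler
handler_id = logger.add(sys.stderr, level="INFO")  # Keep the identifier of the new sink
logger.remove(handler_id)  # Later, remove that specific sink by its identifier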

Easier file logging with rotation / retention / compression

If you want to send logged messages to a file, you just have to use a string path as the sink. It can be automatically timed too for convenience:

logger.add("file_{time}.log")

It is also easily configurable if you need a rotating logger, want to remove older logs, or wish to compress your files at closure.

logger.add("file_1.log", rotation="500 MB")    # Automatically rotate too big file
logger.add("file_2.log", rotation="12:00")     # New file is created each day at noon
logger.add("file_3.log", rotation="1 week")    # Once the file is too old, it's rotated

logger.add("file_X.log", retention="10 days")  # Cleanup after some time

logger.add("file_Y.log", compression="zip")    # Save some loved space

Modern string formatting using braces style

Loguru favors the much more elegant and powerful {} formatting over %; logging functions are actually equivalent to str.format().

logger.info("If you're using Python {}, prefer {feature} of course!", 3.6, feature="f-strings")

Exceptions catching within threads or main

Have you ever seen your program crash unexpectedly without anything appearing in the log file? Have you ever noticed that exceptions occurring in threads are not logged? This can be solved using the catch() decorator / context manager, which ensures that any error is correctly propagated to the logger.

@logger.catch
def my_function(x, y, z):
    # An error? It's caught anyway!
    return 1 / (x + y + z)
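
catch() also works as a context manager; here is a small, hedged sketch where risky_division() stands in for any code that may raise:

def risky_division(x):
    return 1 / x

with logger.catch(message="Unexpected error in the computation"):
    # Any exception raised in this block is logged with its traceback, then suppressed
    risky_division(0)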

Pretty logging with colors

Loguru automatically adds colors to your logs if your terminal is compatible. You can define your favorite style by using markup tags in the sink format.

logger.add(sys.stdout, colorize=True, format="<green>{time}</green> <level>{message}</level>")

Asynchronous, Thread-safe, Multiprocess-safe

All sinks added to the logger are thread-safe by default. They are not multiprocess-safe, but you can enqueue the messages to ensure log integrity. The same argument can also be used if you want async logging.

logger.add("somefile.log", enqueue=True)

Coroutine functions used as sinks are also supported and should be awaited with complete().
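
A minimal sketch of an asynchronous sink (the coroutine name is arbitrary): messages are dispatched to the coroutine while the event loop runs, and complete() awaits their processing before exiting:

import asyncio
from loguru import logger

async def async_sink(message):
    # A coroutine sink receives the formatted message like any other sink
    print(message, end="")

logger.add(async_sink)

async def main():
    logger.info("Logged through a coroutine sink")
    await logger.complete()  # Wait until all scheduled messages have been processed

asyncio.run(main())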

Fully descriptive exceptions

Logging exceptions that occur in your code is important to track bugs, but it's quite useless if you don't know why it failed. Loguru helps you identify problems by allowing the entire stack trace to be displayed, including the values of variables (thanks to better_exceptions for this!).

The code:

logger.add("out.log", backtrace=True, diagnose=True)  # Caution, may leak sensitive data in prod

def func(a, b):
    return a / b

def nested(c):
    try:
        func(5, c)
    except ZeroDivisionError:
        logger.exception("What?!")

nested(0)

Would result in:

2018-07-17 01:38:43.975 | ERROR    | __main__:nested:10 - What?!
Traceback (most recent call last):

  File "test.py", line 12, in <module>
    nested(0)
    └ <function nested at 0x7f5c755322f0>

> File "test.py", line 8, in nested
    func(5, c)
    │       └ 0
    └ <function func at 0x7f5c79fc2e18>

  File "test.py", line 4, in func
    return a / b
           │   └ 0
           └ 5

ZeroDivisionError: division by zero

Structured logging as needed

Want your logs to be serialized for easier parsing or to pass them around? Using the serialize argument, each log message will be converted to a JSON string before being sent to the configured sink.

logger.add(custom_sink_function, serialize=True)

Using bind() you can contextualize your logger messages by modifying the extra record attribute.

logger.add("file.log", format="{extra[ip]} {extra[user]} {message}")
context_logger = logger.bind(ip="192.168.0.1", user="someone")
context_logger.info("Contextualize your logger easily")
context_logger.bind(user="someone_else").info("Inline binding of extra attribute")
context_logger.info("Use kwargs to add context during formatting: {user}", user="anybody")

It is possible to modify a context-local state temporarily with contextualize():

with logger.contextualize(task=task_id):
    do_something()
    logger.info("End of task")

You can also have more fine-grained control over your logs by combining bind() and filter:

logger.add("special.log", filter=lambda record: "special" in record["extra"])
logger.debug("This message is not logged to the file")
logger.bind(special=True).info("This message, though, is logged to the file!")

Finally, the patch() method allows dynamic values to be attached to the record dict of each new message:

logger.add(sys.stderr, format="{extra[utc]} {message}")
logger = logger.patch(lambda record: record["extra"].update(utc=datetime.utcnow()))

Lazy evaluation of expensive functions

Sometimes you would like to log verbose information without a performance penalty in production; you can use the opt() method to achieve this.

logger.opt(lazy=True).debug("If sink level <= DEBUG: {x}", x=lambda: expensive_function(2**64))

# By the way, "opt()" serves many usages
logger.opt(exception=True).info("Error stacktrace added to the log message (tuple accepted too)")
logger.opt(colors=True).info("Per message <blue>colors</blue>")
logger.opt(record=True).info("Display values from the record (eg. {record[thread]})")
logger.opt(raw=True).info("Bypass sink formatting\n")
logger.opt(depth=1).info("Use parent stack context (useful within wrapped functions)")
logger.opt(capture=False).info("Keyword arguments not added to {dest} dict", dest="extra")

Customizable levels

Loguru comes with all the standard logging levels, to which trace() and success() are added. Do you need more? Then just create them using the level() function.

new_level = logger.level("SNAKY", no=38, color="<yellow>", icon="🐍")

logger.log("SNAKY", "Here we go!")

Better datetime handling

Standard logging is bloated with arguments like datefmt or msecs, %(asctime)s and %(created)s, naive datetimes without timezone information, unintuitive formatting, etc. Loguru fixes it:

logger.add("file.log", format="{time:YYYY-MM-DD at HH:mm:ss} | {level} | {message}")

Suitable for scripts and libraries

Using the logger in your scripts is easy, and you can configure() it at start. To use Loguru from inside a library, remember to never call add() but use disable() instead, so logging functions become no-ops. If a developer wishes to see your library's logs, they can enable() it again.

# For scripts
config = {
    "handlers": [
        {"sink": sys.stdout, "format": "{time} - {message}"},
        {"sink": "file.log", "serialize": True},
    ],
    "extra": {"user": "someone"}
}
logger.configure(**config)

# For libraries
logger.disable("my_library")
logger.info("No matter added sinks, this message is not displayed")
logger.enable("my_library")
logger.info("This message however is propagated to the sinks")

Entirely compatible with standard logging

Wish to use built-in logging Handler as a Loguru sink?

handler = logging.handlers.SysLogHandler(address=('localhost', 514))
logger.add(handler)

Need to propagate Loguru messages to standard logging?

class PropagateHandler(logging.Handler):
    def emit(self, record):
        logging.getLogger(record.name).handle(record)

logger.add(PropagateHandler(), format="{message}")

Want to intercept standard logging messages toward your Loguru sinks?

class InterceptHandler(logging.Handler):
    def emit(self, record):
        # Get corresponding Loguru level if it exists
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find caller from where originated the logged message
        frame, depth = logging.currentframe(), 2
        while frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())

logging.basicConfig(handlers=[InterceptHandler()], level=0)

Personalizable defaults through environment variables

Don't like the default logger formatting? Would you prefer another DEBUG color? No problem:

# Linux / OSX
export LOGURU_FORMAT="{time} | <lvl>{message}</lvl>"

# Windows
setx LOGURU_DEBUG_COLOR "<green>"

Convenient parser

It is often useful to extract specific information from generated logs; this is why Loguru provides a parse() method to help deal with logs and regexes.

pattern = r"(?P<time>.*) - (?P<level>[0-9]+) - (?P<message>.*)"  # Regex with named groups
caster_dict = dict(time=dateutil.parser.parse, level=int)        # Transform matching groups

for groups in logger.parse("file.log", pattern, cast=caster_dict):
    print("Parsed:", groups)
    # {"level": 30, "message": "Log example", "time": datetime(2018, 12, 09, 11, 23, 55)}

Exhaustive notifier

Loguru can easily be combined with the great notifiers library (must be installed separately) to receive an e-mail when your program fails unexpectedly, or to send many other kinds of notifications.

import notifiers

params = {
    "username": "[email protected]",
    "password": "abc123",
    "to": "[email protected]"
}

# Send a single notification
notifier = notifiers.get_notifier("gmail")
notifier.notify(message="The application is running!", **params)

# Be alerted on each error message
from notifiers.logging import NotificationHandler

handler = NotificationHandler("gmail", defaults=params)
logger.add(handler, level="ERROR")

10x faster than built-in logging

Although the impact of logging on performance is in most cases negligible, a zero-cost logger would allow it to be used anywhere without concern. In an upcoming release, Loguru's critical functions will be implemented in C for maximum speed.

Documentation

Comments
  • Patch Arbitrary Code Execution

    My company's internal deps auditing system is beginning to flag loguru because of this potential exploit. I don't know if it has come to your attention.

    I don't know exactly how to prevent it, but am willing to help out if you need support. I will study the issue more.

    https://github.com/418sec/huntr/pull/1592

    enhancement 
    opened by aaronclong 44
  • Pytest's caplog fixture doesn't seem to work

    Summary

    Pytest's caplog fixture is a critical part of testing. I'd love to move to loguru, but loguru doesn't seem to work with caplog.

    I'm not sure if this is user error (perhaps it's documented somewhere? I haven't been able to find it.), if it is some design oversight/choice, or if the problem is actually on pytest's end.

    Expected Result

    Users should be able to use loguru as a drop-in replacement for the stdlib logging package and have tests that use the caplog fixture still work.

    Actual Result

    Drop-in replacement causes tests that use the caplog pytest fixture to fail.

    Steps to Reproduce

    Base test file

    # test_demo.py
    import pytest
    import logging
    logger = logging.getLogger()
    logger.addHandler(logging.StreamHandler())
    # from loguru import logger
    
    def some_func(a, b):
        if a < 1:
            logger.warning("Oh no!")
        return a + b
    
    def test_some_func_logs_warning(caplog):
        assert some_func(-1, 2) == 1
        assert "Oh no!" in caplog.text
    
    if __name__ == "__main__":
        some_func(-1, 1)
        print("end")
    

    Without Loguru:

    $ python test_demo.py
    Oh no!
    end
    (.venv) Previous Dir: /home/dthor
    09:59:56 [email protected] /home/dthor/temp/loguru
    $ pytest
    ========================== test session starts ==========================
    platform linux -- Python 3.6.7, pytest-4.3.0, py-1.8.0, pluggy-0.8.1
    rootdir: /home/dthor/temp/loguru, inifile:
    collected 1 item
    
    test_demo.py .                                                    [100%]
    
    ======================= 1 passed in 0.03 seconds ========================
    

    With Loguru:

    Adjust test_demo.py by commenting out stdlib logging and uncommenting loguru:

    ...
    # import logging
    # logger = logging.getLogger()
    # logger.addHandler(logging.StreamHandler())
    from loguru import logger
    ...
    
    $ python test_demo.py
    2019-02-22 10:02:35.551 | WARNING  | __main__:some_func:9 - Oh no!
    end
    (.venv) Previous Dir: /home/dthor
    10:02:35 [email protected] /home/dthor/temp/loguru
    $ pytest
    ========================== test session starts ==========================
    platform linux -- Python 3.6.7, pytest-4.3.0, py-1.8.0, pluggy-0.8.1
    rootdir: /home/dthor/temp/loguru, inifile:
    collected 1 item
    
    test_demo.py F                                                    [100%]
    
    =============================== FAILURES ================================
    ______________________ test_some_func_logs_warning ______________________
    
    caplog = <_pytest.logging.LogCaptureFixture object at 0x7f8e8b620438>
    
        def test_some_func_logs_warning(caplog):
            assert some_func(-1, 2) == 1
    >       assert "Oh no!" in caplog.text
    E       AssertionError: assert 'Oh no!' in ''
    E        +  where '' = <_pytest.logging.LogCaptureFixture object at 0x7f8e8b620438>.text
    
    test_demo.py:14: AssertionError
    ------------------------- Captured stderr call --------------------------
    2019-02-22 10:02:37.708 | WARNING  | test_demo:some_func:9 - Oh no!
    ======================= 1 failed in 0.20 seconds ========================
    

    Version information

    $ python --version
    Python 3.6.7
    (.venv) Previous Dir: /home/dthor
    10:10:03 [email protected] /home/dthor/temp/loguru
    $ pip list
    Package                Version
    ---------------------- -----------
    ansimarkup             1.4.0
    atomicwrites           1.3.0
    attrs                  18.2.0
    better-exceptions-fork 0.2.1.post6
    colorama               0.4.1
    loguru                 0.2.5
    more-itertools         6.0.0
    pip                    19.0.3
    pkg-resources          0.0.0
    pluggy                 0.8.1
    py                     1.8.0
    Pygments               2.3.1
    pytest                 4.3.0
    setuptools             40.8.0
    six                    1.12.0
    (.venv) Previous Dir: /home/dthor
    10:10:07 [email protected] /home/dthor/temp/loguru
    $ uname -a
    Linux Thorium 4.4.0-17763-Microsoft #253-Microsoft Mon Dec 31 17:49:00 PST 2018 x86_64 x86_64 x86_64 GNU/Linux
    (.venv) Previous Dir: /home/dthor
    10:11:33 [email protected] /home/dthor/temp/loguru
    $ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 18.04.2 LTS
    Release:        18.04
    Codename:       bionic
    
    question documentation 
    opened by dougthor42 29
  • Messages are getting overlapped when logging to sys.stderr using multiprocessing

    I am trying to log to sys.stderr using multiprocessing but the log messages are getting overlapped. Is this the intended behavior?

    2019-07-01 15:19:49.197 | SUCCESS  | __mp_main__:worker:14 - My function executed successfully
    2019-07-01 15:19:49.2052019-07-01 15:19:49.205 |  | SUCCESS SUCCESS  |  | __mp_main____mp_main__::workerworker::1414 -  - My function executed successfullyMy function executed successfully
    
    2019-07-01 15:19:50.198 | SUCCESS  | __mp_main__2019-07-01 15:19:50.200: | workerSUCCESS : | 14__mp_main__ - :My function executed successfullyworker
    :14 - My function executed successfully
    2019-07-01 15:19:50.205 | SUCCESS  | __mp_main__:worker:14 - My function executed successfully
    2019-07-01 15:19:50.209 | SUCCESS  | __mp_main__:worker:14 - My function executed successfully
    2019-07-01 15:19:51.207 | SUCCESS  | __mp_main__:worker:14 - My function executed successfully
    2019-07-01 15:19:52.198 | SUCCESS  | __mp_main__:worker:14 - My function executed successfully
    2019-07-01 15:19:52.209 | SUCCESS  | __mp_main__:worker:14 - My function executed successfully
    

    Here is the code that generated the above output.

    import sys
    import time
    import random
    from concurrent.futures import ProcessPoolExecutor
    
    from loguru import logger
    
    logger.remove()
    logger.add(sys.stderr, enqueue=True)
    
    
    def worker(seconds):
        time.sleep(seconds)
        logger.success("My function executed successfully")
    
    
    def main():
    
        with ProcessPoolExecutor() as executor:
            seconds = [random.randint(1, 3) for i in range(10)]
            executor.map(worker, seconds)
    
    
    if __name__ == "__main__":
        main()
    
    

    Cheers, Chris

    bug 
    opened by chkoar 25
  • Change level of default handler

    First off, this library is terrific, I found it via the podcast Python Bytes and I've been using it ever since.

    So here is my question: I understand, the default handler for from loguru import logger goes to sys.stderr.

    When I try: logger.add(sys.stderr, level="INFO"), I still get DEBUG level messages in the terminal.

    My goal is to change the level of the logging to sys.stderr. I don't have any other handlers.

    question 
    opened by jetheurer 25
  • Does the compression task block the main thread?

    Hi, I have a program running with several threads. Each of them uses queues to buffer its workload while another thread is running. In the main thread my main engine runs and uses the queues to distribute and forward work. Also, in the main module the logger is configured with a custom log rotation function. (See #241).

    I am monitoring the queue loads and I can see that while log compression happens (after 1 GiB with bz2 - it takes ~2.5 minutes) my queues only fill up and cannot be worked on in my main thread.

    So I thought about also putting my engine in a separate thread. But actually logging should be the part which runs in a separate thread.

    Can you tell me how this is managed in Loguru? Does the log processing run in a separate thread?

    I guess the problem is related to the compression itself. It is running synchronously...

    enhancement question documentation 
    opened by mahadi 23
  • Use separate option to control exception formatting

    Currently, the backtrace option is overloaded to control both displaying more of the traceback and formatting with better_exceptions. Semantically, those two things should be separate. Having them tied together prevents users from logging just the caught frame with better_exceptions formatting (unless I'm missing something). That's actually how I want most of my exception logging to be.

    enhancement 
    opened by thebigmunch 21
  • How to implement structured logging through indentation?

    I'd like to implement a context manager that will add some indentation to the log messages sent from within its scope.

    Something like this:

    from loguru import logger
    
    # ... configure the logger in any way
    
    logger.info('this is the "root" level')
    logger.info('and so is this')
    
    with indent_logs(logger, indent_size=4):
        logger.info("I'm at the first level")
    
        with indent_logs(logger, indent_size=2):
            logger.info("I'm on the second level")
    
        logger.info('back on level 1')
    
    logger.info('back at root')
    

    Which would output:

    2021-04-09 14:04:14.335 | INFO     | __main__:<module>:29 - this is the "root" level
    2021-04-09 14:04:14.335 | INFO     | __main__:<module>:30 - and so is this
    2021-04-09 14:04:14.336 | INFO     | __main__:<module>:33 -     I'm at the first level
    2021-04-09 14:04:14.336 | INFO     | __main__:<module>:36 -       I'm on the second level
    2021-04-09 14:04:14.336 | INFO     | __main__:<module>:38 -     back on level 1
    2021-04-09 14:04:14.336 | INFO     | __main__:<module>:40 - back at root
    

    But I'm unsure of what is the safest way to modify the logger so that all handlers and configurations are affected properly.

    Any advice?

    question 
    opened by jaksiprejak 19
  • Multiprocessing resource_tracker error when trying to use click completions while using enqueue

    Issue:

    I have found that if I add a sink that uses enqueue=True, using click completions as documented here causes an error (although the completion still works):

    ↴2 ~/----/loguru_click_completion_issue on  master [✘!?] is 📦 v1.0.0
    ❯ poetry shell
    Spawning shell within /----/tool-AGIZ3qxU-py3.9
    source /----/tool-AGIZ3qxU-py3.9/bin/activate.fish
    Welcome to fish, the friendly interactive shell
    
    ↴3 ~/----/loguru_click_completion_issue on  master [✘!?] is 📦 v1.0.0
    ❯ source /----/tool-AGIZ3qxU-py3.9/bin/activate.fish
    
    ↴3 ~/----/loguru_click_completion_issue on  master [✘!?] is 📦 v1.0.0
    ❯ tool sub-/----/.asdf/installs/python/3.9.2/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 8 leaked semaphore objects to clean up at shutdownA.)  sub-b  (Subcommand B.)
      warnings.warn('resource_tracker: There appear to be %d '
    
    

    I have set up a basic project to illustrate the issue. Project structure:

    loguru_click_completion_issue/
    ├── src
    │   └── tool
    │       ├── __init__.py
    │       └── cli.py
    ├── poetry.lock
    └── pyproject.toml
    

    pyproject.toml:

    [tool.poetry]
    name = "tool"
    version = "1.0.0"
    description = ""
    authors = ["---- <---->"]
    packages = [
        {include = "tool", from = "src"},
    ]
    
    [tool.poetry.dependencies]
    python = "^3.7"
    
    click = "^8.0.1"
    loguru = "^0.5.3"
    
    [tool.poetry.dev-dependencies]
    
    [tool.poetry.scripts]
    tool = "tool.cli:command"
    
    [build-system]
    requires = ["poetry>=0.12"]
    build-backend = "poetry.masonry.api"
    
    

    src/tool/cli.py:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    
    
    """Simple example of issue caused by click completion + loguru enqueued log."""
    
    
    ####################################################################################################
    # Imports
    ####################################################################################################
    
    
    # ==Standard Library==
    import sys
    
    # ==Site-Packages==
    import click
    
    from loguru import logger
    
    
    ####################################################################################################
    # Logging
    ####################################################################################################
    
    
    logger.add(sys.stderr, format="{time} {level} {message}", level="INFO", enqueue=True)
    
    
    ####################################################################################################
    # Main
    ####################################################################################################
    
    
    @click.command()
    def sub_a() -> None:
        """Subcommand A."""
    
        print("A")
    
    
    @click.command()
    def sub_b() -> None:
        """Subcommand B."""
    
        print("B")
    
    
    @click.group("tool")
    def command() -> None:
        """Root command group."""
    
        pass
    
    
    # ------------------------------------------------------------------------------------------------ #
    
    
    # Assemble subcommands
    command.add_command(sub_a)
    command.add_command(sub_b)
    

    ~/.config/fish/completions/tool.fish:

    function _tool_completion;
        set -l response;
    
        for value in (env _TOOL_COMPLETE=fish_complete COMP_WORDS=(commandline -cp) COMP_CWORD=(commandline -t) tool);
            set response $response $value;
        end;
    
        for completion in $response;
            set -l metadata (string split "," $completion);
    
            if test $metadata[1] = "dir";
                __fish_complete_directories $metadata[2];
            else if test $metadata[1] = "file";
                __fish_complete_path $metadata[2];
            else if test $metadata[1] = "plain";
                echo $metadata[2];
            end;
        end;
    end;
    
    complete --no-files --command tool --arguments "(_tool_completion)";
    
    

    The issue does not occur on Python 3.7, but does happen on later versions. I initially raised this issue on the click bug tracker, but it was closed since this is "not an issue with click," which I suppose is technically true.


    Environment:

    • OS: macOS 10.15.7
    • Python: > 3.7
    • Shell: fish
    question 
    opened by taranlu-houzz 17
  • Many tests fail when run locally

    63 failed, 627 passed, 1 skipped, 1 xfailed

    Same results on my Windows desktop and a Linux server.

    Edit: Should've mentioned, just for completeness, Python 3.7 on Windows and Python 3.6 on the Linux server.

    enhancement 
    opened by thebigmunch 17
  • Capturing logs with log_capture in behave

    Problem

    I run my tests with behave (https://behave.readthedocs.io/en/stable/api.html?highlight=log_capture#logging-capture) and after changing to Loguru my tests fail.

    I have read this post (Issue 59), regarding basically the same thing but with pytest, and I understand that Loguru doesn't use the standard-library logging that behave's log capture probably utilizes, and that the way to solve the problem is to propagate the logs to the logging lib;

    however, I don't understand how to replicate the fix.

    I tried using the fixture in an analogous way but it didn't seem to work:

    Example of what I tried

    environment.py

    from api.features.fixtures import log_capture_fix
    
    
    def before_all(context):
        use_fixture(log_capture_fix, context)
        context.config.setup_logging()
    

    fixtures.py

    import logging
    from behave import fixture
    from loguru import logger
    
    
    @fixture
    def log_capture_fix(context):
        class PropogateHandler(logging.Handler):
            def emit(self, record):
                logging.getLogger(record.name).handle(record)
    
        handler_id = logger.add(PropogateHandler(), format="{message} {extra}")
        yield context.log_capture
        logger.remove(handler_id)
    

    Could you help me configure this properly, or point me in the right direction?

    question 
    opened by Tsubanee 16
  • Filtering chatty libraries

    In general, debug output is what I'm looking for, and I'd like to get it from all dependent libraries. Using the snippet for integrating with the logging library, all is mostly well.

    However, if left unchecked, boto3 and requests are good examples of libraries which are too chatty and log a lot of records that are generally not useful. But it's not obvious to me how to add loguru-side sinks in such a way that I can specify that x, y, and z packages ought to be filtered to INFO logs, while otherwise unspecified packages ought to continue logging at DEBUG.

    Would the canonical method be adding a filter param to every add call and doing a separate level comparison at the point where I'm comparing against the logger?

    feature 
    opened by DanCardin 16
  • Bump sphinx from 5.3.0 to 6.0.0

    Bumps sphinx from 5.3.0 to 6.0.0.

    Release notes

    Sourced from sphinx's releases.

    v6.0.0

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0b2

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0b1

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    Changelog

    Sourced from sphinx's changelog.

    Release 6.0.0 (released Dec 29, 2022)

    Dependencies

    • #10468: Drop Python 3.6 support
    • #10470: Drop Python 3.7, Docutils 0.14, Docutils 0.15, Docutils 0.16, and Docutils 0.17 support. Patch by Adam Turner

    Incompatible changes

    • #7405: Removed the jQuery and underscore.js JavaScript frameworks.

      These frameworks are no longer automatically injected into themes from Sphinx 6.0. If you develop a theme or extension that uses the jQuery, $, or $u global objects, you need to update your JavaScript to modern standards, or use the mitigation below.

      The first option is to use the sphinxcontrib.jquery_ extension, which has been developed by the Sphinx team and contributors. To use this, add sphinxcontrib.jquery to the extensions list in conf.py, or call app.setup_extension("sphinxcontrib.jquery") if you develop a Sphinx theme or extension.

      The second option is to manually ensure that the frameworks are present. To re-add jQuery and underscore.js, you will need to copy jquery.js and underscore.js from the Sphinx repository_ to your static directory, and add the following to your layout.html:

      .. code-block:: html+jinja

      {%- block scripts %} {{ super() }} {%- endblock %}

      .. _sphinxcontrib.jquery: https://github.com/sphinx-contrib/jquery/

      Patch by Adam Turner.

    • #10471, #10565: Removed deprecated APIs scheduled for removal in Sphinx 6.0. See :ref:dev-deprecated-apis for details. Patch by Adam Turner.

    • #10901: C Domain: Remove support for parsing pre-v3 style type directives and roles. Also remove associated configuration variables c_allow_pre_v3 and c_warn_on_allowed_pre_v3. Patch by Adam Turner.

    Features added

    ... (truncated)

    opened by dependabot[bot] 0
  • Bump pre-commit from 2.20.0 to 2.21.0

    Bumps pre-commit from 2.20.0 to 2.21.0.

    Release notes

    Sourced from pre-commit's releases.

    pre-commit v2.21.0

    Features

    Fixes

    Changelog

    Sourced from pre-commit's changelog.

    2.21.0 - 2022-12-25

    Features

    Fixes

    Commits
    • 40c5bda v2.21.0
    • bb27ea3 Merge pull request #2642 from rkm/fix/dotnet-nuget-config
    • c38e0c7 dotnet: ignore nuget source during tool install
    • bce513f Merge pull request #2641 from rkm/fix/dotnet-tool-prefix
    • e904628 fix dotnet hooks with prefixes
    • d7b8b12 Merge pull request #2646 from pre-commit/pre-commit-ci-update-config
    • 94b6178 [pre-commit.ci] pre-commit autoupdate
    • b474a83 Merge pull request #2643 from pre-commit/pre-commit-ci-update-config
    • a179808 [pre-commit.ci] pre-commit autoupdate
    • 3aa6206 Merge pull request #2605 from lorenzwalthert/r/fix-exe
    • Additional commits viewable in compare view

    opened by dependabot[bot] 0
  • Bump tox from 3.27.1 to 4.1.2

    Bumps tox from 3.27.1 to 4.1.2.

    Release notes

    Sourced from tox's releases.

    4.1.2

    What's Changed

    Full Changelog: https://github.com/tox-dev/tox/compare/4.1.1...4.1.2

    4.1.1

    What's Changed

    Full Changelog: https://github.com/tox-dev/tox/compare/4.1.0...4.1.1

    4.1.0

    What's Changed

    New Contributors

    Full Changelog: https://github.com/tox-dev/tox/compare/4.0.19...4.1.0

    4.0.18

    What's Changed

    Full Changelog: https://github.com/tox-dev/tox/compare/4.0.17...4.0.18

    4.0.17

    What's Changed

    New Contributors

    Full Changelog: https://github.com/tox-dev/tox/compare/4.0.16...4.0.17

    4.0.16

    What's Changed

    ... (truncated)

    Changelog

    Sourced from tox's changelog.

    v4.1.2 (2022-12-30)

    Bugfixes - 4.1.2

    - Fix ``--skip-missing-interpreters`` behaviour - by :user:`q0w`. (:issue:`2649`)
    - Restore tox 3 behaviour of showing the output of pip freeze, however now only active when running inside a CI
      environment - by :user:`gaborbernat`. (:issue:`2685`)
    - Fix extracting extras from markers with many extras - by :user:`q0w`. (:issue:`2791`)
    

    v4.1.1 (2022-12-29)

    Bugfixes - 4.1.1

    • Fix logging error with emoji in git branch name. (:issue:2768)

    Improved Documentation - 4.1.1

    - Add faq entry about re-use of environments - by :user:`jugmac00`. (:issue:`2788`)
    

    v4.1.0 (2022-12-29)

    Features - 4.1.0

    - ``-f`` can be used multiple times and on hyphenated factors (e.g. ``-f py311-django -f py39``) - by :user:`sirosen`. (:issue:`2766`)
    

    Improved Documentation - 4.1.0
    - Fix a grammatical typo in docs/user_guide.rst. (:issue:`2787`)

    v4.0.19 (2022-12-28)

    Bugfixes - 4.0.19
    - Create temp_dir if not exists - by :user:`q0w`. (:issue:`2770`)

    v4.0.18 (2022-12-26)

    Bugfixes - 4.0.18
    - Strip leading and trailing whitespace when parsing elements in requirement files - by :user:`gaborbernat`. (:issue:`2773`)

    ... (truncated)

    Commits

    • 6253d62 release 4.1.2
    • 196b20d Fix extracting extras from markers with many extras (#2792)
    • a3d3ec0 Show installed packages after setup in CI envs (#2794)
    • d8c4cb0 Fix --skip-missing-interpreters (#2793)
    • 1d739a2 release 4.1.1
    • b49d118 Fix logging error with emoji in git branch name. (#2790)
    • c838192 Add faq entry about re-use of environments (#2789)
    • e0aed50 release 4.1.0
    • 6cdd99c Improved factor selection to allow multiple uses of -f for "OR" and to allo...
    • 6f056ca Update user_guide.rst (#2787)
    • Additional commits viewable in compare view

    opened by dependabot[bot] 0
  • According to VizTracer, there is a long wait on logging emit.

    [VizTracer screenshot]

    A single log line takes more than 100 ms to complete.

    This seems unusual. Is there a reason for this?

    It appears to be waiting for IO to be unlocked, but the wait seems a bit too long.

    Here is the viztracer file.

    result1.zip

    Any advice would be greatly appreciated.

    question 
    opened by bill97385 1
  • Duplicate logs with SQLAlchemy after intercept

    Hey @Delgan! Neat tool here that I am trying to implement in one of my projects.

    I am trying to integrate SQLAlchemy's logging with Loguru using your InterceptHandler class exactly as it is written in the docs, but am still getting duplicate logs from SQLAlchemy. Here is a minimal example that shows what I'm talking about:

    # example.py
    import logging
    
    from loguru import logger
    from sqlalchemy import Column, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    
    
    class InterceptHandler(logging.Handler):
        def emit(self, record):
            # Get corresponding Loguru level if it exists
            try:
                level = logger.level(record.levelname).name
            except ValueError:
                level = record.levelno
    
            # Find caller from where originated the logged message
            frame, depth = logging.currentframe(), 2
            while frame.f_code.co_filename == logging.__file__:
                frame = frame.f_back
                depth += 1
    
            logger.opt(depth=depth, exception=record.exc_info).log(
                level, record.getMessage()
            )
    
    
    logging.basicConfig(handlers=[InterceptHandler()], level=0)
    
    db_url = "sqlite:///./test.db"
    
    engine = create_engine(db_url, echo="debug")
    
    Base = declarative_base()
    
    
    class Test(Base):
        __tablename__ = "test"
    
        id = Column(String, primary_key=True, index=True)
        data = Column(String)
    
    
    Base.metadata.create_all(engine)
    

    This is the output I desire:

    $ python example.py
    2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - BEGIN (implicit)
    2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - PRAGMA main.table_info("test")
    2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - [raw sql] ()
    2022-12-16 14:31:52.504 | DEBUG    | sqlalchemy.log:log:179 - Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
    2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - PRAGMA temp.table_info("test")
    2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - [raw sql] ()
    2022-12-16 14:31:52.505 | DEBUG    | sqlalchemy.log:log:179 - Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
    2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - 
    CREATE TABLE test (
            id VARCHAR NOT NULL, 
            data VARCHAR, 
            PRIMARY KEY (id)
    )
    
    
    2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - [no key 0.00007s] ()
    2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - CREATE INDEX ix_test_id ON test (id)
    2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - [no key 0.00010s] ()
    2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - COMMIT
    

    But, this is the output I'm getting. Note the duplicate lines coming from SQLAlchemy's own internal logger:

    $ python example.py 
    2022-12-16 14:31:52,504 INFO sqlalchemy.engine.Engine BEGIN (implicit)
    2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - BEGIN (implicit)
    2022-12-16 14:31:52,504 INFO sqlalchemy.engine.Engine PRAGMA main.table_info("test")
    2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - PRAGMA main.table_info("test")
    2022-12-16 14:31:52,504 INFO sqlalchemy.engine.Engine [raw sql] ()
    2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - [raw sql] ()
    2022-12-16 14:31:52,504 DEBUG sqlalchemy.engine.Engine Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
    2022-12-16 14:31:52.504 | DEBUG    | sqlalchemy.log:log:179 - Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
    2022-12-16 14:31:52,505 INFO sqlalchemy.engine.Engine PRAGMA temp.table_info("test")
    2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - PRAGMA temp.table_info("test")
    2022-12-16 14:31:52,505 INFO sqlalchemy.engine.Engine [raw sql] ()
    2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - [raw sql] ()
    2022-12-16 14:31:52,505 DEBUG sqlalchemy.engine.Engine Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
    2022-12-16 14:31:52.505 | DEBUG    | sqlalchemy.log:log:179 - Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
    2022-12-16 14:31:52,505 INFO sqlalchemy.engine.Engine 
    CREATE TABLE test (
            id VARCHAR NOT NULL, 
            data VARCHAR, 
            PRIMARY KEY (id)
    )
    
    
    2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - 
    CREATE TABLE test (
            id VARCHAR NOT NULL, 
            data VARCHAR, 
            PRIMARY KEY (id)
    )
    
    
    2022-12-16 14:31:52,505 INFO sqlalchemy.engine.Engine [no key 0.00007s] ()
    2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - [no key 0.00007s] ()
    2022-12-16 14:31:52,506 INFO sqlalchemy.engine.Engine CREATE INDEX ix_test_id ON test (id)
    2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - CREATE INDEX ix_test_id ON test (id)
    2022-12-16 14:31:52,506 INFO sqlalchemy.engine.Engine [no key 0.00010s] ()
    2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - [no key 0.00010s] ()
    2022-12-16 14:31:52,506 INFO sqlalchemy.engine.Engine COMMIT
    2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - COMMIT
    

    Package versions:

    python = "^3.10"
    loguru = "^0.6.0"
    sqlalchemy = "^1.4.45"
    

    Any ideas on how to have SQLAlchemy's output redirected to Loguru without all the duplicate lines? Surely I'm not the first person to have run into this issue.

    Thanks, and once again, lots of love for Loguru!

    question 
    opened by singhish 1
  • sys.stderr truncates the traceback

    Django: 3.2.16
    Loguru: latest

    Loguru config:

    'handlers': [
            {
                'sink': sys.stderr,
                'catch': True,
                'backtrace': True,
                'diagnose': True,
                'level': 'DEBUG',
                # 'enqueue':True
            },
            {
            'sink': '/test/log-{time:YYYY-MM-DD}.log',
                'rotation': timedelta(days=1),
                'compression': 'zip',
                'retention': timedelta(days=7),
                'encoding': 'utf-8',
                'catch': True,
                'backtrace': True,
                'diagnose': True,
                'level': 'INFO',
                # 'enqueue':True
            },
        ]}
    

    All logs are trimmed to this line

        response = response or
    

    but in the log file everything is written correctly

    message in console:

    2022-11-29 16:02:50.071 | ERROR    | api.controllers.test.views:get:74 - 1
    Traceback (most recent call last):
    
      File "c:\Users\g.chirico\.vscode\extensions\ms-python.python-2022.18.2\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydev_bundle\pydev_monkey.py", line 1053, in __call__
        ret = self.original_func(*self.args, **self.kwargs)
              │    │              │    │       │    └ {}
              │    │              │    │       └ <_pydev_bundle.pydev_monkey._NewThreadStartupWithTrace object at 0x00000174A219DA30>
              │    │              │    └ ()
              │    │              └ <_pydev_bundle.pydev_monkey._NewThreadStartupWithTrace object at 0x00000174A219DA30>
              │    └ <bound method Thread._bootstrap of <Thread(Thread-8, started daemon 4168)>>
              └ <_pydev_bundle.pydev_monkey._NewThreadStartupWithTrace object at 0x00000174A219DA30>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\threading.py", line 890, in _bootstrap
        self._bootstrap_inner()
        │    └ <function Thread._bootstrap_inner at 0x00000174897DF0D0>
        └ <Thread(Thread-8, started daemon 4168)>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\threading.py", line 932, in _bootstrap_inner
        self.run()
        │    └ <function Thread.run at 0x00000174897DEDC0>
        └ <Thread(Thread-8, started daemon 4168)>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\threading.py", line 870, in run
        self._target(*self._args, **self._kwargs)
        │    │        │    │        │    └ {}
        │    │        │    │        └ <Thread(Thread-8, started daemon 4168)>
        │    │        │    └ (<socket.socket fd=7244, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 7000), raddr...
        │    │        └ <Thread(Thread-8, started daemon 4168)>
        │    └ <bound method ThreadingMixIn.process_request_thread of <django.core.servers.basehttp.WSGIServer object at 0x00000174A1AC55E0>>
        └ <Thread(Thread-8, started daemon 4168)>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\socketserver.py", line 683, in process_request_thread
        self.finish_request(request, client_address)
        │    │              │        └ ('127.0.0.1', 57524)
        │    │              └ <socket.socket fd=7244, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 7000), raddr=...
        │    └ <function BaseServer.finish_request at 0x000001748B52E790>
        └ <django.core.servers.basehttp.WSGIServer object at 0x00000174A1AC55E0>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\socketserver.py", line 360, in finish_request
        self.RequestHandlerClass(request, client_address, self)
        │    │                   │        │               └ <django.core.servers.basehttp.WSGIServer object at 0x00000174A1AC55E0>
        │    │                   │        └ ('127.0.0.1', 57524)
        │    │                   └ <socket.socket fd=7244, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 7000), raddr=...
        │    └ <class 'django.core.servers.basehttp.WSGIRequestHandler'>
        └ <django.core.servers.basehttp.WSGIServer object at 0x00000174A1AC55E0>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\socketserver.py", line 747, in __init__
        self.handle()
        │    └ <function WSGIRequestHandler.handle at 0x000001748D6A2AF0>
        └ <django.core.servers.basehttp.WSGIRequestHandler object at 0x00000174A219D790>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\site-packages\django\core\servers\basehttp.py", line 178, in handle
        self.handle_one_request()
        │    └ <function WSGIRequestHandler.handle_one_request at 0x000001748D6A2B80>
        └ <django.core.servers.basehttp.WSGIRequestHandler object at 0x00000174A219D790>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\site-packages\django\core\servers\basehttp.py", line 201, in handle_one_request
        handler.run(self.server.get_app())
        │       │   │    │      └ <function WSGIServer.get_app at 0x000001748CE294C0>
        │       │   │    └ <django.core.servers.basehttp.WSGIServer object at 0x00000174A1AC55E0>
        │       │   └ <django.core.servers.basehttp.WSGIRequestHandler object at 0x00000174A219D790>
        │       └ <function BaseHandler.run at 0x000001748CE28160>
        └ <django.core.servers.basehttp.ServerHandler object at 0x00000174A2F862E0>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\wsgiref\handlers.py", line 137, in run
        self.result = application(self.environ, self.start_response)
        │    │        │           │    │        │    └ <function BaseHandler.start_response at 0x000001748CE284C0>
        │    │        │           │    │        └ <django.core.servers.basehttp.ServerHandler object at 0x00000174A2F862E0>
        │    │        │           │    └ {'ALLUSERSPROFILE': 'C:\\ProgramData', 'APPDATA': 'C:\\Users\\g.chirico\\AppData\\Roaming', 'CHROME_CRASHPAD_PIPE_NAME': '\\\...
        │    │        │           └ <django.core.servers.basehttp.ServerHandler object at 0x00000174A2F862E0>
        │    │        └ <django.core.handlers.wsgi.WSGIHandler object at 0x00000174904C7640>
        │    └ None
        └ <django.core.servers.basehttp.ServerHandler object at 0x00000174A2F862E0>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\site-packages\django\core\handlers\wsgi.py", line 133, in __call__
        response = self.get_response(request)
                   │    │            └ <WSGIRequest: GET '/api/test/report/'>
                   │    └ <function BaseHandler.get_response at 0x000001748D6A13A0>
        response = response or self.get_response(request)
                   │           │    │            └ <WSGIRequest: GET '/api/test/report/'>
                   │           │    └ <function convert_exception_to_response.<locals>.inner at 0x00000174904CA1F0>
                   │           └ <django.middleware.gzip.GZipMiddleware object at 0x00000174A1E25C40>
                   └ None
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\site-packages\django\core\handlers\exception.py", line 47, in inner
        response = get_response(request)
                   │            └ <WSGIRequest: GET '/api/test/report/'>
                   └ <django.middleware.security.SecurityMiddleware object at 0x00000174904C7190>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\site-packages\django\utils\deprecation.py", line 117, in __call__
        response = response or self.get_response(request)
                   │           │    │            └ <WSGIRequest: GET '/api/test/report/'>
                   │           │    └ <function convert_exception_to_response.<locals>.inner at 0x00000174904CA550>
                   │           └ <django.middleware.security.SecurityMiddleware object at 0x00000174904C7190>
                   └ None
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\site-packages\django\core\handlers\exception.py", line 47, in inner
        response = get_response(request)
                   │            └ <WSGIRequest: GET '/api/test/report/'>
                   └ <django.contrib.sessions.middleware.SessionMiddleware object at 0x00000174904C71C0>
    
      File "D:\Users\g.chirico\miniconda3\envs\production\lib\site-packages\django\utils\deprecation.py", line 117, in __call__
        response = response or
    
    def write(self, message):
        self._stream.write(message)
        if self._flushable:
            self._stream.flush()
    

    formatted message from previus func: '\x1b[32m2022-11-29 16:02:50.071\x1b[0m | \x1b[31m\x1b[1mERROR \x1b[0m | \x1b[36mapi.controllers.test.views\x1b[0m:\x1b[36mget\x1b[0m:\x1b[36m74\x1b[0m - \x1b[31m\x1b[1m1\x1b[0m\n\x1b[33m\x1b[1mTraceback (most recent call last):\x1b[0m\n\n File "\x1b[32mc:\\Users\\g.chirico\\.vscode\\extensions\\ms-python.python-2022.18.2\\pythonFiles\\lib\\python\\debugpy\\_vendored\\pydevd\\_pydev_bundle\\\x1b[0m\x1b[32m\x1b[1mpydev_monkey.py\x1b[0m", line \x1b[33m1053\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mret\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1moriginal_func\x1b[0m\x1b[1m(\x1b[0m\x1b[35m\x1b[1m*\x1b[0m\x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1margs\x1b[0m\x1b[1m,\x1b[0m \x1b[35m\x1b[1m**\x1b[0m\x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mkwargs\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ │ │ └ \x1b[0m\x1b[36m\x1b[1m{}\x1b[0m\n \x1b[36m │ │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<_pydev_bundle.pydev_monkey._NewThreadStartupWithTrace object at 0x00000174A219DA30>\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m()\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<_pydev_bundle.pydev_monkey._NewThreadStartupWithTrace object at 0x00000174A219DA30>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<bound method Thread._bootstrap of <Thread(Thread-8, started daemon 4168)>>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<_pydev_bundle.pydev_monkey._NewThreadStartupWithTrace object at 0x00000174A219DA30>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\\x1b[0m\x1b[32m\x1b[1mthreading.py\x1b[0m", line \x1b[33m890\x1b[0m, in \x1b[35m_bootstrap\x1b[0m\n \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1m_bootstrap_inner\x1b[0m\x1b[1m(\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m│ └ \x1b[0m\x1b[36m\x1b[1m<function Thread._bootstrap_inner at 0x00000174897DF0D0>\x1b[0m\n \x1b[36m└ \x1b[0m\x1b[36m\x1b[1m<Thread(Thread-8, started daemon 4168)>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\\x1b[0m\x1b[32m\x1b[1mthreading.py\x1b[0m", line \x1b[33m932\x1b[0m, in \x1b[35m_bootstrap_inner\x1b[0m\n \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mrun\x1b[0m\x1b[1m(\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m│ └ \x1b[0m\x1b[36m\x1b[1m<function Thread.run at 0x00000174897DEDC0>\x1b[0m\n \x1b[36m└ \x1b[0m\x1b[36m\x1b[1m<Thread(Thread-8, started daemon 4168)>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\\x1b[0m\x1b[32m\x1b[1mthreading.py\x1b[0m", line \x1b[33m870\x1b[0m, in \x1b[35mrun\x1b[0m\n \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1m_target\x1b[0m\x1b[1m(\x1b[0m\x1b[35m\x1b[1m*\x1b[0m\x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1m_args\x1b[0m\x1b[1m,\x1b[0m \x1b[35m\x1b[1m**\x1b[0m\x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1m_kwargs\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m│ │ │ │ │ └ \x1b[0m\x1b[36m\x1b[1m{}\x1b[0m\n \x1b[36m│ │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<Thread(Thread-8, started daemon 4168)>\x1b[0m\n \x1b[36m│ │ │ └ \x1b[0m\x1b[36m\x1b[1m(<socket.socket fd=7244, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=(\'127.0.0.1\', 7000), raddr...\x1b[0m\n \x1b[36m│ │ └ \x1b[0m\x1b[36m\x1b[1m<Thread(Thread-8, started daemon 4168)>\x1b[0m\n \x1b[36m│ └ \x1b[0m\x1b[36m\x1b[1m<bound method ThreadingMixIn.process_request_thread of <django.core.servers.basehttp.WSGIServer object at 0x00000174A1AC55E0>>\x1b[0m\n \x1b[36m└ \x1b[0m\x1b[36m\x1b[1m<Thread(Thread-8, started daemon 4168)>\x1b[0m\n\n File 
"\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\\x1b[0m\x1b[32m\x1b[1msocketserver.py\x1b[0m", line \x1b[33m683\x1b[0m, in \x1b[35mprocess_request_thread\x1b[0m\n \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mfinish_request\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m,\x1b[0m \x1b[1mclient_address\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m│ │ │ └ \x1b[0m\x1b[36m\x1b[1m(\'127.0.0.1\', 57524)\x1b[0m\n \x1b[36m│ │ └ \x1b[0m\x1b[36m\x1b[1m<socket.socket fd=7244, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=(\'127.0.0.1\', 7000), raddr=...\x1b[0m\n \x1b[36m│ └ \x1b[0m\x1b[36m\x1b[1m<function BaseServer.finish_request at 0x000001748B52E790>\x1b[0m\n \x1b[36m└ \x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.WSGIServer object at 0x00000174A1AC55E0>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\\x1b[0m\x1b[32m\x1b[1msocketserver.py\x1b[0m", line \x1b[33m360\x1b[0m, in \x1b[35mfinish_request\x1b[0m\n \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mRequestHandlerClass\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m,\x1b[0m \x1b[1mclient_address\x1b[0m\x1b[1m,\x1b[0m \x1b[1mself\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m│ │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.WSGIServer object at 0x00000174A1AC55E0>\x1b[0m\n \x1b[36m│ │ │ └ \x1b[0m\x1b[36m\x1b[1m(\'127.0.0.1\', 57524)\x1b[0m\n \x1b[36m│ │ └ \x1b[0m\x1b[36m\x1b[1m<socket.socket fd=7244, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=(\'127.0.0.1\', 7000), raddr=...\x1b[0m\n \x1b[36m│ └ \x1b[0m\x1b[36m\x1b[1m<class \'django.core.servers.basehttp.WSGIRequestHandler\'>\x1b[0m\n \x1b[36m└ \x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.WSGIServer object at 0x00000174A1AC55E0>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\\x1b[0m\x1b[32m\x1b[1msocketserver.py\x1b[0m", line \x1b[33m747\x1b[0m, in \x1b[35m__init__\x1b[0m\n \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mhandle\x1b[0m\x1b[1m(\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m│ └ \x1b[0m\x1b[36m\x1b[1m<function WSGIRequestHandler.handle at 0x000001748D6A2AF0>\x1b[0m\n \x1b[36m└ \x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.WSGIRequestHandler object at 0x00000174A219D790>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\servers\\\x1b[0m\x1b[32m\x1b[1mbasehttp.py\x1b[0m", line \x1b[33m178\x1b[0m, in \x1b[35mhandle\x1b[0m\n \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mhandle_one_request\x1b[0m\x1b[1m(\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m│ └ \x1b[0m\x1b[36m\x1b[1m<function WSGIRequestHandler.handle_one_request at 0x000001748D6A2B80>\x1b[0m\n \x1b[36m└ \x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.WSGIRequestHandler object at 0x00000174A219D790>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\servers\\\x1b[0m\x1b[32m\x1b[1mbasehttp.py\x1b[0m", line \x1b[33m201\x1b[0m, in \x1b[35mhandle_one_request\x1b[0m\n \x1b[1mhandler\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mrun\x1b[0m\x1b[1m(\x1b[0m\x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mserver\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_app\x1b[0m\x1b[1m(\x1b[0m\x1b[1m)\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m│ │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<function WSGIServer.get_app at 0x000001748CE294C0>\x1b[0m\n \x1b[36m│ │ │ └ \x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.WSGIServer object at 0x00000174A1AC55E0>\x1b[0m\n \x1b[36m│ │ └ 
\x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.WSGIRequestHandler object at 0x00000174A219D790>\x1b[0m\n \x1b[36m│ └ \x1b[0m\x1b[36m\x1b[1m<function BaseHandler.run at 0x000001748CE28160>\x1b[0m\n \x1b[36m└ \x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.ServerHandler object at 0x00000174A2F862E0>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\wsgiref\\\x1b[0m\x1b[32m\x1b[1mhandlers.py\x1b[0m", line \x1b[33m137\x1b[0m, in \x1b[35mrun\x1b[0m\n \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mresult\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mapplication\x1b[0m\x1b[1m(\x1b[0m\x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1menviron\x1b[0m\x1b[1m,\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mstart_response\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m│ │ │ │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<function BaseHandler.start_response at 0x000001748CE284C0>\x1b[0m\n \x1b[36m│ │ │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.ServerHandler object at 0x00000174A2F862E0>\x1b[0m\n \x1b[36m│ │ │ │ └ \x1b[0m\x1b[36m\x1b[1m{\'ALLUSERSPROFILE\': \'C:\\\\ProgramData\', \'APPDATA\': \'C:\\\\Users\\\\g.chirico\\\\AppData\\\\Roaming\', \'CHROME_CRASHPAD_PIPE_NAME\': \'\\\\\\...\x1b[0m\n \x1b[36m│ │ │ └ \x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.ServerHandler object at 0x00000174A2F862E0>\x1b[0m\n \x1b[36m│ │ └ \x1b[0m\x1b[36m\x1b[1m<django.core.handlers.wsgi.WSGIHandler object at 0x00000174904C7640>\x1b[0m\n \x1b[36m│ └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n \x1b[36m└ \x1b[0m\x1b[36m\x1b[1m<django.core.servers.basehttp.ServerHandler object at 0x00000174A2F862E0>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mwsgi.py\x1b[0m", line \x1b[33m133\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<function BaseHandler.get_response at 0x000001748D6A13A0>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.core.handlers.wsgi.WSGIHandler object at 0x00000174904C7640>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mbase.py\x1b[0m", line \x1b[33m130\x1b[0m, in \x1b[35mget_response\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1m_middleware_chain\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174A21A3940>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.core.handlers.wsgi.WSGIHandler object at 0x00000174904C7640>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.cache.UpdateCacheMiddleware object at 0x00000174A1D48E20>\x1b[0m\n\n File 
"\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174A21A38B0>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.cache.UpdateCacheMiddleware object at 0x00000174A1D48E20>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<main.middleware.CustomVariableLabel object at 0x00000174A1D66C40>\x1b[0m\n\n File "\x1b[32mD:\\Repository\\Analisys Detection\\Costal.DataAnalyser\\Analyser\\main\\\x1b[0m\x1b[32m\x1b[1mmiddleware.py\x1b[0m", line \x1b[33m20\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174A1F0AC10>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<main.middleware.CustomVariableLabel object at 0x00000174A1D66C40>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.gzip.GZipMiddleware object at 0x00000174A1E25C40>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174904CA1F0>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.gzip.GZipMiddleware object at 0x00000174A1E25C40>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m 
\x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.security.SecurityMiddleware object at 0x00000174904C7190>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174904CA550>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.security.SecurityMiddleware object at 0x00000174904C7190>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.contrib.sessions.middleware.SessionMiddleware object at 0x00000174904C71C0>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174904CA5E0>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<django.contrib.sessions.middleware.SessionMiddleware object at 0x00000174904C71C0>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<corsheaders.middleware.CorsMiddleware object at 0x00000174904C7A00>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function 
convert_exception_to_response.<locals>.inner at 0x00000174904CA670>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<corsheaders.middleware.CorsMiddleware object at 0x00000174904C7A00>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.common.CommonMiddleware object at 0x00000174904C7A90>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174904CA700>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.common.CommonMiddleware object at 0x00000174904C7A90>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.csrf.CsrfViewMiddleware object at 0x00000174904C7AC0>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174904CA790>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.csrf.CsrfViewMiddleware object at 0x00000174904C7AC0>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.contrib.auth.middleware.AuthenticationMiddleware object at 0x00000174904C7B80>\x1b[0m\n\n File 
"\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174904CA8B0>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<django.contrib.auth.middleware.AuthenticationMiddleware object at 0x00000174904C7B80>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.contrib.messages.middleware.MessageMiddleware object at 0x00000174904C7E80>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174904CA9D0>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<django.contrib.messages.middleware.MessageMiddleware object at 0x00000174904C7E80>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.clickjacking.XFrameOptionsMiddleware object at 0x00000174904C7EB0>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174904CAA60>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.clickjacking.XFrameOptionsMiddleware object at 0x00000174904C7EB0>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File 
"\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django_currentuser.middleware.ThreadLocalUserMiddleware object at 0x00000174904C7940>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django_currentuser\\\x1b[0m\x1b[32m\x1b[1mmiddleware.py\x1b[0m", line \x1b[33m34\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<function convert_exception_to_response.<locals>.inner at 0x00000174904CAAF0>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django_currentuser.middleware.ThreadLocalUserMiddleware object at 0x00000174904C7940>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.cache.FetchFromCacheMiddleware object at 0x00000174904C7FA0>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\utils\\\x1b[0m\x1b[32m\x1b[1mdeprecation.py\x1b[0m", line \x1b[33m117\x1b[0m, in \x1b[35m__call__\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1mor\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<function BaseHandler._get_response at 0x00000174904CA3A0>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<django.middleware.cache.FetchFromCacheMiddleware object at 0x00000174904C7FA0>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1mNone\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mexception.py\x1b[0m", line \x1b[33m47\x1b[0m, in \x1b[35minner\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mget_response\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<bound method BaseHandler._get_response of <django.core.handlers.wsgi.WSGIHandler object at 0x00000174904C7640>>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\core\\handlers\\\x1b[0m\x1b[32m\x1b[1mbase.py\x1b[0m", line \x1b[33m181\x1b[0m, in \x1b[35m_get_response\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mwrapped_callback\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m,\x1b[0m \x1b[35m\x1b[1m*\x1b[0m\x1b[1mcallback_args\x1b[0m\x1b[1m,\x1b[0m 
\x1b[35m\x1b[1m**\x1b[0m\x1b[1mcallback_kwargs\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m{}\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m()\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<function TestPdf at 0x00000174A1AC00D0>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\views\\decorators\\\x1b[0m\x1b[32m\x1b[1mcache.py\x1b[0m", line \x1b[33m44\x1b[0m, in \x1b[35m_wrapped_view_func\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mview_func\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m,\x1b[0m \x1b[35m\x1b[1m*\x1b[0m\x1b[1margs\x1b[0m\x1b[1m,\x1b[0m \x1b[35m\x1b[1m**\x1b[0m\x1b[1mkwargs\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m{}\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m()\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<function TestPdf at 0x00000174A1AC0040>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\views\\decorators\\\x1b[0m\x1b[32m\x1b[1mcsrf.py\x1b[0m", line \x1b[33m54\x1b[0m, in \x1b[35mwrapped_view\x1b[0m\n \x1b[35m\x1b[1mreturn\x1b[0m \x1b[1mview_func\x1b[0m\x1b[1m(\x1b[0m\x1b[35m\x1b[1m*\x1b[0m\x1b[1margs\x1b[0m\x1b[1m,\x1b[0m \x1b[35m\x1b[1m**\x1b[0m\x1b[1mkwargs\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m{}\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m(<WSGIRequest: GET \'/api/test/report/\'>,)\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<function TestPdf at 0x00000174A1ABAF70>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\django\\views\\generic\\\x1b[0m\x1b[32m\x1b[1mbase.py\x1b[0m", line \x1b[33m70\x1b[0m, in \x1b[35mview\x1b[0m\n \x1b[35m\x1b[1mreturn\x1b[0m \x1b[1mself\x1b[0m\x1b[35m\x1b[1m.\x1b[0m\x1b[1mdispatch\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m,\x1b[0m \x1b[35m\x1b[1m*\x1b[0m\x1b[1margs\x1b[0m\x1b[1m,\x1b[0m \x1b[35m\x1b[1m**\x1b[0m\x1b[1mkwargs\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ │ └ \x1b[0m\x1b[36m\x1b[1m{}\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m()\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m<WSGIRequest: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<function APIView.dispatch at 0x000001749E77E040>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<api.controllers.test.views.TestPdf object at 0x00000174A3AAB280>\x1b[0m\n\n File "\x1b[32mD:\\Users\\g.chirico\\miniconda3\\envs\\production\\lib\\site-packages\\rest_framework\\\x1b[0m\x1b[32m\x1b[1mviews.py\x1b[0m", line \x1b[33m506\x1b[0m, in \x1b[35mdispatch\x1b[0m\n \x1b[1mresponse\x1b[0m \x1b[35m\x1b[1m=\x1b[0m \x1b[1mhandler\x1b[0m\x1b[1m(\x1b[0m\x1b[1mrequest\x1b[0m\x1b[1m,\x1b[0m \x1b[35m\x1b[1m*\x1b[0m\x1b[1margs\x1b[0m\x1b[1m,\x1b[0m \x1b[35m\x1b[1m**\x1b[0m\x1b[1mkwargs\x1b[0m\x1b[1m)\x1b[0m\n \x1b[36m │ │ │ └ \x1b[0m\x1b[36m\x1b[1m{}\x1b[0m\n \x1b[36m │ │ └ \x1b[0m\x1b[36m\x1b[1m()\x1b[0m\n \x1b[36m │ └ \x1b[0m\x1b[36m\x1b[1m<rest_framework.request.Request: GET \'/api/test/report/\'>\x1b[0m\n \x1b[36m └ \x1b[0m\x1b[36m\x1b[1m<bound method TestPdf.get of <api.controllers.test.views.TestPdf object at 0x00000174A3AAB280>>\x1b[0m\n\n> File "\x1b[32mD:\\Repository\\Analisys Detection\\Costal.DataAnalyser\\Analyser\\api\\controllers\\test\\\x1b[0m\x1b[32m\x1b[1mviews.py\x1b[0m", line \x1b[33m72\x1b[0m, in \x1b[35mget\x1b[0m\n \x1b[35m\x1b[1mraise\x1b[0m 
\x1b[1mException\x1b[0m\x1b[1m(\x1b[0m\x1b[36m\'TestPdf\'\x1b[0m\x1b[1m)\x1b[0m\n\n\x1b[31m\x1b[1mException\x1b[0m:\x1b[1m TestPdf\x1b[0m\n'

    opened by zN3utr4l 4
Releases(0.6.0)
  • 0.6.0(Jan 29, 2022)

    • Remove internal use of pickle.loads(), which was considered a security vulnerability referenced as CVE-2022-0329 (#563).
    • Modify coroutine sink to make it discard log messages when loop=None and no event loop is running (due to internally using asyncio.get_running_loop() in place of asyncio.get_event_loop()).
    • Remove the possibility to add a coroutine sink with enqueue=True if loop=None and no event loop is running.
    • Change default encoding of file sink to be utf8 instead of locale.getpreferredencoding() (#339).
    • Prevent non-ASCII characters from being escaped when logging a JSON message with serialize=True (#575, thanks @ponponon); see the sketch after this release's notes.
    • Fix flake8 errors and improve code readability (#353, thanks @AndrewYakimets).
    Source code(tar.gz)
    Source code(zip)
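
    A quick, unofficial illustration of the serialize=True behavior mentioned above (the message text is just an arbitrary non-ASCII example):

        import sys
        from loguru import logger

        logger.remove()                          # drop the default handler for a clean demo
        logger.add(sys.stderr, serialize=True)   # each record is emitted as one JSON line

        # Since 0.6.0, non-ASCII characters are no longer escaped in the serialized output.
        logger.info("こんにちは, Loguru!")
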
  • 0.5.3(Sep 20, 2020)

    • Fix child process possibly hanging at exit while combining enqueue=True with third-party libraries like uwsgi (#309, thanks @dstlmrk).
    • Fix possible exception during formatting of non-string messages (#331).
    Source code(tar.gz)
    Source code(zip)
  • 0.5.2(Sep 6, 2020)

    • Fix AttributeError within handlers using serialize=True when calling logger.exception() outside of the context of an exception (#296).
    • Fix error while logging an exception containing a non-picklable value to a handler with enqueue=True (#298).
    • Add support for async callable classes (with __call__ method) used as sinks (#294, thanks @jessekrubin); see the sketch after this release's notes.
    Source code(tar.gz)
    Source code(zip)
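
    A minimal sketch of an async callable class used as a sink, as described above (the AsyncSink class and its body are purely illustrative):

        import asyncio
        from loguru import logger

        class AsyncSink:
            # An object whose __call__ is a coroutine function can be used as a sink (0.5.2+).
            async def __call__(self, message):
                await asyncio.sleep(0)       # stand-in for an async transport (HTTP, queue, ...)
                print(message, end="")

        async def main():
            logger.remove()                  # keep only the async sink for the demo
            logger.add(AsyncSink())
            logger.info("Hello from an async callable sink")
            await logger.complete()          # wait until pending coroutine sinks have finished

        asyncio.run(main())
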
  • 0.5.1(Jun 12, 2020)

    • Modify the way the extra dict is used by LogRecord in order to prevent possible KeyError with standard logging handlers (#271).
    • Add a new default optional argument to logger.catch(), it should be the value returned by the decorated function in case an error occurred (#272); see the sketch below.
    • Fix ValueError when using serialize=True in combination with logger.catch() or logger.opt(record=True) due to circular reference of the record dict (#286).
    Source code(tar.gz)
    Source code(zip)
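
    A small illustrative sketch of the default argument of logger.catch() (the parse_int function is a made-up example):

        from loguru import logger

        # default= (added in 0.5.1) is the value returned by the decorated function
        # whenever an exception is caught and logged.
        @logger.catch(default=0)
        def parse_int(value):
            return int(value)

        assert parse_int("42") == 42
        assert parse_int("oops") == 0   # the ValueError is logged, 0 is returned instead
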
  • 0.5.0(May 17, 2020)

    • Remove the possibility to modify the severity number ("no") of levels once they have been added, in order to prevent surprising behavior (#209).
    • Add better support for "structured logging" by automatically adding **kwargs to the extra dict in addition to using these arguments to format the message. This behavior can be disabled with the new .opt(capture=False) parameter (#2); see the sketch after this release's notes.
    • Add a new onerror optional argument to logger.catch(), it should be a function which will be called when an exception occurs in order to customize error handling (#224).
    • Add a new exclude optional argument to logger.catch(), it should be a type of exception to be purposefully ignored and propagated to the caller without being logged (#248).
    • Modify complete() to make it callable from non-asynchronous functions; it can thus be used with enqueue=True to make sure all messages have been processed (#231).
    • Fix possible deadlocks on Linux when multiprocessing.Process() collides with enqueue=True or threading (#231).
    • Fix compression function not executable concurrently due to file renaming (to resolve conflicts) being performed after and not before it (#243).
    • Fix the filter function listing files for retention being too restrictive; it now matches files based on the pattern "basename(.*).ext(.*)" (#229).
    • Fix the impossibility to remove() a handler if an exception is raised while the sink's stop() function is called (#237).
    • Fix file sink left in an unstable state if an exception occurred during retention or compression process (#238).
    • Fix situation where changes made to record["message"] were unexpectedly ignored when opt(colors=True), causing "out-of-date" message to be logged due to implementation details (#221).
    • Fix possible exception if a stream having an isatty() method returning True but not being compatible with colorama is used on Windows (#249).
    • Fix exceptions occurring in coroutine sinks never being retrieved and hence causing warnings (#227).
    Source code(tar.gz)
    Source code(zip)
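
    A minimal sketch of the structured logging behavior described above (the "structured.log" file and user values are arbitrary examples):

        from loguru import logger

        logger.add("structured.log", serialize=True)

        # Since 0.5.0, extra keyword arguments are captured into record["extra"]
        # in addition to being used to format the message.
        logger.info("User {user} logged in", user="alice")

        # .opt(capture=False) keeps kwargs for formatting only, leaving "extra" untouched.
        logger.opt(capture=False).info("User {user} logged in", user="bob")
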
  • 0.4.1(Jan 19, 2020)

    • Deprecate the ansi parameter of .opt() in favor of colors, which is a more appropriate name; see the sketch after this release's notes.
    • Prevent unrelated files and directories from being incorrectly collected, which caused errors during the retention process (#195, thanks @gazpachoking).
    • Strip color markups contained in record["message"] when logging with .opt(ansi=True) instead of leaving them as is (#198).
    • Ignore color markups contained in *args and **kwargs when logging with .opt(ansi=True), leaving them as-is instead of trying to use them to colorize the message, which could cause undesirable errors (#197).
    Source code(tar.gz)
    Source code(zip)
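
    A one-line sketch of the renamed colors parameter (the message and markup tags are arbitrary):

        from loguru import logger

        # "colors" (0.4.1+) supersedes the deprecated "ansi" parameter of .opt().
        logger.opt(colors=True).info("It's <green>all good</green> and <b>bold</b>")
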
  • 0.4.0(Dec 1, 2019)

    • Add support for coroutine functions used as sinks and add the new logger.complete() asynchronous method to await them (#171).
    • Add a way to filter logs using one level per module in the form of a dict passed to the filter argument (#148); see the sketch after this release's notes.
    • Add type hints to annotate the public methods using a .pyi stub file (#162).
    • Add support for copy.deepcopy() of the logger allowing multiple independent loggers with separate set of handlers (#72).
    • Add the possibility to convert datetime to UTC before formatting (in logs and filenames) by adding "!UTC" at the end of the time format specifier (#128).
    • Add the level name as the first argument of namedtuple returned by the .level() method.
    • Remove class objects from the list of supported sinks and restrict usage of **kwargs in .add() to file sinks only. The user is in charge of instantiating the sink and wrapping additional keyword arguments if needed, before passing it to the .add() method.
    • Rename the logger.configure() keyword argument patch to patcher so it better matches the signature of logger.patch().
    • Fix incompatibility with multiprocessing on Windows by entirely refactoring the internal structure of the logger so it can be inherited by child processes along with added handlers (#108).
    • Fix AttributeError while using a file sink on some distributions (like Alpine Linux) missing the os.getxattr and os.setxattr functions (#158, thanks @joshgordon).
    • Fix values wrongly displayed for keyword arguments during exception formatting with diagnose=True (#144).
    • Fix logging messages wrongly chopped off at the end while using standard logging.Handler sinks with .opt(raw=True) (#136).
    • Fix potential errors during rotation if destination file exists due to large resolution clock on Windows (#179).
    • Fix an error using a filter function "by name" while receiving a log with record["name"] equal to None.
    • Fix incorrect record displayed while handling errors (if catch=True) occurring because of non-picklable objects (if enqueue=True).
    • Prevent hypothetical ImportError if a Python installation is missing the built-in distutils module (#118).
    • Raise TypeError instead of ValueError when a logger method is called with argument of invalid type.
    • Raise ValueError if the built-in format() and filter() functions are respectively used as format and filter arguments of the add() method. This helps the user to understand the problem, as such a mistake can quite easily occur (#177).
    • Remove inheritance of some record dict attributes to str (for "level", "file", "thread" and "process").
    • Give a name to the worker thread used when enqueue=True (#174, thanks @t-mart).
    Source code(tar.gz)
    Source code(zip)
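
    A short sketch of the per-module filter dict and the "!UTC" time specifier introduced above (the module names "my_app" and "noisy_lib" and the file name are hypothetical):

        import sys
        from loguru import logger

        logger.remove()

        # A dict passed to "filter" sets one level per module (0.4.0+):
        # "" is the default level, False disables a module entirely.
        logger.add(sys.stderr, filter={"": "WARNING", "my_app": "DEBUG", "noisy_lib": False})

        # Appending "!UTC" to a time specifier converts the datetime to UTC (0.4.0+),
        # both in the log format and in the file name.
        logger.add("app_{time:YYYY-MM-DD!UTC}.log", format="{time:HH:mm:ss!UTC} | {level} | {message}")
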
  • 0.3.2(Jul 21, 2019)

  • 0.3.1(Jul 13, 2019)

    • Fix retention and rotation issues when a file sink is initialized with delay=True (#113); see the sketch after this release's notes.
    • Fix "sec" no longer recognized as a valid duration unit for file rotation and retention arguments.
    • Ensure the stack from the caller is displayed while formatting the exception of a function decorated with @logger.catch when backtrace=False.
    • Modify the datetime used to automatically rename a conflicting file when rotating (it happens if the file already exists because "{time}" is not present in the filename) so it's based on the file creation time rather than the current time.
    Source code(tar.gz)
    Source code(zip)
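
    A minimal sketch combining delay=True with rotation and retention, as mentioned above (the file name and durations are arbitrary):

        from loguru import logger

        # delay=True postpones the file creation until the first logged message;
        # since 0.3.1 it works correctly together with rotation and retention.
        logger.add("delayed.log", delay=True, rotation="1 week", retention="10 days")
        logger.debug("The file is only created now")
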
  • 0.3.0(Jun 29, 2019)

    • Remove all dependencies previously needed by loguru (on the Windows platform, only colorama and win32-setctime remain).
    • Add a new logger.patch() method which can be used to modify the record dict on-the-fly before it's sent to the handlers; see the sketch after this release's notes.
    • Modify behavior of sink option backtrace so it only extends the stacktrace upward, the display of variables values is now controlled with the new diagnose argument (#49).
    • Change behavior of rotation option in file sinks: it is now based on the file creation time rather than the current time, note that proper support may differ depending on your platform (#58).
    • Raise errors on unknown color tags rather than silently ignoring them (#57).
    • Add the possibility to auto-close color tags by using </> (e.g. <yellow>message</>).
    • Add coloration of exception traceback even if diagnose and backtrace options are False.
    • Add a way to limit the depth of formatted exceptions traceback by setting the conventional sys.tracebacklimit variable (#77).
    • Add __repr__ value to the logger for convenient debugging (#84).
    • Remove color tags mixing directives (e.g. <red,blue>) for simplification.
    • Make the record["exception"] attribute unpackable as a (type, value, traceback) tuple.
    • Fix error happening in some rare circumstances because the frame.f_globals dict did not contain a "__name__" key and hence prevented Loguru from retrieving the module's name. From now on, record["name"] will be equal to None in such a case (#62).
    • Fix logging methods not being serializable with pickle and hence raising exception while being passed to some multiprocessing functions (#102).
    • Fix exception stack trace not colorizing source code lines on Windows.
    • Fix possible AttributeError while formatting exceptions within a celery task (#52).
    • Fix logger.catch decorator not working with generator and coroutine functions (#75).
    • Fix the case of record["path"] being needlessly normalized (#85).
    • Fix some Windows terminal emulators (mintty) not correctly detected as supporting colors, causing ansi codes to be automatically stripped (#104).
    • Fix handlers added with enqueue=True stopping working if an exception was raised in the sink, even though catch=True.
    • Fix thread-safety of enable() and disable() being called during logging.
    • Use Tox to run tests (#41).
    Source code(tar.gz)
    Source code(zip)
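
    A small sketch of logger.patch() as described above (the host field and file name are invented for the example):

        import platform
        from loguru import logger

        # patch() (0.3.0+) modifies the record dict on-the-fly before handlers receive it.
        logger.add("patched.log", format="{extra[host]} | {message}")
        host_logger = logger.patch(lambda record: record["extra"].update(host=platform.node()))
        host_logger.info("Record enriched with the machine name")
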
  • 0.2.5(Jun 22, 2019)

    • Modify behavior of sink option backtrace=False so it doesn't extend the traceback upward automatically (#30); see the sketch after this release's notes.
    • Fix import error on some platforms using Python 3.5 with limited localtime() support (#33).
    • Fix incorrect time formatting of locale month using MMM and MMMM tokens (#34, thanks @nasyxx).
    • Fix race condition permitting writing on a stopped handler.
    Source code(tar.gz)
    Source code(zip)
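
    A minimal sketch of the backtrace / diagnose options mentioned above (the failing division is just a toy example):

        import sys
        from loguru import logger

        logger.remove()
        # backtrace controls whether the traceback is extended upward beyond the catching
        # point (behavior clarified in 0.2.5); diagnose controls the display of variable values.
        logger.add(sys.stderr, backtrace=False, diagnose=False)

        try:
            1 / 0
        except ZeroDivisionError:
            logger.exception("Something went wrong")
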
  • 0.2.4(Jun 22, 2019)

  • 0.2.3(Jun 22, 2019)

    • Add support for PyPy.
    • Add support for Python 3.5.
    • Fix incompatibility with awscli by downgrading required colorama dependency version (#12).
    Source code(tar.gz)
    Source code(zip)
  • 0.2.2(Jun 22, 2019)

    • Deprecate logger.start() and logger.stop() methods in favor of logger.add() and logger.remove() (#3).
    • Fix ignored formatting while using logging.Handler sinks (#4).
    • Fix impossibility to set empty environment variable color on Windows (#7).
    Source code(tar.gz)
    Source code(zip)
  • 0.2.1(Jun 22, 2019)

  • 0.2.0(Jun 22, 2019)

    • Remove the parser and refactor it into the logger.parse() method; see the sketch after this release's notes.
    • Remove the notifier and its dependencies (pip install notifiers should be used instead).
    Source code(tar.gz)
    Source code(zip)
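
    A rough sketch of logger.parse() (the "app.log" file and the regex are assumptions loosely matching Loguru's default format, not a guaranteed one):

        from loguru import logger

        # parse() (0.2.0+) extracts structured information back from a log file,
        # given a regex with named groups; adjust the pattern to your own format.
        pattern = r"(?P<time>\S+ \S+) \| (?P<level>\w+)\s*\| (?P<message>.*)"
        for groups in logger.parse("app.log", pattern):
            print(groups["level"], groups["message"])
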
  • 0.1.0(Jan 11, 2020)

  • 0.0.1(Jun 22, 2019)
