giving — the reactive logger

giving is a simple, magical library that lets you log or "give" arbitrary data throughout a program and then process it as an event stream. You can use it to log to the terminal, to wandb or mlflow, to compute minimums, maximums, rolling means, etc., separate from your program's core logic.

  1. Inside your code, give() every object or datum that you may want to log or compute metrics about.
  2. Wrap your main loop with given() and define pipelines to map, filter and reduce the data you gave.

Examples

Simple logging

with given().display():
    a, b = 10, 20
    give()
    give(a * b, c=30)

Output:

a: 10; b: 20
a * b: 200; c: 30

Extract values into a list

with given()["s"].values() as results:
    s = 0
    for i in range(5):
        s += i
        give(s)

print(results)

Output:

[0, 1, 3, 6, 10]

Reductions (min, max, count, etc.)

def collatz(n):
    while n != 1:
        give(n)
        n = (3 * n + 1) if n % 2 else (n // 2)

with given() as gv:
    gv["n"].max().print("max: {}")
    gv["n"].count().print("steps: {}")

    collatz(2021)

Output:

max: 6064
steps: 63

Using the eval method instead of with:

st, = given()["n"].count().eval(collatz, 2021)
print(st)

Output:

63

The kscan method

with given() as gv:
    gv.kscan().display()

    give(elk=1)
    give(rabbit=2)
    give(elk=3, wolf=4)

Output:

elk: 1
elk: 1; rabbit: 2
elk: 3; rabbit: 2; wolf: 4

The throttle method

import time

with given() as gv:
    gv.throttle(1).display()

    for i in range(50):
        give(i)
        time.sleep(0.1)

Output:

i: 0
i: 10
i: 20
i: 30
i: 40

These examples show only a small sample of the available operators.

Give

There are multiple ways you can use give; each form is listed below, and a combined sketch follows the list. give returns None unless it is given a single positional argument, in which case it returns the value of that argument.

  • give(key=value)

    This is the most straightforward way to use give: you write out both the key and the value associated.

    Returns: None

  • x = give(value)

    When no key is given, but the result of give is assigned to a variable, the key is the name of that variable. In other words, the above is equivalent to give(x=value).

    Returns: The value

  • give(x)

    When no key is given and the result is not assigned to a variable, give(x) is equivalent to give(x=x). If the argument is an expression like x * x, the key will be the string "x * x".

    Returns: The value

  • give(x, y, z)

    Multiple arguments can be given. The above is equivalent to give(x=x, y=y, z=z).

    Returns: None

  • x = value; give()

    If give has no arguments at all, it will look at the immediately previous statement and infer what you mean. The above is equivalent to x = value; give(x=value).

    Returns: None
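
Putting these forms together, a minimal sketch (the keys are inferred per the rules above):

from giving import give, given

with given().display():
    give(key=1)      # explicit key                     -> key: 1
    x = give(10)     # key from the assignment target   -> x: 10
    y = x * 2
    give(y)          # key from the argument expression -> y: 20
    a, b = 3, 4
    give()           # keys from the previous statement -> a: 3; b: 4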

Important functions and methods

See the documentation for more details.

Operator summary

Not all operators are listed here. See the documentation for the complete list.

Filtering

  • filter: filter with a function
  • kfilter: filter with a function (keyword arguments)
  • where: filter based on keys and simple conditions
  • where_any: filter based on keys
  • keep: filter based on keys (+drop the rest)
  • distinct: only emit distinct elements
  • norepeat: only emit distinct consecutive elements
  • first: only emit the first element
  • last: only emit the last element
  • take: only emit the first n elements
  • take_last: only emit the last n elements
  • skip: suppress the first n elements
  • skip_last: suppress the last n elements
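
For instance, where (also used in the ML sketch below) filters entries by key. A minimal sketch:

from giving import give, given

with given() as gv:
    # keep only the entries that carry a "loss" key
    gv.where("loss").display()

    give(loss=0.5)
    give(acc=0.9)    # no "loss" key: filtered out
    give(loss=0.25)

Only the two loss entries are displayed.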

Mapping

  • map: map with a function
  • kmap: map with a function (keyword arguments)
  • augment: add extra keys using a mapping function
  • getitem: extract value for a specific key
  • sole: extract value from dict of length 1
  • as_: wrap as a dict
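
A minimal sketch combining getitem (the [] syntax used throughout this page) with map:

from giving import give, given

with given() as gv:
    # extract the value under "x", then apply a plain function to each one
    gv["x"].map(lambda x: x * x).print("x squared: {}")

    give(x=2)    # prints: x squared: 4
    give(x=3)    # prints: x squared: 9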

Reduction

  • reduce: reduce with a function
  • scan: emit a result at each reduction step
  • roll: reduce using overlapping windows
  • kmerge: merge all dictionaries in the stream
  • kscan: incremental version of kmerge
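
A small sketch contrasting reduce (one result at the end) with scan (a result at each step), assuming the usual two-argument accumulator function:

from giving import give, given

with given() as gv:
    gv["x"].scan(lambda acc, x: acc + x).print("running total: {}")
    gv["x"].reduce(lambda acc, x: acc + x).print("final total: {}")

    for x in [1, 2, 3]:
        give(x)

scan prints 1, 3 and 6 as the values arrive; reduce prints a single 6 once the stream ends.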

Arithmetic reductions

Most of these reductions (e.g. min, max, mean, count) can be called with the scan argument set to True to use scan instead of reduce. scan can also be set to an integer n, in which case roll is used with a window of size n.
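
For instance, with min (a sketch of the scan keyword described above):

from giving import give, given

with given() as gv:
    gv["x"].min().print("min: {}")                      # one result at the end
    gv["x"].min(scan=True).print("running min: {}")     # one result per element

    for x in [3, 1, 2]:
        give(x)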

Wrapping

  • give.wrap: give a special key at the beginning and end of a block
  • give.wrap_inherit: give a special key at the beginning and end of a block; its key/values are also inherited as with give.inherit
  • give.inherit: add default key/values for every give() in the block
  • gv.wrap: plug a context manager at the location of a give.wrap
  • gv.kwrap: same as gv.wrap, but pass kwargs

Timing

  • debounce: suppress events that are too close in time
  • sample: sample an element every n seconds
  • throttle: emit at most once every n seconds

Debugging

  • breakpoint: set a breakpoint whenever data comes in. Use this with filters.
  • tag: assign a special word to every entry. Use with breakword.
  • breakword: set a breakpoint on a specific word set by tag, using the BREAKWORD environment variable.

Other

  • accum: accumulate into a list
  • display: print out the stream (pretty).
  • print: print out the stream.
  • values: accumulate into a list (context manager)
  • subscribe: run a task on every element
  • ksubscribe: run a task on every element (keyword arguments)
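
A short sketch combining accum and subscribe; like the values() context manager in the examples above, the list returned by accum fills up as events arrive:

from giving import give, given

with given() as gv:
    xs = gv["x"].accum()    # accumulate the extracted values into a list
    gv.subscribe(print)     # run a task (here: print) on every raw entry

    give(x=1)
    give(x=2)

print(xs)    # [1, 2]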

ML ideas

Here are some ideas for using giving in a machine learning model training context:

import math

import wandb  # assumed: wandb is installed and initialized

from giving import give, given

# Model and niters stand in for your own model class and iteration count


def main():
    model = Model()

    for i in range(niters):
        # Give the model. give looks at the argument string, so 
        # give(model) is equivalent to give(model=model)
        give(model)

        loss = model.step()

        # Give the iteration number and the loss (equivalent to give(i=i, loss=loss))
        give(i, loss)

    # Give the final model. The final=True key is there so we can filter on it.
    give(model, final=True)


if __name__ == "__main__":
    with given() as gv:
        # ===========================================================
        # Define our pipeline **before** running main()
        # ===========================================================

        # Filter all the lines that have the "loss" key
        # NOTE: Same as gv.filter(lambda values: "loss" in values)
        losses = gv.where("loss")

        # Print the losses on stdout
        losses.display()                 # always
        losses.throttle(1).display()     # OR: once every second
        losses.slice(step=10).display()  # OR: every 10th loss

        # Log the losses (and indexes i) with wandb
        # >> is shorthand for .subscribe()
        losses >> wandb.log

        # Print the minimum loss at the end
        losses["loss"].min().print("Minimum loss: {}")

        # Print the mean of the last 100 losses
        # * affix adds columns, so we will display i, loss and meanloss together
        # * The scan argument outputs the mean incrementally
        # * It's important that each affixed column has the same length as
        #   the losses stream (or "table")
        losses.affix(meanloss=losses["loss"].mean(scan=100)).display()

        # Store all the losses in a list
        losslist = losses["loss"].accum()

        # Set a breakpoint whenever the loss is nan or infinite
        losses["loss"].filter(lambda loss: not math.isfinite(loss)).breakpoint()


        # Filter all the lines that have the "model" key:
        models = gv.where("model")

        # Write a checkpoint of the model at most once every 30 minutes
        models["model"].throttle(30 * 60).subscribe(
            lambda model: model.checkpoint()
        )

        # Watch with wandb, but only once at the very beginning
        models["model"].first() >> wandb.watch

        # Write the final model (you could also use models.last())
        models.where(final=True)["model"].subscribe(
            lambda model: model.save()
        )


        # ===========================================================
        # Finally, execute the code. All the pipelines we defined above
        # will proceed as we give data.
        # ===========================================================
        main()