Instrument your FastAPI app

Overview

Prometheus FastAPI Instrumentator

A configurable and modular Prometheus Instrumentator for your FastAPI. Install prometheus-fastapi-instrumentator from PyPI. Here is the fast track to get started with a preconfigured instrumentator:

from prometheus_fastapi_instrumentator import Instrumentator

Instrumentator().instrument(app).expose(app)
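
The snippet above assumes an existing FastAPI app object. A minimal self-contained sketch (the route is only there to give the metrics something to record):

# pip install prometheus-fastapi-instrumentator
from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()

@app.get("/ping")
def ping():
    return {"ping": "pong"}

# Instrument all endpoints and expose /metrics on the same app.
Instrumentator().instrument(app).expose(app)

Note that the 5.9.1 release notes further down recommend doing this wiring in a function decorated with @app.on_event("startup") to prevent crashes on startup.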

With this, your FastAPI is instrumented and metrics are ready to be scraped. The defaults give you:

  • Counter http_requests_total with handler, status and method. Total number of requests.
  • Summary http_request_size_bytes with handler. Added up total of the content lengths of all incoming requests.
  • Summary http_response_size_bytes with handler. Added up total of the content lengths of all outgoing responses.
  • Histogram http_request_duration_seconds with handler. Only a few buckets to keep cardinality low.
  • Histogram http_request_duration_highr_seconds without any labels. Large number of buckets (>20).

In addition, the following behaviour is active:

  • Status codes are grouped into 2xx, 3xx and so on.
  • Requests without a matching template are grouped into the handler none.

If one of these presets does not suit your needs, you can do multiple things:

  • Pick one of the already existing closures from metrics and pass it to the instrumentator instance. See the Adding metrics section below for how to do that.
  • Create your own instrumentation function that you can pass to an instrumentator instance. See the Creating new metrics section below to learn how.
  • Don't use this package at all and just use the source code as inspiration on how to instrument your FastAPI.

Important: This package is not made for generic Prometheus instrumentation in Python. Use the Prometheus client library for that. This package uses it as well.
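
For such custom business metrics you use the client library directly; anything registered in its default registry will typically show up on the /metrics endpoint exposed by this package as well. A hedged sketch (it assumes the app object from the fast track example; route and metric names are purely illustrative):

from prometheus_client import Counter

ORDERS = Counter("orders_processed_total", "Orders processed by the app.")

@app.get("/order")
async def order():
    ORDERS.inc()  # plain prometheus_client usage, independent of the instrumentator
    return {"status": "ok"}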

Features

Beyond the fast track, this instrumentator is highly configurable and very easy to customize and adapt to your specific use case. Here is a list of some of the options you may opt in to:

  • Regex patterns to ignore certain routes.
  • Completely ignore untemplated routes.
  • Control instrumentation and exposition with an env var.
  • Rounding of latencies to a certain decimal number.
  • Renaming of labels and the metric.
  • Metrics endpoint can compress data with gzip.
  • Opt-in metric to monitor the number of requests in progress.

It also features a modular approach to metrics that should instrument all FastAPI endpoints. You can either choose from a set of already existing metrics or create your own, and every metric function can be configured as well. You can find the ready-to-use metrics in the metrics module.

Advanced Usage

This chapter contains an example of the advanced usage of the Prometheus FastAPI Instrumentator to showcase most of its features. For more concrete info, check out the automatically generated documentation.

Creating the Instrumentator

We start by creating an instance of the Instrumentator. Notice the additional metrics import. This will come in handy later.

from prometheus_fastapi_instrumentator import Instrumentator, metrics

instrumentator = Instrumentator(
    should_group_status_codes=False,
    should_ignore_untemplated=True,
    should_respect_env_var=True,
    should_instrument_requests_inprogress=True,
    excluded_handlers=[".*admin.*", "/metrics"],
    env_var_name="ENABLE_METRICS",
    inprogress_name="inprogress",
    inprogress_labels=True,
)

Unlike in the fast track example, now the instrumentation and exposition will only take place if the environment variable ENABLE_METRICS is true at run-time. This can be helpful in larger deployments with multiple services depending on the same base FastAPI.

Adding metrics

Let's say we also want to instrument the size of requests and responses. For this we use the add() method. This method does nothing more than taking a function and adding it to a list. During run-time, every time FastAPI handles a request, all functions in this list are called with a single argument that stores useful information like the request and response objects. If add() is never used, the default metric gets added in the background. This is what happens in the fast track example.

All instrumentation functions are stored as closures in the metrics module. For more concrete info, check out the automatically generated documentation.

Closures come in handy here because they allow us to configure the functions within.

instrumentator.add(metrics.latency(buckets=(1, 2, 3,)))

This simply adds the metric you also get in the fast track example with a modified buckets argument. But we would also like to record the size of all requests and responses.

instrumentator.add(
    metrics.request_size(
        should_include_handler=True,
        should_include_method=False,
        should_include_status=True,
        metric_namespace="a",
        metric_subsystem="b",
    )
).add(
    metrics.response_size(
        should_include_handler=True,
        should_include_method=False,
        should_include_status=True,
        metric_namespace="namespace",
        metric_subsystem="subsystem",
    )
)

You can add as many metrics as you like to the instrumentator.

Creating new metrics

As already mentioned, it is possible to create custom functions to pass on to add(). This is also how the default metrics are implemented. The documentation and the source code are helpful for getting an overview.

The basic idea is that the instrumentator creates an info object that contains everything necessary for instrumentation based on the configuration of the instrumentator. This includes the raw request and response objects but also the modified handler, grouped status code and duration. Next, all registered instrumentation functions are called. They get info as their single argument.

Let's say we want to count the number of times a certain language has been requested.

from typing import Callable
from prometheus_fastapi_instrumentator.metrics import Info
from prometheus_client import Counter

def http_requested_languages_total() -> Callable[[Info], None]:
    METRIC = Counter(
        "http_requested_languages_total", 
        "Number of times a certain language has been requested.", 
        labelnames=("langs",)
    )

    def instrumentation(info: Info) -> None:
        langs = set()
        # Use .get() so requests without an Accept-Language header do not raise.
        lang_str = info.request.headers.get("Accept-Language", "")
        if not lang_str:
            return
        for element in lang_str.split(","):
            element = element.split(";")[0].strip().lower()
            langs.add(element)
        for language in langs:
            METRIC.labels(language).inc()

    return instrumentation

The function http_requested_languages_total is used for persistent elements that are stored between all instrumentation executions (for example the metric instance itself). Next comes the closure. This function must adhere to the shown interface: it will always get an Info object that contains the request, the response and a few other pieces of modified information, for example the (grouped) status code or the handler. Finally, the closure is returned.

Important: The response object inside info can either be the response object or None. In addition, errors thrown in the handler are not caught by the instrumentator. I recommend checking the documentation and/or the source code before creating your own metrics.

To use it, we hand over the closure to the instrumentator object.

instrumentator.add(http_requested_languages_total())

Perform instrumentation

Up to this point, the FastAPI has not been touched at all. Everything has been stored in the instrumentator only. To actually register the instrumentation with FastAPI, the instrument() method has to be called.

instrumentator.instrument(app)

Notice that this will do nothing if should_respect_env_var has been set during construction of the instrumentator object and the respective env var is not found.

Exposing endpoint

To expose an endpoint for the metrics, either follow the Prometheus Python Client docs and add the endpoint manually to the FastAPI or serve it on a separate server. You can also use the included expose method. It will add an endpoint to the given FastAPI. With should_gzip you can instruct the endpoint to compress the data as long as the client accepts gzip encoding. Prometheus, for example, does so by default. Beware that network bandwidth is often cheaper than CPU cycles.

instrumentator.expose(app, include_in_schema=False, should_gzip=True)

Notice that this will do nothing if should_respect_env_var has been set during construction of the instrumentator object and the respective env var is not found.
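
For reference, the manual alternatives mentioned above could look roughly like this, using the Prometheus Python client directly rather than this package (a sketch; it assumes the app object from this chapter):

from prometheus_client import make_asgi_app, start_http_server

# Option 1: mount the client library's ASGI app under /metrics yourself.
app.mount("/metrics", make_asgi_app())

# Option 2: serve the metrics from a separate HTTP server on another port.
start_http_server(9090)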

Prerequisites

You can always check pyproject.toml for dependencies.

  • python = "^3.6" (tested with 3.6 and 3.9)
  • fastapi = ">=0.38.1, <=1.0.0" (tested with 0.38.1 and 0.61.0)

Development

Please refer to "DEVELOPMENT.md".

Comments
  • BrokenResourceError on metrics endpoint

    They mention here that it's related to MemoryStream middleware, but I'm not sure it's relevant. https://github.com/tiangolo/fastapi/issues/4041

    Versions: PFI 5.8.2, FastAPI 0.78, Starlette 0.19.1

    Any ideas how to fix it?

    opened by mdczaplicki 16
  • refactor: change middleware implementation to pure asgi

    Instead of a class solution, I went with a function layout (which makes more sense in this case). You can check how this is done in https://github.com/simonw/asgi-cors

    Closes #23

    References:

    • https://asgi.readthedocs.io/
    • https://github.com/encode/starlette/
    opened by Kludex 11
  • BaseHTTPMiddleware vs BackgroundTasks

    @trallnag I've just noticed that we use @app.middleware('http') here, I should have been able to catch this earlier... Anyway, that decorator is implemented on top of BaseHTTPMiddleware, which has a problem: https://github.com/encode/starlette/issues/919

    Solution: change the implementation to a pure ASGI app/middleware.

    PS.: I can open a PR with it, jfyk.

    opened by Kludex 9
  • Use the right type on Response for tests

    Changes

    • Replace starlette.responses.Response by requests.Response on tests, as it's the right type returned by TestClient.
    • Replace middleware implementation from BaseHTTPMiddleware to pure ASGI.

    Can you approve the pipeline @trallnag ? cc @tiangolo - Some tests still failing atm.

    released 
    opened by Kludex 8
  • Adding FastAPI tags to metrics route

    Is there a way to customize where the metrics route is tagged in the generated FastAPI docs? I'm using tags to group routes, but my instrumented routes ('/metrics') always ends up in "default".

    enhancement question 
    opened by chisaipete 7
  • Print in middleware component

    In version 5.8.2 the PrometheusInstrumentatorMiddleware prints to stdout on every request, this should not be in the release:

    https://github.com/trallnag/prometheus-fastapi-instrumentator/blob/master/prometheus_fastapi_instrumentator/middleware.py#L80

    Workaround: Downgrade to 5.8.1

    opened by mattzque 5
  • actions: Support python 3.10

    Why

    • Currently python 3.10 is not officially supported and tested
    • python-version as number 3.10 is parsed as 3.1

    What

    • Run commit-tests with python 3.10
    • Change python-version to string as 3.10 would else be parsed as 3.1

    Additional info

    • Tested build in forked-repository https://github.com/Luke31/prometheus-fastapi-instrumentator/pull/1/files#diff-191bb5b4e97db48c9d0bdb945dd00e17b53249422f60a642e9e8d73250b5913aR7
      • Of course can't publish package because no permission to do so https://github.com/Luke31/prometheus-fastapi-instrumentator/runs/6171074586?check_suite_focus=true
    released 
    opened by Luke31 4
  • Example for "manually" pushing a metric

    I'd love to see an example of how to "manually" submit a metric, something like:

    pseudocode:

    @app.get('/super-route')
    async def super-thing():
         business_result = await call_some_business_logic()
         metrics.push(business_result.count)
    

    it's difficult to see how to "interact" with the metrics dynamically in code without going through the request/response object.

    opened by trondhindenes 4
  • 🐛 Expose exceptions raised by other middlewares and app code

    It seems the traceback for other exceptions is currently hidden. This could fix/related to: https://github.com/trallnag/prometheus-fastapi-instrumentator/issues/108

    It seems it could also be related to: https://github.com/encode/starlette/issues/1634

    opened by tiangolo 3
  • Unnecessary tight version constraint limits FastAPI versions

    Due to this line in the pyproject.toml file:

    fastapi = "^0.38.1"
    

    FastAPI versions newer than 0.38 cannot be used with this (current version of FastAPI is 0.75.2). When explicitly requesting a higher version the version solving fails (using poetry):

    $ poetry update
    Updating dependencies
    Resolving dependencies... (0.0s)
    
      SolverProblemError
    
      Because prometheus-fastapi-instrumentator (5.8.0) depends on fastapi (>=0.38.1,<0.39.0)
       and no versions of prometheus-fastapi-instrumentator match >5.8.0,<6.0.0, prometheus-fastapi-instrumentator (>=5.8.0,<6.0.0) requires fastapi (>=0.38.1,<0.39.0).
      So, because my-repo depends on both fastapi (^0.75.0) and prometheus-fastapi-instrumentator (^5.8.0), version solving failed.
    

    One solution would be relaxing the requirements:

    fastapi = "^0.38"
    

    or

    fastapi = ">=0.38.1, <1.0.0"
    
    opened by graipher 3
  • http_requests_total is only available as a default metric

    Hello, I noticed that the default metrics contain the metric http_requests_total. As this metric is only defined inside the method default, it was necessary to create it as a custom metric:

    def http_requests_total(metric_namespace='', metric_subsystem='') -> Callable[[Info], None]:
        total = Counter(
            name="http_requests_total",
            documentation="Total number of requests by method, status and handler.",
            labelnames=(
                "method",
                "status",
                "handler",
            ),
            namespace=metric_namespace,
            subsystem=metric_subsystem,
        )
    
        def instrumentation(info: Info) -> None:
            total.labels(info.method, info.modified_status, info.modified_handler).inc()
    
        return instrumentation
    

    It would be great to have this metric available as a method like latency and response_size.

    Thanks!

    enhancement 
    opened by jpslopes 3
  • CPU and MEM metrics not available with multiworkers

    Hi,

    Following the issue #50, I was able to configure the right metrics when multi workers are in use in the system. However, I'm not able to have the metrics for CPU and memory. Do you know why?

    Thanks Matteo

    opened by Pazzeo 0
  • chore(master): release 5.9.2

    :robot: I have created a release beep boop

    5.9.2 (2022-12-18)

    Tests

    • Fix failures due to changes in httpx (1726297)
    • Replace deprecated httpx parameter (09aa996)

    CI/CD


    This PR was generated with Release Please. See documentation.

    autorelease: pending 
    opened by github-actions[bot] 0
  • feat: namespace and subsystem configuration.

    • Accept namespace and subsystem parameters in instrument definition.

    Hi, I open this PR to be able to set the namespace and subsystem during instrumentation initialisation.

    Having this parameter is important for projects where several metrics endpoints are fetched and metrics are squashed together.

    This will allow us to have the same behaviour as prometheus-flask-instrumentator (when using the defaults_prefix argument of PrometheusMetrics).

    Thank you.

    opened by phbernardes 1
  • If status_code is HTTPStatus enumeration use value

    As mentioned in #190, the instrumentator has a bug when using the http.HTTPStatus enumeration for a status code response. I fixed this bug and added some tests to verify.

    I would be very thankful if you would add the HACKTOBERFEST-ACCEPTED to this pull request.

    opened by nikstuckenbrock 1
  • http request response time includes background task's runtime

    Maybe related to #20

    http_request_duration_highr_seconds_bucket seems to include in the http response time the runtime of the background tasks started in the request in question.

    Setup: python 3.8, starlette 0.20.4, fastapi 0.85.0, prometheus-fastapi-instrumentator 5.9.1

    is this expected?

    opened by tonkolviktor 0
Releases (v5.9.1)
  • v5.9.1 (Aug 23, 2022)

    5.9.1 (2022-08-23)

    🍀 Summary 🍀

    No bug fixes or new features. Just an important improvement of the documentation.

    ✨ Highlights ✨

    • Fix / Improve documentation of how to use package (#168). Instrumentation should happen in a function decorated with @app.on_event("startup") to prevent crashes on startup. Thanks to @mdczaplicki and others.

    CI/CD

    • Pin poetry version and improve caching configuration (6337459)

    Docs

    • Improve example in README on how to instrument app (#168) (dc36aac)
  • v5.9.0 (Aug 23, 2022)

    5.9.0 (2022-08-23)

    🍀 Summary 🍀

    This release fixes a small but annoying bug. Beyond that the release includes small internal improvements and bigger changes to CI/CD.

    ✨ Highlights ✨

    • Removed print statement polluting logs (#157). Thanks to all the people raising this issue and to @nikstuckenbrock for fixing it.
    • Added py.typed file to package to improve typing annotations (#137). Thanks to @mmaslowskicc for proposing and implementing this.
    • Changed license from MIT to ISC, which is just like MIT but shorter.
    • Migrated from Semantic Release to Release Please as release management tool.
    • Overall refactoring of project structure to match my (@trallnag) template Python repo.
    • Several improvements to the documentation. Thanks to @jabertuhin, @frodrigo, and @murphp15.
    • Coding style improvements (#155). Replaced a few for loops with list comprehensions. Defaulting an argument to None instead of an empty list. Thanks to @yezz123.

    Features

    • Add py.typed for enhanced typing annotations (#37) (0c67d1b)

    Bug Fixes

    • Remove print statement from middleware (#157) (f89792b)

    Build

    • deps-dev: bump devtools from 0.8.0 to 0.9.0 (#172) (24bb060)
    • deps-dev: bump flake8 from 4.0.1 to 5.0.4 (#179) (8f72053)
    • deps-dev: bump mypy from 0.950 to 0.971 (#174) (60e324f)

    Docs

    • Add missing colon to README (#33) (faef24c)
    • Adjust changelog formatting (b8b7b3e)
    • Fix small typo in readme (#154) (a569d4e)
    • Move docs-internal to docs/devel and adjust contributing (1b446ca)
    • Remove obsolete DEVELOPMENT.md (1c18ff7)
    • Switch license from MIT to ISC (1b0294a)

    CI/CD

    • Add .tool-versions (255ba97)
    • Add codecov.yaml (008ef61)
    • Add explicit codecov token (b264184)
    • Adjust commitlint to allow more subject case types (8b630aa)
    • Correct default branch name (5f141c5)
    • Improve and update scripts (e1d9982)
    • Move to Release Please and refactor overall CI approach (9977665)
    • Remove flake8 ignore W503 (6eab3b8)
    • Remove traces of semantic-release (f0ab8ff)
    • Remove unnecessary include of py.typed from pyproject.toml (#37) (bbad45e)
    • Rename poetry repo for TestPyPI (3f1c500)
    • Restructure poetry project layout (b439ceb)
    • Update gitignore (e0fa528)
    • Update pre-commit config (e725750)

    Refactor

  • v5.8.2 (Jun 12, 2022)

  • v5.8.1 (May 3, 2022)

  • v5.8.0 (May 1, 2022)

    5.8.0 (2022-05-01)

    ⚠ BREAKING CHANGES

    • Removed support for Python 3.6 and overall cleanup
    • dev: Switch from underscores to dashes for function names

    Features

    Code Refactoring

    • dev: Switch from underscores to dashes for function names (1dc0bb3)
    • remove support for python 3.6 and clean (363d353)