Slackers

Slack webhooks API served by FastAPI

What is Slackers

Slackers is a FastAPI implementation to handle Slack interactions and events. It serves endpoints to receive slash commands, app actions, and interactive components, and it also listens for events sent to the Slack Events API.

Installation

You can install Slackers with pip:

$ pip install slackers

Configuration

SLACK_SIGNING_SECRET

You must configure the Slack signing secret. It is used to verify the signature of incoming requests.
$ export SLACK_SIGNING_SECRET=your_slack_signing_secret
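
If you prefer to set the secret from Python, for example in a local development script, a minimal sketch is shown below. Note the assumption that the variable is set before any slackers modules are imported, since the configuration may be read at import time.

import os

# Assumption: set the variable before importing any slackers modules.
os.environ["SLACK_SIGNING_SECRET"] = "your_slack_signing_secret"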

Example usage

Slackers will listen for activity from the Events API on /events, for interactive components on /actions and for slash commands on /commands. When an interaction is received, it will emit an event. You can listen for these events as shown in the following examples.

On receiving a request, Slackers will emit an event which you can handle yourself. Slackers will also respond to Slack with an (empty) HTTP 200 response to acknowledge that the request was received.

Starting the server

As mentioned, Slackers uses the excellent FastAPI to serve its endpoints. Since you're here, I'm assuming you know what FastAPI is, but if you don't, you can learn all about how it works with this tutorial.

Slackers offers a router which you can include in your own FastAPI application.

from fastapi import FastAPI
from slackers.server import router

app = FastAPI()
app.include_router(router)

# Optionally you can use a prefix
app.include_router(router, prefix='/slack')
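
With the router included, you can serve the app like any other FastAPI application, for example with uvicorn (assuming the snippet above is saved as main.py):

$ uvicorn main:app --reload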

Events

Once your server is running, the events endpoint is set up at /events or, if you use the prefix as shown above, at /slack/events.

Accepting the challenge

When you set up Slack to send events, it will first send a challenge to verify your endpoint. Slackers detects when a challenge is sent and meets it automatically, so you can simply start your API.
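
For reference, the verification request is a JSON body with a type of url_verification and a challenge value, roughly shaped as below (placeholder values). Slackers answers it for you, so you normally never handle it yourself.

# Rough shape of Slack's url_verification request body (placeholder values)
{
    "token": "<verification_token>",
    "challenge": "<challenge_string>",
    "type": "url_verification",
}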

Responding to events

On receiving an event, Slackers will emit a python event, which you can act upon as shown below.

import logging
from slackers.hooks import events

log = logging.getLogger(__name__)

@events.on("app_mention")
def handle_mention(payload):
    log.info("App was mentioned.")
    log.debug(payload)

Actions

Once your server is running, the actions endpoint is set up at /actions or, if you use the prefix as shown above, at /slack/actions.

Responding to actions

On receiving an action, Slackers will emit a python event, which you can listen for as shown below. You can listen for the action type, or more specifically for the action_id or callback_id linked to the action.

import logging
from slackers.hooks import actions

log = logging.getLogger(__name__)

# Listening for the action type.
@actions.on("block_actions")
def handle_action(payload):
    log.info("Action started.")
    log.debug(payload)

# Listen for an action by its action_id
@actions.on("block_actions:your_action_id")
def handle_action_by_id(payload):
    log.info("Action started.")
    log.debug(payload)

# Listen for an action by its callback_id
@actions.on("block_actions:your_callback_id")
def handle_action_by_callback_id(payload):
    log.info("Action started.")
    log.debug(payload)

Interactive messages

Interactive message actions do not have an action_id, but they do have a name and a type. To act upon interactive messages, you can listen for the action type interactive_message, as well as for the combination of interactive_message with the name, the type, or both.

import logging
from slackers.hooks import actions

log = logging.getLogger(__name__)

# Listening for the action type.
@actions.on("interactive_message")
def handle_action(payload):
    log.info("Action started.")
    log.debug(payload)

# Listen for an action by its name
@actions.on("interactive_message:action_name")
def handle_action_by_name(payload):
    log.info("Action started.")
    log.debug(payload)

# Listen for an action by its type
@actions.on("interactive_message:action_type")
def handle_action_by_type(payload):
    log.info("Action started.")
    log.debug(payload)

# Listen for an action by its name and type
@actions.on("interactive_message:action_name:action_type")
def handle_action_by_name_and_type(payload):
    log.info("Action started.")
    log.debug(payload)

Custom responses

Slackers tries to respond to Slack quickly. The events you are listening for with the likes of @actions.on(...) are scheduled as async tasks in a fire-and-forget fashion. After scheduling these events, Slackers will by default return an empty 200 response, which may happen before the events are handled.

In some cases you might want to act on the payload and return a custom response to Slack. For this, you can use the Slackers responder decorator to define your custom handler function. This function is then used as a callback instead of returning the default response. You must ensure your custom handler returns a starlette.responses.Response or one of its subclasses. You must furthermore ensure that there is only one responder responding to your Slack request.

Please note that the events are also emitted, so you could have both @actions.on("block_actions:xyz") and @responder("block_actions:xyz"). Just keep in mind that the event emissions are async and are not awaited. In other words, Slackers does not guarantee whether the response (your custom response or the default) is returned before or after the events are emitted.

from starlette.responses import JSONResponse
from slackers.hooks import responder

@responder("block_actions:your_callback_id")
def custom_handler(payload):
    # handle your payload
    ...
    return JSONResponse(content={"custom": "Custom Response"})

Slash commands

Once your server is running, the commands endpoint is set up at /commands or, if you use the prefix as shown above, at /slack/commands. Slackers will emit an event with the name of the command, so if your command is /engage, you can listen for the event engage (without the slash).

Responding to slash commands

On receiving a command, Slackers will emit a python event, which you can listen for as shown below.

import logging
from slackers.hooks import commands

log = logging.getLogger(__name__)


@commands.on("engage")  # responds to "/engage"  
def handle_command(payload):
    log.info("Command received")
    log.debug(payload)

Async

Since events are emitted using pyee's Async event emitter, it is possible to define your event handlers as async functions. Just keep in mind that errors are in this case emitted on the 'error' event.

import logging
from slackers.hooks import commands

log = logging.getLogger(__name__)

@commands.on('error')
def log_error(exc):
    log.error(str(exc))


@commands.on("engage")  # responds to "/engage"  
async def handle_command(payload):
    ...
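
Putting it together, a minimal sketch of a complete application is shown below. The module name slack_handlers is hypothetical; the point is that the module containing your @events, @actions and @commands handlers must be imported so that the hooks get registered.

from fastapi import FastAPI
from slackers.server import router

# Hypothetical module containing the handlers shown above; importing it
# registers the @events/@actions/@commands hooks.
import slack_handlers  # noqa: F401

app = FastAPI()
app.include_router(router, prefix="/slack")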

Owner

Niels van Huijstee