A flexible data historian based on InfluxDB, Grafana, MQTT and more. Free, open, simple.

Overview

Kotori

Chart recorder

Telemetry data acquisition and sensor networks for humans.



At a glance

Kotori is a data historian based on InfluxDB, Grafana, MQTT and more. Free, open, simple.

It is a telemetry data acquisition, time series data processing and graphing toolkit aiming to become a fully integrated data historian. It supports scientific environmental monitoring projects, distributed sensor networks and similar scenarios.

The best way to find out more about Kotori is to look at how others already use it. Visit the gallery to read about some examples of where Kotori has been used.

Features

The key features are:

  • Multi-channel, multi-protocol data acquisition and storage.
  • Built-in sensor adapters, flexible configuration capabilities, durable database storage and unattended graph visualization out of the box.
  • Based on an infrastructure toolkit assembled from different components suitable for data acquisition, storage, fusion, graphing and more.
  • Used for building flexible telemetry solutions in different scenarios. It has supported the development of data logging systems, test benches, sensor networks for environmental monitoring, as well as other data gathering and aggregation projects.
  • Integrates well with established hardware, software and data acquisition workflows through flexible adapter interfaces.

Technologies

Kotori is based on a number of fine infrastructure components and technologies and supports a number of protocols in one way or another. Standing on the shoulders of giants.

Installation

Kotori can be installed through a Debian package, from the Python Package Index (PyPI), or from the Git repository. Please refer to the corresponding installation instructions:

https://getkotori.org/docs/setup/
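
For orientation, a minimal installation could look roughly like the following. This is a sketch only; the authoritative steps, including how to configure the package repository and which extras to install, are in the linked documentation. It assumes the Debian package and the PyPI package are both named kotori.

# Debian package, assuming the APT repository from the setup guide has been configured
apt-get install kotori

# Alternatively, from PyPI into a fresh virtualenv
python3 -m venv .venv
source .venv/bin/activate
pip install kotori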

Examples

Data acquisition

Submitting measurement data is easy and flexible; both MQTT and HTTP are supported.

First, let's define a data acquisition channel:

CHANNEL=amazonas/ecuador/cuyabeno/1

and some data to submit:

DATA='{"temperature": 42.84, "humidity": 83.1}'

MQTT:

MQTT_BROKER=daq.example.org
echo "$DATA" | mosquitto_pub -h $MQTT_BROKER -t $CHANNEL/data.json -l

HTTP:

HTTP_URI=https://daq.example.org/api
echo "$DATA" | curl --request POST --header 'Content-Type: application/json' --data @- "$HTTP_URI/$CHANNEL/data"

Data export

Measurement data can be exported in a variety of formats.

This is a straightforward example of a CSV data export, using the HTTPie command-line client:

http $HTTP_URI/$CHANNEL/data.csv
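
Other output formats and time-range filtering appear to be available on the same endpoint: the URLs quoted in the comments below use suffixes such as .txt and .png together with from/to query parameters. Assuming those parameters, a constrained export might look like this:

# CSV export constrained to a time range (from/to parameters as seen in the issue reports below)
http "$HTTP_URI/$CHANNEL/data.csv?from=2023-01-01&to=2023-01-31"

# Rendered PNG graph for the same channel and period
http --download "$HTTP_URI/$CHANNEL/data.png?from=2023-01-01&to=2023-01-31"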

Acknowledgements

Thanks to all the contributors who helped to co-create and conceive Kotori in one way or another. You know who you are.

License

This project is licensed under the terms of the AGPL license.

Comments
  • Installation from .deb package on Ubuntu 18.04 fails

    Hi! I installed the latest kotori package (0.22.7) for amd64 on my Ubuntu 18.04 LTS machine, but the kotori service fails to start. My machine has the x86_64 architecture. If anyone could please help.

    ● kotori.service
       Loaded: not-found (Reason: No such file or directory)
       Active: failed (Result: exit-code) since Thu 2019-05-30 14:56:34 UTC; 19h ago
     Main PID: 15597 (code=exited, status=1/FAILURE)
    
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Main process exited, code=exited, status=1/FAILURE
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Failed with result 'exit-code'.
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Service hold-off time over, scheduling restart.
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Scheduled restart job, restart counter is at 5.
    May 30 14:56:34 igup-be systemd[1]: Stopped Kotori data acquisition and graphing toolkit.
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Start request repeated too quickly.
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Failed with result 'exit-code'.
    May 30 14:56:34 igup-be systemd[1]: Failed to start Kotori data acquisition and graphing toolkit.
    

    The logs from kotori are:

    from .compat import unicode
      File "/opt/kotori/lib/python2.7/site-packages/twisted/python/compat.py", line 611, in <module>
        import cookielib
      File "/usr/lib/python2.7/cookielib.py", line 32, in <module>
        import re, urlparse, copy, time, urllib
      File "/usr/lib/python2.7/copy.py", line 52, in <module>
        import weakref
      File "/usr/lib/python2.7/weakref.py", line 14, in <module>
        from _weakref import (
    ImportError: cannot import name _remove_dead_weakref
    

    Regards

    opened by RuiPinto96 33
  • Kotori using Weewx (MQTT) - ERROR: Processing MQTT message failed from topic

    Hello!!

    I have a weather station (a Vantage Pro 2 console) and I'm trying to use Kotori and weewx to publish data into Grafana and store the weather data in InfluxDB. I'm using an MQTT broker (Mosquitto) to send and receive the data between weewx and Kotori. But I have a problem processing MQTT messages from the topic. The topic that I used is 'weewx/#'.

    I am getting this error from kotori.log:

    ERROR   : Processing MQTT message failed from topic "weewx//loop"
    

    If anyone could please help...

    This is the error (kotori.log):

        2019-05-22T15:24:17+0100 [kotori.daq.services.mig            ] ERROR   : Processing MQTT message failed from topic "weewx//loop":
    
        [Failure instance: Traceback: <type 'exceptions.AttributeError'>: 'dict' object has no attribute 'slot'
    /usr/lib/python2.7/threading.py:801:__bootstrap_inner
    /usr/lib/python2.7/threading.py:754:run
    /opt/kotori/lib/python2.7/site-packages/twisted/_threads/_threadworker.py:46:work
    /opt/kotori/lib/python2.7/site-packages/twisted/_threads/_team.py:190:doWork
    --- <exception caught here> ---
    /opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:250:inContext
    /opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:266:<lambda>
    /opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:122:callWithContext
    /opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:85:callWithContext
    /opt/kotori/lib/python2.7/site-packages/kotori/daq/services/mig.py:234:process_message
    /opt/kotori/lib/python2.7/site-packages/kotori/daq/services/mig.py:92:topology_to_storage
    /opt/kotori/lib/python2.7/site-packages/kotori/daq/intercom/strategies.py:85:topology_to_storage
    
    opened by RuiPinto96 18
  • How data is flowing through Kotori

    Coming from #12, I have a question about the export of data from InfluxDB in the weewx.ini configuration file.

    ; ----------------------------------------------------------------------
    ; Data export
    ; https://getkotori.org/docs/handbook/export/
    ; https://getkotori.org/docs/applications/forwarders/http-to-influx.html
    ; ----------------------------------------------------------------------
    [weewx.data-export]
    enable = true
    
    type = application
    application = kotori.io.protocol.forwarder:boot
    
    realm = weewx
    source = http:/api/{realm:weewx}/{network:.}/{gateway:.}/{node:.*}/{slot:(data|event)}.{suffix} [GET]
    target = influxdb:/{database}?measurement={measurement}
    transform = kotori.daq.intercom.strategies:WanBusStrategy.topology_to_storage,
    kotori.io.protocol.influx:QueryTransformer.transform
    

    My question is about the source, target and transform options. Does the source option issue an HTTP GET request to the database, or does it receive the JSON payload published on the topic and transform it in order to put it into InfluxDB?

    The target option I understand: it defines the database and the measurement where the data is put.

    The transform option I don't really understand; I just know that you transform the JSON parameters into queries.

    Thanks a lot for the help already. Best Regards.

    opened by RuiPinto96 10
  • Optimize packaging

    We've learned from @RuiPinto96 and @Dewieinns through #7, #19 and #22 that the packaging might not be done appropriately.

    Within this issue, we will try to walk through any issues observed. Thanks again for your feedback, we appreciate that very much!

    opened by amotl 9
  • How to export data from InfluxDB

    Hi,

    thank you for creating Kotori, it is really useful.

    I have a setup that uses weewx to acquire weather data, saves it to InfluxDB and displays graphs via Grafana.

    So far everything works as expected:

    [wetter]
    enable      = true
    type        = application
    realm       = wetter
    mqtt_topics = wetter/#
    application = kotori.daq.application.mqttkit:mqttkit_application
    
    # How often to log metrics
    metrics_logger_interval = 60
    

    My question is about data export which currently yields Connection reset by peer regardless of the requested URL.

    [wetter.influx-data-export]
    enable          = true
    
    type            = application
    application     = kotori.io.protocol.forwarder:boot
    
    realm           = wetter
    source          = http:/api/{realm:wetter}/{network:.*}/{gateway:.*}/{node:.*}/{slot:(data|event)}.{suffix} [GET]
    target          = influxdb:/{database}?measurement={measurement}
    transform       = kotori.daq.intercom.strategies:WanBusStrategy.topology_to_storage,
                      kotori.io.protocol.influx:QueryTransformer.transform
    
    [kotori.io.protocol.forwarder       ] INFO    : Starting ProtocolForwarderService(wetter.influx-data-export-forwarder)
    [kotori.io.protocol.forwarder       ] INFO    : Forwarding payloads from http:/api/{realm:wetter}/{network:.*}/{gateway:.*}/{node:.*}/{slot:(data|event)}.{suffix} [GET] to influxdb:/{database}?measurement={measurement}
    [kotori.io.protocol.http            ] INFO    : Initializing HttpChannelContainer
    [kotori.io.protocol.http            ] INFO    : Connecting to Metadata storage database
    [kotori.io.protocol.http            ] INFO    : Starting HTTP service on localhost:24642
    [kotori.io.protocol.http.LocalSite  ] INFO    : Starting factory <kotori.io.protocol.http.LocalSite instance at 0x7fcb3495b1e0>
    [kotori.io.protocol.http            ] INFO    : Registering endpoint at path '/api/{realm:wetter}/{network:.*}/{gateway:.*}/{node:.*}/{slot:(data|event)}.{suffix}' for methods [u'GET']
    [kotori.io.protocol.target          ] INFO    : Starting ForwarderTargetService(wetter-wetter.influx-data-export) for serving address influxdb:/{database}?measurement={measurement} []
    
    $ curl http://localhost:24642/api/wetter/de/ogd/oben_sensors/data.csv
    curl: (56) Recv failure: Connection reset by peer
    

    The InfluxDB database is called wetter_de. Grafana runs queries like these: SELECT mean(windSpeed_kph) FROM wetter_de.autogen.ogd_oben_sensors WHERE time >= now() - 5m GROUP BY time(500ms) successfully.

    I don't know how I need to construct the data export URL to get data from InfluxDB. As far as I understand from the docs http://localhost:24642/api/wetter/de/ogd/oben_sensors/data.csv is transformed as follows:

    • wetter/de is translated to the wetter_de database,
    • ogd/oben_sensors is translated to the ogd_oben_sensors measurement

    Unfortunately, Kotori does not log my HTTP requests and their transformations. InfluxDB also does not log any failing queries. Can you help me find what goes wrong?

    I'm also not sure why I need to specify the realm twice in the export ini:

    realm           = wetter
    source          = http:/api/{realm:wetter}/...
    
    opened by agross 9
  • Installation on RaspberryPi using Docker

    Hi,

    lots of Home Automation enthusiasts use a Raspberry Pi (model 3, 3plus or 4) for their computing needs. I made the mistake of perhaps not reading through all steps and tried the docker install and failed.

    • Issue 1 was the Grafana permissions - but resolved.
    • Issue 2: no MongoDB for the Pi.
    • Issue 3: I read that MongoDB is optional, so I tried docker run -it --rm daqzilla/kotori kotori --version. Nope!
    $ docker run -it --rm daqzilla/kotori kotori --version
    Unable to find image 'daqzilla/kotori:latest' locally
    latest: Pulling from daqzilla/kotori
    68ced04f60ab: Pull complete 
    0f5503414412: Pull complete 
    Digest: sha256:ff3d0a569de75fda447ad108a2ec664d8aaf545ded82ecd8c9010fc50817f94b
    Status: Downloaded newer image for daqzilla/kotori:latest
    standard_init_linux.go:211: exec user process caused "exec format error"
    failed to resize tty, using default size
    

    I am a Linux noob, so maybe I am doing it all wrong, or perhaps Kotori is not for the Pi.

    Would love to hear from you, as I am quite excited about how you have brought MQTT (even Tasmota!!), InfluxDB and Grafana all together.

    Cheers and best wishes!

    opened by timaseth 7
  • Packaging: `make package-baseline-images` croaks when building arm64v8 images

    Hi there,

    while working on #64 and running the packaging machinery on a Linux system, we discovered that the Building baseline image for Debian "bullseye" on arm64v8 step croaks when installing libc-bin, originating from

    RUN apt-get install --yes --no-install-recommends inetutils-ping nano git build-essential pkg-config libffi-dev ruby ruby-dev
    

    The error manifests itself as

    Processing triggers for libc-bin (2.31-13+deb11u2) ...
    qemu: uncaught target signal 11 (Segmentation fault) - core dumped
    Segmentation fault
    qemu: uncaught target signal 11 (Segmentation fault) - core dumped
    Segmentation fault
    dpkg: error processing package libc-bin (--configure):
     installed libc-bin package post-installation script subprocess returned error exit status 139
    
    Errors were encountered while processing:
     libc-bin
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    The command '/bin/sh -c apt-get install --yes --no-install-recommends     inetutils-ping nano git     build-essential pkg-config libffi-dev     ruby ruby-dev' returned a non-zero code: 100
    Command failed
    make: *** [packaging/tasks.mk:13: package-baseline-images] Error 1
    

    With kind regards, Andreas.

    opened by amotl 5
  • Kotori server does not start

    Good morning. I am new to the installation part of Kotori. I have followed all the steps on your page, but I get to the moment where I have to access the server at http://kotori.example.org:3000/ and it does not work. I've started the server with "systemctl start kotori" and it doesn't work either. When I get to the step of the tutorial where you have to put in the command:

    mosquitto_pub -t $CHANNEL_TOPIC -m '{"temperature": 42.84, "humidity": 83.1}'
    

    in the terminal I get this: "Error: Connection Refused". I am using Linux Mint Ulyssa as the operating system.

    Can someone help me with this please?

    Thank you

    opened by Cr4ck3r32 5
  • Building packages for Debian 11 (Bullseye) fails

    Hi,

    friends of Kotori have been trying to build .deb packages for Debian 11 (Bullseye). They are on a Linux environment (Intel x86, current Linux kernel, Docker 20.10.11, running within a KVM).

    So, what they are aiming for is to run this command successfully.

    make package-debian flavor=full dist=bullseye arch=amd64 version=0.26.12
    

    However, the problem is that the preparation command make package-baseline-images already croaks.

    standard_init_linux.go:228: exec user process caused: exec format error
    The command '/bin/sh -c apt-get update && apt-get upgrade --yes' returned a non-zero code: 1
    Command failed
    

    With kind regards, Andreas.

    opened by amotl 3
  • "Basic example with MQTT" fails

    With apologies if I'm missing something terribly obvious (as I'm very much a n00b to MQTT, InfluxDB, and Grafana), the "basic example with MQTT" from the docs (https://getkotori.org/docs/getting-started/basic-mqtt.html) isn't working for me.

    Background

    I installed Kotori on a fresh, updated Ubuntu 18 server following the instructions at https://getkotori.org/docs/setup/debian.html. I then went to the "Getting Started" documentation and tried to follow the example with MQTT. The snippet in amazonas.ini looks like this:

    [amazonas]
    enable      = true
    type        = application
    realm       = amazonas
    mqtt_topics = amazonas/#
    application = kotori.daq.application.mqttkit:mqttkit_application
    

    But when I run the mosquitto_pub command, I get an error in the kotori log:

    2020-12-07T19:59:39-0500 [kotori.daq.graphing.grafana.manager] INFO    : Provisioning Grafana dashboard "amazonas-ecuador" for database "amazonas_ecuador" and measurement "cuyabeno_1_sensors"
    2020-12-07T19:59:39-0500 [kotori.daq.graphing.grafana.api    ] INFO    : Checking/Creating datasource "amazonas_ecuador"
    2020-12-07T19:59:40-0500 [kotori.daq.services.mig            ] ERROR   : Grafana provisioning failed for storage={"node": "1", "slot": "data.json", "realm": "amazonas", "network": "ecuador", "database": "amazonas_ecuador", "measurement_events": "cuyabeno_1_events", "label": "cuyabeno_1", "measurement": "cuyabeno_1_sensors", "gateway": "cuyabeno"}, message={u'temperature': 42.84, u'humidity': 83.1}:
    	[Failure instance: Traceback: <class 'grafana_api_client.GrafanaUnauthorizedError'>: Unauthorized
    	/opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:250:inContext
    	/opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:266:<lambda>
    	/opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:122:callWithContext
    	/opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:85:callWithContext
    	--- <exception caught here> ---
    	/opt/kotori/lib/python2.7/site-packages/kotori/daq/services/mig.py:269:process_message
    	/opt/kotori/lib/python2.7/site-packages/kotori/daq/graphing/grafana/manager.py:129:provision
    	/opt/kotori/lib/python2.7/site-packages/kotori/daq/graphing/grafana/manager.py:85:create_datasource
    	/opt/kotori/lib/python2.7/site-packages/kotori/daq/graphing/grafana/api.py:104:create_datasource
    	/opt/kotori/lib/python2.7/site-packages/grafana_api_client/__init__.py:73:create
    	/opt/kotori/lib/python2.7/site-packages/grafana_api_client/__init__.py:64:make_request
    	/opt/kotori/lib/python2.7/site-packages/grafana_api_client/__init__.py:171:make_raw_request
    	]
    

    I'm sure it's as a result of this that browsing to ip:3000/dashboard/db/ecuador/ fails with a "Dashboard not found" message.

    I'd appreciate a pointer in the right direction.

    opened by danb35 3
  • Installation/Configuration on fresh install of Debian 10 fails.

    Hey @amotl, I am starting a new thread to follow up on a post I made where something similar was happening, as I'm now experiencing slightly different variations.

    After my reply last evening I set about working with a new VM. I didn't properly document everything I had done so this morning I started over COMPLETELY fresh. I was seeing weird issues where systemctl didn't seem to be running and the VM I was using had only one cpu/core. I assumed this was the reason for performance issues I was seeing so I wiped it, made a new VM (on a different drive in my server even) and set about installing Debian (debian-10.3.0-amd64-netinst.iso)

    I opted not to install the GUI and enabled the SSH server.

    With Debian successfully installed I installed screen and then started following the Setup on Debian guide again.

    Note: When I started following this guide initially (yesterday) it wasn't obvious to me (I'm a n00b) that I needed to install the package source for Debian Stretch (9) OR Debian Buster (10).

    This time I added only the package source I needed (Buster) and was good to go... until:

    Add GPG key for checking package signatures: wget -qO - https://packages.elmyra.de/elmyra/foss/debian/pubkey.txt | apt-key add -

    Error: E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation

    Fix: apt-get install gnupg

    All is good and I set about installing Kotori as well as recommended and suggested packages (14.9GB - takes about an hour and a half on my slow internet connection)

    Next thing prompted for is "Configuring jackd2"

    Do I want to run with realtime priority? (explains it may lead to lock-ups) <-- Selected No

    then followed:

    Configuring Kerberos Authentication:
    • Default Kerberos version 5 realm: pre-populated with my domain <-- left it as is
    • Kerberos servers for your realm: nothing pre-populated <-- left it blank
    • Administrative server for your Kerberos realm: nothing pre-populated <-- left it blank

    It then goes through and installs everything - this takes some time - until it gets to the end where there are a couple of errors displayed:

    Setting up mh-e (8.5-2.1) ...
    ERROR: mh-e is broken - called emacs-package-install as a new-style add-on, but has no compat file.
    

    and then:

    Errors were encountered while processing:
     lirc
     lirc-x
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    

    At this point I rebooted the VM using systemctl reboot

    After reboot I noticed VM was SLOW again - mega slow.

    I again ran apt update, all packages were up to date.

    I then again ran apt install --install-recommends --install-suggests kotori and noticed the following:

    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    

    I hit Y to continue and it looked like everything was successful this time.

    I attempted to execute systemctl start influxdb grafana-server but received the following warning again:

    System has not been booted with systemd as init system (PID 1). Can't operate.
    Failed to connect to bus: Host is down
    

    It was after this that I wiped the VM last time thinking I had done something mega wrong...

    I then installed open-vm-tools on the VM which allowed me to gracefully reboot the system via the VM Host console.

    Upon rebooting systemctl still isn't running... at this point I'm kind of at a loss. I was farther than this trying to install it on Ubuntu yesterday.

    Sorry for the wall of information, but I wanted to be thorough enough that you may be able to provide some insight as to what's going on (hopefully something more than "that's really weird", haha).

    If you wish me to get in touch directly I can do this also.

    opened by Dewieinns 3
  • Unhandled exception: module 'pandas' has no attribute 'tslib'

    This request croaks.

    https://swarm.hiveeyes.org/api/hiveeyes/25a0e5df-9517-405b-ab14-cb5b514ac9e8/3756782252718325761/1/data.png?include=wght2&from=20160519T040000&to=20160519T170000&renderer=ggplot

    bug 
    opened by amotl 0
  • Problem when using unicode characters in channel name or field name

    Carried over from the backlog at [1]; the entry is from 2017 already.

    The problem occurs when using Unicode characters like "Niederkrüchten-Overhetfeld" in channel names, or field names like "Temperatur außen".

    exceptions.UnicodeEncodeError: 'ascii' codec can't encode character u'\xdf' in position 13: ordinal not in range(128)
    

    Example URL: https://swarm.hiveeyes.org/api/hiveeyes/testdrive-aw/Niederkr%C3%BCchten-Overhetfeld/node-001/data.txt?from=2016-01-01

    [1] https://github.com/daq-tools/kotori/blob/0.26.12/doc/source/development/backlog.rst#2017-02-06-1

    bug 
    opened by amotl 0
  • Support receiving data via AMQP

    Hi there,

    @tonkenfo started playing with AMQP on his Ratrack setup the other day [^1]. It would be sweet if Kotori could support it on the ingest side as a first-class citizen.

    With kind regards, Andreas.

    [^1]: In his experiments, I think he managed to use RabbitMQ as a message broker for both AMQP and MQTT.

    opened by amotl 0
  • Receiving telemetry data via UDP

    Hi there,

    @ClemensGruber just asked over at ^1 whether it would be possible to make Kotori receive UDP data from specific devices. I wanted to create this as a note here in order to have a place to track it.

    With kind regards, Andreas.

    /cc @easyhive, @jacobron

    opened by amotl 1
  • Add TTS (The Things Stack) / TTN (The Things Network) decoder adapter

    Dear @thiasB, @MKO1640 and @einsiedlerkrebs,

    finally, aiming to resolve #8, and attaching to your requests at [1] ff., this patch adds a basic decoder for TTS/TTN v3 Uplink Messages [2], submitted using Webhooks [3].

    It can be improved by adding further commits to the collab/tts-ttn branch, if you want to lend a hand. It will also need a dedicated documentation page within the "Channel decoders" section [4], ideally with a short but concise walkthrough tutorial on how to configure a corresponding webhook within the TTS/TTN console, like [5].

    You can invoke the specific test case after checking out the referenced branch by running these commands within the top-level directory of the repository, in two different shells:

    make start-foundation-services
    
    source .venv/bin/activate
    pytest -vvv -m ttn --capture=no
    

    With kind regards, Andreas.

    [1] https://community.hiveeyes.org/t/ttn-daten-an-kotori-weiterleiten/1422/34
    [2] https://www.thethingsindustries.com/docs/reference/data-formats/#uplink-messages
    [3] https://www.thethingsindustries.com/docs/integrations/webhooks/
    [4] https://getkotori.org/docs/handbook/decoders/
    [5] https://www.thethingsindustries.com/docs/integrations/webhooks/creating-webhooks/

    opened by amotl 2
Releases (0.27.0)
  • 0.27.0 (Nov 26, 2022)

    What's Changed

    • Add documentation about running Kotori with RabbitMQ as MQTT broker, see Running Kotori with RabbitMQ
    • Allow connecting to individual MQTT broker per application
    • Improve MQTT logging when connection to broker fails
    • Make MQTT broker credential settings username and password optional
    • Add software tests for simulating all advanced actions against Grafana
      • Publish a single reading in JSON format to the MQTT broker and prove that a corresponding datasource and dashboard were created in Grafana.
      • Publish two subsequent readings in JSON format to the MQTT broker and prove that a corresponding datasource and dashboard were first created and then updated in Grafana.
      • Publish two subsequent readings to two different topics and prove that a corresponding datasource and a dashboard with two panels have been created in Grafana.
      • Publish two subsequent readings to two different topics and prove that a corresponding datasource and two dashboards have been created in Grafana.
    • Adjust logging format re. milli/microseconds
    • Because accessing dashboards by slug has been removed with Grafana 8, Kotori will now use the slug-name of the data channel for all of Grafana's uid, name and title fields.
    • Improve decoding fractional epoch timestamps
    • Update to numpy<1.24 on Python >3.10
    • Replace Bunch with Munch

    Breaking changes

    • Stop converting latitude and longitude ingress fields to tags. This had been implemented as a convenience when processing LDI data, but it is not applicable in standard data acquisition scenarios, specifically when recording positions of moving objects. Thanks, @tonkenfo.

    Infrastructure

    • Improve sandbox and CI setup, software tests and documentation
    • Update to Twisted <23
    • CI: Update to Grafana 7.5.17, 8.5.15, and 9.2.6
    • CI: Update to MongoDB 5.0
    • Tests: Remove nosetests test runner, replace with pytest
    • Build: Use python -m build for building sdist and wheel packages
    • Add support for Python 3.10 and 3.11
    • Drop support for Python 3.5 and 3.6
    • CI: Modernize GHA workflow recipe
    • Documentation: Add link checker and fix a few broken links
    • Documentation: Update to Sphinx 5

    Full Changelog: https://github.com/daq-tools/kotori/compare/0.26.12...0.27.0
