Random scripts and other bits for interacting with the SpaceX Starlink user terminal hardware

Overview

starlink-grpc-tools

This repository has a handful of tools for interacting with the gRPC service implemented on the Starlink user terminal (AKA "the dish").

For more information on what Starlink is, see starlink.com and/or the r/Starlink subreddit.

Prerequisites

Most of the scripts here are Python scripts. To use them, you will either need Python installed on your system or you can use the Docker image. If you use the Docker image, you can skip the rest of the prerequisites other than making sure the dish IP is reachable and Docker itself is installed. For Linux systems, the python package from your distribution should be fine, as long as it is Python 3. The JSON script should actually work with Python 2.7, but the grpc scripts all require Python 3 (and Python 2.7 is past end-of-life, so is not recommended anyway).

All the tools that pull data from the dish expect to be able to reach it at the dish's fixed IP address of 192.168.100.1, as do the Starlink Android app, iOS app, and the browser app you can run directly from http://192.168.100.1. When using a router other than the one included with the Starlink installation kit, this usually requires some additional router configuration to make it work. That configuration is beyond the scope of this document, but if the Starlink app doesn't work on your home network, then neither will these scripts. That being said, you do not need the Starlink app installed to make use of these scripts.

Running the scripts within a Docker container requires Docker to be installed. Information about how to install that can be found at https://docs.docker.com/engine/install/

parseJsonHistory.py operates on a JSON format data representation of the protocol buffer messages, such as that output by grpcurl. The command lines below assume grpcurl is installed in the runtime PATH. If that's not the case, just substitute in the full path to the command.

Required Python modules

If you don't care about the details or about minimizing your package requirements, you can skip the rest of this section and just do this to install the latest versions of a superset of the required modules:

pip install --upgrade -r requirements.txt

The scripts that don't use grpcurl to pull data require the grpcio Python package at runtime and the optional step of generating the gRPC protocol module code requires the grpcio-tools package. Information about how to install both can be found at https://grpc.io/docs/languages/python/quickstart/. If you skip generation of the gRPC protocol modules, the scripts will instead require the yagrc Python package. Information about how to install that is at https://github.com/sparky8512/yagrc.
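If you skip the protoc step entirely, the run-time reflection path looks roughly like the following sketch. The GrpcReflectionClient class and its load_protocols, message_class, and service_stub_class methods are my reading of the yagrc interface, so treat this as an assumption and check the yagrc documentation for the real details:

import grpc

from yagrc import reflector as yagrc_reflector

reflector = yagrc_reflector.GrpcReflectionClient()
with grpc.insecure_channel("192.168.100.1:9200") as channel:
    # pull the message and service definitions the dish advertises via reflection
    reflector.load_protocols(channel, symbols=["SpaceX.API.Device.Device"])
    request_class = reflector.message_class("SpaceX.API.Device.Request")
    stub_class = reflector.service_stub_class("SpaceX.API.Device.Device")
    # an empty get_status request returns the dish's full status message
    response = stub_class(channel).Handle(request_class(get_status={}))
print(response.dish_get_status)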

The scripts that use MQTT for output require the paho-mqtt Python package. Information about how to install that can be found at https://www.eclipse.org/paho/index.php?page=clients/python/index.php

The scripts that use InfluxDB for output require the influxdb Python package. Information about how to install that can be found at https://github.com/influxdata/influxdb-python. Note that this is the (slightly) older version of the InfluxDB client Python module, not the InfluxDB 2.0 client. It can still be made to work with an InfluxDB 2.0 server, but doing so requires using influx v1 CLI commands on the server to map the 1.x username, password, and database names to their 2.0 equivalents.
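For example, something along these lines on the InfluxDB 2.0 server would create the 1.x compatibility mappings. The bucket ID, username, and password are placeholders, and the flag spellings should be checked against influx v1 dbrp create --help and influx v1 auth create --help:

influx v1 dbrp create --db starlinkstats --rp autogen --bucket-id <bucket-id> --default
influx v1 auth create --username starlink --password <password> --read-bucket <bucket-id> --write-bucket <bucket-id>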

Note that the Python package versions available from various Linux distributions (i.e. installed via apt-get or similar) tend to run a bit behind those available to install via pip. While the distro packages should work OK as long as they aren't extremely old, they may not work as well as the later versions.

Generating the gRPC protocol modules

This step is no longer required, as the grpc scripts can now get the protocol module classes at run time via reflection, but generating the protocol modules will improve script startup time. It would also be a good idea to at least stash away the protoset file emitted by grpcurl, in case SpaceX ever turns off server reflection in the dish software.

The grpc scripts require some generated code to support the specific gRPC protocol messages used. These would normally be generated from .proto files that specify those messages, but to date (2020-Dec), SpaceX has not publicly released such files. The gRPC service running on the dish appears to have server reflection enabled, though. grpcurl can use that to extract a protoset file, and the protoc compiler can use that to make the necessary generated code:

grpcurl -plaintext -protoset-out dish.protoset 192.168.100.1:9200 describe SpaceX.API.Device.Device
mkdir src
cd src
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/device.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/common/status/status.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/command.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/common.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/dish.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/wifi.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/wifi_config.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/transceiver.proto

Then move the resulting files to where the Python scripts can find them in the import path, such as in the same directory as the scripts themselves.
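After moving them, a quick import check confirms the modules are usable (module paths follow from the protoc invocations above; run this from the directory holding the spacex tree):

python3 -c "from spacex.api.device import device_pb2_grpc; print('generated modules found')"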

Usage

Of the 3 groups below, the grpc scripts are really the only ones being actively developed. The others are mostly by way of example of what could be done with the underlying data.

The grpc scripts

This set of scripts includes dish_grpc_text.py, dish_grpc_influx.py, dish_grpc_sqlite.py, and dish_grpc_mqtt.py. They mostly support the same functionality, but write their output in different ways. dish_grpc_text.py writes data to standard output, dish_grpc_influx.py sends it to an InfluxDB server, dish_grpc_sqlite.py writes it to a sqlite database, and dish_grpc_mqtt.py sends it to an MQTT broker.

All 4 scripts support processing status data and/or history data in various modes. The status data is mostly what appears related to the dish in the Debug Data section of the Starlink app, whereas most of the data displayed in the Statistics page of the Starlink app comes from the history data. Specific status or history data groups can be selected by including their mode names on the command line. Run the scripts with the -h command line option to get a list of available modes. See the documentation at the top of starlink_grpc.py for details on what each of the fields means within each mode group.

For example, all the currently available status groups can be output by doing:

python3 dish_grpc_text.py status obstruction_detail alert_detail

By default, dish_grpc_text.py (and parseJsonHistory.py, described below) will output in CSV format. You can use the -v option to instead output in a (slightly) more human-readable format.

To collect and record packet loss summary stats at the top of every hour, you could put something like the following in your user crontab (assuming you have moved the scripts to ~/bin and made them executable):

00 * * * * [ -e ~/dishStats.csv ] || ~/bin/dish_grpc_text.py -H >~/dishStats.csv; ~/bin/dish_grpc_text.py ping_drop >>~/dishStats.csv

By default, all of these scripts will pull data once, send it off to the specified data backend, and then exit. They can instead be made to run in a periodic loop by passing a -t option to specify the loop interval, in seconds. For example, to capture status information to an InfluxDB server every 30 seconds, you could do something like this:

python3 dish_grpc_influx.py -t 30 [... probably other args to specify server options ...] status

Some of the scripts (currently only the InfluxDB one) also support specifying options through environment variables. See details in the scripts for the environment variables that map to options.
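For example, the InfluxDB options can be supplied using the same environment variable names the Docker section below uses (shown here in a plain shell; see the script itself for the full list):

INFLUXDB_HOST=influxdb.example.com INFLUXDB_PORT=8086 INFLUXDB_DB=starlinkstats \
    python3 dish_grpc_influx.py -t 30 status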

Bulk history data collection

dish_grpc_influx.py, dish_grpc_sqlite.py, and dish_grpc_text.py also support a bulk history mode that collects and writes the full second-by-second data instead of summary stats. To select bulk mode, use bulk_history for the mode argument. You'll probably also want to use the -t option to have it run in a loop.
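For example, to dump the full per-second history to standard output once per minute:

python3 dish_grpc_text.py -t 60 bulk_history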

The JSON parser script

parseJsonHistory.py takes input from a file and writes its output to standard output. The easiest way to use it is to pipe the grpcurl command directly into it. For example:

grpcurl -plaintext -d {\"get_history\":{}} 192.168.100.1:9200 SpaceX.API.Device.Device/Handle | python parseJsonHistory.py

For more usage options, run:

python parseJsonHistory.py -h

When used as-is, parseJsonHistory.py will summarize packet loss information from the data the dish records. There are other bits of data in there, though, so that script (or, more likely, the parsing logic it uses, which now resides in starlink_json.py) could be used as a starting point or example of how to iterate through it.

The one bit of functionality this script has over the grpc scripts is that it supports capturing the grpcurl output to a file and reading from that, which may be useful if you're collecting data in one place but analyzing it in another. Otherwise, it's probably better to use dish_grpc_text.py, described above.
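For example (assuming parseJsonHistory.py accepts the file name as a command line argument; run it with -h to confirm the exact usage):

grpcurl -plaintext -d {\"get_history\":{}} 192.168.100.1:9200 SpaceX.API.Device.Device/Handle >dish_history.json
python parseJsonHistory.py dish_history.json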

Other scripts

dump_dish_status.py is a simple example of how to use the grpc modules (the ones generated by protoc, not starlink_grpc) directly. Just run it as:

python3 dump_dish_status.py

and revel in copious amounts of dish status information. OK, maybe it's not as impressive as all that. This one is really just meant to be a starting point for real functionality to be added to it.
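Internally, it amounts to something like the following sketch (a hand-written approximation, not the script verbatim; it assumes the generated protocol modules described above are on the import path):

import grpc

from spacex.api.device import device_pb2, device_pb2_grpc

with grpc.insecure_channel("192.168.100.1:9200") as channel:
    stub = device_pb2_grpc.DeviceStub(channel)
    # an empty get_status request asks the dish for its full status message
    response = stub.Handle(device_pb2.Request(get_status={}))

print(response.dish_get_status)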

poll_history.py is another silly example, but this one illustrates how to periodically poll the status and/or bulk history data using the starlink_grpc module's API. It's not really useful by itself, but if you really want to, you can run it as:

python3 poll_history.py
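If you just want to poke at the data from your own code, a status poll via the starlink_grpc module looks roughly like this (the exact group contents vary by dish firmware; the field names used here should be checked against the documentation in starlink_grpc.py, so treat this as a sketch):

import starlink_grpc

# status_data() returns 3 dicts of field name -> value:
# general status, obstruction detail, and alert detail
status, obstructions, alerts = starlink_grpc.status_data()
print(status["id"], status["state"])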

Possibly more simple examples to come, as the other scripts have started getting a bit complicated.

Docker for InfluxDB (& MQTT under development)

Initialization of the container can be performed with the following command:

docker run -d -t --name='starlink-grpc-tools' -e INFLUXDB_HOST={InfluxDB Hostname} \
    -e INFLUXDB_PORT={Port, 8086 usually} \
    -e INFLUXDB_USER={Optional, InfluxDB Username} \
    -e INFLUXDB_PWD={Optional, InfluxDB Password} \
    -e INFLUXDB_DB={Pre-created DB name, starlinkstats works well} \
    neurocis/starlink-grpc-tools dish_grpc_influx.py -v status alert_detail

The -t option to docker run will prevent Python from buffering the script's standard output and can be omitted if you don't care about seeing the verbose output in the container logs as soon as it is printed.

The dish_grpc_influx.py -v status alert_detail part is optional; omitting it will run the same script without verbose output. You can also replace it with one of the other scripts if you wish to run that instead, or add other command line options. There is also a GrafanaDashboard - Starlink Statistics.json which can be imported to get some charts like:

[image: sample Grafana dashboard showing Starlink statistics charts]

You'll probably want to run with the -t option to dish_grpc_influx.py to collect status information periodically for this to be meaningful.
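For example, reusing the docker run command from above with a 30 second loop interval:

docker run -d -t --name='starlink-grpc-tools' [... same -e options as above ...] \
    neurocis/starlink-grpc-tools dish_grpc_influx.py -t 30 -v status alert_detail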

To Be Done (Maybe)

Maybe more data backend options. If there's one you'd like to see supported, please open a feature request issue.

There are reboot and dish_stow requests in the Device protocol, too, so it should be trivial to write a command that initiates dish reboot and stow operations. These are easy enough to do with grpcurl, though, as there is no need to parse through the response data. For that matter, they're easy enough to do with the Starlink app.
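For example, the grpcurl invocations would presumably look like this (request field names inferred from the request names above; verify against grpcurl describe output before relying on them):

grpcurl -plaintext -d {\"reboot\":{}} 192.168.100.1:9200 SpaceX.API.Device.Device/Handle
grpcurl -plaintext -d {\"dish_stow\":{}} 192.168.100.1:9200 SpaceX.API.Device.Device/Handle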

Proper Python packaging, since the dependency list keeps growing....

Some of the functionality implemented in the starlink-grpc module could be ported into starlink-json easily enough, but this won't be a priority unless someone asks for it.

Other Tidbits

The Starlink Android app actually uses port 9201 instead of 9200. Both appear to expose the same gRPC service, but the one on port 9201 uses gRPC-Web, which can use HTTP/1.1, whereas the one on port 9200 uses HTTP/2, which is what most gRPC tools expect.

The Starlink router also exposes a gRPC service, on ports 9000 (HTTP/2.0) and 9001 (HTTP/1.1).

The file get_history_notes.txt has my original ramblings on how to interpret the history buffer data (with the JSON format naming). It may be of use if you want to pull the get_history grpc data directly and don't want to dig through the convoluted logic in the starlink-grpc module.

Related Projects

ChuckTSI's Better Than Nothing Web Interface uses grpcurl and PHP to provide a spiffy web UI for some of the same data this project works on.

starlink-cli is another command line tool for interacting with the Starlink gRPC services, including the one on the Starlink router, in case Go is more your thing.

Comments
  • IndexError: list index (nnn) out of range (3155b85a fw)

    Just started seeing this this morning ... not sure what's going on yet, might've started with a fw update, since it started ~3:51am, local time.

    starlink-grpc-tools     | current counter:       23098
    starlink-grpc-tools     | All samples:           900
    starlink-grpc-tools     | Valid samples:         900
    starlink-grpc-tools     | Traceback (most recent call last):
    starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 330, in <module>
    starlink-grpc-tools     |     main()
    starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 311, in main
    starlink-grpc-tools     |     rc = loop_body(opts, gstate)
    starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 254, in loop_body
    starlink-grpc-tools     |     rc = dish_common.get_data(opts, gstate, cb_add_item, cb_add_sequence, add_bulk=cb_add_bulk)
    starlink-grpc-tools     |   File "/app/dish_common.py", line 200, in get_data
    starlink-grpc-tools     |     rc = get_history_stats(opts, gstate, add_item, add_sequence)
    starlink-grpc-tools     |   File "/app/dish_common.py", line 296, in get_history_stats
    starlink-grpc-tools     |     groups = starlink_grpc.history_stats(parse_samples,
    starlink-grpc-tools     |   File "/app/starlink_grpc.py", line 968, in history_stats
    starlink-grpc-tools     |     if not history.scheduled[i]:
    starlink-grpc-tools     | IndexError: list index (598) out of range
    

    Using the latest docker image.

    Looking back at the data captured before the update, I see it was on a different fw (ee5aa15c). What kind of diagnostics should I provide to help?

    opened by bdruth 19
  • Add support for pulling dish_get_obstruction_map data

    It looks like SpaceX added a 2 dimensional map of obstruction data (actually, it looks like it's SNR per direction) to the dish firmware at some point, as the mobile Starlink app has just added support for displaying it.

    I'm not sure how useful it would be to collect this data over time, given that the dish presumably creates this map from data it has collected over time already, and the app does a decent job of visualizing it. However, it might be of some interest to poll it somewhat infrequently, say once a day or so, and see how it changes over time.

    Adding this would probably require a better approach to recording array data in the database backends, though. The existing array storage is a bit too simplistic for this level of data. Also, not all the data backends would be appropriate for this. It may be better off as a completely separate script, maybe one that outputs an image instead of the raw data.

    enhancement 
    opened by sparky8512 16
  • Add more history stats

    Right now, the stats computed from the history data are all about packet loss, because that's mostly what I'm interested in tracking myself.

    However, users over on the Starlink subreddit seem really interested in latency stats over time and there has been a question recently about tracking upload/download usage. Both these things are reported by the status info scripts, but are only instantaneous numbers. This data is also present in the history data and it should be easy enough to add computation of more robust stats from that.

    enhancement 
    opened by sparky8512 16
  • Make work on arm64 (aka Raspberry Pi)

    Here's what I get when trying to run the Docker container on my RPi4b 8GB:

    $ sudo docker run --name='starlink-grpc-tools' ghcr.io/sparky8512/starlink-grpc-tools dish_grpc_text.py -v status alert_detail
    WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
    standard_init_linux.go:219: exec user process caused: exec format error
    

    Any chance of making an arm64 version of the container available?

    enhancement 
    opened by DurvalMenezes 15
  • Hangs after starting influx script

    Environment: CentOS 8 server (kernel 4.18.0-305.19.1.el8_4.x86_64), Python 3.6.8 in a venv

    command: python3 dish_grpc_influx.py -v status alert_detail -t 10 -U <username> -P <password>

    Results: Running the command (in this case as a service, but it does the same when not running as a service):

    Nov 02 22:18:17 BIL-STARLINK systemd[1]: Started Starlink Influx Script.
    

    Hangs at this point and never runs.

    opened by bile0026 11
  • InfluxDB optional parameter to transform true->1 and false->0 ?

    Hello @sparky8512,

    I was curious: since InfluxDB/Grafana is more friendly to null/numeric values when it comes to graphing and value mappings in Grafana, would it be possible to add an optional parameter to convert true to 1 and false to 0 on load into InfluxDB?

    See this for an example of the issue: https://stackoverflow.com/questions/60669691/boolean-to-text-mapping-in-grafana I can create value mappings in Grafana based on int values but not string values.

    Thanks!

    enhancement 
    opened by StephenShamakian 11
  • performance issue

    I'm running the following (from the docker image):

    dish_grpc_influx.py -t 10 --all-samples -v status obstruction_detail ping_drop ping_run_length ping_latency ping_loaded_latency usage alert_detail
    

    This is using 100% CPU constantly. That seems excessive to convert starlink data to influxdb. Are there any known performance issues here?

    opened by dustin 10
  • Type hints for `status_data` returns

    So off the top of my head, there are two approaches to this we can take.

    First (and my preferred) is to create a dataclass for attributes from status_data, so we would return a Tuple[StatusData, ObstructionDetails, Alerts]. This would be a breaking change, as references such as groups[0]["id"] become groups[0].id. I prefer this one because it's slightly less work to reference variables πŸ˜†

    Second is to create a class extending TypedDict. Referencing variables will look the same as it is now, but groups[0]["id"] would return str rather than Any.

    Both of these require creating new classes, and both mean we can deprecate both status_field_names and status_field_types, since this information is present in the return type.

    opened by boswelja 9
  • Dish firmware version change removed last_24h_obstructed_s field?

    Since yesterday looks like I've been getting this error:

        % python ./dish_grpc_text.py status
        Traceback (most recent call last):
          File "/Users/leadzero/workspace/starlink-grpc-tools/./dish_grpc_text.py", line 221, in <module>
            main()
          File "/Users/leadzero/workspace/starlink-grpc-tools/./dish_grpc_text.py", line 207, in main
            rc = loop_body(opts, gstate)
          File "/Users/leadzero/workspace/starlink-grpc-tools/./dish_grpc_text.py", line 174, in loop_body
            rc = dish_common.get_data(opts,
          File "/Users/leadzero/workspace/starlink-grpc-tools/dish_common.py", line 196, in get_data
            rc = get_status_data(opts, gstate, add_item, add_sequence)
          File "/Users/leadzero/workspace/starlink-grpc-tools/dish_common.py", line 231, in get_status_data
            groups = starlink_grpc.status_data(context=gstate.context)
          File "/Users/leadzero/workspace/starlink-grpc-tools/starlink_grpc.py", line 641, in status_data
            "seconds_obstructed": status.obstruction_stats.last_24h_obstructed_s,
        AttributeError: last_24h_obstructed_s

    Commenting out line 641 in /Users/leadzero/workspace/starlink-grpc-tools/starlink_grpc.py seems to work around it. Wondering if maybe a version update yesterday caused it, since it looks like my dish rebooted yesterday too.

    opened by leadZERO 8
  • Help with history

    Hi all, I just started writing a nice python script to mimic the Starlink web page. So far I've got everything I want working.

    But now I want to look at outage history. So in my script I do this:

        h = starlink_grpc.get_history()

    And then look at outages:

        lastout = h.outages[-1]

    And I get this:

        cause: NO_DOWNLINK
        start_timestamp_ns: 1341872909960043179
        duration_ns: 15259975072
        did_switch: true

    The timestamp when converted is off by 10 years!

        n = datetime.datetime.fromtimestamp(1341877680020042191/1000000000)

    and the value of n is datetime.datetime(2012, 7, 9, 19, 48, 0, 20042). It's only off by 10 years.... Am I missing something here?

    opened by bmillham 7
  • Multiple issues running tools on new(er) dish

    Running a new(er) dish/router, and ran into some issues -

    1. It seems the new port is 192.168.1.1:9000 for the RPC calls
    2. I wasn't able to use any of the "spacex.api" files until I touched an __init__.py
    3. My system (Older Pi running stretch) couldn't find influxdb and the whole requirements.txt install failed
    4. Traceback from dump_dish_status.py:

           Traceback (most recent call last):
             File "dump_dish_status.py", line 25, in <module>
               print("Connected" if response.dish_get_status.state ==
           AttributeError: 'DishGetStatusResponse' object has no attribute 'state'
    5. To be able to run some things, I needed to create spacex/api/device/dish_config.proto
    6. Running python3 dish_obstruction_map.py -e 192.168.1.1:9000 obstruction.png:

           Traceback (most recent call last):
             File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1242, in obstruction_map
               map_data = get_obstruction_map(context)
             File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1222, in get_obstruction_map
               return call_with_channel(grpc_call, context=context)
             File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 427, in call_with_channel
               return function(channel, *args, **kwargs)
             File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1219, in grpc_call
               timeout=REQUEST_TIMEOUT)
             File "/home/pi/.local/lib/python3.5/site-packages/grpc/_channel.py", line 946, in __call__
               return _end_unary_response_blocking(state, call, False, None)
             File "/home/pi/.local/lib/python3.5/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
               raise _InactiveRpcError(state)
           grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
               status = StatusCode.UNIMPLEMENTED
               details = "Unimplemented: *device.Request_DishGetObstructionMap"
               debug_error_string = "{"created":"@1653509590.780266717","description":"Error received from peer ipv4:192.168.1.1:9000","file":"src/core/lib/surface/call.cc","file_line":1070,"grpc_message":"Unimplemented: *device.Request_DishGetObstructionMap","grpc_status":12}"
           >

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "dish_obstruction_map.py", line 186, in <module>
        main()
      File "dish_obstruction_map.py", line 172, in main
        rc = loop_body(opts, context)
      File "dish_obstruction_map.py", line 28, in loop_body
        snr_data = starlink_grpc.obstruction_map(context)
      File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1244, in obstruction_map
        raise GrpcError(e)
    starlink_grpc.GrpcError: Unimplemented: *device.Request_DishGetObstructionMap

    but in spacex/api/device/dish_pb2.py I see:

        _DISHGETOBSTRUCTIONMAPREQUEST = _descriptor.Descriptor(
            name='DishGetObstructionMapRequest',
            full_name='SpaceX.API.Device.DishGetObstructionMapRequest',

    and

        _DISHGETOBSTRUCTIONMAPRESPONSE = _descriptor.Descriptor(
            name='DishGetObstructionMapResponse',
            full_name='SpaceX.API.Device.DishGetObstructionMapResponse',

    But not sure why now.

    Anything I can do for testing/help, lemme know!

    Tnx, Tuc

    opened by tuctboh 7
  • Make robust against field removal in grpc protocol

    (These are mostly notes to self, for anyone wondering why I'm being so long-winded about this...)

    Several times in the past, SpaceX has removed fields from the dish's grpc protocol that the scripts in this project were using, resulting in a script crash on attempt to read any data from the same category (status, history stats, bulk history, or location). While loss of that data is unavoidable unless it can be reconstructed from some other field, it would be much better if it didn't cause a crash. I mentioned in issue #65 that I would put some thought into how to accomplish that. After all, one of the intentions of the core module (starlink_grpc.py) is to insulate the calling scripts from the details of the grpc protocol, even though it's mostly just passing the data along as-is in dictionary form instead of protobuf structure.

    The problem here is a result of using grpc reflection to pull the protobuf message structure definitions instead of pre-compiling them using protoc and delivering them along with this project. It's pretty clear from the protobuf documentation that the intended usage is to pre-compile the protocol definitions, but there's a specific reason why I don't do it that way: SpaceX has not published those protocol definitions other than by leaving the reflection service enabled, and thus have not published them under a license that would allow for redistribution without violating copyright. Whether or not they care is a different story, but I'd prefer not to get their legal team thinking about why reflection is even enabled.

    Before I built use of reflection into the scripts (by way of yagrc), I avoided the copyright question by making users generate the protocol modules themselves. This complicated installation and caused some other problems, and ultimately it didn't actually fix this problem; it just moved it earlier, since it was still just getting the protocol definitions via reflection.

    So, other than use of pre-compiled protocol definitions, I can think of 2 main options:

    1. Wrap all (or at least most) access to message structure attributes inside a try clause that catches AttributeError or otherwise access them in a way that won't break if the attribute doesn't exist. This could get messy fast, since I don't want a single missing attribute to cause total failure, but would probably be manageable. It also really fights against the design intent of how rigidly these structures are defined by the protoc compiler, but I find myself not really caring about that, as it's exactly this rigidity that is causing the problem here.

    2. Stop using the protocol message structures entirely and switch to pulling fields by protobuf ID instead. This would make the guts of the code less readable, but would insulate this project against the possibility that SpaceX ever decides to disable the reflection service. I have no reason to believe they would do so, but I also have little reason to believe they wouldn't. I'm not sure how difficult this would be, though, as the low-level protobuf Python module is not meant to be used in this way, as far as I can tell. Also, this would still cause problems if SpaceX were to reuse field IDs from obsoleted fields that have been removed, but they haven't been doing that (and it's bad practice in general, as it can break their own client software), as far as I can tell. If I were writing this from scratch, knowing what I do now, I would probably try to do it this way.

    However it's done, I should note that this will make obsoleted fields less obvious. The removed data will just stop being reported by the tools. I have been using these failures as an opportunity to update the documentation on which items are obsolete, but I suspect this will still get noticed, and I'd rather have the documentation lag behind a bit than have the scripts break.

    enhancement 
    opened by sparky8512 1
Releases (v1.1.1)
  • v1.1.1(Nov 9, 2022)

    This is a bug fix release.

    Changes since 1.1.0:

    • Fix a crash when pulling status data introduced by latest firmware obsoleting 2 of the attributes in the grpc protocol used for the obstruction_detail mode group. That mode is pretty useless now, but it remains selectable for backwards compatibility purposes. Users of that mode group may want to look into the dish_obstruction_map.py script or related functions in the starlink_grpc module.
  • v1.1.0(Sep 18, 2022)

    This release added a new Python module prerequisite, so either repeat the pip command from the README file's installation instructions, or just do: pip install typing-extensions to get it.

    Major changes since 1.0.2:

    • Started adding type hints to the core module (starlink_grpc). While this is probably only of interest to developers who use that module directly, it is what added the new prerequisite.
    • Added new mode group location for physical location (GPS) data. Use of this mode requires some specific configuration of the dish, see the README file for details.
    • Added new item "is_snr_above_noise_floor" to the status mode group.
  • v1.0.2(Aug 19, 2022)

    This release is mostly just a roll-up of small changes in order to facilitate the export of the core module package.

    Changes since 1.0.1:

    • Added new script dish_control.py for (minimal) control of dish system state.
    • Minor changes around error case handling and small functionality enhancements.
    • Added packaging configuration for export of the starlink_grpc module as an installable pip package (named starlink-grpc-core) for use by other projects.
  • v1.0.1(Mar 4, 2022)

    Major changes since 1.0.0:

    • Add separate output script for InfluxDB 2.x servers that does not require the compatibility mode commands be run on the server
    • Improvements to the --poll-loops functionality to make it keep data better across script restart and better handle some dish connection failure cases
    • Add a JSON mode to the MQTT output script that allows for reporting a single correlated data set instead of individually publishing each data field
    • Add some systemd start script examples and better support for such in some of the output scripts
  • v1.0.0(Nov 9, 2021)

    First non-pre-release version.

    Code changes since 0.4.0:

    • Improvements to behavior around interrupted network connectivity to the dish

    Documentation changes since 0.4.0:

    • Moved a bunch of content out of the README to the project Wiki
    • Changed the Docker usage instructions to point to the image published to GitHub Packages repository by this project's Action workflow, thus changing the officially supported Docker image to that one
  • v0.4.0(Oct 25, 2021)

    Another development release. Things are basically stable at this point, but I want to re-vamp the README file before declaring a 1.0 release, as it would be dumb to have to bump the version number just for that.

    Major changes since 0.3.0:

    • Added -o option to poll history more frequently than stats are computed. Relevant only to the ping_* and usage mode groups.
    • Added dish direction and "prolonged" obstruction info to the status mode group.
    • Added new script dish_obstruction_map.py which emits a PNG image depicting directional obstruction data, as collected by the dish.
    • Removed usage of a number of fields in the gRPC response messages that have been obsoleted by recent dish firmware updates. This rendered useless the "snr" and "seconds_obstructed" items in the status mode group, the "snr" item in the bulk_history group, and the "obstructed" and "scheduled"/"unscheduled" items in the ping_drop and bulk_history mode groups.
  • v0.3.0(Feb 16, 2021)

    Yet another development release, but interfaces should be stable at this point. Would probably have called this 1.0 if it had proper packaging.

    Major changes since 0.2.0:

    • Protocol definition modules can now be loaded via reflection instead of having to generate them via protoc. This requires the yagrc Python package to be installed.
    • New raw_wedges_fraction_obstructed field in obstruction_detail group.
    • Counter state is now tracked for history stats groups, not just bulk history.
    • Dish IP and port can now be set to something other than the standard.
  • v0.2.0(Feb 5, 2021)

    Another development release, but getting close to declaring the interfaces stable enough for a real release.

    Major changes since 0.1.0:

    • Changed command line interface to combine status+history scripts: dishHistoryInflux.py, dishHistoryMqtt.py, dishHistoryStats.py, dishStatusCsv.py, dishStatusInflux.py, and dishStatusMqtt.py are replaced with dish_grpc_influx.py, dish_grpc_mqtt.py, and dish_grpc_text.py.
    • Added new script for sqlite output: dish_grpc_sqlite.py.
    • Added option for latency and usage summary stats.
  • v0.1.0(Jan 28, 2021)

    Development release.

    Mostly just tagging the repository here because the interfaces are about to change in order to combine scripts and reduce code duplication across the main scripts. This will affect both the command line interface and the starlink-grpc Python module interface.
