Overview

RasgoQL

Write Python locally, execute SQL in your data warehouse

« Read the Docs · Join Our Slack »

RasgoQL is a Python package that enables you to easily query and transform tables in your Data Warehouse directly from a notebook.

You can quickly create new features, sample data, apply complex aggregates... all without having to write SQL!

Choose from our library of predefined transformations or make your own to streamline the feature engineering process.

RasgoQL 30-second demo

Why is this package useful?

Data scientists spend much of their time in pandas preparing data for modelling. When they are ready to deploy or scale, two pain points arise:

  1. pandas cannot handle larger volumes of data, forcing a move to bigger VMs or a rewrite of the code.
  2. feature data must be added to the Enterprise Data Warehouse for future processing, which requires refactoring the pandas code into SQL.

We created RasgoQL to solve these two pain points.

Learn more at https://docs.rasgoql.com.

How does it work?

Under the covers, RasgoQL sends all processing to your Data Warehouse, enabling the efficient transformation of massive datasets. RasgoQL only needs basic metadata to execute transforms, so your private data remains secure.
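
As a small sketch of that flow (reusing the `dataset` object created in the Quick Start below; nothing runs locally until you explicitly ask for a DataFrame):

# Each transform call only builds SQL; inspect it before anything executes
weekly = dataset.datetrunc(dates={'ORDERDATE': 'week'})
print(weekly.sql())      # the rendered SQL that will run in your Data Warehouse

# Data moves to your machine only when you request it
weekly.preview()         # small sample as a pandas DataFrame
df = weekly.to_df()      # full result as a pandas DataFrame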

RasgoQL workflow diagram

RasgoQL does these things well:

  • Pulls existing Data Warehouse tables into pandas DataFrames for analysis
  • Constructs SQL queries using a syntax that feels like pandas
  • Creates views in your Data Warehouse to save transformed data
  • Exports runnable SQL in .sql files or dbt-compliant .yaml files (see the sketch just after this list)
  • Offers dozens of free SQL transforms to use
  • Coming Soon: allows users to create & add custom transforms
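
As a sketch of the "views" and "export" bullets above (the save() and to_dbt() method names appear in RasgoQL's own tracebacks and issue threads further down; the exact save() parameters are an assumption, so check the docs), a transform chain can be persisted or exported like this:

# Assumes `agg_weekly_sales` is a SQLChain like the one built in the Quick Start below
# Persist the chain as a view in your Data Warehouse (argument name is illustrative)
agg_weekly_sales.save(table_name='AGG_WEEKLY_SALES')

# Export the chain as a dbt model .sql file (plus a schema .yml file) into a local directory
agg_weekly_sales.to_dbt(output_directory='./models')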

Rasgo supports Snowflake, BigQuery, Postgres, and Amazon Redshift, with more Data Warehouses being added soon. If you'd like to suggest another database type, submit your idea to our GitHub Discussions page so that other community members can weigh in and show their support.
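
For example, connecting to BigQuery follows the same pattern as the Snowflake Quick Start below; the BigQueryCredentials fields shown here mirror the ones used in the issue reports further down (paths and names are placeholders):

import rasgoql

# Service-account JSON plus a default project and dataset
creds = rasgoql.BigQueryCredentials(
    json_filepath='/path/to/service_account.json',
    project='my-project',
    dataset='my_dataset',
)
rql = rasgoql.connect(creds)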

Can RasgoQL help you?

  • If you use pandas to build features, but you are working on a dataset too large to fit in your machine's memory, RasgoQL can help!

  • If your organization uses dbt or another SQL tool to run production data flows, but you prefer to build features in pandas, RasgoQL can help!

  • If you know pandas but not SQL and want to learn how your queries translate, RasgoQL can help!

Where to get it

Just run a simple pip install.

pip install rasgoql~=1.0
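
Warehouse-specific drivers ship as optional extras (for example rasgoql[snowflake], referenced in the "Trouble with import" issue below). In zsh, quote the requirement so the brackets aren't treated as a glob pattern:

pip install 'rasgoql[snowflake]'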

Report Bug · Suggest Improvement · Request Feature

Quick Start

pip install rasgoql --upgrade

import rasgoql

# Build credentials for your data warehouse
creds = rasgoql.SnowflakeCredentials(
    account="",
    user="",
    password="",
    role="",
    warehouse="",
    database="",
    schema=""
)

# Connect to DW
rql = rasgoql.connect(creds)

# List available tables
rql.list_tables('ADVENTUREWORKS').head(10)

# Allow rasgoQL to interact with an existing Table in your Data Warehouse
dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

# Take a peek at the data
dataset.preview()

# Use the datetrunc transform to truncate order dates to weekly grain
weekly_sales = dataset.datetrunc(dates={'ORDERDATE':'week'})

# Aggregate to sum of sales for each week
agg_weekly_sales = weekly_sales.aggregate(
    group_by=['PRODUCTKEY', 'ORDERDATE_WEEK'],
    aggregations={'SALESAMOUNT': ['SUM']},
    )

# Quickly validate output
agg_weekly_sales.to_df()

# Print the SQL
print(agg_weekly_sales.sql())

Getting Started Tutorials

The best way to get familiar with the RasgoQL basics is by running through the notebooks in the tutorials folder.

Advanced Examples

Joins

Easily join tables together using the join transform.

sales_dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

sales_product_dataset = sales_dataset.join(
  join_table='DIM_PRODUCT',
  join_columns={'PRODUCTKEY': 'PRODUCTKEY'},
  join_type='LEFT',
  join_prefix='PRODUCT')

sales_product_dataset.sql()
sales_product_dataset.preview()

Rasgo Join Example

Chain transforms together

Create rolling aggregations and then drop unnecessary columns.

sales_agg_drop = sales_dataset.rolling_agg(
    aggregations={"SALESAMOUNT": ["MAX", "MIN", "SUM"]},
    order_by="ORDERDATE",
    offsets=[-7, 7],
    group_by=["PRODUCTKEY"],
).drop_columns(exclude_cols=["ORDERDATEKEY"])

sales_agg_drop.sql()
sales_agg_drop.preview()

Multiple rasgoql transforms

Transpose unique values with pivots

Quickly generate pivot tables of your data.

sales_by_product = sales_dataset.pivot(
    dimensions=['ORDERDATE'],
    pivot_column='SALESAMOUNT',
    value_column='PRODUCTKEY',
    agg_method='SUM',
    list_of_vals=['310', '345'],
)

sales_by_product.sql()
sales_by_product.preview()

Rasgoql pivot example

Does any of my data get collected?

Rasgo does not collect any personal information. We log the success or failure of method executions in transforms.py so that we can more accurately track what's useful and what's problematic.

Where do I go for help?

If you have any questions, please reach out via:

  1. RasgoQL Docs
  2. Slack
  3. GitHub Issues

How can I contribute?

Review the contributors guide

License

RasgoQL uses the GNU AGPL license, as found in the LICENSE file.

This project is sponsored by RasgoML. Find out more at https://www.rasgoml.com/

Comments
  • [BigQuery] fqtn is not valid if project name contains '-'

    Hello,

    The fqtn of the table I want to get follows this pattern: my-awesome-project.schema.table. I tried to get it using rql.dataset(fqtn="my-awesome-project.schema.table") but I get a ValueError: my-awesome-project.schema.table is not a well-formed fqtn. It seems that the validate_fqtn() function applies the regex \w+\.\w+\.\w+, which doesn't accept my GCP project name pattern. Is there a way to make this work without changing my GCP project name?
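
    A minimal reproduction of the mismatch, assuming the validator really uses the regex quoted above (the pattern variable below is just for illustration):

    import re

    # Pattern reported in this issue
    FQTN_PATTERN = re.compile(r'\w+\.\w+\.\w+')

    # '-' is not matched by \w, so a hyphenated GCP project name fails validation
    print(bool(FQTN_PATTERN.fullmatch('my-awesome-project.schema.table')))  # False
    print(bool(FQTN_PATTERN.fullmatch('my_project.schema.table')))          # True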

    Thank you for this awesome package, I can't wait to try it ! ❤️ 🚀

    bug 
    opened by amirbtb 10
  • [BigQuery] `to_dbt()` raises IndexError

    Hi,

    I am trying to generate dbt files (.sql & .yml) from a SQLChain using to_dbt(). The source table is a regular table. I'm using rasgoql 1.0.2a2.

    Here is my code; I'm just trying to generate base SQL code for casting the table:

    import rasgoql
    from rasgoql import BigQueryCredentials
    
    PROJECT = "my-project"
    DATASET = "dataset"
    
    creds = BigQueryCredentials(
        json_filepath="/credentials/path",
        project=PROJECT,
        dataset=DATASET
    )
    rql = rasgoql.connect(creds)
    
    ds = rql.dataset(fqtn=f"{PROJECT}.{DATASET}.table")
    
    schema_dict = {column:data_type for column, data_type in ds.get_schema()}
    schema_dict
    
    ds_casted = ds.transform(
      transform_name='cast',
      casts=schema_dict
    )
    
    ds_casted.to_dbt('./test')

    I get the following error:

    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    /src/test.ipynb Cell [1]
    ----> 1 ds_casted.to_dbt('./test')

    File /usr/local/lib/python3.8/dist-packages/rasgoql/utils/decorators.py:40, in beta.<locals>.wrapper(*args, **kwargs)
         30 @functools.wraps(func)
         31 def wrapper(*args, **kwargs):
         32     logger.info(
         33         f'{func.__name__} is a beta feature. '
         34         'Its functionality and parameters may change in future versions and '
       (...)
         38         'or contact us directly on slack.'
         39     )
    ---> 40     return func(*args, **kwargs)

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/transforms.py:402, in SQLChain.to_dbt(self, output_directory, file_name, config_args, include_schema)
        394         chn_logger.warning(
        395             'Unexpected error generating the schema of this SQLChain. '
        396             'Your model.sql file will be generated without a schema.yml file. '
       (...)
        399             'your_chn.save() to update the view definition in your Data Warehouse.'
        400         )
        401     schema = []
    --> 402 return create_dbt_files(
        403     self.transforms,
        404     schema,
        405     output_directory,
        406     file_name,
        407     config_args,
        408     include_schema
        409 )

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:105, in create_dbt_files(transforms, schema, output_directory, file_name, config_args, include_schema)
        102 output_directory = output_directory or os.getcwd()
        103 file_name = file_name or f'{transforms[-1].output_alias}.sql'
        104 return save_model_file(
    --> 105     sql_definition=assemble_cte_chain(transforms),
        106     output_directory=output_directory,
        107     file_name=file_name,
        108     config_args=config_args,
        109     include_schema=include_schema,
        110     schema=schema
        111 )

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:36, in assemble_cte_chain(transforms, table_type)
         34     t = transforms[0]
         35     create_stmt = _set_create_statement(table_type, t.fqtn)
    ---> 36     final_select = generate_transform_sql(
         37         t.name,
         38         t.arguments,
         39         t.source_table,
         40         None,
         41         t._dw
         42     )
         43     return create_stmt + final_select
         45 # Handle multi-transform chains

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:124, in generate_transform_sql(name, arguments, source_table, running_sql, dw)
        120 """
        121 Returns the SQL for a Transform with applied arguments
        122 """
        123 templates = rtx.serve_rasgo_transform_templates(dw.dw_type)
    --> 124 udt: 'TransformTemplate' = [t for t in templates if t.name == name][0]
        125 if not udt:
        126     raise TransformRenderingError(f'Cannot find a transform named {name}')

    IndexError: list index out of range
    
    bug 
    opened by amirbtb 9
  • Support upstream snowflake connector

    Is your feature request related to a problem? Please describe.
    Need more of the Snowflake connection options that are defined here: https://github.com/snowflakedb/snowflake-connector-python/blob/main/src/snowflake/connector/connection.py#L112

    Describe the solution you'd like
    The ability to directly use the Snowflake connector, or all of its options.

    enhancement 
    opened by pbarker 6
  • `ds.transform(name='cast',casts=cast_dict)` creates duplicate columns  | BigQuery

    Hi,

    The preview of ds.transform(name='cast', casts=cast_dict) shows a dataset with both the old and the new (casted) columns. I took a look at cast.sql and I see that it starts with a SELECT *. Suggestion: I believe ds.transform(name='cast', casts=cast_dict) should cast the provided columns while keeping the other ones unchanged.

    Thank you 🙏

    enhancement 
    opened by amirbtb 6
  • Support fetching batches from Snowflake

    Is your feature request related to a problem? Please describe.
    Hey, love the tool. I am loading large datasets that won't fit into memory.

    Describe the solution you'd like
    Would like to use https://docs.snowflake.com/en/user-guide/python-connector-api.html#fetch_pandas_batches
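
    For reference, this is the upstream snowflake-connector-python API the request points to; a rough sketch of using it directly (connection parameters omitted, process() is a placeholder):

    import snowflake.connector

    conn = snowflake.connector.connect(account='...', user='...', password='...')
    cur = conn.cursor()
    cur.execute('SELECT * FROM ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

    # Stream the result set as a sequence of pandas DataFrames instead of one large frame
    for batch_df in cur.fetch_pandas_batches():
        process(batch_df)  # placeholder for your own batch handling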

    enhancement 
    opened by pbarker 5
  • `to_dbt()` creates a view in the schema of the source table  | BigQuery

    Hi,

    After running to_dbt(), I noticed that a view is created in the BigQuery schema where the source table (a normal table) is located. The view has the same name as the .sql file created in the output path I provided to to_dbt(), and its query is similar to the content of that .sql file. The ability to quickly create a view based on all the transformations performed via RasgoQL is very useful, but I'm not sure it should be a default output of to_dbt().

    Again, thank you for your work ! 🙏🏽

    bug 
    opened by amirbtb 5
  • Trouble with import

    Describe the bug
    I get this error when I try to pip install rasgoql[snowflake]: no matches found: rasgoql[snowflake]

    Prior to running this, I successfully downloaded the snowflake connector...

    To Reproduce
    Go to bash and type: pip install rasgoql[snowflake]

    Expected Behavior: successful import

    Actual Behavior: bash returns: no matches found: rasgoql[snowflake]

    Version Information: rasgoql==1.1.1, rasgotransforms==1.1.3

    Additional context
    I'm trying to connect to a trial snowflake account

    opened by mashhype 3
  • `ds.concat` doesn't accept the `name` argument | BigQuery

    Hello,

    I tried to use the ds.concat() method as shown in the example in the documentation. ds.concat(concat_list=['first_column', "'-'", 'second_column'], name="both_columns") returns the following error:

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/transforms.py:54, in TransformableClass._create_aliased_function.<locals>.f(*arg, **kwargs)
         53 def f(*arg, **kwargs) -> 'SQLChain':
    ---> 54     return self.transform(name=transform.name, *arg, **kwargs)

    TypeError: transform() got multiple values for keyword argument 'name'

    When I don't provide the name argument, the function works.

    Thanks again !

    bug 
    opened by amirbtb 3
  • New DB Request | BigQuery

    Would like to use RasgoQL with BigQuery

    Open questions:

    • will transform templates need changes to support BigQuery SQL syntax?
    • may need to support Google OAuth login since creds are typically tied to a Google account
    enhancement 
    opened by cpdough 3
  • Nest Transform Arguments

    This PR allows a transform to accept a Dataset or SQLChain as an input argument. The new logic flattens the primitive to either a fqtn or a CTE wrapped in parentheses and nests it in the running CTE. I don't know why this works, but 10 Budweisers can't be wrong. JK! It was 11.

    opened by griffatrasgo 1
  • #47 RAS-2651 Adding Amazon Redshift

    Adding Amazon Redshift support.

    Test file attached here _test_demo_redshift.zip

    Need the following environment variables:

    REDSHIFT_USER="<dbuser>"
    REDSHIFT_PASSWORD="<dbpass>"
    REDSHIFT_DATABASE="dev"
    REDSHIFT_SCHEMA="public"
    REDSHIFT_HOST="<cluster-host>"
    REDSHIFT_PORT=5439
    REDSHIFT_DB_USER="<dbuser>"
    
    opened by ChrisGriffithRASGO 1
  • Document connecting to data warehouses using dictionary args

    What feature are you requesting?

    There aren't any docs on connecting to a data warehouse using a dictionary. The current credential classes are limited and gave the impression I wouldn't be able to connect to my warehouse

    Are you using a workaround to do it in or outside of the product today?

    Read the code and figured it out

    How important is this feature to your continued use of the package? Can you qualify the value / importance of this feature in any way?

    I think this is pretty important since it gave me the impression I couldn't use this product

    enhancement 
    opened by pbarker 2
Releases (latest: 1.6.4)
  • 1.6.4(Jul 5, 2022)

    Version 1.6.4 - 2022-07-05

    Changed

    • Changed default behavior of to_dbt function. Instead of always appending model details to the schema.yml file (which creates duplicate entries for existing models), rql will now check if a model entry already exists in the file and overwrite it. If the model does not exist, it will be appended.
  • 1.6.3(Jun 28, 2022)

  • 1.6.2(Jun 27, 2022)

  • 1.6.1(Jun 27, 2022)

    Version 1.6.1 - 2022-06-27

    Fixed

    • Fixed a bug in the get_schema method of SQLAlchemy DW classes where users were being asked to enter an overwrite param they cannot access
  • 1.6.0(Jun 23, 2022)

    Version 1.6.0 - 2022-06-23

    Changed

    • Changed the get_schema method on all DW classes to accept a single fqtn_or_sql variable
    • Changed the behavior of transform arguments: when a Dataset or SQLChain class is passed in as an argument to a transform, it is automatically flattened to its corresponding fqtn or CTE then consumed in the transform.
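
    A hedged illustration of the second change (whether join_table accepts a Dataset directly is an assumption based on this changelog entry; adjust to the transform you actually use):

    # Illustrative only: pass a Dataset primitive where a table name was previously required;
    # per this release note it is flattened to its fqtn (or a CTE) before the SQL is rendered
    products = rql.dataset('ADVENTUREWORKS.PUBLIC.DIM_PRODUCT')
    sales_with_products = sales_dataset.join(
        join_table=products,
        join_columns={'PRODUCTKEY': 'PRODUCTKEY'},
        join_type='LEFT',
    )
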
  • 1.5.6(Jun 21, 2022)

    Version 1.5.6 - 2022-06-20

    Changed

    • Changed the get_schema method on Snowflake and BigQuery DW classes to get output columns without creating views
  • 1.5.5(Jun 7, 2022)
