Generate and Visualize Data Lineage from query history

Overview

Tokern Lineage Engine


Tokern Lineage Engine is a fast, easy-to-use application for collecting, visualizing, and analyzing column-level data lineage in databases, data warehouses, and data lakes in AWS and GCP.

Tokern Lineage helps you browse column-level data lineage.

Resources

  • Demo of Tokern Lineage App


Quick Start

Install a demo using Docker and Docker Compose

Download the docker-compose file from the GitHub repository.

# in a new directory run
wget https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/catalog-demo.yml
# or run
curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/catalog-demo.yml -o docker-compose.yml

Run docker-compose (add -f catalog-demo.yml if you kept the wget filename):

docker-compose up -d

Check that the containers are running.

docker ps
CONTAINER ID   IMAGE                            COMMAND   CREATED       STATUS       PORTS                  NAMES
3f4e77845b81   tokern/data-lineage-viz:latest   ...       4 hours ago   Up 4 hours   0.0.0.0:8000->80/tcp   tokern-data-lineage-visualizer
1e1ce4efd792   tokern/data-lineage:latest       ...       5 days ago    Up 5 days                           tokern-data-lineage
38be15bedd39   tokern/demodb:latest             ...       2 weeks ago   Up 2 weeks                          tokern-demodb

Try out Tokern Lineage App

Head to http://localhost:8000/ to open the Tokern Lineage app.
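Once the demo is up, you can also drive the app through its Python client. The sketch below is a minimal, hedged example based on the client calls quoted in the issue threads further down this page (Catalog, add_source, and the Scan API from the "Demo is wrong" report); the connection parameters are placeholders, not the demo's actual credentials.

from data_lineage import Catalog, Scan

# Point the client at the visualizer published on port 8000.
docker_address = "http://127.0.0.1:8000"
catalog = Catalog(docker_address)

# Register a database with the app. Everything besides name and source_type
# is a placeholder -- substitute your own connection details.
source = catalog.add_source(
    name="my_warehouse",
    source_type="postgresql",
    username="<user>",
    password="<password>",
    uri="<host>",
    database="<database>",
)

# Scan the source to register its schemata, tables and columns.
Scan(docker_address).start(source)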

Install Tokern Lineage Engine

# in a new directory run
wget https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/tokern-lineage-engine.yml
# or run
curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/tokern-lineage-engine.yml -o tokern-lineage-engine.yml

Run docker-compose

docker-compose -f tokern-lineage-engine.yml up -d

If you want to use an external Postgres database, change the following parameters in tokern-lineage-engine.yml:

  • CATALOG_HOST
  • CATALOG_USER
  • CATALOG_PASSWORD
  • CATALOG_DB

You can also override the default values using environment variables:

CATALOG_HOST=... CATALOG_USER=... CATALOG_PASSWORD=... CATALOG_DB=... docker-compose -f ... up -d

For more advanced usage of environment variables with docker-compose, refer to the docker-compose documentation.

Pro-tip

If you want to connect to a database on the host machine, set:

CATALOG_HOST: host.docker.internal # macOS or Windows
# OR
CATALOG_HOST: 172.17.0.1 # Linux

Supported Technologies

  • Postgres
  • AWS Redshift
  • Snowflake

Coming Soon

  • SparkSQL
  • Presto

Documentation

For advanced usage, please refer to the data-lineage documentation.

Survey

Please take this survey if you are using or considering data-lineage. Responses will help us prioritize features.

Comments
  • Error while parsing queries from json file

    I was able to successfully load the catalog using dbcat, but I'm getting the following error when I try to parse queries from a file in JSON format (I also tried the given test file):

    File "~/Python/3.8/lib/python/site-packages/data_lineage/parser/init.py", line 124, in parse name = str(hash(sql)) TypeError: unhashable type: 'dict'

    Here's line 124: https://github.com/tokern/data-lineage/blob/f347484c43f8cb9b97c44086dd2557e3b40904ab/data_lineage/parser/__init__.py#L124

    Code executed:

    from dbcat import catalog_connection
    from data_lineage.parser import parse_queries, visit_dml_query
    import json
    
    with open("queries2.json", "r") as file:
        queries = json.load(file)
    
    catalog_conf = """
    catalog:
      user: test
      password: [email protected]
      host: 127.0.0.1
      port: 5432
      database: postgres
    """
    catalog = catalog_connection(catalog_conf)
    
    parsed = parse_queries(queries)
    
    visited = visit_dml_query(catalog, parsed)
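    A plausible workaround, given that the TypeError shows hash() being called on a dict: flatten each JSON entry to its SQL string before calling parse_queries. The "query" key below is an assumption about the file's shape; match it to what queries2.json actually contains.

    import json

    from data_lineage.parser import parse_queries

    with open("queries2.json", "r") as file:
        raw = json.load(file)

    # parse_queries hashes each entry, so dict entries raise TypeError;
    # extract the SQL text first ("query" is an assumed key).
    queries = [q["query"] if isinstance(q, dict) else q for q in raw]
    parsed = parse_queries(queries)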
    
    Support 
    opened by siva-mudiyanur 42
  • Snowflake source defaulting to prod even though I'm specifying a different db name

    I'm adding a Snowflake source as follows, where sf_db_name is my database name, e.g. snowfoo (verified in the debugger):

    source = catalog.add_source(
        name=f"sf1_{time.time_ns()}",
        source_type="snowflake",
        database=sf_db_name,
        username=sf_username,
        password=sf_password,
        account=sf_account,
        role=sf_role,
        warehouse=sf_warehouse,
    )
    

    ... but when it goes to scan, it looks like the code thinks my database name is 'prod':

    tokern-data-lineage | sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 002003 (02000): SQL compilation error:
    tokern-data-lineage | Database 'PROD' does not exist or not authorized.
    tokern-data-lineage | [SQL:
    tokern-data-lineage |     SELECT
    tokern-data-lineage |         lower(c.column_name) AS col_name,
    tokern-data-lineage |         c.comment AS col_description,
    tokern-data-lineage |         lower(c.data_type) AS col_type,
    tokern-data-lineage |         lower(c.ordinal_position) AS col_sort_order,
    tokern-data-lineage |         lower(c.table_catalog) AS database,
    tokern-data-lineage |         lower(c.table_catalog) AS cluster,
    tokern-data-lineage |         lower(c.table_schema) AS schema,
    tokern-data-lineage |         lower(c.table_name) AS name,
    tokern-data-lineage |         t.comment AS description,
    tokern-data-lineage |         decode(lower(t.table_type), 'view', 'true', 'false') AS is_view
    tokern-data-lineage |     FROM
    tokern-data-lineage |         prod.INFORMATION_SCHEMA.COLUMNS AS c
    tokern-data-lineage |     LEFT JOIN
    tokern-data-lineage |         prod.INFORMATION_SCHEMA.TABLES t
    tokern-data-lineage |             ON c.TABLE_NAME = t.TABLE_NAME
    tokern-data-lineage |             AND c.TABLE_SCHEMA = t.TABLE_SCHEMA
    tokern-data-lineage |      ;
    tokern-data-lineage |     ]
    tokern-data-lineage | (Background on this error at: http://sqlalche.me/e/13/f405)
    

    I'm trying to look through the tokern code repos to see where the disconnect might be happening, but I'm not sure yet.

    opened by peteclark3 10
  • Any way to increase timeout for scanning?

    When I add my snowflake DB for scanning, using this bit of code (with the values replaced as per my snowflake database):

    from data_lineage import Catalog
    
    catalog = Catalog(docker_address)
    
    # Register wikimedia datawarehouse with data-lineage app.
    
    source = catalog.add_source(name="wikimedia", source_type="postgresql", **wikimedia_db)
    
    # Scan the wikimedia data warehouse and register all schemata, tables and columns.
    
    catalog.scan_source(source)
    

    ... I get

    tokern-data-lineage-visualizer | 2021/10/08 21:51:40 [error] 34#34: *1 upstream prematurely closed connection while reading response header from upstream, client: 10.10.0.1, server: , request: "POST /api/v1/catalog/scanner HTTP/1.1", upstream: "http://10.10.0.3:4142/api/v1/catalog/scanner", host: "127.0.0.1:8000"
    

    ... I think it's because snowflake isn't returning fast enough... but I'm not sure. Tried updating the warehouse size to large to make the scan faster, but getting the same thing. Seems like it times out pretty fast... at least for my large database. Any ideas?

    Python 3.8.0 in an isolated venv, 0.8.3 data-lineage. Thanks for this package!

    opened by peteclark3 10
  • CTE visiting

    Currently it doesn't appear that the dml_visitor will walk through the common table expressions to build the lineage. Am I interpreting this wrong? Within visitor.py, lines 45 and 61 both visit the "with clause". There doesn't seem to be any functionality for handling the CommonTableExpr or CTEs within the parsed statements. This causes any statements with CTEs to throw an error when calling parse_queries, as no table is found when attempting to bind a CTE in a FROM clause. A minimal example of the failing pattern is sketched below.
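    A minimal sketch of the failing pattern (table names are hypothetical; this assumes, per the report above, that binding the CTE name as a table is what raises):

    from data_lineage.parser import parse_queries

    # Hypothetical CTE statement: binding 'recent' in the FROM clause fails
    # because no table named 'recent' exists in the catalog.
    query = """
    WITH recent AS (SELECT page_id, page_title FROM page)
    INSERT INTO page_lookup SELECT page_id, page_title FROM recent
    """
    parsed = parse_queries([query])  # reported to raise for CTE statements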

    opened by dhuettenmoser 8
  • Debian Buster can't find version

    Hi, I'm trying to install version 0.8 in a Docker image based on Debian Buster, and when pip runs the install it prints the following warning/error:

    #12 9.444 Collecting data-lineage==0.8.0 (from -r /project/requirements.txt (line 25))
    #12 9.466   Could not find a version that satisfies the requirement data-lineage==0.8.0 (from -r /project/requirements.txt (line 25)) (from versions: 0.1.2, 0.2.0, 0.3.0, 0.5.1, 0.5.2, 0.6.0, 0.7.0)
    #12 9.541 No matching distribution found for data-lineage==0.8.0 (from -r /project/requirements.txt (line 25))
    

    Is this normal behavior? Do I have to add something before trying to install?

    opened by jesusjackson 5
  • problem with docker-compose

    Hello! I'm using these docs: https://tokern.io/docs/data-lineage/installation

    1. curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose-demodb/docker-compose.yml -o docker-compose.yml
    2. docker-compose up -d
    3. Got the error: ERROR: In file './docker-compose.yml', the services name 404 must be a quoted string, i.e. '404'.

    opened by kirill3000 4
  • cannot import name 'parse_queries' from 'data_lineage.parser'

    Hi, I am trying to parse query history from Snowflake in a Jupyter notebook.

    data-lineage version 0.3.0

    !pip install snowflake-connector-python[secure-local-storage,pandas] data-lineage
    
    import datetime
    end_time = datetime.datetime.now()
    start_time = end_time - datetime.timedelta(days=7)
    
    query = f"""
    SELECT query_text
    FROM table(information_schema.query_history(
        end_time_range_start=>to_timestamp_ltz('{start_time.isoformat()}'),
        end_time_range_end=>to_timestamp_ltz('{end_time.isoformat()}')));
    """
    
    cursors = conn.execute_string(
        sql_text=query
    )
    
    queries = []
    for cursor in cursors:
      for row in cursor:
        print(row[0])
        queries.append(row[0])
    
    from data_lineage.parser import parse_queries, visit_dml_queries
    
    # Parse all queries
    parsed = parse_queries(queries)
    
    # Visit the parse trees to extract source and target queries
    visited = visit_dml_queries(catalog, parsed)
    
    # Create a graph and visualize it
    
    from data_lineage.parser import create_graph
    graph = create_graph(catalog, visited)
    
    import plotly
    plotly.offline.iplot(graph.fig())
    

    Then I got this error. Would you help me find the root cause?

    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-33-151c67ea977c> in <module>
    ----> 1 from data_lineage.parser import parse_queries, visit_dml_queries
          2 
          3 # Parse all queries
          4 parsed = parse_queries(queries)
          5 
    
    ImportError: cannot import name 'parse_queries' from 'data_lineage.parser' (/opt/conda/lib/python3.8/site-packages/data_lineage/parser/__init__.py)
    
    opened by yohei1126 4
  • Not an issue with data-lineage but issue with required package: pglast

    Opening this issue to let everyone know until it gets fixed: the installation of pglast fails because it requires an xxhash.h file. Here's the link to the issue and how to resolve it: https://github.com/lelit/pglast/issues/82

    Please feel free to close this if you think it's inappropriate.

    opened by siva-mudiyanur 3
  • What query format to pass to Analyzer.analyze(...)?

    I am trying to use this example: https://tokern.io/docs/data-lineage/queries ... First issue: this bit of code looks like it's only going to fetch a single row of query history from Snowflake:

    queries = []
    with connection.get_cursor() as cursor:
      cursor.execute(query)
      row = cursor.fetchone()
    
      while row is not None:
        queries.append(row[0])
    

    ... is this intended? Note that it's using .fetchone()
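    For reference, a corrected version of that loop would advance the cursor inside the loop so it terminates (a sketch reusing the names from the docs' snippet):

    queries = []
    with connection.get_cursor() as cursor:
        cursor.execute(query)
        row = cursor.fetchone()
        while row is not None:
            queries.append(row[0])
            row = cursor.fetchone()  # fetch the next row; otherwise this loops forever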

    Then, the second issue: when I go back to the example here: https://tokern.io/docs/data-lineage/example

    I see this bit of code...

    analyze = Analyze(docker_address)
    
    for query in queries:
        print(query)
        analyze.analyze(**query, source=source, start_time=datetime.now(), end_time=datetime.now())
    

    ... what does the queries array look like? Or better yet, what does the single query item look like? Above it, in the example, it looks to be a JSON payload....

    with open("queries.json", "r") as file:
        queries = json.load(file)
    

    .... but I've no idea what the payload is supposed to look like.

    I've tried 8 different ways of passing this **query variable into analyze(...) - using the results from the snowflake example on https://tokern.io/docs/data-lineage/queries - but I can never seem to get it right. Either I get an error saying that ** expects a mapping when I use strings or tuples (which is fine, but what's the mapping the function expects?) - or I get an error in the API console itself like

    tokern-data-lineage |     raise ValueError('Bad argument, expected a ast.Node instance or a tuple')
    tokern-data-lineage | ValueError: Bad argument, expected a ast.Node instance or a tuple
    

    Could we get a more concrete Snowflake example, or at the bare minimum an indication of what the query variable is supposed to look like?

    Note that I am also trying to inspect the unit tests and use those as examples, but still not getting very far.

    Thanks for this package!

    opened by peteclark3 2
  • Support for large queries

    calling

    analyze.analyze(**{"query":query}, source=dl_source, start_time=datetime.now(), end_time=datetime.now())
    

    with a large query, I get a "request too long" error. It seems that even though it is POSTing, the client still appends the query to the URL, so the request fails, e.g.

    tokern-data-lineage-visualizer | 10.10.0.1 - - [14/Oct/2021:14:39:00 +0000] "POST /api/v1/analyze?query=ANY_REALLY_LONG_QUERY_HERE
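    A possible client-side workaround, assuming (unverified) that the analyze endpoint also accepts its parameters in the JSON body rather than the query string, would be to POST the payload directly. The endpoint path is taken from the log line above; the field names and source_id are hypothetical:

    import datetime
    import requests

    # Hypothetical sketch: whether the server reads these fields from the
    # body instead of the query string is an unverified assumption.
    resp = requests.post(
        "http://127.0.0.1:8000/api/v1/analyze",
        json={
            "query": query,  # the large SQL text
            "source_id": 1,  # hypothetical source identifier
            "start_time": datetime.datetime.now().isoformat(),
            "end_time": datetime.datetime.now().isoformat(),
        },
    )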
    
    opened by peteclark3 2
  • Conflicting package dependencies

    amundsen-databuilder, one of the package dependencies of dbcat, requires flask 1.0.2, whereas data-lineage requires flask 1.1.

    Please feel free to close this if it's not a valid issue.

    opened by siva-mudiyanur 2
  • chore(deps): Bump certifi from 2021.5.30 to 2022.12.7

    Bumps certifi from 2021.5.30 to 2022.12.7.


    dependencies 
    opened by dependabot[bot] 0
  • Redis dependency not documented

    Trying out the demo, I saw the scan command fail (see also https://github.com/tokern/data-lineage/issues/106), with the server looking for port 6379 on localhost. Sure enough, starting a local Redis removed the problem. Can this be documented? It looks like the docker-compose file includes it; the instructions just don't mention it.

    opened by debedb 0
  • Demo is wrong

    Trying out the demo, I tried to run catalog.scan_source(source), but that does not exist. After some digging, it looks like this works:

    from data_lineage import Scan
    
    Scan('http://127.0.0.1:8000').start(source)
    

    Please fix the demo pages.

    opened by debedb 0
  • Use markupsafe==2.0.1

    $ data_lineage --catalog-user xxx --catalog-password yyy
    Traceback (most recent call last):
      File "/opt/homebrew/bin/data_lineage", line 5, in <module>
        from data_lineage.__main__ import main
      File "/opt/homebrew/lib/python3.9/site-packages/data_lineage/__main__.py", line 7, in <module>
        from data_lineage.server import create_server
      File "/opt/homebrew/lib/python3.9/site-packages/data_lineage/server.py", line 5, in <module>
        import flask_restless
      File "/opt/homebrew/lib/python3.9/site-packages/flask_restless/__init__.py", line 22, in <module>
        from .manager import APIManager  # noqa
      File "/opt/homebrew/lib/python3.9/site-packages/flask_restless/manager.py", line 24, in <module>
        from flask import Blueprint
      File "/opt/homebrew/lib/python3.9/site-packages/flask/__init__.py", line 14, in <module>
        from jinja2 import escape
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/__init__.py", line 12, in <module>
        from .environment import Environment
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/environment.py", line 25, in <module>
        from .defaults import BLOCK_END_STRING
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/defaults.py", line 3, in <module>
        from .filters import FILTERS as DEFAULT_FILTERS  # noqa: F401
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/filters.py", line 13, in <module>
        from markupsafe import soft_unicode
    ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/opt/homebrew/lib/python3.9/site-packages/markupsafe/__init__.py)
    
    

    Looks like soft_unicode was removed in markupsafe 2.1.0. You may want to pin markupsafe==2.0.1.

    opened by debedb 0
  • MySQL client binaries seem to be required

    This is probably due to SQLAlchemy's requirement of mysqlclient, but when doing

    pip install data-lineage
    

    The following is seen

    Collecting mysqlclient<3,>=1.3.6
      Using cached mysqlclient-2.1.1.tar.gz (88 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
      
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [16 lines of output]
          /bin/sh: mysql_config: command not found
          /bin/sh: mariadb_config: command not found
          /bin/sh: mysql_config: command not found
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "/private/var/folders/th/yz4tb0ss5t3_4df1xnfrkg3r0000gn/T/pip-install-auypdvbk/mysqlclient_42a825d5ee084d6686c16912ef8320cc/setup.py", line 15, in <module>
              metadata, options = get_config()
            File "/private/var/folders/th/yz4tb0ss5t3_4df1xnfrkg3r0000gn/T/pip-install-auypdvbk/mysqlclient_42a825d5ee084d6686c16912ef8320cc/setup_posix.py", line 70, in get_config
              libs = mysql_config("libs")
            File "/private/var/folders/th/yz4tb0ss5t3_4df1xnfrkg3r0000gn/T/pip-install-auypdvbk/mysqlclient_42a825d5ee084d6686c16912ef8320cc/setup_posix.py", line 31, in mysql_config
              raise OSError("{} not found".format(_mysql_config_path))
          OSError: mysql_config not found
          mysql_config --version
          mariadb_config --version
          mysql_config --libs
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    

    Installing the MySQL client fixes it.

    Since you are using SQLAlchemy this is out of your hands, but this issue is to suggest adding a note to that effect in the docs.

    opened by debedb 0
  • could not translate host name "---" to address

    I changed CATALOG_PASSWORD, CATALOG_USER, CATALOG_DB, and CATALOG_HOST accordingly and ran docker-compose -f tokern-lineage-engine.yml up. It throws this error:

    tokern-data-lineage |     return self.dbapi.connect(*cargs, **cparams)
    tokern-data-lineage |   File "/opt/pysetup/.venv/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
    tokern-data-lineage |     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
    tokern-data-lineage | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "-xxxxxxx.amazonaws.com" to address: Temporary failure in name resolution

    opened by Opperessor 0
Releases (v0.8.5)
  • v0.8.5(Oct 13, 2021)

  • v0.8.4(Oct 13, 2021)

  • v0.8.3(Aug 19, 2021)

  • v0.8.2(Aug 17, 2021)

  • v0.8.0(Jul 29, 2021)

  • v0.7.8(Jul 17, 2021)

  • v0.7.7(Jul 14, 2021)

  • v0.7.6(Jul 10, 2021)

  • v0.7.5(Jul 7, 2021)

    v0.7.5 (2021-07-07)

    Chore

    • Prepare release 0.7.5

    Feature

    • Update to pglast 3.3 for inbuilt visitor pattern

    Fix

    • Fix docker build to compile psycopg2
    • Update dbcat (connection labels, default schema). CTAS, Subqueries support
  • v0.7.4(Jul 4, 2021)

    v0.7.4 (2021-07-04)

    Chore

    • Prepare release 0.7.4

    Fix

    • Fix DB connection leak in resources. Set execution start/end time.
    • Update dbcat to 0.5.4. Pass source to binding and lineage functions
    • Fix documentation and examples for latest version.
  • v0.7.3(Jun 26, 2021)

  • v0.7.2(Jun 22, 2021)

  • v0.7.1(Jun 21, 2021)

  • v0.7.0(Jun 16, 2021)

    v0.7.0 (2021-06-16)

    Chore

    • Prepare release 0.7.0
    • Pin flask version to ~1.1
    • Update README with information on the app and installation

    Feature

    • Add Parser and Scanner API. Examples use REST APIs
    • Add install manifests for a complete example with notebooks
    • Expose data lineage models through a REST API

    Fix

    • Remove /api/[node|main]. Build remaining APIs using flask_restful
  • v0.6.0(May 7, 2021)

    v0.6.0 (2021-05-07)

    Chore

    • Prepare 0.6.0

    Feature

    • Build docker images on release

    Fix

    • Fix golang release scripts. Prepare 0.5.2
    • Catch parse exceptions and warn instead of exiting
  • v0.5.2(May 4, 2021)

  • v0.3.0(Nov 7, 2020)

    v0.3.0 (2020-11-07)

    Chore

    • Prepare release 0.3.0
    • Enable pre-commit and pre-push checks
    • Add links to docs and survey
    • Change supported databases list
    • Improve message on differentiation.

    Feature

    • data-lineage as a plotly dash server

    Fix

    • fix coverage generation. clean up pytest configuration
    • Install with editable dependency option
  • v0.2.0(Mar 29, 2020)

    v0.2.0 (2020-03-29)

    Chore

    • Prepare release 0.2.0
    • Fill up README with overview and installation
    • Fill out setup.py to add information on the project

    Feature

    • Support sub-graphs for specific tables. Plot graphs using plotly
    • Create a di-graph from DML queries
    • Parse and visit a list of queries

    Fix

    • Fix coverage report
    • Reduce size by removing outputs from notebook
  • v0.1.2(Mar 27, 2020)

Owner
Tokern
Automate Data Engineering Tasks with Column-Level Data Lineage