
Overview

Hatchet

Hatchet is a Python-based library that allows Pandas dataframes to be indexed by structured tree and graph data. It is intended for analyzing performance data that has a hierarchy (for example, serial or parallel profiles that represent calling context trees, call graphs, nested regions’ timers, etc.). Hatchet implements various operations to analyze a single hierarchical data set or compare multiple data sets, and its API facilitates analyzing such data programmatically.

To use Hatchet, install it with pip:

$ pip install llnl-hatchet

Or, if you want to develop with this repo directly, run the install script from the root directory, which will build the Cython modules and add the cloned directory to your PYTHONPATH:

$ source install.sh

Documentation

See the Getting Started page for basic examples and usage. Full documentation is available in the User Guide.

Examples of performance analysis using hatchet are available here.

Contributing

Hatchet is an open source project. We welcome contributions via pull requests, and questions, feature requests, or bug reports via issues.

Authors

Many thanks go to Hatchet's contributors.

Citing Hatchet

If you are referencing Hatchet in a publication, please cite the following paper:

  • Abhinav Bhatele, Stephanie Brink, and Todd Gamblin. Hatchet: Pruning the Overgrowth in Parallel Profiles. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '19). ACM, New York, NY, USA. DOI

License

Hatchet is distributed under the terms of the MIT license.

All contributions must be made under the MIT license. Copyrights in the Hatchet project are retained by contributors. No copyright assignment is required to contribute to Hatchet.

See LICENSE and NOTICE for details.

SPDX-License-Identifier: MIT

LLNL-CODE-741008

Comments
  • Has anyone tried loading the flamegraph output from hatchet into https://www.speedscope.app/?

    I can generate a flamegraph using Brendan Gregg's FlameGraph Perl script as shown in the Hatchet documentation. https://www.speedscope.app/ is a nice visualizer for flamegraphs that is supposed to support the folded-stacks format, but it does not recognize my file (the same one that flamegraph.pl reads fine). I was just wondering whether anyone has successfully used https://www.speedscope.app/ with output from Hatchet (if not, maybe it is something specific to my file).

    opened by balos1 4
  • Adds a GitHub Action to test PyPI Releases on a Regular Schedule

    This PR is created in response to https://github.com/hatchet/hatchet/issues/443.

    This PR adds a new GitHub Action that essentially performs automated regression testing on PyPI releases. It will install each considered version of Hatchet under each considered version of Python, check out that Hatchet version's release branch, and run that version's unit tests.

    The following Hatchet versions are currently considered:

    • v1.2.0 (omitted, missing writers module)
    • v1.3.0 (omitted, missing writers module)
    • v1.3.1a0 (omitted, missing writers module)
    • 2022.1.0 (omitted, missing writers module)
    • 2022.1.1 (omitted, missing writers module)
    • 2022.2.0 (omitted, missing writers module)

    Similarly, the following versions of Python are currently considered:

    • 2.7 (omitted, missing version in docker)
    • 3.5 (omitted, missing version in docker)
    • 3.6
    • 3.7
    • 3.8
    • 3.9

    Before merging, the following tasks must be done:

    • [X] ~Replace the workflow_dispatch (i.e., manual) trigger with the commented out schedule trigger in pip_unit_tester.yaml~ Superseded by a task in a later comment
    • [x] Change the "Install Hatchet" step to install llnl-hatchet instead of hatchet. This will be changed once the llnl-hatchet package goes live on PyPI
    area-ci area-deployment priority-high status-ready-for-review type-feature 
    opened by ilumsden 4
  • Changes GitHub Action OS Image to Avoid Python Caching Issues

    This PR allows us to avoid this issue with the setup-python Action used in Hatchet's CI.

    To do so, it simply changes the OS image for the CI from ubuntu-latest to ubuntu-20.04. When the linked issue is resolved, we can switch back to ubuntu-latest.

    area-ci priority-high status-ready-for-review type-bug 
    opened by ilumsden 2
  • Added to_json writer and from_dict and from_json readers.

    • Added to_dict and to_json writers to the graphframe. Added a from_json reader.
    • Added tests to verify that these readers and writers work in addition to a thicket generated json file to verify backwards compatibility.
    • Added json files for tests.
    area-readers area-writers priority-high type-feature 
    opened by cscully-allison 1
  • BeautifulSoup not a dependency

    In the 2022.1.0 release, running the install.sh script produces a ModuleNotFoundError for the bs4 package (BeautifulSoup). The import of this package is in hatchet/vis/static_fixer.py.

    @slabasan do we want to include BeautifulSoup as a dependency of Hatchet?

    area-deployment priority-normal type-question area-visualization 
    opened by ilumsden 1
  • Modifications to the Interactive CCT Visualization

    Work in Progress

    Added:

    1. Object Oriented Refactor of tree code
    2. Redesign of "collapsed" nodes
    3. Additional legend
    4. Menu Bar
    5. Improved interface for mass pruning

    Note: Merge after PR #26

    priority-normal status-approved area-visualization 
    opened by cscully-allison 1
  • Calculates exclusive metrics from corresponding inclusive metrics

    This PR adds the generate_exclusive_columns function to calculate exclusive metrics from inclusive metrics. It does this by calculating the sum of the inclusive metric for each node's children and then subtracting that from the node's inclusive metric. It will only attempt to calculate exclusive metrics in certain situations, namely:

    • The inclusive metric name ends in "(inc)", but there is not an exclusive metric with the same name, minus the "(inc)"
    • There is an inclusive metric without the "(inc)" suffix

    This might not be ideal. However, Hatchet currently provides no mechanism internally for explicitly correlating exclusive and inclusive metrics. So, until such functionality is added, this PR must use some solution based on metric names to determine what to calculate. When the internal mechanism for recording inclusive and exclusive metrics is updated, this function will be updated to use that new feature.
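    The subtraction described in this PR can be sketched in plain Python. The node names and the dict-based tree below are invented for illustration; Hatchet itself operates on a GraphFrame, not a dict:

```python
# Sketch of the exclusive-metric calculation: a node's exclusive value is its
# inclusive value minus the sum of its children's inclusive values.

def exclusive_from_inclusive(tree, inclusive):
    """tree maps each node to a list of children; inclusive maps node -> metric."""
    return {
        node: inclusive[node] - sum(inclusive[c] for c in tree.get(node, []))
        for node in inclusive
    }

tree = {"main": ["solve", "io"], "solve": ["kernel"]}
inclusive = {"main": 10.0, "solve": 7.0, "io": 2.0, "kernel": 6.0}

exclusive = exclusive_from_inclusive(tree, inclusive)
# main: 10 - (7 + 2) = 1.0; solve: 7 - 6 = 1.0; leaves keep their inclusive value
```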

    This PR builds off of #18, so it will be marked as a Draft until that PR is merged.

    area-graphframe priority-normal status-approved type-feature 
    opened by ilumsden 1
  • Preserve existing inc_metrics in update_inclusive_columns

    This is a small PR to fix a bug in GraphFrame.update_inclusive_columns that causes existing values in GraphFrame.inc_columns to be dropped.

    As an example, consider a GraphFrame with the following metrics:

    • exc_metrics: ["time"]
    • inc_metrics: ["foo"]

    Currently, after calling update_inclusive_columns, inc_metrics will no longer contain "foo". Instead, inc_metrics will simply be ["time (inc)"].

    This PR will extend inc_metrics instead of overriding. So, in the above example, inc_metrics will now be ["foo", "time (inc)"].
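    The difference between the old and new behavior is just list assignment versus extension; a minimal standalone illustration (not Hatchet code):

```python
# The fix: extend inc_metrics with newly generated inclusive columns instead
# of overwriting the list, so pre-existing entries like "foo" survive.

inc_metrics = ["foo"]            # pre-existing inclusive metric
generated = ["time (inc)"]       # columns produced by update_inclusive_columns

# buggy behavior was: inc_metrics = generated   (drops "foo")
inc_metrics.extend(m for m in generated if m not in inc_metrics)
```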

    area-graphframe priority-normal status-ready-for-review type-bug 
    opened by ilumsden 1
  • Creates a new function that unifies a list of GraphFrames into a single GraphFrame

    This PR implements a new function called unify_ensemble that takes a list of GraphFrame objects with equal graphs and returns a new GraphFrame containing the data of all the inputs. In the output data, a new DataFrame column called dataset is added that records which GraphFrame each row came from. If the dataset attribute of the GraphFrame (explained below) is set, that value will be used for the corresponding rows in the output. Otherwise, the string "gframe_#" is used, with "#" replaced by the index of the GraphFrame in the input list.

    To help link output data to input data, this PR also adds a new dataset attribute to the GraphFrame class and a graphframe_reader decorator to help set this attribute. The dataset attribute is meant to be a string that labels the GraphFrame. For most readers, this attribute will be set automatically by the graphframe_reader decorator. This decorator is meant to be applied to from_X static methods in the GraphFrame class. This decorator does 3 things:

    1. Runs the from_X function it decorates
    2. If the from_X function did not set the dataset attribute and the first argument to from_X is a string, this first argument will be considered a path to the read data, and it will be used to set dataset
    3. Returns the (potentially) modified GraphFrame produced by from_X
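    The three steps above can be sketched as a standard Python decorator. The FakeGraphFrame class and from_fake reader are illustrative stand-ins, not Hatchet's actual classes:

```python
# Hedged sketch of the graphframe_reader decorator: run the wrapped
# from_X-style reader, and if it did not set .dataset and its first argument
# is a string, treat that argument as the input path and use it as the label.
import functools

class FakeGraphFrame:
    def __init__(self):
        self.dataset = None

def graphframe_reader(from_x):
    @functools.wraps(from_x)
    def wrapper(*args, **kwargs):
        gf = from_x(*args, **kwargs)                    # 1. run the reader
        if gf.dataset is None and args and isinstance(args[0], str):
            gf.dataset = args[0]                        # 2. fall back to the path
        return gf                                       # 3. return the (possibly) modified frame
    return wrapper

@graphframe_reader
def from_fake(path):
    return FakeGraphFrame()

gf = from_fake("profile.json")   # gf.dataset is now "profile.json"
```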
    area-graphframe area-utils priority-normal status-ready-for-review type-feature 
    opened by ilumsden 1
  • add clean to install to remove prior build artifacts

    Tiny update to the install script to remove build artifacts before rebuilding Cython modules. Especially useful when switching between major versions of Python.

    opened by jonesholger 0
  • caliperreader: handle root nodes in _create_parent

    When creating parent nodes, we need to handle the case that the parent might be a root node. Previously, the recursive _create_parent calls were being made on root nodes, and we incorrectly tried to index into the grandparent callpath tuple, even though it was empty. This ends the recursion if we encounter an empty callpath tuple.
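    The shape of the fix can be sketched with callpath tuples (the node representation below is simplified to a dict for illustration; it is not the CaliperReader's actual data structure):

```python
# Sketch of the fix: stop the recursive parent creation when the parent
# callpath tuple would be empty, i.e. when the current node is a root.

def create_parent(node, nodes):
    """Create node's parent (and missing ancestors); nodes maps callpath -> node."""
    parent_path = node["callpath"][:-1]
    if not parent_path:   # node is a root: nothing above it, end the recursion
        return
    if parent_path not in nodes:
        nodes[parent_path] = {"callpath": parent_path, "children": []}
        create_parent(nodes[parent_path], nodes)   # the parent may itself be a root
    nodes[parent_path]["children"].append(node["callpath"])

nodes = {("main", "solve"): {"callpath": ("main", "solve"), "children": []}}
create_parent(nodes[("main", "solve")], nodes)
# creates ("main",) and records ("main", "solve") as its child, then stops
```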

    area-readers priority-urgent status-ready-to-merge type-bug 
    opened by slabasan 0
  • Enables support for multi-indexed DataFrames in the Query Language

    Summary

    Currently, the Object-based dialect and String-based dialect of the Query Language cannot handle GraphFrames containing a DataFrame with a multi-index (e.g., when you have rank and thread info).

    This PR adds support for that type of data to the Object-based Dialect and String-based Dialect. This support comes in the form of a new multi_index_mode argument to the ObjectQuery constructor, the StringQuery constructor, the parse_string_dialect function, and the GraphFrame.filter function. This argument can have one of three values:

    • "off" (default): the query is applied under the assumption that the DataFrame does not have a MultiIndex (i.e., the current behavior of the QL)
    • "all": when applying a predicate to a particular node's data in the DataFrame, all rows associated with the node must satisfy the predicate
    • "any": when applying a predicate to a particular node's data in the DataFrame, at least one row associated with the node must satisfy the predicate

    The implementation of these three modes is performed within the ObjectQuery and StringQuery classes. In these classes, the translation of predicates from dialects to the "base" syntax (represented by the Query class) will differ depending on the value of multi_index_mode. Since the implementation of this functionality is in ObjectQuery and StringQuery, the multi_index_mode arguments to parse_string_dialect and GraphFrame.filter are simply passed through to the correct class.

    Finally, one important thing to note is that this functionality is ONLY implemented for new-style queries (as defined in PR #72). Old-style queries (e.g., using the QueryMatcher class) do not support this behavior.
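    The "all"/"any" semantics reduce to Python's built-ins of the same name. The row layout below is a stand-in for illustration, not Hatchet's actual DataFrame:

```python
# With a MultiIndex, each node has several rows (e.g. one per MPI rank);
# multi_index_mode decides whether a predicate must hold on all of them
# or on at least one.

def node_matches(rows, predicate, multi_index_mode):
    if multi_index_mode == "all":
        return all(predicate(r) for r in rows)
    if multi_index_mode == "any":
        return any(predicate(r) for r in rows)
    raise ValueError("expected 'all' or 'any'")

# two ranks' worth of data for one node
rows = [{"rank": 0, "time": 5.0}, {"rank": 1, "time": 0.5}]
slow = lambda r: r["time"] > 1.0

node_matches(rows, slow, "any")   # True: one rank is slow
node_matches(rows, slow, "all")   # False: not all of them are
```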

    What's Left to Do?

    In short, all that's left in this PR is unit testing. I still need to implement tests in test/query.py and confirm that everything is working correctly.

    area-query-lang priority-normal status-work-in-progress type-feature 
    opened by ilumsden 0
  • Refactors Query Language for Thicket

    Summary

    This PR refactors the Query Language (QL) to prepare it for use in Thicket, improve its overall extensibility, and make its terminology more in line with that of the QL paper.

    First and foremost, the QL is no longer contained within a single file. Now, all code for the QL is contained in the new query directory. This directory contains the following files:

    • __init__.py: contains re-exports for everything in the QL so it can all be imported with from hatchet.query import ... (same as before)
    • engine.py: contains a class containing the algorithm for applying queries to GraphFrames
    • errors.py: contains any errors the QL may raise
    • query.py: contains the class representing the base QL syntax and compound queries (i.e., classes for operations like "and," "or," "xor," and "not")
    • object_dialect.py: contains the class representing the Object-based dialect
    • string_dialect.py: contains the class representing the String-based dialect
    • compat.py: contains various classes that ensure (deprecated) backwards compatibility with earlier versions of Hatchet

    In this PR, queries are represented by one of 3 classes:

    • Query: represents the base syntax for the QL
    • StringQuery: represents the String-based dialect. This class extends Query and implements the conversion from String-based dialect to base syntax
    • ObjectQuery: represents the Object-based dialect. This class extends Query and implements the conversion from Object-based dialect to base syntax

    Additionally, as before, there are classes that allow queries to be combined via set operations. All of these classes extend the CompoundQuery class. These classes are:

    • ConjunctionQuery: combines the results of a set of queries through set conjunction (equivalent to logical AND)
    • DisjunctionQuery: combines the results of a set of queries through set disjunction (equivalent to logical OR)
    • ExclusiveDisjunctionQuery: combines the results of a set of queries through exclusive set disjunction (equivalent to logical XOR)
    • NegationQuery: inverts the results of a query (equivalent to logical NOT)

    As before, these "compound queries" can easily be created from the 3 main query classes using the &, |, ^, and ~ operators.
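    The operator-to-class mapping can be reproduced with standard Python operator overloading. This is only a structural sketch under the class names above; real Hatchet queries also carry match patterns and predicates:

```python
# Each operator on Query builds the corresponding compound query; the
# compound classes here simply record their operands.

class Query:
    def __and__(self, other): return ConjunctionQuery(self, other)
    def __or__(self, other):  return DisjunctionQuery(self, other)
    def __xor__(self, other): return ExclusiveDisjunctionQuery(self, other)
    def __invert__(self):     return NegationQuery(self)

class CompoundQuery(Query):
    def __init__(self, *subqueries):
        self.subqueries = list(subqueries)

class ConjunctionQuery(CompoundQuery): pass
class DisjunctionQuery(CompoundQuery): pass
class ExclusiveDisjunctionQuery(CompoundQuery): pass
class NegationQuery(CompoundQuery): pass

q = ~(Query() & Query())   # NegationQuery wrapping a ConjunctionQuery
```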

    New in this PR, the algorithm for applying queries to GraphFrames has been separated from query composition. The algorithm is now contained within the new QueryEngine class.

    Finally, all the old QL classes and functions have been reimplemented to be thin wrappers around the classes mentioned above. As a result, this PR should ensure full backwards compatibility with old QL code. However, if this PR is merged, all "old-style" query code should be considered deprecated.

    What's left to do

    All the implementation has been completed for this PR. Additionally, all existing unit tests that do not involve query composition are passing, which validates my claims about backwards compatibility. All that's left to do before this PR can be merged is:

    • [x] Move the existing QL unit tests into a new file (e.g., query_compat.py)
    • [x] Create a new QL unit tests file for "new-style" queries
    • [x] Move query construction unit tests into the new file and refactor as needed
    • [x] Add tests (based on the old ones) to confirm that new-style queries are working as intended
    area-query-lang priority-normal status-work-in-progress type-feature type-internal-cleanup 
    opened by ilumsden 1
  • Adding roundtrip auto update functionality to the CCT Visualization

    The CCT visualization now supports auto-updating. If the user places a "?" in front of the input variable name passed as an argument to the visualization, it will reload automatically whenever that variable updates anywhere in the notebook. A second argument enables the automatic return of selection-based and snapshot-based queries.

    Original functionality is maintained.

    Example Syntax:

    %cct ?gf ?queries
    

    The data stored in queries is a dictionary with two fields:

    {
        tree_state: <string> query describing the current state/shape of the tree,
        selection: <string> query describing the currently selected subtree
    }
    
    opened by cscully-allison 0
  • Update basic tutorial on RTD

    Update basic tutorial to walk through hatchet-tutorial github: https://llnl-hatchet.readthedocs.io/en/latest/basic_tutorial.html#installing-hatchet-and-tutorial-setup

    area-docs priority-normal type-feature 
    opened by slabasan 0
Releases (v2022.2.2)
  • v2022.2.2(Oct 25, 2022)

  • v2022.2.1(Oct 17, 2022)

    This is a minor release on the 2022.2 series.

    Notable Changes

    • updates caliper reader to convert caliper metadata values into correct Python objects
    • adds to_json writer and from_dict and from_json readers
    • adds render_header parameter to tree() to toggle the header on/off
    • adds the ability to match leaf nodes in the Query Language

    Other Changes

    • exposes version module to query hatchet version from the command line
    • docs: update to using hatchet at llnl page
    • adds a GitHub Action to test PyPI releases on a regular schedule
  • v2022.2.0(Aug 19, 2022)

    Version 2022.2.0 is a major release. It resolves the package installation of hatchet.

    • Adds writers module to installed modules to resolve package install
    • CaliperReader bug fixes: filter records to parse, ignore function metadata field
    • Modify graphframe copy/deepcopy
    • Adds beautiful soup 4 to requirements.txt
    • Add new page on using hatchet on LLNL systems
  • v2022.1.1(Jun 8, 2022)

    This is a minor release on the 2022.1 series. It addresses bugs in Hatchet's query language and flamegraph output:

    • flamegraph: change count to be an int instead of a float
    • query language: fix edge cases with + wildcard/quantifier by replacing it with . followed by *
  • v2022.1.0(Apr 28, 2022)

    Version 2022.1.0 is a major release.

    New features

    • 3 new readers: TAU, SpotDB, and Caliper python reader
    • Query language extensions: compound queries, not query, and middle-level API
    • Adds GraphFrame checkpoints in HDF5 format
    • Interactive CCT visualization enhancements: pan and zoom, module encoding, multivariate encoding and adjustable mass pruning on large datasets
    • HPCToolkit: extend for GPU stream data
    • New color maps for terminal tree visualization
    • New function for calculating exclusive metrics from corresponding inclusive metrics

    Changes to existing APIs

    • Precision parameter applied to second metric in terminal tree visualization (e.g., gf.tree(precision=3))
    • Deprecates from_caliper_json(), augments existing from_caliper() to accept optional cali-query parameter and cali file or just a json file
    • Metadata now stored on the GraphFrame
    • New interface for calling the Hatchet calling context tree from Roundtrip: %cct <graphframe or list>. Deprecated interface: %loadVisualization <roundtrip_path> <literal_tree>
    • Add recursion limit parameter to graphframe filter(rec_limit=1000), resolving recursion depth errors on large graphs

    Tutorials and documentation

    • New tutorial material from the ECP Annual Meeting 2021
    • New developer and contributor guides
    • Added section on how to generate datasets for Hatchet and expanded documentation on the query language

    Internal updates

    • Extend update_inclusive_columns() for multi-indexed trees
    • Moves CI from Travis to GitHub Actions
    • Roundtrip refactor
    • New unit test for formatting license headers

    Bugfixes

    • Return default_metric and metadata in filter(), squash(), copy(), and deepcopy()
    • flamegraph: extract name from dataframe column instead of frame
    • Preserve existing inc_metrics in update_inclusive_columns
  • v1.3.1a0(Feb 7, 2022)

    New features

    • Timemory reader
    • Query dataframe columns with GraphFrame.show_metric_columns()
    • Query nodes within a range using the call path query language
    • Extend readers to define their own default metric

    Changes to existing APIs

    • Tree visualization displays 2 metrics
    • Literal output format: add hatchet node IDs
    • Parallel implementation of filter function
    • Caliper reader: support multiple hierarchies in JSON format
    • Adds multiprocessing dependency
  • v1.3.0(Feb 7, 2022)

    New features:

    • Interactive tree visualization in Jupyter
    • Add multiplication and division APIs
    • Update hatchet installation steps for cython integration
    • Readers: cprofiler, pyinstrument
    • Graph output formats: to_literal
    • Add profiling APIs to profile Hatchet APIs
    • Update basic tutorial for hatchet

    Changes to existing APIs

    • Remove threshold=, color=, and unicode= from tree API
    • Name highlighting in the terminal tree output is disabled by default and kept in sync with the dataframe
    • Internal performance improvements to unify and HPCToolkit reader, enabling analysis of large datasets
    • For mathematical operations, insert nan values for missing nodes, show values as nan and inf as necessary in dataframe
    • Extend callpath query language to support non-dataframe metrics (e.g., depth, hatchet ID)
    • Literal reader: a node can be defined with a "duplicate": True field if it should be treated as the same node (though in a different callpath). A node also needs a "frame" field, which is a dict containing the node "name" and "type" (if necessary).
Owner
Lawrence Livermore National Laboratory
For more than 65 years, the Lawrence Livermore National Laboratory has applied science and technology to make the world a safer place.