A benchmark of data-centric tasks from across the machine learning lifecycle.
Getting Started | What is dcbench? | Docs | Contributing | Website | About
⚡️
Quickstart
pip install dcbench
Optional: some parts of dcbench rely on optional dependencies. If you know which optional dependencies you'd like to install, you can do so using something like
pip install dcbench[dev]
instead. See setup.py for a full list of optional dependencies.
Installing the latest development version from the main branch:
pip install "dcbench[dev] @ git+https://github.com/data-centric-ai/dcbench@main"
From a Jupyter notebook or another interactive environment, you can import the library and explore the data-centric problems in the benchmark:
import dcbench
dcbench.tasks
To learn more, follow the walkthrough in the docs.
💡
What is dcbench?
This benchmark evaluates the steps in your machine learning workflow beyond model training and tuning. This includes feature cleaning, slice discovery, and coreset selection. We call these “data-centric” tasks because they're focused on exploring and manipulating data – not training models. dcbench supports a growing list of them:
dcbench includes tasks that look very different from one another: the inputs and outputs of the slice discovery task are not the same as those of the minimal data cleaning task. However, we think it is important that researchers and practitioners be able to run evaluations on data-centric tasks across the ML lifecycle without having to learn a bunch of different APIs or rewrite evaluation scripts.
So, dcbench is designed to be a common home for these diverse, but related, tasks. In dcbench, all of these tasks are structured in a similar manner and they are supported by a common Python API that makes it easy to download data, run evaluations, and compare methods.
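The idea of one interface spanning dissimilar tasks can be sketched with a toy example. This is illustrative only: the `Task`, `Problem`, and `evaluate` names below are made-up assumptions for the sketch, not dcbench's actual classes or API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# NOTE: a toy sketch of the "common structure" idea, NOT dcbench's real API.
# Task, Problem, and evaluate are illustrative names invented for this example.

@dataclass
class Problem:
    """One benchmark instance: its input artifacts plus an evaluation rule."""
    problem_id: str
    artifacts: Dict[str, object]          # e.g. datasets, model predictions
    evaluate: Callable[[object], float]   # scores a proposed solution

@dataclass
class Task:
    """A family of problems that share the same inputs and outputs."""
    task_id: str
    problems: List[Problem] = field(default_factory=list)

# Two tasks whose payloads differ, but which expose one uniform interface.
slice_discovery = Task("slice_discovery", [
    Problem("sd-0", {"predictions": [1, 0, 1]},
            evaluate=lambda sol: float(len(sol))),
])
data_cleaning = Task("minidata", [
    Problem("md-0", {"train_ids": [3, 7, 9]},
            evaluate=lambda sol: 1.0 / (1 + len(sol))),
])

# Because every problem has the same shape, one loop evaluates them all.
for task in (slice_discovery, data_cleaning):
    for problem in task.problems:
        solution = next(iter(problem.artifacts.values()))  # trivial "solution"
        print(task.task_id, problem.problem_id, problem.evaluate(solution))
```

The point of the sketch is the loop at the bottom: a single evaluation script can iterate over heterogeneous tasks because each problem carries its own artifacts and scoring rule.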
✉️
About
dcbench is being developed alongside the data-centric-ai benchmark. Reach out to Bojan Karlaš (karlasb [at] inf [dot] ethz [dot] ch) and Sabri Eyuboglu (eyuboglu [at] stanford [dot] edu) if you would like to get involved or contribute!