
Evidently

Dashboard example

Interactive reports and JSON profiles to analyze, monitor and debug machine learning models.

Docs | Join Discord | Newsletter | Blog | Twitter

What is it?

Evidently helps analyze machine learning models during validation or production monitoring. The tool generates interactive visual reports and JSON profiles from pandas DataFrames or CSV files. Currently, 6 reports are available.

1. Data Drift

Detects changes in feature distribution. Dashboard example

2. Numerical Target Drift

Detects changes in numerical target and feature behavior. Dashboard example

3. Categorical Target Drift

Detects changes in categorical target and feature behavior. Dashboard example

4. Regression Model Performance

Analyzes the performance of a regression model and model errors. Dashboard example

5. Classification Model Performance

Analyzes the performance and errors of a classification model. Works both for binary and multi-class models. Dashboard example

6. Probabilistic Classification Model Performance

Analyzes the performance of a probabilistic classification model, quality of model calibration, and model errors. Works both for binary and multi-class models. Dashboard example

Installing from PyPI

MAC OS and Linux

Evidently is available as a PyPI package. To install it using pip package manager, run:

$ pip install evidently

The tool allows building interactive reports both inside a Jupyter notebook and as a separate HTML file. If you only want to generate interactive reports as HTML files or export as JSON profiles, the installation is now complete.

To enable building interactive reports inside a Jupyter notebook, we use jupyter nbextension. If you want to create reports inside a Jupyter notebook, then after installing evidently you should run the two following commands in the terminal from the evidently directory.

To install jupyter nbextension, run:

$ jupyter nbextension install --sys-prefix --symlink --overwrite --py evidently

To enable it, run:

$ jupyter nbextension enable evidently --py --sys-prefix

That's it!

Note: a single run after the installation is enough. No need to repeat the last two commands every time.

Note 2: if you use Jupyter Lab, you may experience difficulties with exploring reports inside a Jupyter notebook. However, report generation as a separate .html file will work correctly.

Windows

Evidently is available as a PyPI package. To install it using pip package manager, run:

$ pip install evidently

The tool allows building interactive reports both inside a Jupyter notebook and as a separate HTML file. Unfortunately, building reports inside a Jupyter notebook is not yet possible on Windows, because Windows requires administrator privileges to create symlinks. We will address this issue in later versions.

Getting started

Jupyter Notebook

To start, prepare your data as two pandas DataFrames. The first should include your reference data, and the second your current production data. The structure of both datasets should be identical.

  • For Data Drift report, include the input features only.
  • For Target Drift reports, include the column with Target and/or Prediction.
  • For Model Performance reports, include the columns with Target and Prediction.
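
If your column names differ from the defaults, describe them with a column mapping and pass it to the calculate() call. Below is a minimal sketch of the dictionary-based column_mapping that mirrors the configuration examples further down; all column and feature names here are placeholders, and note that later releases replace this dictionary with a ColumnMapping entity (see the release notes below).

column_mapping = {
    "target": "target",                                # column with the target values
    "prediction": "prediction",                        # column with the model predictions
    "datetime": "datetime",                            # optional datetime column
    "numerical_features": ["feature_1", "feature_2"],  # placeholder feature names
    "categorical_features": ["feature_3"],
}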

Calculation results are available in one of two formats:

  • Option 1: an interactive Dashboard displayed inside the Jupyter notebook or exportable as an HTML report.
  • Option 2: a JSON Profile that includes the values of metrics and the results of statistical tests.

Option 1: Dashboard

After installing the tool, import the Evidently dashboard and the required tabs:

import pandas as pd
from sklearn import datasets

from evidently.dashboard import Dashboard
from evidently.tabs import DataDriftTab

iris = datasets.load_iris()
iris_frame = pd.DataFrame(iris.data, columns = iris.feature_names)

To generate the Data Drift report, run:

iris_data_drift_report = Dashboard(tabs=[DataDriftTab])
iris_data_drift_report.calculate(iris_frame[:100], iris_frame[100:], column_mapping = None)
iris_data_drift_report.save("reports/my_report.html")

To generate the Data Drift and the Categorical Target Drift reports, run:

from evidently.tabs import CatTargetDriftTab

iris_data_and_target_drift_report = Dashboard(tabs=[DataDriftTab, CatTargetDriftTab])
iris_data_and_target_drift_report.calculate(iris_frame[:100], iris_frame[100:], column_mapping = None)
iris_data_and_target_drift_report.save("reports/my_report_with_2_tabs.html")

If you get a security alert, press "trust html". The HTML report does not open automatically; to explore it, open it from the destination folder.
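
If you have enabled the Jupyter nbextension, you can also display the dashboard inline in the notebook instead of (or in addition to) saving it. A minimal sketch; the show() method and its mode parameter are described in the release notes below:

iris_data_drift_report.show()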

To generate the Regression Model Performance report, run:

from evidently.tabs import RegressionPerformanceTab

regression_model_performance = Dashboard(tabs=[RegressionPerformanceTab])
regression_model_performance.calculate(reference_data, current_data, column_mapping = column_mapping)

You can also generate the Regression Model Performance report for a single DataFrame. In this case, run:

regression_single_model_performance = Dashboard(tabs=[RegressionPerformanceTab])
regression_single_model_performance.calculate(reference_data, None, column_mapping=column_mapping)

To generate the Classification Model Performance report, run:

from evidently.tabs import ClassificationPerformanceTab

classification_performance_report = Dashboard(tabs=[ClassificationPerformanceTab])
classification_performance_report.calculate(reference_data, current_data, column_mapping = column_mapping)

For Probabilistic Classification Model Performance report, run:

from evidently.tabs import ProbClassificationPerformanceTab

classification_performance_report = Dashboard(tabs=[ProbClassificationPerformanceTab])
classification_performance_report.calculate(reference_data, current_data, column_mapping = column_mapping)

You can also generate either of the Classification reports for a single DataFrame. In this case, run:

classification_single_model_performance = Dashboard(tabs=[ClassificationPerformanceTab])
classification_single_model_performance.calculate(reference_data, None, column_mapping=column_mapping)

or

prob_classification_single_model_performance = Dashboard(tabs=[ProbClassificationPerformanceTab])
prob_classification_single_model_performance.calculate(reference_data, None, column_mapping=column_mapping)

Option 2: Profile

After installing the tool, import the Evidently profile and the required sections:

import pandas as pd
from sklearn import datasets

from evidently.model_profile import Profile
from evidently.profile_sections import DataDriftProfileSection

iris = datasets.load_iris()
iris_frame = pd.DataFrame(iris.data, columns = iris.feature_names)

To generate the Data Drift profile, run:

iris_data_drift_profile = Profile(sections=[DataDriftProfileSection])
iris_data_drift_profile.calculate(iris_frame, iris_frame, column_mapping = None)
iris_data_drift_profile.json() 

To generate the Data Drift and the Categorical Target Drift profile, run:

from evidently.profile_sections import CatTargetDriftProfileSection

iris_target_and_data_drift_profile = Profile(sections=[DataDriftProfileSection, CatTargetDriftProfileSection])
iris_target_and_data_drift_profile.calculate(iris_frame[:75], iris_frame[75:], column_mapping = None) 
iris_target_and_data_drift_profile.json() 
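
Either call returns the profile as a JSON string, so you can parse it to work with the metrics programmatically. A minimal sketch; the exact keys depend on the sections you included, so inspect them rather than relying on a fixed schema:

import json

profile_dict = json.loads(iris_target_and_data_drift_profile.json())
print(profile_dict.keys())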

You can also generate a Regression Model Performance profile for a single DataFrame. In this case, run:

from evidently.profile_sections import RegressionPerformanceProfileSection

regression_single_model_performance = Profile(sections=[RegressionPerformanceProfileSection])
regression_single_model_performance.calculate(reference_data, None, column_mapping=column_mapping)

To generate the Classification Model Performance profile, run:

from evidently.profile_sections import ClassificationPerformanceProfileSection

classification_performance_profile = Profile(sections=[ClassificationPerformanceProfileSection])
classification_performance_profile.calculate(reference_data, current_data, column_mapping = column_mapping)

For Probabilistic Classification Model Performance profile, run:

from evidently.profile_sections import ProbClassificationPerformanceProfileSection

classification_performance_profile = Profile(sections=[ProbClassificationPerformanceProfileSection])
classification_performance_profile.calculate(reference_data, current_data, column_mapping = column_mapping)

You can also generate either of the Classification profiles for a single DataFrame. In this case, run:

classification_single_model_performance = Profile(sections=[ClassificationPerformanceProfileSection])
classification_single_model_performance.calculate(reference_data, None, column_mapping=column_mapping)

or

prob_classification_single_model_performance = Profile(sections=[ProbClassificationPerformanceProfileSection])
prob_classification_single_model_performance.calculate(reference_data, None, column_mapping=column_mapping)

Terminal

You can generate HTML reports or JSON profiles directly from the bash shell. To do this, prepare your data as two CSV files (if you run one of the performance reports for a single dataset, one file is enough). The first should include your reference data, and the second your current production data. The structure of both datasets should be identical.

To generate a HTML report, run the following command in bash:

python -m evidently calculate dashboard --config config.json \
--reference reference.csv --current current.csv --output output_folder --report_name output_file_name

To generate a JSON profile, run the following command in bash:

python -m evidently calculate profile --config config.json \
--reference reference.csv --current current.csv --output output_folder --report_name output_file_name

Here:

  • reference is the path to the reference data,
  • current is the path to the current data,
  • output is the path to the output folder,
  • report_name is the name of the output file,
  • config is the path to the configuration file,
  • pretty_print prints the JSON profile with indents (for the profile only).

Currently, you can choose the following Tabs or Sections:

  • data_drift to estimate the data drift,
  • num_target_drift to estimate target drift for numerical target,
  • cat_target_drift to estimate target drift for categorical target,
  • classification_performance to explore the performance of a classification model,
  • prob_classification_performance to explore the performance of a probabilistic classification model,
  • regression_performance to explore the performance of a regression model.

To configure a report or a profile, you need to create a config.json file. This file defines how your input data is read and which type of report or profile is generated.

Here is an example of a simple configuration for a report, where we have comma-separated CSV files with headers and there is no date column in the data.

Dashboard:

{
  "data_format": {
    "separator": ",",
    "header": true,
    "date_column": null
  },
  "column_mapping" : {},
  "dashboard_tabs": ["cat_target_drift"]
}

Profile:

{
  "data_format": {
    "separator": ",",
    "header": true,
    "date_column": null
  },
  "column_mapping" : {},
  "profile_sections": ["data_drift"],
  "pretty_print": true
}

Here is an example of a more complicated configuration, where we have comma-separated CSV files with headers and a datetime column. We also specify the column_mapping dictionary to add information about the datetime, target and numerical_features.

Dashboard:

{
  "data_format": {
    "separator": ",",
    "header": true,
    "date_column": "datetime"
  },
  "column_mapping" : {
    "datetime":"datetime",
    "target":"target",
    "numerical_features": ["mean radius", "mean texture", "mean perimeter", 
      "mean area", "mean smoothness", "mean compactness", "mean concavity", 
      "mean concave points", "mean symmetry"]},
  "dashboard_tabs": ["cat_target_drift"],
  "sampling": {
      "reference": {
      "type": "none"
    },
      "current": {
      "type": "nth",
      "n": 2
    }
  }
}

Profile:

{
  "data_format": {
    "separator": ",",
    "header": true,
    "date_column": null
  },
  "column_mapping" : {
    "target":"target",
    "numerical_features": ["mean radius", "mean texture", "mean perimeter", 
      "mean area", "mean smoothness", "mean compactness", "mean concavity", 
      "mean concave points", "mean symmetry"]},
  "profile_sections": ["data_drift", "cat_target_drift"],
  "pretty_print": true,
  "sampling": {
    "reference": {
      "type": "none"
    },
    "current": {
      "type": "random",
      "ratio": 0.8
    }
  }
}

Telemetry

When you use Evidently in the command-line interface, we collect basic telemetry (starting from 0.1.21.dev0 version). It includes data on the environment (e.g. Python version) and usage (type of report or profile generated). You can read more about what we collect here.

You can opt out of telemetry collection by setting the environment variable EVIDENTLY_DISABLE_TELEMETRY=1.
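
For example, in a bash shell, before running the command-line interface:

$ export EVIDENTLY_DISABLE_TELEMETRY=1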

Large datasets

As you can see from the examples above, you can specify sampling parameters for large files. You can use different sampling strategies for the reference and current data, or apply sampling to only one of the files. Currently, 3 sampling types are available:

  • none - no sampling is applied to the file,
  • nth - every Nth row of the file is taken. This option works together with the n parameter (see the Dashboard example above),
  • random - random sampling is applied. This option works together with the ratio parameter (see the Profile example above).
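
For intuition, the three strategies roughly correspond to the following pandas operations. This is an illustrative sketch of the behavior, not the tool's internal implementation, and the file name is a placeholder:

import pandas as pd

current = pd.read_csv("current.csv")     # placeholder path to the current data

kept_all = current                       # "none": keep every row
every_nth = current.iloc[::2]            # "nth" with n = 2: keep every 2nd row
random_part = current.sample(frac=0.8)   # "random" with ratio = 0.8: keep a random 80%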

Documentation

For more information, refer to the complete Documentation.

Examples

  • See Data Drift Dashboard and Profile generation to explore the results both inside a Jupyter notebook and as a separate .html file: Iris, Boston

  • See Numerical Target and Data Drift Dashboard and Profile generation to explore the results both inside a Jupyter notebook and as a separate file: Iris, Breast Cancer

  • See Categorical Target and Data Drift Dashboard and Profile generation to explore the results both inside a Jupyter notebook and as a separate file: Boston

  • See Regression Performance Dashboard and Profile generation to explore the results both inside a Jupyter notebook and as a separate file: Bike Sharing Demand

  • See Classification Performance Dashboard and Profile generation to explore the results both inside a Jupyter notebook and as a separate file: Iris

  • See Probabilistic Classification Performance Dashboard and Profile generation to explore the results both inside a Jupyter notebook and as a separate .html file: Iris, Breast Cancer

Stay updated

We will be releasing more reports soon. If you want to receive updates, follow us on Twitter, or sign up for our newsletter. You can also find more tutorials and explanations in our Blog. If you want to chat and connect, join our Discord community!

Comments
  • Add Maximum-Mean-Discrepancy (MMD) test for distributions


    Issue: #378

    What does this implement/fix? This adds MMD test for comparing distributions.

    • [x] test implementation
    • [x] unittests
    • [x] doc
    • [x] examples

    Why use it? Quoting from the paper https://arxiv.org/pdf/0805.2368.pdf: "This test has application in a variety of areas. In bioinformatics, it is of interest to compare microarray data from different tissue types, either to determine whether two subtypes of cancer may be treated as statistically indistinguishable from a diagnosis perspective, or to detect differences in healthy and cancerous tissue. In database attribute matching, it is desirable to merge databases containing multiple fields, where it is not known in advance which fields correspond: the fields are matched by maximising the similarity in the distributions of their entries."

    Refs:

    1. https://jmlr.csail.mit.edu/papers/volume13/gretton12a/gretton12a.pdf (Paper cited 3800 times, this method is also mentioned in made with ml)
    2. Birds eye view of MMD and its applications (at the end of the video 11:20 onwards)
    3. Kernel methods (MMD in detail)
    4. MMD intuition
    5. Kernel mean embeddings
    enhancement hacktoberfest hacktoberfest-accepted 
    opened by SangamSwadiK 13
  • Add  fisher's exact


    Issue: #345 (Add fisher's exact test for binary data)

    What does this implement/fix? This adds Fisher's exact test, its doc and the necessary changes in the How-to examples.

    • [x] test implementation
    • [x] unittests
    • [x] doc
    • [x] examples

    cc @mertbozkir

    enhancement hacktoberfest hacktoberfest-accepted 
    opened by SangamSwadiK 10
  • added hellinger_distance unittest and updated doc


    Issue: #347 (Hellinger distance for drift analysis)

    What does this implement/fix? Adds unittests for hellinger_distance drift detection test

    @emeli-dral Please let me know if any changes are required. Thanks

    documentation enhancement hacktoberfest hacktoberfest-accepted 
    opened by inderpreetsingh01 9
  • Integrating in Prediction pipeline


    This package really helps the Data Science team to find data drift. Please let me know if there is any way we can integrate it into a prediction pipeline to generate the metrics. Is there any way we can generate a JSON file with these metrics?

    enhancement 
    opened by rambabusure 9
  • How can get data quality report with single dataset?


    Hi, I have been trying to generate the data quality report using Evidently with a single dataset. I have been passing my single dataset in the "current_data" argument while preparing the report, and it was throwing an error that "reference_data" is not passed. Can you please help me with how I can generate all the data quality reports using my single dataset? If you can also refer me to some example notebooks, that would be great. Also, I want to add a few more checks into the report apart from the default checks; how can I do that?

    opened by karndeepsingh 8
  • Compatible with Metaflow-card-html


    I would love to integrate the Evidently solution into my current ML deployment framework, Metaflow. It supports visualizing rich HTML as Metaflow Cards.

    Specifically, it is possible to use a special card (type='html') that support interactive visualizations in html format.

    But, Metaflow requires the final html file to be stored as a class attribute (self.html) so it can be rendered.

    I identified the protected method, _repr_html_(), which outputs the content of the HTML. But in order to make it work, it has to use the "inline" mode instead of the current hard-coded "auto".

    Would it be possible to make this a selectable parameter that defaults to "auto"? Or even better, expose a non-protected method to print the HTML output so that it becomes fully compatible with Metaflow?

    contents of data_stability._repr_html_():


    changing to "inline" mode:

            from evidently.renderers.notebook_utils import determine_template
            from evidently.utils.dashboard import TemplateParams
    
            dashboard_id, dashboard_info, graphs = data_stability._build_dashboard_info()
            template_params = TemplateParams(
                dashboard_id=dashboard_id,
                dashboard_info=dashboard_info,
                additional_graphs=graphs,
            )
            self.html = data_stability._render(
                determine_template("inline"), template_params
            )
    


    opened by marcellovictorino 8
  • library issue : DataDrift,DataQuality


    Hello community,

    when I try to import libraries

    1. from evidently.metric_preset import DataDrift, NumTargetDrift
    2. from evidently.test_preset import DataQuality, DataStability

    I am getting following error

    1. cannot import name 'DataDrift' from 'evidently.metric_preset'
    2. cannot import name 'DataQuality' from 'evidently.test_preset'

    This was just an example notebook from evidently.

    Any suggestions are highly appreciated.

    thanks,

    opened by IITGoaPyVidya 8
  • Tests for `CatTargetDriftAnalyzer`, `chi_stat_test` and code simplifications


    • [x] Add tests for CatTargetDriftAnalyzer
    • [x] Refactor
    • [x] Ask a bunch of questions :-)
      • [x] What about target_type ?
      • [x] Tests with options

    closes #96

    opened by burkovae 8
  • DataQualityTestPreset error with Categorical features


    Hi team, first of all - thank you for creating this amazing package for DS/MLEs all over the world! :)

    Describe the bug I am using Evidently to assess the Data Quality of my current and reference pandas dataframe datasets.

    But, I see the following message on the AWS SageMaker notebook: ValueError: could not convert string to float: 'Canteen' (one of my categories in the features)

    I was able to successfully run the Education Dataset mentioned here. Error Message:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj)
        343             method = get_real_method(obj, self.print_method)
        344             if method is not None:
    --> 345                 return method()
        346             return None
        347         else:
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/suite/base_suite.py in _repr_html_(self)
        105 
        106     def _repr_html_(self):
    --> 107         dashboard_id, dashboard_info, graphs = self._build_dashboard_info()
        108         template_params = TemplateParams(
        109             dashboard_id=dashboard_id, dashboard_info=dashboard_info, additional_graphs=graphs
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/test_suite/test_suite.py in _build_dashboard_info(self)
        126             renderer.color_options = color_options
        127             by_status[test_result.status] = by_status.get(test_result.status, 0) + 1
    
    --> 128             test_results.append(renderer.render_html(test))
        129 
        130         summary_widget = BaseWidgetInfo(
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/tests/data_quality_tests.py in render_html(self, obj)
        822             raise ValueError("column_name should be present")
        823 
    --> 824         counts_data = obj.metric.get_result().plot_data.counts_of_values
        825         if counts_data is not None:
        826             curr_df = counts_data["current"]
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/metrics/base_metric.py in get_result(self)
         51         result = self.context.metric_results.get(self, None)
         52         if isinstance(result, ErrorResult):
    ---> 53             raise result.exception
         54         if result is None:
         55             raise ValueError(f"No result found for metric {self} of type {type(self).__name__}")
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/suite/base_suite.py in run_checks(self)
        260             try:
        261                 logging.debug(f"Executing {type(test)}...")
    --> 262                 test_results[test] = test.check()
        263             except BaseException as ex:
        264                 test_results[test] = TestResult(
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/tests/data_quality_tests.py in check(self)
        443         #     return result
        444 
    --> 445         result = super().check()
        446 
        447         if self.value is None:
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/tests/base_test.py in check(self)
        302             status=TestResult.SKIPPED,
        303         )
    --> 304         value = self.calculate_value_for_test()
        305         self.value = value
        306         result.description = self.get_description(value)
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/tests/data_quality_tests.py in calculate_value_for_test(self)
        786 
        787     def calculate_value_for_test(self) -> Optional[Numeric]:
    --> 788         features_stats = self.metric.get_result().current_characteristics
        789         most_common_percentage = features_stats.most_common_percentage
        790 
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/metrics/base_metric.py in get_result(self)
         51         result = self.context.metric_results.get(self, None)
         52         if isinstance(result, ErrorResult):
    ---> 53             raise result.exception
         54         if result is None:
         55             raise ValueError(f"No result found for metric {self} of type {type(self).__name__}")
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/suite/base_suite.py in run_calculate(self, data)
        242                     logging.debug(f"Executing {type(calculation)}...")
        243                     try:
    --> 244                         calculations[calculation] = calculation.calculate(data)
        245                     except BaseException as ex:
        246                         calculations[calculation] = ErrorResult(ex)
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/metrics/data_integrity/column_summary_metric.py in calculate(self, data)
        192         if data.reference_data is not None:
        193             reference_data = data.reference_data
    --> 194             ref_characteristics = self.map_data(get_features_stats(data.reference_data[self.column_name], column_type))
        195         curr_characteristics = self.map_data(get_features_stats(data.current_data[self.column_name], column_type))
        196 
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/evidently/calculations/data_quality.py in get_features_stats(feature, feature_type)
        186         # round most common feature value for numeric features to 1e-5
        187         if not np.issubdtype(feature, np.number):
    --> 188             feature = feature.astype(float)
        189         result.most_common_value = np.round(result.most_common_value, 5)
        190         result.infinite_count = int(np.sum(np.isinf(feature)))
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/pandas/core/generic.py in astype(self, dtype, copy, errors)
       5813         else:
       5814             # else, only a single dtype is given
    -> 5815             new_data = self._mgr.astype(dtype=dtype, copy=copy, errors=errors)
       5816             return self._constructor(new_data).__finalize__(self, method="astype")
       5817 
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/pandas/core/internals/managers.py in astype(self, dtype, copy, errors)
        416 
        417     def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T:
    --> 418         return self.apply("astype", dtype=dtype, copy=copy, errors=errors)
        419 
        420     def convert(
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/pandas/core/internals/managers.py in apply(self, f, align_keys, ignore_failures, **kwargs)
        325                     applied = b.apply(f, **kwargs)
        326                 else:
    --> 327                     applied = getattr(b, f)(**kwargs)
        328             except (TypeError, NotImplementedError):
        329                 if not ignore_failures:
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/pandas/core/internals/blocks.py in astype(self, dtype, copy, errors)
        589         values = self.values
        590 
    --> 591         new_values = astype_array_safe(values, dtype, copy=copy, errors=errors)
        592 
        593         new_values = maybe_coerce_values(new_values)
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/pandas/core/dtypes/cast.py in astype_array_safe(values, dtype, copy, errors)
       1307 
       1308     try:
    -> 1309         new_values = astype_array(values, dtype, copy=copy)
       1310     except (ValueError, TypeError):
       1311         # e.g. astype_nansafe can fail on object-dtype of strings
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/pandas/core/dtypes/cast.py in astype_array(values, dtype, copy)
       1255 
       1256     else:
    -> 1257         values = astype_nansafe(values, dtype, copy=copy)
       1258 
       1259     # in pandas we don't store numpy str dtypes, so convert to object
    
    ~/anaconda3/envs/python3/lib/python3.8/site-packages/pandas/core/dtypes/cast.py in astype_nansafe(arr, dtype, copy, skipna)
       1199     if copy or is_object_dtype(arr.dtype) or is_object_dtype(dtype):
       1200         # Explicit copy, or required since NumPy can't view from / to object.
    -> 1201         return arr.astype(dtype, copy=True)
       1202 
       1203     return arr.astype(dtype, copy=copy)
    
    ValueError: could not convert string to float: 'Canteen'
    

    Expected behavior: To see the Data Quality report

    To Reproduce: After Evidently imports,

    data_quality = TestSuite(tests=[
        DataQualityTestPreset(),
    ])
    
    data_quality.run(reference_data=reference_df, current_data=current_df)
    data_quality
    

    System: AWS SageMaker with Python 3.8.12 and Evidently 0.2.0

    opened by AbhiPawar5 7
  • data missing in the report


    Hi

    I was trying to generate a classification dashboard. The "reference" dataframe has 128770 rows, however, the dashboard only shows that it has 122 rows (number of objects). And for "current" dataframe, it has 632337 rows, but the dashboard only shows it has 1079 rows. I am not sure what went wrong.

    Yang

    opened by superyang713 7
  • AttributeError when trying to save report


    When saving a report using

    data_drift_report.save("/dbfs/FileStore/drift/drift_report.html")

    I get

    AttributeError: 'DataDriftTableWidget' object has no attribute 'wi'

    caused by

    /databricks/python/lib/python3.8/site-packages/evidently/widgets/data_drift_table_widget.py in get_info(self)
         25
         26     def get_info(self) -> BaseWidgetInfo:
    ---> 27         if self.wi:
         28             return self.wi
         29         raise ValueError("no widget info provided")

    I'm using Azure Databricks, and can save the JSON profile, and get the same error if I use data_drift_report.show(), though I never installed nbextension.

    I can save the iris report, and am using numerical and categorical feature mappings.

    opened by lrumanes 7
  • [BUG] Broken links in the readme


    The following links in the README are currently broken:

    • https://docs.evidentlyai.com/reports/data-quality
    • https://docs.evidentlyai.com/reports/classification-performance
    • https://github.com/evidentlyai/evidently/blob/main/sample_notebooks/getting_started_tutorial.ipynb
    • https://github.com/evidentlyai/evidently/blob/main/sample_notebooks/evidently_metric_presets.ipynb
    • https://github.com/evidentlyai/evidently/blob/main/sample_notebooks/evidently_metrics.ipynb
    • https://github.com/evidentlyai/evidently/blob/main/sample_notebooks/evidently_test_presets.ipynb
    • https://github.com/evidentlyai/evidently/blob/main/sample_notebooks/evidently_tests.ipynb

    Also, some links to tutorials (e.g., https://github.com/evidentlyai/evidently/tree/main/examples/integrations/mlflow_logging) point to un-rendered python notebooks rather than an HTML page in the documentation. It'd be nice to be able to see the examples without having to run them locally.

    bug 
    opened by OlivierBinette 1
  • TestAccuracyScore calculating


    Hello, I'm trying to run the test TestAccuracyScore manually, however I'm getting an error using version 0.2. In the previous version the code was running fine.

    My code:

    from sklearn import datasets, ensemble, model_selection
    #Dataset for Multiclass Classifcation (labels)
    
    iris_data = datasets.load_iris(as_frame='auto')
    iris = iris_data.frame
    
    iris_ref = iris.sample(n=75, replace=False)
    iris_cur = iris.sample(n=75, replace=False)
    
    model = ensemble.RandomForestClassifier(random_state=1, n_estimators=3)
    model.fit(iris_ref[iris_data.feature_names], iris_ref.target)
    
    iris_ref['prediction'] = model.predict(iris_ref[iris_data.feature_names])
    iris_cur['prediction'] = model.predict(iris_cur[iris_data.feature_names])
    
    schema = ColumnMapping(
        target='target',
        prediction='prediction',
        datetime=None,
        id= None,
        numerical_features = ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)',
           'petal width (cm)'],
        categorical_features = None,
        datetime_features = None,
        task = 'classification'
    )
    
    columns = [
        ColumnDefinition(column_name='sepal length (cm)',column_type="num"),
        ColumnDefinition(column_name='sepal width (cm)',column_type="num"),
        ColumnDefinition(column_name='petal length (cm)',column_type="num"),
        ColumnDefinition(column_name='petal width (cm)',column_type="num")
    ]
    
    
    ires_data_definition = DataDefinition(
            columns = columns,
            target = ColumnDefinition(column_name='target',column_type="num"),
            prediction_columns = ColumnDefinition(column_name='prediction',column_type="num"),
            id_column = None,
            datetime_column = None,
            task = 'classification',
            classification_labels = None
    )
    
    ## Running
    test_classification = TestAccuracyScore()
    input_data = InputData(
        reference_data = iris_ref,
        current_data = iris_cur,
        column_mapping = schema,
        data_definition = ires_data_definition
    )
    
    test_classification.metric.calculate(data = input_data)
    
    

    ERROR:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    Input In [39], in <cell line: 11>()
          3 test_classification = TestAccuracyScore()
          4 input_data = InputData(
          5     reference_data = iris_ref,
          6     current_data = iris_cur,
          7     column_mapping = schema,
          8     data_definition = ires_data_definition
          9 )
    ---> 11 test_classification.metric.calculate(data = input_data)
    
    File ~/Desktop/Codes/CoachMe/API-Dev/carrot_latest/lib/python3.9/site-packages/evidently/metrics/classification_performance/classification_quality_metric.py:48, in ClassificationQualityMetric.calculate(self, data)
         44     raise ValueError("The columns 'target' and 'prediction' columns should be present")
         45 target, prediction = self.get_target_prediction_data(data.current_data, data.column_mapping)
         46 current = calculate_metrics(
         47     data.column_mapping,
    ---> 48     self.confusion_matrix_metric.get_result().current_matrix,
         49     target,
         50     prediction,
         51 )
         53 reference = None
         54 if data.reference_data is not None:
    
    File ~/Desktop/Codes/CoachMe/API-Dev/carrot_latest/lib/python3.9/site-packages/evidently/metrics/base_metric.py:50, in Metric.get_result(self)
         48 def get_result(self) -> TResult:
         49     if self.context is None:
    ---> 50         raise ValueError("No context is set")
         51     result = self.context.metric_results.get(self, None)
         52     if isinstance(result, ErrorResult):
    
    ValueError: No context is set
    
    help wanted 
    opened by samuelamico 3
  • Column Mapping error: 'dict' object has no attribute 'datetime'


    Hi, I have been trying to create a regression performance report using evidently. But, I am stuck and I cannot find any solution to the error I am getting in column mapping.


    This is the error I am getting after running the code.

    Can you please help me in identifying what I am missing?

    opened by rijulml 2
  • Add test for date ranges


    Test Group: Data Quality

    Feature request: to add the ability to test that values in a DateTime column are within a specific range.

    Source: Discord conversation https://discord.com/channels/815991896916361276/815992391076806657/threads/1044973406929571860

    enhancement 
    opened by elenasamuylova 0
  • Target names from column mapping not showing in reports


    Even after setting column_mapping.target_names, class names are not used in the legend for Reports with TargetDriftPreset() and for ClassificationPreset() (class representation and confusion matrix). See the example in the Colab notebook: https://colab.research.google.com/drive/14WNUC36p86T4kV4KABtWA5CZCi-2Gl9G?usp=sharing

    bug 
    opened by NataliaTarasovaNatoshir 0
Releases (v0.2.1)
  • v0.2.1(Dec 8, 2022)

  • v0.2.0(Nov 23, 2022)

    Breaking Changes:

    NOTE: Dashboards, Profiles, Tabs and Profile Sections are now DEPRECATED and will be completely REMOVED in upcoming releases.

    • Deleted NumTargetDriftPreset (use TargetDriftPreset instead)
    • Deleted CatTargetDriftPreset (use TargetDriftPreset instead)

    Renamed Parameters:

    • classification_threshold -> probas_threshold
      this affects: ClassificationQualityMetric, TestAccuracyScore, TestPrecisionScore, TestRecallScore, TestF1Score, TestTPR, TestTNR, TestFPR, TestFNR, TestPrecisionByClass, TestRecallByClass, TestF1ByClass, ClassificationPreset, BinaryClassificationTestPreset

    • threshold -> stattest_threshold
      this affects: ColumnDriftMetric, TestColumnValueDrift, BinaryClassificationTestPreset, BinaryClassificationTopKTestPreset, MulticlassClassificationTestPreset

    • all_features_stattest -> stattest & all_features_threshold -> stattest_threshold
      this affects: DataDriftTable, DatasetDriftMetric, TestNumberOfDriftedColumns, TestShareOfDriftedColumns, DataDriftPreset, TargetDriftPreset, DataDriftTestPreset, NoTargetPerformanceTestPreset

    • cat_features_stattest -> cat_stattest & cat_features_threshold -> cat_stattest_threshold
      this affects: DataDriftTable, DatasetDriftMetric, TestNumberOfDriftedColumns, TestShareOfDriftedColumns, DataDriftPreset, TargetDriftPreset, DataDriftTestPreset, NoTargetPerformanceTestPreset

    • num_features_stattest -> num_stattest & num_features_threshold -> num_stattest_threshold
      this affects: DataDriftTable, DatasetDriftMetric, TestNumberOfDriftedColumns, TestShareOfDriftedColumns, DataDriftPreset, TargetDriftPreset, DataDriftTestPreset, NoTargetPerformanceTestPreset

    • per_feature_stattest -> per_column_stattest & per_feature_threshold -> per_column_stattest_threshold
      this affects: DataDriftTable, DatasetDriftMetric, TestNumberOfDriftedColumns, TestShareOfDriftedColumns, DataDriftPreset, TargetDriftPreset, DataDriftTestPreset, NoTargetPerformanceTestPreset

    Renamed Tests:

    • TestColumnValueDrift  -> TestColumnDrift
    • TestColumnValueRegExp -> TestColumnRegExp 
    • TestValueQuantile -> TestColumnQuantile

    Updates:

    • Added top_error parameter to RegressionErrorBiasTable metric #422
    • Added ClassificationDummyMetric metric #445
    • Added RegressionDummyMetric metric #445
    • Added ConflictPredictionMetric metric #455
    • Added ConflictTargetMetric metric #455

    Added API reference DRAFT https://docs.evidentlyai.com/reference/api-reference

    Added new Statistical Tests:

    • Epps-Singleton test #363
    • Total Variation Distance (TVD) #391

    Fixes:

    #431 #438 #451 #458

    Source code(tar.gz)
    Source code(zip)
  • v0.1.59.dev2(Oct 26, 2022)

    Breaking Changes:

    Metrics Rename:

    • ClassificationQuality -> ClassificationQualityMetric
    • ProbabilityDistribution -> ClassificationProbDistribution

    Tests Rename:

    • TestHighlyCorrelatedFeatures -> TestHighlyCorrelatedColumns
    • TestFeatureValueMin -> TestColumnValueMin
    • TestFeatureValueMax -> TestColumnValueMax
    • TestFeatureValueMean -> TestColumnValueMean
    • TestFeatureValueMedian -> TestColumnValueMedian
    • TestFeatureValueStd -> TestColumnValueStd
    • TestNumberOfDriftedFeatures -> TestNumberOfDriftedColumns
    • TestShareOfDriftedFeatures -> TestShareOfDriftedColumns
    • TestFeatureValueDrift -> TestColumnValueDrift

    Updates: #400 #373

    Fixes: #371 #421 #419 #418 #417 #416 #415

    Source code(tar.gz)
    Source code(zip)
  • v0.1.59.dev0(Oct 24, 2022)

    Breaking Changes:

    All Test Presets were renamed.
    TestPreset suffix was added to original names:

    • NoTargetPerformance -> NoTargetPerformanceTestPreset
    • DataQuality -> DataQualityTestPreset
    • DataStability -> DataStabilityTestPreset
    • DataDrift -> DataDriftTestPreset
    • Regression -> RegressionTestPreset
    • MulticlassClassification -> MulticlassClassificationTestPreset
    • BinaryClassificationTopK -> BinaryClassificationTopKTestPreset
    • BinaryClassification -> BinaryClassificationTestPreset

    Updates:

    Added DataDrift metrics:

    • DatasetDriftMetric
    • DataDriftTable
    • ColumnValuePlot
    • TargetByFeaturesTable

    Added DataQuality metrics:

    • ColumnDistributionMetric
    • ColumnQuantileMetric
    • ColumnCorrelationsMetric
    • ColumnValueListMetric
    • ColumnValueRangeMetric
    • DatasetCorrelationsMetric

    Added DataIntegrity metrics:

    • ColumnSummaryMetric
    • ColumnMissingValuesMetric
    • DatasetSummaryMetric
    • DatasetMissingValuesMetric

    Added Classification metrics:

    • ClassificationQuality
    • ClassificationClassBalance
    • ClassificationConfusionMatrix
    • ClassificationQualityByClass
    • ClassificationClassSeparationPlot
    • ProbabilityDistribution
    • ClassificationRocCurve
    • ClassificationPRCurve
    • ClassificationPRTable
    • ClassificationQualityByFeatureTable

    Added Regression metrics:

    • RegressionQualityMetric
    • RegressionPredictedVsActualScatter
    • RegressionPredictedVsActualPlot
    • RegressionErrorPlot
    • RegressionAbsPercentageErrorPlot
    • RegressionErrorDistribution
    • RegressionErrorNormality
    • RegressionTopErrorMetric
    • RegressionErrorBiasTable

    Added MetricPresets:

    • DataDriftPreset
    • DataQualityPreset
    • RegressionPreset
    • ClassificationPreset

    Added New Statistical Tests

    • Anderson-Darling test for numerical features
    • Cramer Von Mises test for numerical features
    • Hellinger distance test for numerical and categorical features
    • Mann-Whitney U-rank test for numerical features
    • Cressie-Read power divergence test for categorical features

    Fixes: #334 #353 #361 #367

    Source code(tar.gz)
    Source code(zip)
  • v0.1.58.dev0(Sep 30, 2022)

    Updates:

    • Replaced BaseWidgetInfo with helpers: https://github.com/evidentlyai/evidently/pull/326
    • Added metrics generator for column-based metrics: https://github.com/evidentlyai/evidently/pull/323
    • Added black and isort: https://github.com/evidentlyai/evidently/pull/322

    Fixes:

    • https://github.com/evidentlyai/evidently/pull/340
    • https://github.com/evidentlyai/evidently/pull/341
    • https://github.com/evidentlyai/evidently/pull/336
    Source code(tar.gz)
    Source code(zip)
  • v0.1.57.dev0(Sep 7, 2022)

    Updates:

    • Introduced Report - an object that unites the Dashboard and Profile functionality
    • Introduced MetricPreset - an object that replaces Tab and ProfileSection
    • Implemented the following MetricPresets: DataDrift, DataQuality (limited content), CatTargetDrift, NumTargetDrift, RegressionPerformance, ClassificationPerformance

    Fixes:

    • #312
    Source code(tar.gz)
    Source code(zip)
  • v0.1.56.dev0(Aug 16, 2022)

    Updates:

    • Implemented function generate_column_tests() to generate similar tests for many columns automatically

    Dataset Null-related tests

    • Implemented TestNumberOfNulls to replace TestNumberOfNANs and TestNumberOfNullValues
    • Implemented TestShareOfNulls
    • Implemented TestShareOfColumnsWithNulls
    • Implemented TestShareOfRowsWithNulls
    • Implemented TestNumberOfDifferentNulls

    Column Null-related tests

    • Implemented TestColumnNumberOfNulls to replace TestColumnNumberOfNullValues
    • Implemented TestColumnShareOfNulls to replace TestColumnNANShare

    Fixes:

    • Fixed metric duplication to reduce the amount of calculations while building TestSuites (basically, the same metrics from one test suite are no longer recalculated multiple times)
    • Implemented NaN filtering for all dashboards in such a way that each column is filtered separately
    Source code(tar.gz)
    Source code(zip)
  • v0.1.55.dev0(Aug 5, 2022)

    Updates:

    • added TPR, TNR, FPR, FNR Tests for Binary Classification Model Performance
    • Renamed status "No Group" to "Dataset-level tests" in TestSuites filtering menu

    Fixes:

    • #207
    • #265
    • #256
    • fixed unit tests for different versions of python and pandas
    Source code(tar.gz)
    Source code(zip)
  • v0.1.54.dev0(Jul 29, 2022)

    Updates:

    1. Updated the UI to let users group tests by the following properties:
    • All tests
    • By Status
    • By feature
    • By test type
    • By test group
    2. New Tests:
    • Added tests for binary probabilistic classification models
    • Added tests for multiclass classification models
    • Added tests for multiclass probabilistic classification models. The full list of tests will be available in the docs.
    3. New Tests Presets:
    • Regression
    • MulticlassClassification
    • BinaryClassificationTopK
    • BinaryClassification
    Source code(tar.gz)
    Source code(zip)
  • v0.1.53.dev0(Jul 19, 2022)

    • added default configurations for Data Quality Tests
    • added default configurations for Data Integrity Tests
    • added visualisation for Data Quality Tests
    • added visualisation for Data Integrity Tests
    • Test descriptions are updated (column names are highlighted)
    Source code(tar.gz)
    Source code(zip)
  • v0.1.52.dev0(Jul 7, 2022)

    Implemented new interfaces to test data and models in a batch: Test Suite.

    Implemented the following Individual tests:

    • TestNumberOfColumns()
    • TestNumberOfRows()
    • TestColumnNANShare()
    • TestShareOfOutRangeValues()
    • TestNumberOfOutListValues()
    • TestMeanInNSigmas()
    • TestMostCommonValueShare()
    • TestNumberOfConstantColumns()
    • TestNumberOfDuplicatedColumns()
    • TestNumberOfDuplicatedRows()
    • TestHighlyCorrelatedFeatures()
    • TestTargetFeaturesCorrelations()
    • TestShareOfDriftedFeatures()
    • TestValueDrift()
    • TestColumnsType()

    Implemented the following test presets:

    • Data Quality. This preset is focused on data quality issues like duplicate rows or null values.
    • Data Stability. This preset identifies changes in the data or differences between the batches.
    • Data Drift. This one compares feature distributions using statistical tests and distance metrics.
    • NoTargetPerformance. This preset combines several checks to run when there are model predictions but no actuals or ground truth labels. This includes checking for prediction drift and some of the data quality and stability checks.
    Source code(tar.gz)
    Source code(zip)
  • v0.1.51.dev0(May 31, 2022)

    Updates:

    • Updated DataDriftTab: added target and prediction rows in DataDrift Table widget
    • Updated CatTargetDriftTab: added additional widgets for probabilistic cases in both binary and multiclass probabilistic classification, particularly widgets for label drift and class probability distributions.

    Fixes:

    • #233
    • fixed previews in the DataDrift Table widget. Now histogram previews for reference and current data share an x-axis. This means that the bin order in the reference and current histograms is the same, which makes visual distribution comparison easier.
    Source code(tar.gz)
    Source code(zip)
  • v0.1.50.dev0(May 19, 2022)

    Release scope:

    1. Stat test auto selection algorithm update: https://docs.evidentlyai.com/reports/data-drift#how-it-works

    For small data with <= 1000 observations in the reference dataset:

    • For numerical features (n_unique > 5): two-sample Kolmogorov-Smirnov test.
    • For categorical features or numerical features with n_unique <= 5: chi-squared test.
    • For binary categorical features (n_unique <= 2), we use the proportion difference test for independent samples based on Z-score. All tests use a 0.95 confidence level by default.

    For larger data with > 1000 observations in the reference dataset:

    2. Added options for setting a custom statistical test for the Categorical and Numerical Target Drift Dashboard/Profile: cat_target_stattest_func defines a custom statistical test to detect target drift in CatTargetDrift; num_target_stattest_func defines a custom statistical test to detect target drift in NumTargetDrift.

    3. Added options for setting a custom threshold for drift detection for the Categorical and Numerical Target Drift Dashboard/Profile: cat_target_threshold: Optional[float] = None, num_target_threshold: Optional[float] = None. These thresholds depend heavily on the selected stattest; generally it is either a threshold for p_value or a threshold for a distance.

    Fixes:
    #207

    Source code(tar.gz)
    Source code(zip)
  • v0.1.49.dev0(Apr 30, 2022)

    StatTests. The following statistical tests can now be used for both numerical and categorical features:

    • 'jensenshannon'
    • 'kl_div'
    • 'psi'

    Grafana monitoring example

    • Updated the example to be used with several ML models
    • Added monitors for NumTargetDrift, CatTargetDrift
    Source code(tar.gz)
    Source code(zip)
  • v0.1.48.dev0(Apr 13, 2022)

    Colour Scheme. Support for custom colours in the Dashboards:

    • primary_color
    • secondary_color
    • current_data_color
    • reference_data_color
    • color_sequence
    • fill_color
    • zero_line_color
    • non_visible_color
    • underestimation_color
    • overestimation_color
    • majority_color

    Statistical Tests. Support for user-implemented statistical tests and for custom statistical tests in Dashboards and Profiles. Available tests:

    • 'ks'
    • 'z'
    • 'chisquare'
    • 'jensenshannon'
    • 'kl_div'
    • 'psi'
    • 'wasserstein' (more info: docs)

    Fixes: #193

    Source code(tar.gz)
    Source code(zip)
  • v0.1.47.dev0(Mar 23, 2022)

    Custom Text Comments in Dashboards

    • Added type="text" for BaseWidgetInfo (for text widget implementation)
    • Markdown syntax is supported

    see the example: https://github.com/evidentlyai/evidently/blob/main/examples/how_to_questions/text_widget_usage_iris.ipynb

    Source code(tar.gz)
    Source code(zip)
  • v0.1.46.dev0(Mar 12, 2022)

    • Data Quality Dashboard: add dataset overview widget
    • Data Quality Dashboard: add correlations widget
    • Sped up uploading via preview plot optimisation
    • Paging in Data Quality feature table widget
    Source code(tar.gz)
    Source code(zip)
  • v0.1.45.dev0(Feb 22, 2022)

    • DataQualityTab() is now available for Dashboards! The Tab helps to observe data columns, explore their properties and compare datasets.
    • DataQualityProfileSection() is available for Profiles as well.
    • ColumnMapping update: added a task parameter to specify the type of machine learning problem. This parameter is used by DataQualityAnalyzer and some data quality widgets to calculate metrics and visuals with respect to the task.
    Source code(tar.gz)
    Source code(zip)
  • v0.1.44.dev0(Feb 14, 2022)

    • Added monitors for NumTargetDrift, CatTargetDrift, ClassificationPerformance, ProbClassificationPerformance
    • Fixed RegressionPerformance monitor
    • Supported strings as categorical features in DataDrift and CatTargetDrift dashboards
    • Supported boolean features in DataDrift dashboard
    Source code(tar.gz)
    Source code(zip)
  • v0.1.43.dev0(Jan 31, 2022)

    Analyzers Refactoring: analyzer result became a structured object instead of a dictionary for all Analyzers

    The following Quality Metrics Options are added:

    • conf_interval_n_sigmas (the width of confidence intervals ): int = DEFAULT_CONF_INTERVAL_SIZE
    • classification_treshold (the threshold for true labels): float = DEFAULT_CLASSIFICATION_TRESHOLD
    • cut_quantile (cut the data by right, left and two-sided quantiles): Union[None, Tuple[str, float], Dict[str, Tuple[str, float]]] = None
    Source code(tar.gz)
    Source code(zip)
  • v0.1.42.dev0(Jan 24, 2022)

    Added backward compatibility for imports:

    • Widgets and Tabs can be imported from evidently directly, but this is deprecated behavior and causes a warning
    • Sections can be imported from evidently directly, but this is deprecated behavior and causes a warning
    Source code(tar.gz)
    Source code(zip)
  • v0.1.41.dev0(Jan 19, 2022)

    • Library source code is moved to the src/evidently folder
    • Docs, Tests, and Examples are moved to the top level of the repo
    • Widgets and Tabs are moved inside of the src/evidently/dashboard folder, as those are parts of the Dashboard
    • Sections are moved inside of the src/evidently/model_profile folder, as those are parts of the Model_profiles
    • Docs are stored in the repo: docs/book folder
    • DataDriftAnalyzer refactoring: analyzer results became a structured object instead of a dictionary
    Source code(tar.gz)
    Source code(zip)
  • v0.1.40.dev0(Dec 30, 2021)

    • fixed: input DataFrames cannot be changed during any calculations (fixed by making shallow copies)
    • fixed: chi-square statistical test uses normalized frequencies (with respect to the latest scipy version)
    • current dataset is optional for Performance Tabs and Sections calculation (a None value can be passed)
    • improved readme
    Source code(tar.gz)
    Source code(zip)
  • v0.1.39.dev0(Dec 23, 2021)

    Data Drift Options:

    • Created confidence: Union[float, Dict[str, float]] - the option can take a float or a dict as an argument. If a float is passed, this confidence level will be used for all features. If a dict is passed, the specified features will have custom confidence levels (all the rest will have the default confidence level = 0.95)
    • Updated nbinsx: Union[int, Dict[str, int]] - the option can take an int or a dict as an argument. If an int is passed, this number of bins will be used for all features. If a dict is passed, the specified features will have a custom number of bins (all the rest will have the default number of bins = 10)
    • Updated feature_stattest_func: Union[None, Callable, Dict[str, Callable]] - the option can take a function or a dict as an argument. If a function is passed, it will be used to measure drift for all features. If a dict is passed, custom functions will be used for the specified features (all the rest will be processed by the internal drift-measurement algorithm)

    Package building:

    • Fixed dependencies
    Source code(tar.gz)
    Source code(zip)
  • v0.1.35.dev0(Dec 9, 2021)

    • Support widget ordering for the include_widgets parameter
    • Support the ability to add a custom widget to Tabs with the include_widgets parameter
    • Moved options to a separate module
    • Added options to specify statistical tests for the DataDrift and TargetDrift Dashboards: stattest_func to set a custom statistical test for all the features; feature_stattest_func to set custom statistical tests for individual features; cat_target_stattest_func to set a custom statistical test for a categorical target; num_target_stattest_func to set a custom statistical test for a numerical target
    • Refactored Widgets and Tabs for simpler customisation
    Source code(tar.gz)
    Source code(zip)
  • v0.1.33.dev0(Dec 1, 2021)

    • Supported a custom list of Widgets for Tabs in a Dashboard with the help of the verbose_level and include_widgets parameters
    • Added parameter verbose_level: 0 - to create a Tab with the shortest list of Widgets, 1 - to create a full Tab
    • Added parameter include_widgets: ["Widget Name 1", "Widget Name 2", etc]. This parameter overwrites verbose_level (if both are specified) and allows setting a custom list of Widgets
    • Added the Tab.list_widgets() method to list all the available Widgets for the current Tab
    • Created the Options entity to specify customisable settings for Widgets and Tabs
    • Created the ColumnMapping entity to replace the column_mapping python dictionary
    Source code(tar.gz)
    Source code(zip)
  • v0.1.32.dev0(Nov 25, 2021)

  • v0.1.31.dev0(Nov 21, 2021)

  • v0.1.30.dev0(Nov 12, 2021)

    1. Supported dashboard visualization in Google Colab
    2. Supported dashboard visualization in python Pylab
    3. Added a parameter mode for dashboard.show(), which can take the following options:
    • auto - the default option. Ideally, you will not need to specify the value for mode and use the default. But, if it does not work (in case we failed to determine the environment automatically), consider setting the correct value explicitly.
    • nbextension - to show the UI using nbextension. Use this option to display dashboards in jupyter notebooks (should work automatically).
    • inline - to insert the UI directly into the cell. Use this option for Google Colab, Kaggle Kernels and Deepnote. For Google Colab this should work automatically, for Kaggle Kernels and Deepnote option should be specified explicitly.
    Source code(tar.gz)
    Source code(zip)
  • v0.1.28.dev0(Nov 10, 2021)

    • Supported dashboard visualization in Google Colab
    • Supported dashboard visualization in python Pylab
    • Added a parameter to switch on pylab visualization model: dashboard.show(mode='pylab')
    Source code(tar.gz)
    Source code(zip)