cuML - RAPIDS Machine Learning Library

Overview

cuML is a suite of libraries that implement machine learning algorithms and mathematical primitive functions that share compatible APIs with other RAPIDS projects.

cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from scikit-learn.

For large datasets, these GPU-based implementations can complete 10-50x faster than their CPU equivalents. For details on performance, see the cuML Benchmarks Notebook.

As an example, the following Python snippet loads input and computes DBSCAN clusters, all on GPU, using cuDF:

import cudf
from cuml.cluster import DBSCAN

# Create and populate a GPU DataFrame
gdf_float = cudf.DataFrame()
gdf_float['0'] = [1.0, 2.0, 5.0]
gdf_float['1'] = [4.0, 2.0, 1.0]
gdf_float['2'] = [4.0, 2.0, 1.0]

# Setup and fit clusters
dbscan_float = DBSCAN(eps=1.0, min_samples=1)
dbscan_float.fit(gdf_float)

print(dbscan_float.labels_)

Output:

0    0
1    1
2    2
dtype: int32
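
Because cuML's estimator API mirrors scikit-learn, as noted above, the same clustering step also works directly on host NumPy input. The snippet below is a minimal sketch of that usage with the same three points; the variable names are illustrative, and the estimator copies the host array to the GPU internally:

import numpy as np
from cuml.cluster import DBSCAN

# The same three points as the cuDF example above, expressed as a NumPy array
X = np.array([[1.0, 4.0, 4.0],
              [2.0, 2.0, 2.0],
              [5.0, 1.0, 1.0]], dtype=np.float32)

dbscan = DBSCAN(eps=1.0, min_samples=1)
labels = dbscan.fit_predict(X)
print(labels)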

cuML also features multi-GPU and multi-node-multi-GPU operation, using Dask, for a growing list of algorithms. The following Python snippet reads input from a CSV file and performs a NearestNeighbors query across a cluster of Dask workers, using multiple GPUs on a single node:

Initialize a LocalCUDACluster configured with UCX for fast transport of CUDA arrays:

# Initialize UCX for high-speed transport of CUDA arrays
from dask_cuda import LocalCUDACluster

# Create a Dask single-node CUDA cluster w/ one worker per device
cluster = LocalCUDACluster(protocol="ucx",
                           enable_tcp_over_ucx=True,
                           enable_nvlink=True,
                           enable_infiniband=False)

Load data and perform k-Nearest Neighbors search. cuml.dask estimators also support Dask.Array as input:

from dask.distributed import Client
client = Client(cluster)

# Read CSV file in parallel across workers
import dask_cudf
df = dask_cudf.read_csv("/path/to/csv")

# Fit a NearestNeighbors model and query it
from cuml.dask.neighbors import NearestNeighbors
nn = NearestNeighbors(n_neighbors=10, client=client)
nn.fit(df)
neighbors = nn.kneighbors(df)
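
As mentioned above, cuml.dask estimators also accept Dask arrays. The following is a minimal sketch of that path, assuming CuPy-backed chunks; the shapes, chunk sizes, and variable names are illustrative and not taken from this document:

import cupy as cp
import dask.array as da

# Build a random Dask array and convert each chunk to a CuPy (GPU) array,
# which is the device-array form the Dask estimators operate on
X = da.random.random((10000, 16), chunks=(2500, 16)).map_blocks(cp.asarray)

nn_arr = NearestNeighbors(n_neighbors=10, client=client)
nn_arr.fit(X)
distances, indices = nn_arr.kneighbors(X)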

For additional examples, browse our complete API documentation, or check out our example walkthrough notebooks. Finally, you can find complete end-to-end examples in the notebooks-contrib repo.

Supported Algorithms

Algorithms are grouped by category below; where applicable, multi-GPU support notes follow the algorithm name.

Clustering
  Density-Based Spatial Clustering of Applications with Noise (DBSCAN) - multi-node multi-GPU via Dask
  K-Means - multi-node multi-GPU via Dask

Dimensionality Reduction
  Principal Components Analysis (PCA) - multi-node multi-GPU via Dask
  Incremental PCA
  Truncated Singular Value Decomposition (tSVD) - multi-node multi-GPU via Dask
  Uniform Manifold Approximation and Projection (UMAP) - multi-node multi-GPU inference via Dask
  Random Projection
  t-Distributed Stochastic Neighbor Embedding (TSNE)

Linear Models for Regression or Classification
  Linear Regression (OLS) - multi-node multi-GPU via Dask
  Linear Regression with Lasso or Ridge Regularization - multi-node multi-GPU via Dask
  ElasticNet Regression
  LARS Regression (experimental)
  Logistic Regression - multi-node multi-GPU via Dask-GLM demo
  Naive Bayes - multi-node multi-GPU via Dask
  Stochastic Gradient Descent (SGD), Coordinate Descent (CD), and Quasi-Newton (QN) solvers (including L-BFGS and OWL-QN) for linear models

Nonlinear Models for Regression or Classification
  Random Forest (RF) Classification - experimental multi-node multi-GPU via Dask
  Random Forest (RF) Regression - experimental multi-node multi-GPU via Dask
  Inference for decision tree-based models - Forest Inference Library (FIL)
  K-Nearest Neighbors (KNN) Classification - multi-node multi-GPU via Dask+UCX; uses Faiss for the nearest-neighbors query
  K-Nearest Neighbors (KNN) Regression - multi-node multi-GPU via Dask+UCX; uses Faiss for the nearest-neighbors query
  Support Vector Machine Classifier (SVC)
  Epsilon-Support Vector Regression (SVR)

Time Series
  Holt-Winters Exponential Smoothing
  Auto-regressive Integrated Moving Average (ARIMA) - supports seasonality (SARIMA)

Model Explanation
  SHAP Kernel Explainer - based on SHAP (experimental)
  SHAP Permutation Explainer - based on SHAP (experimental)

Other
  K-Nearest Neighbors (KNN) Search - multi-node multi-GPU via Dask+UCX; uses Faiss for the nearest-neighbors query
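
Most of the single-GPU estimators listed above follow the same scikit-learn-style fit/predict pattern shown earlier. As a hedged sketch, a Random Forest classifier from the list can be used as follows; the data and hyperparameters are illustrative assumptions, not values from this document:

import numpy as np
from cuml.ensemble import RandomForestClassifier

# Small synthetic dataset; cuML estimators accept host NumPy input as well as GPU arrays
X = np.random.random((1000, 10)).astype(np.float32)
y = (X[:, 0] > 0.5).astype(np.int32)

clf = RandomForestClassifier(n_estimators=40, max_depth=8)
clf.fit(X, y)
preds = clf.predict(X)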

Installation

See the RAPIDS Release Selector for the command-line instructions to install either nightly or official release cuML packages via Conda or Docker.

Build/Install from Source

See the build guide.

Contributing

Please see our guide for contributing to cuML.

References

The RAPIDS team has a number of blogs with deeper technical dives and examples. You can find them here on Medium.

For additional details on the technologies behind cuML, as well as a broader overview of the Python Machine Learning landscape, see Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence (2020) by Sebastian Raschka, Joshua Patterson, and Corey Nolet.

Please consider citing this when using cuML in a project. You can use the citation BibTeX:

@article{raschka2020machine,
  title={Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence},
  author={Raschka, Sebastian and Patterson, Joshua and Nolet, Corey},
  journal={arXiv preprint arXiv:2002.04803},
  year={2020}
}

Contact

Find out more details on the RAPIDS site.

Open GPU Data Science

The RAPIDS suite of open source software libraries aims to enable execution of end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.

Comments
  • [QST] Best practices to work with big datasets in RandomForestRegressor

    What is your question?

    Hi, I have a sample script here that reads in a DataFrame of 500,000,000 rows and 20 columns. This is just an example to mimic some real data that is being used.

    In this example, the system is hitting GPU out-of-memory (OOM) errors.

    import numpy as np
    import pandas as pd
    from cuml.dask.common import utils as dask_utils
    from dask.distributed import Client, wait
    from dask_cuda import LocalCUDACluster
    import dask_cudf
    import dask.dataframe as dd
    from cuml.dask.ensemble import RandomForestRegressor as cumlDaskRF

    # One worker per GPU on this node
    cluster = LocalCUDACluster()
    c = Client(cluster)

    workers = c.has_what().keys()
    n_workers = len(workers)
    n_streams = 8
    n_partitions = n_workers

    # Desired parameters
    max_depth = 50
    n_trees = 100
    rows, cols = 500000000, 20

    cols_names = ["C{}".format(i) for i in range(1, cols + 1)]
    cols_names_train = list(cols_names)   # copy so the original column list is not mutated
    cols_names_train.remove('C2')         # 'C2' is used as the target column

    # Generate fake data for example's sake
    x = np.random.random((rows, cols))
    df = pd.DataFrame(x, columns=cols_names)

    # Convert to a dask_cudf DataFrame and persist the train/target splits on the workers
    df_dask = dask_cudf.from_dask_dataframe(dd.from_pandas(df, npartitions=n_partitions))
    X_train_dask, y_train_dask = dask_utils.persist_across_workers(
        c, [df_dask[cols_names_train], df_dask['C2']], workers=workers)

    cuml_model = cumlDaskRF(max_depth=max_depth, n_estimators=n_trees, n_streams=n_streams, workers=workers)
    cuml_model.fit(X_train_dask, y_train_dask)

    wait(cuml_model.rfs)
    
    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
     in 
          1 cuml_model = cumlDaskRF(max_depth=max_depth, n_estimators=n_trees,n_streams=n_streams,workers = workers)
    ----> 2 cuml_model.fit(X_train_dask, y_train_dask)
          3 
          4 wait(cuml_model.rfs)
    
    /opt/conda/lib/python3.7/site-packages/cuml/dask/ensemble/randomforestregressor.py in fit(self, X, y)
        362 
        363         wait(futures)
    --> 364         raise_exception_from_futures(futures)
        365 
        366         return self
    
    /opt/conda/lib/python3.7/site-packages/cuml/dask/common/utils.py in raise_exception_from_futures(futures)
        139     if errs:
        140         raise RuntimeError("%d of %d worker jobs failed: %s" % (
    --> 141             len(errs), len(futures), ", ".join(map(str, errs))
        142             ))
        143 
    
    RuntimeError: 16 of 16 worker jobs failed: Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c2c01339e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c2c013eb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3abb8fcf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3abb9af757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3abbb3da60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3abbb224a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c21104446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c21104b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c4e01339e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c4e013eb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3ac38fcf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3ac39af757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3ac3b3da60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3ac3b224a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c45304446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c45304b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c5a66339e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c5a663eb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3aa759cf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3aa764f757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3aa77dda60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3aa77c24a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c472a2446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c472a2b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c3971639e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c39716eb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3ac38fcf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3ac39af757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3ac3b3da60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3ac3b224a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c39318446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c39318b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c5565539e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c55655eb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3aab8fcf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3aab9af757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3aabb3da60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3aabb224a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c430c3446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c430c3b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c5ce6839e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c5ce68eb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3aaf59cf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3aaf64f757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3aaf7dda60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3aaf7c24a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c51325446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c51325b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c576ce39e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c576ceeb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3abb59cf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3abb64f757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3abb7dda60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3abb7c24a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c57310446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c57310b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c52e6339e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c52e63eb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3aaf59cf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3aaf64f757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3aaf7dda60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3aaf7c24a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c452b7446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c452b7b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c5f20939e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c5f209eb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3abb8fcf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3abb9af757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3abbb3da60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3abbb224a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c5b291446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c5b291b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c54e3939e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c54e39eb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3ac38fcf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3ac39af757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3ac3b3da60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3ac3b224a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c49299446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c49299b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c5b60c39e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c5b60ceb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3ab38fcf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3ab39af757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3ab3b3da60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3ab3b224a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c57248446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c57248b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c5840b39e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c5840beb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3aa759cf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3aa764f757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3aa77dda60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3aa77c24a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c48637446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c48637b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c5ce6b39e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c5ce6beb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3ab359cf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3ab364f757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3ab37dda60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3ab37c24a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c43268446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c43268b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3c5b66239e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3c5b662eb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3aa759cf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3aa764f757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3aa77dda60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3aa77c24a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3c49ff1446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3c49ff1b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    Obtained 29 stack frames
    #0 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9Exception16collectCallStackEv+0x3e) [0x7f3bcf61f39e]
    #1 in /opt/conda/lib/python3.7/site-packages/cuml/utils/pointer_utils.cpython-37m-x86_64-linux-gnu.so(_ZN8MLCommon9ExceptionC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x80) [0x7f3bcf61feb0]
    #2 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN8MLCommon22defaultDeviceAllocator8allocateEmP11CUstream_st+0x102) [0x7f3ac38fcf52]
    #3 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN15TemporaryMemoryIddE17LevelMemAllocatorEiifiiiib+0x1247) [0x7f3ac39af757]
    #4 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML11rfRegressorIdE3fitERKNS_10cumlHandleEPKdiiPdRPNS_20RandomForestMetaDataIddEE+0x900) [0x7f3ac3b3da60]
    #5 in /opt/conda/lib/python3.7/site-packages/cuml/utils/../../../../libcuml++.so(_ZN2ML3fitERKNS_10cumlHandleERPNS_20RandomForestMetaDataIddEEPdiiS7_NS_9RF_paramsE+0x204) [0x7f3ac3b224a4]
    #6 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28446) [0x7f3bc46d6446]
    #7 in /opt/conda/lib/python3.7/site-packages/cuml/ensemble/randomforestregressor.cpython-37m-x86_64-linux-gnu.so(+0x28b1b) [0x7f3bc46d6b1b]
    #8 in /opt/conda/bin/python(_PyObject_FastCallKeywords+0x49b) [0x55d29bc3e8fb]
    #9 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x52f8) [0x55d29bca26e8]
    #10 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #11 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #12 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #13 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #14 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #15 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #16 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #17 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x1e42) [0x55d29bc9f232]
    #18 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #19 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #20 in /opt/conda/bin/python(_PyFunction_FastCallKeywords+0xfb) [0x55d29bc35ccb]
    #21 in /opt/conda/bin/python(_PyEval_EvalFrameDefault+0x6a3) [0x55d29bc9da93]
    #22 in /opt/conda/bin/python(_PyFunction_FastCallDict+0x10b) [0x55d29bbe756b]
    #23 in /opt/conda/bin/python(_PyObject_Call_Prepend+0x63) [0x55d29bc05e53]
    #24 in /opt/conda/bin/python(PyObject_Call+0x6e) [0x55d29bbf8dbe]
    #25 in /opt/conda/bin/python(+0x223817) [0x55d29bcf5817]
    #26 in /opt/conda/bin/python(+0x1de788) [0x55d29bcb0788]
    #27 in /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f3c79bb26db]
    #28 in /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f3c798db88f]
    , Exception occured! file=/conda/conda-bld/libcuml_1583811942451/work/cpp/include/cuml/common/cuml_allocator.hpp line=109: FAIL: call='cudaMalloc(&ptr, n)'. Reason:out of memory
    
    
    bug inactive-30d inactive-90d 
    opened by nikiforov-sm 64
  • [QST] Best practices to achieve greater max_depth and n_trees parameters in RandomForestRegressor

    [QST] Best practices to achieve greater max_depth and n_trees parameters in RandomForestRegressor

    What is your question?

    Hi, I have a sample script here that reads in a DataFrame of 10k rows and 74 columns. This is just a toy example meant to mimic some real data that is being used.

    The desire is to use large values for max_depth / n_trees on something like a DGX-1 / DGX-2, but even on this toy example the system is hitting GPU OOM errors.

    import numpy as np
    import sklearn
    
    import pandas as pd
    import cudf
    import cuml
    
    from sklearn import model_selection, datasets
    
    from cuml.dask.common import utils as dask_utils
    from dask.distributed import Client, wait
    from dask_cuda import LocalCUDACluster
    import dask_cudf
    
    from sklearn.metrics import mean_squared_error
    from cuml.dask.ensemble import RandomForestRegressor as cumlDaskRF
    from sklearn.ensemble import RandomForestRegressor as sklRF
    
    if __name__ == '__main__':
        # Desired parameters
        max_depth = 20
        n_trees = 30
        rows, cols = 10000, 74
    
        cluster = LocalCUDACluster(threads_per_worker=1)
        if 'c' in globals():
            c.close()
        c = Client(cluster)
    
        workers = c.has_what().keys()
        n_workers = len(workers)
        n_streams = 8
    
        # Generate fake data for example's sake
        x = np.random.random((rows, cols))
        df = pd.DataFrame(x, columns=["C{}".format(i) for i in range(cols)])
    
        X = df.drop(['C2'], axis=1).to_numpy().astype(np.float32)
        y = df['C2'].astype(np.float32)
        X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2)
    
        n_partitions = n_workers
        X_train_cudf = cudf.DataFrame.from_pandas(pd.DataFrame(X_train))
        y_train_cudf = cudf.Series(y_train)
        X_train_dask = dask_cudf.from_cudf(X_train_cudf, npartitions=n_partitions)
        y_train_dask = dask_cudf.from_cudf(y_train_cudf, npartitions=n_partitions)
        X_train_dask, y_train_dask = \
          dask_utils.persist_across_workers(c, [X_train_dask, y_train_dask], workers=workers)
    
        skl_model = sklRF(max_depth=max_depth, n_estimators=n_trees, n_jobs=-1)
        skl_model.fit(X_train, y_train)
    
        cuml_model = cumlDaskRF(max_depth=max_depth, n_estimators=n_trees,
                                n_streams=n_streams,
                                workers = workers
                               )
        cuml_model.fit(X_train_dask, y_train_dask)
    
        wait(cuml_model.rfs)
    
        skl_y_pred = skl_model.predict(X_test)
        print("SKLearn accuracy:  ", mean_squared_error(y_test, skl_y_pred))
    
        cuml_y_pred = cuml_model.predict(X_test)
        print("CuML accuracy:     ", mean_squared_error(y_test, cuml_y_pred))                                                                                             
    

    The goal is to use parameters such as these on large datasets:

        max_depth = 20
        n_trees = 30
    

    Are there any tips/tricks that can be done here to better manage the memory to work with large datasets without running OOM?
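
    For what it's worth, here is a minimal sketch (not an official recommendation) of the kind of knobs that are typically tried first to lower peak GPU memory in this scenario. The LocalCUDACluster RMM arguments and the n_streams / n_bins parameters are assumptions based on current dask-cuda / cuML releases, so check the versions you have installed:

    from dask.distributed import Client
    from dask_cuda import LocalCUDACluster
    from cuml.dask.ensemble import RandomForestRegressor as cumlDaskRF

    # Give each worker an RMM pool and allow managed-memory oversubscription
    # (assumed dask-cuda arguments; adjust the pool size to your GPUs).
    cluster = LocalCUDACluster(threads_per_worker=1,
                               rmm_pool_size="24GB",
                               rmm_managed_memory=True)
    client = Client(cluster)

    # Fewer concurrent tree builds per GPU and coarser histograms shrink the
    # temporary buffers allocated during fit.
    cuml_model = cumlDaskRF(max_depth=20, n_estimators=30,
                            n_streams=1, n_bins=16)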

    question ? - Needs Triage 
    opened by rmccorm4 44
  • [BUG] RMM-only context destroyed error with Random Forest in loop

    [BUG] RMM-only context destroyed error with Random Forest in loop

    It seems we may have an RMM-only memory leak with RandomForestRegressor. This could come up in a wide range of workloads, such as using RandomForestRegressor with RMM during hyper-parameter optimization.

    In the following example:

    • Without an RMM pool, repeatedly fitting the model, predicting, and deleting the model/predictions causes peak memory of 1.2GB
    • With an RMM pool, repeatedly fitting the model, predicting, and deleting the model/predictions causes memory to grow uncontrollably. This can be triggered by uncommenting the RMM-related lines below. After 15-17 iterations, the entire 5 GB pool is exhausted.

    Is it possible there is a place where RMM isn't getting visibility of a call to free memory?

    import cudf
    import cuml
    import rmm
    import cupy as cp
    from dask.utils import parse_bytes
    from sklearn.datasets import make_regression
    
    # cudf.set_allocator(pool=True, initial_pool_size=parse_bytes("5GB"))
    # cp.cuda.set_allocator(rmm.rmm_cupy_allocator)
    
    NFEATURES = 20
    
    X, y = make_regression(
        n_samples=10000,
        n_features=NFEATURES,
        random_state=12,
    )
    
    X = X.astype("float32")
    X = cp.asarray(X)
    y = cp.asarray(y)
    
    for i in range(30):
        print(i)
        clf = cuml.ensemble.RandomForestRegressor(n_estimators=50)
        clf.fit(X, y)
        preds = clf.predict(X)
        del clf, preds
    

    Environment: 2020-07-31 nightly at ~ 9AM EDT
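
    To make the growth visible per iteration, one rough instrumentation sketch is to route allocations through RMM's statistics adaptor and print the outstanding allocations after each del. StatisticsResourceAdaptor and its allocation_counts attribute are assumed to be available in the installed RMM version:

    import cupy as cp
    import rmm
    import cuml
    from sklearn.datasets import make_regression

    # Pool resource wrapped in a statistics adaptor so RMM reports what it
    # still considers allocated.
    pool = rmm.mr.PoolMemoryResource(rmm.mr.CudaMemoryResource(),
                                     initial_pool_size=5 * 2**30)
    stats = rmm.mr.StatisticsResourceAdaptor(pool)
    rmm.mr.set_current_device_resource(stats)
    cp.cuda.set_allocator(rmm.rmm_cupy_allocator)

    X, y = make_regression(n_samples=10000, n_features=20, random_state=12)
    X = cp.asarray(X, dtype="float32")
    y = cp.asarray(y, dtype="float32")

    for i in range(10):
        clf = cuml.ensemble.RandomForestRegressor(n_estimators=50)
        clf.fit(X, y)
        preds = clf.predict(X)
        del clf, preds
        # If this keeps climbing after the model is deleted, something is not
        # being freed back to RMM.
        print(i, stats.allocation_counts)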

    bug ? - Needs Triage 
    opened by beckernick 40
  • [REVIEW] Symbolic Regression/Classification C/C++

    [REVIEW] Symbolic Regression/Classification C/C++

    This PR contains the implementation of the core algorithms of gplearn (tournaments + mutations + program evaluations) in cuML. Tagging all involved: @teju85 @venkywonka @vinaydes

    The goal is to complete the following tasks:

    • [x] Implement program execution and metric evaluation for a given dataset on the GPU
    • [x] Implement a batched version of the above for all programs in a generation
    • [x] Run tournaments for program selection on the GPU
    • [x] Perform all mutations on the CPU
    • [x] Fit, Predict and Transform functions for api
    • [x] Tests for all individual functions
    • [x] Add an example demonstrating how to perform symbolic regression (a similar approach can be taken for transformation too)
    feature request 3 - Ready for Review CUDA / C++ New Algorithm 4 - Waiting on Reviewer non-breaking CMake Experimental CUDA/C++ 
    opened by vimarsh6739 31
  • [BUG] Bootstrapping causes accuracy drop in cuML RF

    [BUG] Bootstrapping causes accuracy drop in cuML RF

    Describe the bug I have been investigating the accuracy bug in cuML RF (#2518), and I have managed to isolate the cause of the accuracy drop: the bootstrapping option causes cuML RF to do worse than sklearn.

    Steps/Code to reproduce bug Download the dataset in NumPy, which has been obtained from #2561:

    Then run the following script:

    import itertools
    
    import numpy as np
    from sklearn.model_selection import cross_validate, KFold
    from sklearn.ensemble import RandomForestClassifier
    from cuml.ensemble import RandomForestClassifier as cuml_RandomForestClassifier
    
    # Preprocessed data
    X = np.load('data/loans_X.npy')
    y = np.load('data/loans_y.npy')
    
    param_range = {
        'n_estimators': [1, 10, 100],
        'max_features': [1.0],
        'bootstrap': [False, True],
        'random_state': [0]
    }
    
    max_depth = 21
    n_bins = 64
    
    cv_fold = KFold(n_splits=10, shuffle=True, random_state=2020)
    
    param_set = (dict(zip(param_range, x)) for x in itertools.product(*param_range.values()))
    for params in param_set:
        print(f'==== params = {params} ====')
        skl_clf = RandomForestClassifier(n_jobs=-1, max_depth=max_depth, **params)
        scores = cross_validate(skl_clf, X, y, cv=cv_fold, n_jobs=-1, return_train_score=True)
        skl_train_acc = scores['train_score']
        skl_cv_acc = scores['test_score']
        print(f'sklearn: Training accuracy = {skl_train_acc.mean()} (std={skl_train_acc.std()}), ' +
              f'CV accuracy = {skl_cv_acc.mean()} (std={skl_cv_acc.std()})')
        
        for split_algo in [0, 1]:
            cuml_clf = cuml_RandomForestClassifier(n_bins=n_bins, max_depth=max_depth, n_streams=1,
                                                   split_algo=split_algo, **params)
            scores = cross_validate(cuml_clf, X, y, cv=cv_fold, return_train_score=True)
            cuml_train_acc = scores['train_score']
            cuml_cv_acc = scores['test_score']
            print(f'cuml, split_algo = {split_algo}: Training accuracy = {cuml_train_acc.mean()} ' +
                  f'(std={cuml_train_acc.std()}), CV accuracy = {cuml_cv_acc.mean()} ' +
                  f'(std={cuml_cv_acc.std()})')
    

    cuML RF gives substantially lower training accuracy than sklearn (up to 9 percentage points lower):

    Training accuracy, bootstrap=True

    | n_estimators | sklearn | cuML (split_algo=0) | cuML (split_algo=1) |
    | -- | -- | -- | -- |
    | 1 | 0.876951 | 0.822472 | 0.821807 |
    | 10 | 0.925004 | 0.857921 | 0.861096 |
    | 100 | 0.931354 | 0.84961 | 0.852527 |

    On the other hand, turning off bootstrapping with bootstrap=False improves the accuracy of cuML RF relative to sklearn:

    Training accuracy, bootstrap=False

    | n_estimators | sklearn | cuML (split_algo=0) | cuML (split_algo=1) |
    | -- | -- | -- | -- |
    | 1 | 0.92087 | 0.921404 | 0.928852 |
    | 10 | 0.922088 | 0.921404 | 0.928852 |
    | 100 | 0.92228 | 0.921404 | 0.928852 |

    To make sure that bootstrapping is the issue, I wrote the following script to generate bootstraps with NumPy and fed the same bootstraps into both cuML RF and sklearn:

    import time
    
    import numpy as np
    from sklearn.base import clone
    from sklearn.metrics import accuracy_score
    from sklearn.ensemble import RandomForestClassifier
    from cuml.ensemble import RandomForestClassifier as cuml_RandomForestClassifier
    
    def fit_with_custom_bootstrap(base_estimator, X, y, *, n_estimators, random_state):
        assert len(X.shape) == 2 and len(y.shape) == 1
        assert X.shape[0] == y.shape[0]
        rng = np.random.default_rng(seed=random_state)
        estimators = []
        for _ in range(n_estimators):
            estimator = clone(base_estimator)
            indices = rng.choice(X.shape[0], size=(X.shape[0],), replace=True)
            bootstrap_X, bootstrap_y = X[indices, :], y[indices]
            assert bootstrap_X.shape == X.shape
            assert bootstrap_y.shape == y.shape
            estimator.fit(bootstrap_X, bootstrap_y)
    
            estimators.append(estimator)
        return estimators
    
    def predict_unweighted_vote(estimators, X_test):
        s = np.zeros((X_test.shape[0], 2))
        for estimator in estimators:
            s[np.arange(X_test.shape[0]), estimator.predict(X_test).astype(np.int32)] += 1.0
        s /= len(estimators)
        return np.argmax(s, axis=1)
    
    def predict_weighted_vote(estimators, X_test):
        s = estimators[0].predict_proba(X_test)
        for estimator in estimators[1:]:
            s += estimator.predict_proba(X_test)
        s /= len(estimators)
        return np.argmax(s, axis=1)
    
    X = np.load('data/loans_X.npy')
    y = np.load('data/loans_y.npy')
    assert np.array_equal(np.unique(y), np.array([0., 1.]))
    
    max_depth = 21
    n_bins = 64
    split_algo = 0
    n_estimators = 1  # Also number of bootstraps
    
    # Since we generate our own bootstraps, disable bootstrap in cuML / sklearn
    params = {
        'n_estimators': 1,
        'max_features': 1.0,
        'bootstrap': False,
        'random_state': 0
    }
    
    cuml_clf = cuml_RandomForestClassifier(n_bins=n_bins, max_depth=max_depth, n_streams=1,
                                           split_algo=split_algo, **params)
    
    tstart = time.perf_counter()
    estimators = fit_with_custom_bootstrap(cuml_clf, X, y, n_estimators=n_estimators, random_state=0)
    tend = time.perf_counter()
    print(f'cuml, Training: {tend - tstart} sec')
    tstart = time.perf_counter()
    y_pred = predict_unweighted_vote(estimators, X)
    tend = time.perf_counter()
    print(f'cuml, Prediction: {tend - tstart} sec')
    print(accuracy_score(y, y_pred))
    
    skl_clf = RandomForestClassifier(n_jobs=-1, max_depth=max_depth, **params)
    
    tstart = time.perf_counter()
    estimators = fit_with_custom_bootstrap(skl_clf, X, y, n_estimators=n_estimators, random_state=0)
    tend = time.perf_counter()
    print(f'sklearn, Training: {tend - tstart} sec')
    tstart = time.perf_counter()
    y_pred = predict_weighted_vote(estimators, X)
    tend = time.perf_counter()
    print(f'sklearn, Prediction: {tend - tstart} sec')
    print(accuracy_score(y, y_pred))
    

    The results now look a lot better: cuML RF gives training accuracy competitive with sklearn.

    | n_estimators | sklearn | cuML (split_algo=0) | cuML (split_algo=1) |
    | -- | -- | -- | -- |
    | 1 | 0.87526379 | 0.875951111 | 0.875735555 |
    | 10 | 0.92300364 | 0.921437212 | 0.931396502 |
    | 100 | 0.9296966 | 0.919802517 | 0.930890215 |

    bug inactive-30d inactive-90d 
    opened by hcho3 24
  • [REVIEW] Refactor src prims for header name rules

    [REVIEW] Refactor src prims for header name rules

    This is hopefully the last PR in the series for #1675. It contains all of the header-file renames in src_prims, following the rules specified by @teju85.

    @teju85, really sorry, but as we discussed this became a bulky PR to review. However, I have split the work logically into separate commits, each of which individually ensures that the build passes and covers the part of the code / src_prims files mentioned in its commit message.

    I will list a set of questions and concerns below.

    4 - Waiting on Reviewer 
    opened by chaithyagr 23
  • [WIP] LOBPCG Solver

    [WIP] LOBPCG Solver

    • This is a WIP initial implementation of the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) solver in CuPy.
    • It is used to solve for the largest/smallest K eigenpairs of a generalized eigenvalue problem.
    2 - In Progress Cython / Python New Prim 
    opened by venkywonka 22
  • [BUG] make_regression() much slower after certain dataset size.

    [BUG] make_regression() much slower after certain dataset size.

    Describe the bug Hi,

    I have noticed that creating a single dataset with n elements takes much longer than creating two datasets with 0.8 × n and 0.2 × n elements.

    Steps/Code to reproduce bug Please, find attached a notebook to reproduce this behaviour.

    Expected behaviour No such significant difference when invoking make_regression() on bigger datasets.

    Environment details (please complete the following information): DGX-1. Nightly build from today. make_regression benchmark.ipynb.zip
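
    Since the attached notebook is not reproduced here, a minimal timing sketch of the reported comparison (sizes are illustrative, assuming cuml.datasets.make_regression and a single GPU):

    import time
    import cupy as cp
    from cuml.datasets import make_regression

    def timed_make(n_samples, n_features=20):
        t0 = time.perf_counter()
        make_regression(n_samples=n_samples, n_features=n_features)
        cp.cuda.runtime.deviceSynchronize()  # wait for the kernels to finish
        return time.perf_counter() - t0

    n = 5_000_000
    print("single call with n elements :", timed_make(n))
    print("two calls, 0.8n then 0.2n   :",
          timed_make(int(0.8 * n)) + timed_make(int(0.2 * n)))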

    bug 0 - Backlog doc Perf 
    opened by miguelusque 22
  • [REVIEW] Enable clang format

    [REVIEW] Enable clang format

    I have re-created this PR since the equivalent #591 was severely outdated due to the latest folder restructure.

    This is a proposal for issue #560. Tagging @cjnolet @dantegd @JohnZed for discussions related to the coding style. The current set of rules encoded in the accompanying .clang-format file is open to discussion (obviously, everyone has their own preferred coding style, but it is better to have something enforced for a project like this!)

    Let me now answer @harrism's questions from the original PR here.

    Before I explain how the formatting gets applied, let's go over the three modes in which the run-clang-format.py script works:

    1. Developer environment: if the developer has any uncommitted C++ files in their local repo, only those files will be checked for formatting rule violations.
    2. CI environment: the script detects that it is running in CI and, in that case, checks all files that have been modified with respect to the PR_TARGET_BRANCH.
    3. Bulk mode: one can also check for violations across all the source files in specific dirs.

    Now, let's look at how the formatting gets applied. By default, the script never rewrites files; it only checks for formatting violations and, if any are found, raises an exception. Before erroring out, it prints a command line that one can use to apply the formatting. So in mode 1 above, the developer can fix their formatting violations by running that command, and in mode 2, if the violations have not been fixed, CI will fail and notify the developer.

    I have kept mode 3 available in case we update our formatting rules in the future and need a full bulk reformat.

    What do you guys think?

    3 - Ready for Review CUDA / C++ gcc g++ Build or Dep 
    opened by teju85 22
  • [REVIEW] Proposal for "proper" C/C++ API (issue #92)

    [REVIEW] Proposal for "proper" C/C++ API (issue #92)

    This is a proposal for an improved C and C++ API. When reviewing it, I suggest starting with cuML/DEVELOPER_GUIDE.md, as that explains the concepts used.

    2 - In Progress proposal 
    opened by jirikraus 22
  • [FEA] Consider adopting C++20 style Span class abstraction

    [FEA] Consider adopting C++20 style Span class abstraction

    Problem.

    #3107 was caused by an out-of-bounds access to an array. Many cuML algorithms, such as random forest, use lots of arrays, and it is easy to accidentally introduce an out-of-bounds access into the codebase. In addition, out-of-bounds accesses in CUDA kernels are quite difficult to pinpoint and debug: the kernel crashes with a cudaErrorIllegalAddress exception, and we have to manually run cuda-memcheck, which can take a while.

    Proposal. We should adopt a C++20-style Span class to model arrays with defined bounds. The Span class would perform automatic bounds checks, allowing us to quickly detect and fix out-of-bounds access bugs; it also follows the fail-fast principle of software development. Furthermore, it is a nicer abstraction for packing arrays together with their bounds information (think Java-style arrays, where every array carries a length field), so passing arrays between functions as Span objects becomes less error-prone.

    XGBoost has a device-side Span class (credit to @trivialfis): https://github.com/dmlc/xgboost/blob/2fcc4f2886340e66e988688473a73d2c46525630/include/xgboost/span.h#L412

    Possible disadvantages. Bounds checks may introduce a performance penalty due to extra branching. My opinion is that the benefits of automatic bounds checking (fewer bugs, improved developer productivity) outweigh a slight performance penalty. The impact can be mitigated by supplying a data() method on the Span class: for performance-critical loops, we can extract the raw pointer and use it directly to avoid the overhead of bounds checking in operator[]. This should be done sparingly, and only in small tight loops where the bounds of the loop variable are clearly identified.

    feature request CUDA / C++ 
    opened by hcho3 21
  • [BUG] lasso interop test failure in CI

    [BUG] lasso interop test failure in CI

    Seen in https://github.com/rapidsai/cuml/actions/runs/3753915301/jobs/6378200836:

    FAILED test_device_selection.py::test_train_cpu_infer_cpu[/lasso_test_data-False-cupy-random] - AssertionError: 
    Not equal to tolerance rtol=0.15, atol=0
    
    Mismatched elements: 1 / 200 (0.5%)
    Max absolute difference: 1.6273651
    Max relative difference: 0.33033487
     x: array([-138.52025 ,  135.74544 ,   34.26355 ,   79.94445 ,  545.09015 ,
            264.5219  , -345.28632 , -681.7092  , -142.38684 ,  -27.778336,
            141.82169 ,  160.76947 , -138.95457 ,  179.74213 ,  590.30347 ,...
     y: array([-138.39569 ,  135.65765 ,   34.477875,   79.87479 ,  543.94617 ,
            265.3012  , -345.4904  , -682.1642  , -142.94472 ,  -27.591265,
            141.41536 ,  160.13036 , -137.65547 ,  178.674   ,  590.8144  ,...
    FAILED test_device_selection.py::test_train_gpu_infer_cpu[/lasso_test_data-False-cupy-random] - AssertionError: 
    Not equal to tolerance rtol=0.15, atol=0
    
    bug ? - Needs Triage 
    opened by dantegd 0
  • [FEA] Add support for more TfIdf parameters

    [FEA] Add support for more TfIdf parameters

    Is your feature request related to a problem? Please describe. Currently, cuML doesn't support TfIdf parameters that are available in the scikit-learn version:

    • input
    • encoding
    • decode_error
    • strip_accents
    • tokenizer
    • token_pattern

    See here: the parameters are accepted but not supported.
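
    For reference, a short sketch of what these parameters do on the CPU side with scikit-learn's TfidfVectorizer, i.e. the behaviour this request asks cuML to mirror (the parameter values below are only illustrative):

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["Héllo world", "hello there world"]

    vec = TfidfVectorizer(
        input="content",          # raw strings (vs. "filename" / "file")
        encoding="utf-8",
        decode_error="ignore",    # skip undecodable bytes when reading files
        strip_accents="unicode",  # "Héllo" -> "hello"
        token_pattern=r"(?u)\b\w\w+\b",
    )
    vec.fit_transform(docs)
    print(sorted(vec.vocabulary_))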

    feature request ? - Needs Triage 
    opened by lowener 0
  • [BUG] CuPy/Cumlarray UnownedMemory error on ARM CI jobs

    [BUG] CuPy/Cumlarray UnownedMemory error on ARM CI jobs

    Describe the bug

    FAILED test_adapters.py::test_check_array - RuntimeError: UnownedMemory requires explicit device ID for a null pointer.
    FAILED test_svm.py::test_svm_no_support_vectors - RuntimeError: UnownedMemory requires explicit device ID for a null pointer.
    

    Steps/Code to reproduce bug Tests will be xfailed to merge the GHA PR, but the failures can be seen here: https://github.com/rapidsai/cuml/actions/runs/3734738186/jobs/6337919174

    Full error:

     _______________________________ test_check_array _______________________________
     [gw0] linux -- Python 3.9.15 /opt/conda/envs/test/bin/python3.9
     
         def test_check_array():
             # accept_sparse
             arr = coo_matrix((3, 4), dtype=cp.float64)
             check_array(arr, accept_sparse=True)
             with pytest.raises(ValueError):
                 check_array(arr, accept_sparse=False)
         
             # dtype
             arr = cp.array([[1, 2]], dtype=cp.int64)
             check_array(arr, dtype=cp.int64, copy=False)
         
             arr = cp.array([[1, 2]], dtype=cp.float32)
             new_arr = check_array(arr, dtype=cp.int64)
             assert new_arr.dtype == cp.int64
         
             # order
             arr = cp.array([[1, 2]], dtype=cp.int64, order='F')
             new_arr = check_array(arr, order='F')
             assert new_arr.flags.f_contiguous
             new_arr = check_array(arr, order='C')
             assert new_arr.flags.c_contiguous
         
             # force_all_finite
             arr = cp.array([[1, cp.inf]])
             check_array(arr, force_all_finite=False)
             with pytest.raises(ValueError):
                 check_array(arr, force_all_finite=True)
         
             # ensure_2d
             arr = cp.array([1, 2], dtype=cp.float32)
             check_array(arr, ensure_2d=False)
             with pytest.raises(ValueError):
                 check_array(arr, ensure_2d=True)
         
             # ensure_2d
             arr = cp.array([[1, 2, 3], [4, 5, 6]], dtype=cp.float32)
             check_array(arr, ensure_2d=True)
         
             # ensure_min_samples
             arr = cp.array([[1, 2]], dtype=cp.float32)
             check_array(arr, ensure_min_samples=1)
             with pytest.raises(ValueError):
                 check_array(arr, ensure_min_samples=2)
         
             # ensure_min_features
             arr = cp.array([[]], dtype=cp.float32)
     >       check_array(arr, ensure_min_features=0)
     
     python/cuml/tests/test_adapters.py:124: 
     _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
     /opt/conda/envs/test/lib/python3.9/site-packages/cuml/thirdparty_adapters/adapters.py:288: in check_array
         X, n_rows, n_cols, dtype = input_to_cupy_array(array,
     /opt/conda/envs/test/lib/python3.9/contextlib.py:79: in inner
         return func(*args, **kwds)
     /opt/conda/envs/test/lib/python3.9/site-packages/cuml/internals/input_utils.py:434: in input_to_cupy_array
         return out_data._replace(array=out_data.array.to_output("cupy"))
     /opt/conda/envs/test/lib/python3.9/site-packages/cuml/internals/memory_utils.py:85: in cupy_rmm_wrapper
         return func(*args, **kwargs)
     /opt/conda/envs/test/lib/python3.9/contextlib.py:79: in inner
         return func(*args, **kwds)
     /opt/conda/envs/test/lib/python3.9/site-packages/cuml/internals/array.py:630: in to_output
         return output_mem_type.xpy.asarray(
     /opt/conda/envs/test/lib/python3.9/site-packages/cupy/_creation/from_data.py:76: in asarray
         return _core.array(a, dtype, False, order)
     cupy/_core/core.pyx:2249: in cupy._core.core.array
         ???
     cupy/_core/core.pyx:2261: in cupy._core.core.array
         ???
     cupy/_core/core.pyx:2301: in cupy._core.core._array_from_cuda_array_interface
         ???
     cupy/_core/core.pyx:2632: in cupy._core.core._convert_object_with_cuda_array_interface
         ???
     _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
     
     >   ???
     E   RuntimeError: UnownedMemory requires explicit device ID for a null pointer.
     
     cupy/cuda/memory.pyx:190: RuntimeError
     _________________________ test_svm_no_support_vectors __________________________
     [gw1] linux -- Python 3.9.15 /opt/conda/envs/test/bin/python3.9
     
         def test_svm_no_support_vectors():
             n_rows = 10
             n_cols = 3
             X = cp.random.uniform(size=(n_rows, n_cols), dtype=cp.float64)
             y = cp.ones((n_rows, 1))
             model = cuml.svm.SVR(kernel="linear", C=10)
             model.fit(X, y)
             pred = model.predict(X)
         
             assert svm_array_equal(pred, y, 0)
         
             assert model.n_support_ == 0
             assert abs(model.intercept_ - 1) <= 1e-6
             assert svm_array_equal(model.coef_, cp.zeros((1, n_cols)))
     >       assert model.dual_coef_.shape == (1, 0)
     
     python/cuml/tests/test_svm.py:552: 
     _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
     /opt/conda/envs/test/lib/python3.9/site-packages/cuml/common/array_descriptor.py:134: in __get__
         return self._to_output(instance, output_type)
     /opt/conda/envs/test/lib/python3.9/site-packages/cuml/common/array_descriptor.py:99: in _to_output
         output = cuml_arr.to_output(output_type=to_output_type,
     /opt/conda/envs/test/lib/python3.9/site-packages/cuml/internals/memory_utils.py:85: in cupy_rmm_wrapper
         return func(*args, **kwargs)
     /opt/conda/envs/test/lib/python3.9/contextlib.py:79: in inner
         return func(*args, **kwds)
     /opt/conda/envs/test/lib/python3.9/site-packages/cuml/internals/array.py:630: in to_output
         return output_mem_type.xpy.asarray(
     /opt/conda/envs/test/lib/python3.9/site-packages/cupy/_creation/from_data.py:76: in asarray
         return _core.array(a, dtype, False, order)
     cupy/_core/core.pyx:2249: in cupy._core.core.array
         ???
     cupy/_core/core.pyx:2261: in cupy._core.core.array
         ???
     cupy/_core/core.pyx:2301: in cupy._core.core._array_from_cuda_array_interface
         ???
     cupy/_core/core.pyx:2632: in cupy._core.core._convert_object_with_cuda_array_interface
         ???
     _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
     
     >   ???
     E   RuntimeError: UnownedMemory requires explicit device ID for a null pointer.
     
    

    Environment details (please complete the following information):

    • Environment location: [Bare-metal, Docker, Cloud(specify cloud provider)]
    • Linux Distro/Architecture: [Ubuntu 16.04 amd64]
    • GPU Model/Driver: [V100 and driver 396.44]
    • CUDA: [9.2]
    • Method of cuDF & cuML install: [conda, Docker, or from source]
      • If method of install is [conda], run conda list and include results here
      • If method of install is [Docker], provide docker pull & docker run commands used
      • If method of install is [from source], provide versions of cmake & gcc/g++ and commit hash of build

    Additional context Related to https://github.com/rapidsai/cuml/issues/4095
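
    A hedged, minimal illustration of the CuPy behaviour these tests appear to hit (a zero-size array whose __cuda_array_interface__ advertises a null data pointer); the mock class below is hypothetical and CuPy internals may differ between versions:

    import cupy as cp

    class NullPtrArray:
        """Mimics a zero-element device array with a null data pointer."""
        @property
        def __cuda_array_interface__(self):
            return {
                "shape": (1, 0),
                "typestr": "<f4",
                "data": (0, False),  # null device pointer, not read-only
                "version": 2,
            }

    # On affected CuPy versions this raises:
    # "UnownedMemory requires explicit device ID for a null pointer."
    try:
        cp.asarray(NullPtrArray())
    except RuntimeError as err:
        print(err)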

    bug ? - Needs Triage 
    opened by dantegd 0
Releases(v22.12.00)
  • v22.12.00(Dec 8, 2022)

    🚨 Breaking Changes

    • Change docs theme to pydata-sphinx theme (#4985) @galipremsagar
    • Remove "Open In Colab" link from Estimator Intro notebook. (#4980) @bdice
    • Remove CumlArray.copy() (#4958) @madsbk

    🐛 Bug Fixes

    • Remove cupy.cusparse custom serialization (#5024) @dantegd
    • Restore LinearRegression documentation (#5020) @viclafargue
    • Don't use CMake 3.25.0 as it has a FindCUDAToolkit show stopping bug (#5007) @robertmaynard
    • verifying cusparse wrapper revert passes CI (#4990) @cjnolet
    • Use rapdsi_cpm_find(COMPONENTS ) for proper component tracking (#4989) @robertmaynard
    • Fix integer overflow in AutoARIMA due to bool-to-int cub scan (#4971) @Nyrio
    • Add missing includes (#4947) @vyasr
    • Fix the CMake option for disabling deprecation warnings. (#4946) @vyasr
    • Make doctest resilient to changes in cupy reprs (#4945) @vyasr
    • Assign python/ sub-directory to python-codeowners (#4940) @csadorf
    • Fix for non-contiguous strides (#4736) @viclafargue

    📖 Documentation

    • Change docs theme to pydata-sphinx theme (#4985) @galipremsagar
    • Remove "Open In Colab" link from Estimator Intro notebook. (#4980) @bdice
    • Updating build instructions (#4979) @cjnolet

    🚀 New Features

    • Reenable copy_prs. (#5010) @vyasr
    • Add wheel builds (#5009) @vyasr
    • LinearRegression: add support for multiple targets (#4988) @ahendriksen
    • CPU/GPU interoperability POC (#4874) @viclafargue

    🛠️ Improvements

    • Upgrade Treelite to 3.0.1 (#5018) @hcho3
    • fix addition of nan_euclidean_distances to public api (#5015) @mattf
    • Fixing raft pin to 22.12 (#5000) @cjnolet
    • Pin dask and distributed for release (#4999) @galipremsagar
    • Update dask nightly install command in CI (#4978) @galipremsagar
    • Improve error message for array_equal asserts. (#4973) @csadorf
    • Use new rapids-cmake functionality for rpath handling. (#4966) @vyasr
    • Impl. CumlArray.deserialize() (#4965) @madsbk
    • Update cuda-python dependency to 11.7.1 (#4961) @galipremsagar
    • Add check for nsys utility version in the nvtx_benchmarks.py script (#4959) @viclafargue
    • Remove CumlArray.copy() (#4958) @madsbk
    • Implement hypothesis-based tests for linear models (#4952) @csadorf
    • Switch to using rapids-cmake for gbench. (#4950) @vyasr
    • Remove stale labeler (#4949) @raydouglass
    • Fix url in python/setup.py setuptools metadata. (#4937) @csadorf
    • Updates to fix cuml build (#4928) @cjnolet
    • Documenting hdbscan module to add prediction functions (#4925) @cjnolet
    • Unpin dask and distributed for development (#4912) @galipremsagar
    • Use KMeans from Raft (#4713) @lowener
    • Update cuml raft header extensions (#4599) @cjnolet
    • Reconciling primitives moved to RAFT (#4583) @cjnolet
  • v23.02.00a(Dec 7, 2022)


    🚨 Breaking Changes

    • Estimators adaptation toward CPU/GPU interoperability (#4918) @viclafargue
    • Provide host CumlArray and associated infrastructure (#4908) @wphicks

    🐛 Bug Fixes

    • Skip RAFT docstring test in cuML (#5088) @dantegd
    • Check sklearn presence before importing the Pipeline (#5072) @viclafargue
    • Provide workaround for kernel ridge solver (#5064) @wphicks
    • Keep verbosity level in KMeans OPG (#5063) @viclafargue
    • Transmit verbosity level to Dask workers (#5062) @viclafargue
    • Ensure consistent order for nearest neighbor tests (#5059) @wphicks
    • Add workers argument to dask make_blobs (#5057) @viclafargue
    • Fix indexing type for ridge and linear models (#4996) @lowener

    📖 Documentation

    • Fix doc for solver in LogisticRegression (#5097) @viclafargue
    • Fix docstring of HashingVectorizer (#5041) @lowener
    • expose text, text.{CountVectorizer,HashingVectorizer,Tfidf{Transformer,Vectorizer}} from feature_extraction's public api (#5028) @mattf
    • Add Dask LabelEncoder to the documentation (#5023) @beckernick

    🚀 New Features

    • Break up silhouette score into 3 units to improve compilation time (#5061) @wphicks
    • Provide host CumlArray and associated infrastructure (#4908) @wphicks

    🛠️ Improvements

    • Update workflows for nightly tests (#5110) @ajschmidt8
    • Enable Recently Updated Check (#5105) @ajschmidt8
    • Ensure pytest is run from relevant directories in GH Actions (#5101) @ajschmidt8
    • Remove C++ Kmeans test (#5098) @lowener
    • Slightly lower the test_mbsgd_regressor expected min score. (#5092) @csadorf
    • Skip all hypothesis health checks by default in CI runs. (#5090) @csadorf
    • Reduce Naive Bayes test time (#5082) @lowener
    • Remove unused .conda folder (#5078) @ajschmidt8
    • Fix conflicts in #5045 (#5077) @ajschmidt8
    • Add GitHub Actions Workflows (#5075) @csadorf
    • Skip test_linear_regression_model_default test. (#5074) @csadorf
    • Fix link. (#5067) @bdice
    • Update xgb version in GPU CI 23.02 to 1.7.1 and unblocking CI (#5051) @dantegd
    • Remove direct UCX and NCCL dependencies (#5038) @vyasr
    • Move single test from test to tests (#5037) @vyasr
    • Support using CountVectorizer & TfidVectorizer in cuml.pipeline.Pipeline (#5034) @lasse-it
    • Implement hypothesis strategies and tests for arrays (#5017) @csadorf
    • Add dependencies.yaml for rapids-dependency-file-generator (#5003) @beckernick
    • Improved CPU/GPU interoperability (#5001) @viclafargue
    • Estimators adaptation toward CPU/GPU interoperability (#4918) @viclafargue
  • v22.10.01(Nov 4, 2022)

    🐛 Bug Fixes

    • Skipping some hdbscan tests when cuda version is <= 11.2. (#4916) @cjnolet
    • Fix HDBSCAN python namespace (#4895) @cjnolet
    • Cupy 11 fixes (#4889) @dantegd
    • Fix small fp precision failure in linear regression doctest test (#4884) @lowener
    • Remove unused cuDF imports (#4873) @beckernick
    • Update for thrust 1.17 and fixes to accommodate for cuDF Buffer refactor (#4871) @dantegd
    • Use rapids-cmake 22.10 best practice for RAPIDS.cmake location (#4862) @robertmaynard
    • Patch for nightly test&bench (#4840) @viclafargue
    • Fixed Large memory requirements for SimpleImputer strategy median #4794 (#4817) @erikrene
    • Transforms RandomForest estimators non-consecutive labels to consecutive labels where appropriate (#4780) @VamsiTallam95

    📖 Documentation

    • Document that minimum required CMake version is now 3.23.1 (#4899) @robertmaynard
    • Update KMeans notebook for clarity (#4886) @beckernick

    🚀 New Features

    • Allow cupy 11 (#4880) @galipremsagar
    • Add sample_weight to Coordinate Descent solver (Lasso and ElasticNet) (#4867) @lowener
    • Import treelite models into FIL in a different precision (#4839) @canonizer
    • #4783 Added nan_euclidean distance metric to pairwise_distances (#4797) @Sreekiran096
    • PowerTransformer, QuantileTransformer and KernelCenterer (#4755) @viclafargue
    • Add "median" to TargetEncoder (#4722) @daxiongshu
    • New Feature StratifiedKFold (#3109) @daxiongshu

    🛠️ Improvements

    • Update cuda-python dependency to 11.7.1 (#4948) @shwina
    • Updating python to use pylibraft (#4887) @cjnolet
    • Upgrade Treelite to 3.0.0 (#4885) @hcho3
    • Statically link all CUDA toolkit libraries (#4881) @trxcllnt
    • approximate_predict function for HDBSCAN (#4872) @tarang-jain
    • Pin dask and distributed for release (#4859) @galipremsagar
    • Remove Raft deprecated headers (#4858) @lowener
    • Fix forward-merge conflicts (#4857) @ajschmidt8
    • Update the NVTX bench helper for the new nsys utility (#4826) @viclafargue
    • All points membership vector for HDBSCAN (#4800) @tarang-jain
    • TSNE and UMAP allow several distance types (#4779) @tarang-jain
    • Convert fp32 datasets to fp64 in ARIMA and AutoARIMA + update notebook to avoid deprecation warnings with positional parameters (#4195) @Nyrio
  • v22.10.00(Oct 12, 2022)

    🐛 Bug Fixes

    • Skipping some hdbscan tests when cuda version is <= 11.2. (#4916) @cjnolet
    • Fix HDBSCAN python namespace (#4895) @cjnolet
    • Cupy 11 fixes (#4889) @dantegd
    • Fix small fp precision failure in linear regression doctest test (#4884) @lowener
    • Remove unused cuDF imports (#4873) @beckernick
    • Update for thrust 1.17 and fixes to accommodate for cuDF Buffer refactor (#4871) @dantegd
    • Use rapids-cmake 22.10 best practice for RAPIDS.cmake location (#4862) @robertmaynard
    • Patch for nightly test&bench (#4840) @viclafargue
    • Fixed Large memory requirements for SimpleImputer strategy median #4794 (#4817) @erikrene
    • Transforms RandomForest estimators non-consecutive labels to consecutive labels where appropriate (#4780) @VamsiTallam95

    📖 Documentation

    • Document that minimum required CMake version is now 3.23.1 (#4899) @robertmaynard
    • Update KMeans notebook for clarity (#4886) @beckernick

    🚀 New Features

    • Allow cupy 11 (#4880) @galipremsagar
    • Add sample_weight to Coordinate Descent solver (Lasso and ElasticNet) (#4867) @lowener
    • Import treelite models into FIL in a different precision (#4839) @canonizer
    • #4783 Added nan_euclidean distance metric to pairwise_distances (#4797) @Sreekiran096
    • PowerTransformer, QuantileTransformer and KernelCenterer (#4755) @viclafargue
    • Add "median" to TargetEncoder (#4722) @daxiongshu
    • New Feature StratifiedKFold (#3109) @daxiongshu

    🛠️ Improvements

    • Updating python to use pylibraft (#4887) @cjnolet
    • Upgrade Treelite to 3.0.0 (#4885) @hcho3
    • Statically link all CUDA toolkit libraries (#4881) @trxcllnt
    • approximate_predict function for HDBSCAN (#4872) @tarang-jain
    • Pin dask and distributed for release (#4859) @galipremsagar
    • Remove Raft deprecated headers (#4858) @lowener
    • Fix forward-merge conflicts (#4857) @ajschmidt8
    • Update the NVTX bench helper for the new nsys utility (#4826) @viclafargue
    • All points membership vector for HDBSCAN (#4800) @tarang-jain
    • TSNE and UMAP allow several distance types (#4779) @tarang-jain
    • Convert fp32 datasets to fp64 in ARIMA and AutoARIMA + update notebook to avoid deprecation warnings with positional parameters (#4195) @Nyrio
  • v22.12.00a(Dec 9, 2022)


    🚨 Breaking Changes

    • Change docs theme to pydata-sphinx theme (#4985) @galipremsagar
    • Remove "Open In Colab" link from Estimator Intro notebook. (#4980) @bdice
    • Remove CumlArray.copy() (#4958) @madsbk

    🐛 Bug Fixes

    • Backport "Don't initialize CUDA context if RAPIDS_NO_INITIALIZE env variable is set" (#5069) @dantegd
    • Remove cupy.cusparse custom serialization (#5024) @dantegd
    • Restore LinearRegression documentation (#5020) @viclafargue
    • Don't use CMake 3.25.0 as it has a FindCUDAToolkit show stopping bug (#5007) @robertmaynard
    • verifying cusparse wrapper revert passes CI (#4990) @cjnolet
    • Use rapdsi_cpm_find(COMPONENTS ) for proper component tracking (#4989) @robertmaynard
    • Fix integer overflow in AutoARIMA due to bool-to-int cub scan (#4971) @Nyrio
    • Add missing includes (#4947) @vyasr
    • Fix the CMake option for disabling deprecation warnings. (#4946) @vyasr
    • Make doctest resilient to changes in cupy reprs (#4945) @vyasr
    • Assign python/ sub-directory to python-codeowners (#4940) @csadorf
    • Fix for non-contiguous strides (#4736) @viclafargue

    📖 Documentation

    • Change docs theme to pydata-sphinx theme (#4985) @galipremsagar
    • Remove "Open In Colab" link from Estimator Intro notebook. (#4980) @bdice
    • Updating build instructions (#4979) @cjnolet

    🚀 New Features

    • Reenable copy_prs. (#5010) @vyasr
    • Add wheel builds (#5009) @vyasr
    • LinearRegression: add support for multiple targets (#4988) @ahendriksen
    • CPU/GPU interoperability POC (#4874) @viclafargue

    🛠️ Improvements

    • Upgrade Treelite to 3.0.1 (#5018) @hcho3
    • fix addition of nan_euclidean_distances to public api (#5015) @mattf
    • Fixing raft pin to 22.12 (#5000) @cjnolet
    • Pin dask and distributed for release (#4999) @galipremsagar
    • Update dask nightly install command in CI (#4978) @galipremsagar
    • Improve error message for array_equal asserts. (#4973) @csadorf
    • Use new rapids-cmake functionality for rpath handling. (#4966) @vyasr
    • Impl. CumlArray.deserialize() (#4965) @madsbk
    • Update cuda-python dependency to 11.7.1 (#4961) @galipremsagar
    • Add check for nsys utility version in the nvtx_benchmarks.py script (#4959) @viclafargue
    • Remove CumlArray.copy() (#4958) @madsbk
    • Implement hypothesis-based tests for linear models (#4952) @csadorf
    • Switch to using rapids-cmake for gbench. (#4950) @vyasr
    • Remove stale labeler (#4949) @raydouglass
    • Fix url in python/setup.py setuptools metadata. (#4937) @csadorf
    • Updates to fix cuml build (#4928) @cjnolet
    • Documenting hdbscan module to add prediction functions (#4925) @cjnolet
    • Unpin dask and distributed for development (#4912) @galipremsagar
    • Use KMeans from Raft (#4713) @lowener
    • Update cuml raft header extensions (#4599) @cjnolet
    • Reconciling primitives moved to RAFT (#4583) @cjnolet
  • v22.08.00(Aug 17, 2022)

    🚨 Breaking Changes

    • Update Python build to scikit-build (#4818) @dantegd
    • Bump xgboost to 1.6.0 from 1.5.2 (#4777) @galipremsagar

    🐛 Bug Fixes

    • Revert "Allow CuPy 11" (#4847) @galipremsagar
    • Fix RAFT_NVTX option not set (#4825) @achirkin
    • Fix KNN error message. (#4782) @trivialfis
    • Update raft pinnings in dev yml files (#4778) @galipremsagar
    • Bump xgboost to 1.6.0 from 1.5.2 (#4777) @galipremsagar
    • Fixes exception when using predict_proba on fitted Pipeline object with a ColumnTransformer step (#4774) @VamsiTallam95
    • Regression errors failing with mixed data type combinations (#4770) @shaswat-indian

    📖 Documentation

    • Use common code in python docs and defer js loading (#4852) @galipremsagar
    • Centralize common css & js code in docs (#4844) @galipremsagar
    • Add ComplementNB to the documentation (#4805) @lowener
    • Fix forward-merge branch-22.06 to branch-22.08 (#4789) @divyegala

    🚀 New Features

    • Update Python build to scikit-build (#4818) @dantegd
    • Vectorizers to accept Pandas Series as input (#4811) @shaswat-indian
    • Cython wrapper for v-measure (#4785) @shaswat-indian

    🛠️ Improvements

    • Pin dask & distributed for release (#4850) @galipremsagar
    • Allow CuPy 11 (#4837) @jakirkham
    • Remove duplicate adj_to_csr implementation (#4829) @ahendriksen
    • Update conda environment files to UCX 1.13.0 (#4813) @pentschev
    • Update conda recipes to UCX 1.13.0 (#4809) @pentschev
    • Fix #3414: remove naive versions dbscan algorithms (#4804) @ahendriksen
    • Accelerate adjacency matrix to CSR conversion for DBSCAN (#4803) @ahendriksen
    • Pin max version of cuda-python to 11.7.0 (#4793) @Ethyling
    • Allow cosine distance metric in dbscan (#4776) @tarang-jain
    • Unpin dask & distributed for development (#4771) @galipremsagar
    • Clean up Thrust includes. (#4675) @bdice
    • Improvements in feature sampling (#4278) @vinaydes
  • v22.06.01(Jul 6, 2022)

  • v22.06.00(Jun 7, 2022)

    🐛 Bug Fixes

    • Fix sg benchmark build. (#4766) @trivialfis
    • Resolve KRR hypothesis test failure (#4761) @RAMitchell
    • Fix KBinsDiscretizer bin_edges_ (#4735) @viclafargue
    • FIX Accept small floats in RandomForest (#4717) @thomasjpfan
    • Remove import of scalar_broadcast_to from stemmer (#4706) @viclafargue
    • Replace 22.04.x with 22.06.x in yaml files (#4692) @daxiongshu
    • Replace cudf.logical_not with ~ (#4669) @canonizer

    📖 Documentation

    • Fix docs builds (#4733) @ajschmidt8
    • Change "principals" to "principles" (#4695) @cakiki
    • Update pydoc and promote ColumnTransformer out of experimental (#4509) @viclafargue

    🚀 New Features

    • float64 support in FIL functions (#4655) @canonizer
    • float64 support in FIL core (#4646) @canonizer
    • Allow "LabelEncoder" to accept cupy and numpy arrays as input. (#4620) @daxiongshu
    • MNMG Logistic Regression (dask-glm wrapper) (#3512) @daxiongshu

    🛠️ Improvements

    • Pin dask & distributed for release (#4758) @galipremsagar
    • Simplicial set functions (#4756) @viclafargue
    • Upgrade Treelite to 2.4.0 (#4752) @hcho3
    • Simplify recipes (#4749) @Ethyling
    • Inference for float64 random forests using FIL (#4739) @canonizer
    • MNT Removes unused optim_batch_size from UMAP's docstring (#4732) @thomasjpfan
    • Require UCX 1.12.1+ (#4720) @jakirkham
    • Allow enabling raft NVTX markers when raft is installed (#4718) @achirkin
    • Fix identifier collision (#4716) @viclafargue
    • Use raft::span in TreeExplainer (#4714) @hcho3
    • Expose simplicial set functions (#4711) @viclafargue
    • Refactor tests in cuml (#4703) @galipremsagar
    • Use conda to build python packages during GPU tests (#4702) @Ethyling
    • Update pinning to allow newer CMake versions. (#4698) @vyasr
    • TreeExplainer extensions (#4697) @RAMitchell
    • Add sample_weight for Ridge (#4696) @lowener
    • Unpin dask & distributed for development (#4693) @galipremsagar
    • float64 support in treelite->FIL import and Python layer (#4690) @canonizer
    • Enable building static libs (#4673) @trxcllnt
    • Treeshap hypothesis tests (#4671) @RAMitchell
    • float64 support in multi-sum and child_index() (#4648) @canonizer
    • Add libcuml-tests package (#4635) @Ethyling
    • Random ball cover algorithm for 3D data (#4582) @cjnolet
    • Use conda compilers (#4577) @Ethyling
    • Build packages using mambabuild (#4542) @Ethyling
  • v22.04.00(Apr 6, 2022)

    🚨 Breaking Changes

    • Moving more ling prims to raft (#4567) @cjnolet
    • Refactor QN solver: pass parameters via a POD struct (#4511) @achirkin

    🐛 Bug Fixes

    • Fix single-GPU build by separating multi-GPU decomposition utils from single GPU (#4645) @dantegd
    • RF: fix stream bug causing performance regressions (#4644) @venkywonka
    • XFail test_hinge_loss temporarily (#4621) @lowener
    • cuml now supports building non static treelite (#4598) @robertmaynard
    • Fix mean_squared_error with cudf series (#4584) @daxiongshu
    • Fix for nightly CI tests: Use CUDA_REL variable in gpu build.sh script (#4581) @dantegd
    • Fix the TargetEncoder when transforming dataframe/series with custom index (#4578) @daxiongshu
    • Removing sign from pca assertions for now. (#4559) @cjnolet
    • Fix compatibility of OneHotEncoder fit (#4544) @lowener
    • Fix worker streams in OLS-eig executing in an unsafe order (#4539) @achirkin
    • Remove xfail from test_hinge_loss (#4504) @Nanthini10
    • Fix automerge #4501 (#4502) @dantegd
    • Remove classmethod of SimpleImputer (#4439) @lowener

    📖 Documentation

    • RF: Fix improper documentation in dask-RF (#4666) @venkywonka
    • Add doctest (#4618) @lowener
    • Fix document layouts in Parameters sections (#4609) @Yosshi999
    • Updates to consistency of MNMG PCA/TSVD solvers (docs + code consolidation) (#4556) @cjnolet

    🚀 New Features

    • Add a dummy argument deep to TargetEncoder.get_params() (#4601) @daxiongshu
    • Add Complement Naive Bayes (#4595) @lowener
    • Add get_params() to TargetEncoder (#4588) @daxiongshu
    • Target Encoder with variance statistics (#4483) @daxiongshu
    • Interruptible execution (#4463) @achirkin
    • Configurable libcuml++ per algorithm (#4296) @dantegd

    🛠️ Improvements

    • Adding some prints when hdbscan assertion fails (#4656) @cjnolet
    • Temporarily disable new ops-bot functionality (#4652) @ajschmidt8
    • Use CPMFindPackage to retrieve cumlprims_mg (#4649) @trxcllnt
    • Pin dask & distributed versions (#4647) @galipremsagar
    • Remove RAFT MM includes (#4637) @viclafargue
    • Add option to build RAFT artifacts statically into libcuml++ (#4633) @dantegd
    • Upgrade dask & distributed minimum version (#4632) @galipremsagar
    • Add .github/ops-bot.yaml config file (#4630) @ajschmidt8
    • Small fixes for certain test failures (#4628) @vinaydes
    • Templatizing FIL types to add float64 support (#4625) @canonizer
    • Fitsne as default tsne method (#4597) @lowener
    • Add get_feature_names to OneHotEncoder (#4596) @viclafargue
    • Fix OOM and cudaContext crash in C++ benchmarks (#4594) @RAMitchell
    • Using Pyraft and automatically cloning when raft pin changes (#4593) @cjnolet
    • Upgrade Treelite to 2.3.0 (#4590) @hcho3
    • Sphinx warnings as errors (#4585) @RAMitchell
    • Adding missing FAISS license (#4579) @cjnolet
    • Add QN solver to ElasticNet and Lasso models (#4576) @achirkin
    • Move remaining stats prims to raft (#4568) @cjnolet
    • Moving more ling prims to raft (#4567) @cjnolet
    • Adding libraft conda dependencies (#4564) @cjnolet
    • Fix RF integer overflow (#4563) @RAMitchell
    • Add CMake install rules for tests (#4551) @ajschmidt8
    • Faster GLM preprocessing by fusing kernels (#4549) @achirkin
    • RAFT API updates for lap, label, cluster, and spectral apis (#4548) @cjnolet
    • Moving cusparse wrappers to detail API in RAFT. (#4547) @cjnolet
    • Unpin max dask and distributed versions (#4546) @galipremsagar
    • Kernel density estimation (#4545) @RAMitchell
    • Update xgboost version in CI (#4541) @ajschmidt8
    • replaces ccache with sccache (#4534) @AyodeAwe
    • Remove RAFT memory management (2/2) (#4526) @viclafargue
    • Updating RAFT linalg headers (#4515) @divyegala
    • Refactor QN solver: pass parameters via a POD struct (#4511) @achirkin
    • Kernel ridge regression (#4492) @RAMitchell
    • QN solvers: Use different gradient norms for different for different loss functions. (#4491) @achirkin
    • RF: Variable binning and other minor refactoring (#4479) @venkywonka
    • Rewrite CD solver using more BLAS (#4446) @achirkin
    • Add support for sample_weights in LinearRegression (#4428) @lowener
    • Nightly automated benchmark (#4414) @viclafargue
    • Use FAISS with RMM (#4297) @viclafargue
    • Split C++ tests into separate binaries (#4295) @dantegd
  • v22.02.00(Feb 2, 2022)

    🚨 Breaking Changes

    • Move NVTX range helpers to raft (#4445) @achirkin

    🐛 Bug Fixes

    • Always upload libcuml (#4530) @raydouglass
    • Fix RAFT pin to main branch (#4508) @dantegd
    • Pin dask & distributed (#4505) @galipremsagar
    • Replace use of RMM provided CUDA bindings with CUDA Python (#4499) @shwina
    • Dataframe Index as columns in ColumnTransformer (#4481) @viclafargue
    • Support compilation with Thrust 1.15 (#4469) @robertmaynard
    • fix minor ASAN issues in UMAPAlgo::Optimize::find_params_ab() (#4405) @yitao-li

    📖 Documentation

    • Remove comment numerical warning (#4408) @viclafargue
    • Fix docstring for npermutations in PermutationExplainer (#4402) @hcho3

    🚀 New Features

    • Combine and expose SVC's support vectors when fitting multi-class data (#4454) @NV-jpt
    • Accept fold index for TargetEncoder (#4453) @daxiongshu
    • Move NVTX range helpers to raft (#4445) @achirkin

    🛠️ Improvements

    • Fix packages upload (#4517) @Ethyling
    • Testing split fused l2 knn compilation units (#4514) @cjnolet
    • Prepare upload scripts for Python 3.7 removal (#4500) @Ethyling
    • Renaming macros with their RAFT counterparts (#4496) @divyegala
    • Allow CuPy 10 (#4487) @jakirkham
    • Upgrade Treelite to 2.2.1 (#4484) @hcho3
    • Unpin dask and distributed (#4482) @galipremsagar
    • Support categorical splits in in TreeExplainer (#4473) @hcho3
    • Remove RAFT memory management (#4468) @viclafargue
    • Add missing imports tests (#4452) @Ethyling
    • Update CUDA 11.5 conda environment to use 22.02 pinnings. (#4450) @bdice
    • Support cuML / scikit-learn RF classifiers in TreeExplainer (#4447) @hcho3
    • Remove IncludeCategories from .clang-format (#4438) @codereport
    • Simplify perplexity normalization in t-SNE (#4425) @zbjornson
    • Unify dense and sparse tests (#4417) @levsnv
    • Update ucx-py version on release using rvc (#4411) @Ethyling
    • Universal Treelite tree walk function for FIL (#4407) @levsnv
    • Update to UCX-Py 0.24 (#4396) @pentschev
    • Using sparse public API functions from RAFT (#4389) @cjnolet
    • Add a warning to prefer LinearSVM over SVM(kernel='linear') (#4382) @achirkin
    • Hiding cusparse deprecation warnings (#4373) @cjnolet
    • Unify dense and sparse import in FIL (#4328) @levsnv
    • Integrating RAFT handle updates (#4313) @divyegala
    • Use RAFT template instantiations for distances (#4302) @cjnolet
    • RF: code re-organization to enhance build parallelism (#4299) @venkywonka
    • Add option to build faiss and treelite shared libs, inherit common dependencies from raft (#4256) @trxcllnt
    Source code(tar.gz)
    Source code(zip)
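    As a quick illustration of the TargetEncoder work referenced in #4453, a minimal usage sketch; the cuml.preprocessing path and constructor arguments shown are assumptions, and the new fold-index argument itself is not exercised here.

    # Out-of-fold target encoding of a categorical column (sketch)
    import cudf
    from cuml.preprocessing import TargetEncoder

    df = cudf.DataFrame({"cat": [0, 1, 0, 1, 1, 0],
                         "y":   [1.0, 0.0, 1.0, 1.0, 0.0, 0.0]})

    encoder = TargetEncoder(n_folds=2, smooth=1.0)
    encoded = encoder.fit_transform(df["cat"], df["y"])
    print(encoded)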
  • v21.12.00(Dec 8, 2021)

    🚨 Breaking Changes

    • Fix indexing of PCA to use safer types (#4255) @lowener
    • RF: Add Gamma and Inverse Gaussian loss criteria (#4216) @venkywonka
    • update RF docs (#4138) @venkywonka

    🐛 Bug Fixes

    • Update conda recipe to have explicit libcusolver (#4392) @dantegd
    • Restore FIL convention of inlining code (#4366) @levsnv
    • Fix SVR intercept AttributeError (#4358) @lowener
    • Fix is_stable_build logic for CI scripts (#4350) @ajschmidt8
    • Temporarily disable rmm devicebuffer in array.py (#4333) @dantegd
    • Fix categorical test in python (#4326) @levsnv
    • Revert "Merge pull request #4319 from AyodeAwe/branch-21.12" (#4325) @ajschmidt8
    • Preserve indexing in methods when applied to DataFrame and Series objects (#4317) @dantegd
    • Fix potential CUDA context poison when negative (invalid) categories provided to FIL model (#4314) @levsnv
    • Using sparse expanded distances where possible (#4310) @cjnolet
    • Fix for mean_squared_error (#4287) @viclafargue
    • Fix for Categorical Naive Bayes sparse handling (#4277) @lowener
    • Throw an explicit exception if the input array is empty in DBSCAN.fit #4273 (#4275) @viktorkovesd
    • Fix KernelExplainer returning TypeError for certain input (#4272) @Nanthini10
    • Remove most warnings from pytest suite (#4196) @dantegd

    📖 Documentation

    • Add experimental GPUTreeSHAP to API doc (#4398) @hcho3
    • Fix GLM typo on device/host pointer (#4320) @lowener
    • update RF docs (#4138) @venkywonka

    🚀 New Features

    • Add GPUTreeSHAP to cuML explainer module (experimental) (#4351) @hcho3
    • Enable training single GPU cuML models using Dask DataFrames and Series (#4300) @ChrisJar
    • LinearSVM using QN solvers (#4268) @achirkin
    • Add support for exogenous variables to ARIMA (#4221) @Nyrio
    • Use opt-in shared memory carveout for FIL (#3759) @levsnv
    • Symbolic Regression/Classification C/C++ (#3638) @vimarsh6739

    🛠️ Improvements

    • Fix Changelog Merge Conflicts for branch-21.12 (#4393) @ajschmidt8
    • Pin max dask and distributed to 2021.11.2 (#4390) @galipremsagar
    • Fix forward merge #4349 (#4374) @dantegd
    • Upgrade clang to 11.1.0 (#4372) @galipremsagar
    • Update clang-format version in docs; allow unanchored version string (#4365) @zbjornson
    • Add CUDA 11.5 developer environment (#4364) @dantegd
    • Fix aliasing violation in t-SNE (#4363) @zbjornson
    • Promote FITSNE from experimental (#4361) @lowener
    • Fix unnecessary f32/f64 conversions in t-SNE KL calc (#4331) @zbjornson
    • Update rapids-cmake version (#4330) @dantegd
    • rapids-cmake version update to 21.12 (#4327) @dantegd
    • Use compute-sanitizer instead of cuda-memcheck (#4324) @teju85
    • Ability to pass fp64 type to cuml benchmarks (#4323) @teju85
    • Split treelite fil import from forest object definition (#4306) @levsnv
    • update xgboost version (#4301) @msadang
    • Accounting for RAFT updates to matrix, stats, and random implementations in detail (#4294) @divyegala
    • Update cudf matrix calls for to_numpy and to_cupy (#4293) @dantegd
    • Update conda recipes for Enhanced Compatibility effort (#4288) @ajschmidt8
    • Increase parallelism from 4 to 8 jobs in CI (#4286) @dantegd
    • RAFT distance prims public API update (#4280) @cjnolet
    • Update to UCX-Py 0.23 (#4274) @pentschev
    • In FIL, clip blocks_per_sm to one wave instead of asserting (#4271) @levsnv
    • Update of "Gracefully accept 'n_jobs', a common sklearn parameter, in NearestNeighbors Estimator" (#4267) @NV-jpt
    • Improve numerical stability of the Kalman filter for ARIMA (#4259) @Nyrio
    • Fix indexing of PCA to use safer types (#4255) @lowener
    • Change calculation of ARIMA confidence intervals (#4248) @Nyrio
    • Unpin dask & distributed in CI (#4235) @galipremsagar
    • RF: Add Gamma and Inverse Gaussian loss criteria (#4216) @venkywonka
    • Exposing KL divergence in TSNE (#4208) @viclafargue
    • Unify template parameter dispatch for FIL inference and shared memory footprint estimation (#4013) @levsnv
    Source code(tar.gz)
    Source code(zip)
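    The LinearSVM work in #4268 adds QN-based linear SVM estimators; a minimal sketch, assuming a scikit-learn-like LinearSVC class under cuml.svm:

    # Linear SVM classification sketch (class name and parameters assumed)
    import cupy as cp
    from cuml.svm import LinearSVC

    X = cp.random.uniform(size=(200, 5)).astype(cp.float32)
    y = (X[:, 0] > 0.5).astype(cp.float32)

    clf = LinearSVC(C=1.0, max_iter=1000)
    clf.fit(X, y)
    print(clf.predict(X[:10]))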
  • v21.10.02(Nov 12, 2021)

  • v21.10.01(Nov 10, 2021)

  • v21.08.03(Nov 8, 2021)

  • v21.10.00(Oct 6, 2021)

    🚨 Breaking Changes

    • RF: python api behaviour refactor (#4207) @venkywonka
    • Implement vector leaf for random forest (#4191) @RAMitchell
    • Random forest refactoring (#4166) @RAMitchell
    • RF: Add Poisson deviance impurity criterion (#4156) @venkywonka
    • avoid paramsSolver::{n_rows,n_cols} shadowing their base class counterparts (#4130) @yitao-li
    • Apply modifications to account for RAFT changes (#4077) @viclafargue

    🐛 Bug Fixes

    • Update scikit-learn version in conda dev envs to 0.24 (#4241) @dantegd
    • Using pinned host memory for Random Forest and DBSCAN (#4215) @divyegala
    • Make sure we keep the rapids-cmake and cuml cal version in sync (#4213) @robertmaynard
    • Add thrust_create_target to install export in CMakeLists (#4209) @dantegd
    • Change the error type to match sklearn. (#4198) @achirkin
    • Fixing remaining hdbscan bug (#4179) @cjnolet
    • Fix for cuDF changes to cudf.core (#4168) @dantegd
    • Fixing UMAP reproducibility pytest failures in 11.4 by using random init for now (#4152) @cjnolet
    • avoid paramsSolver::{n_rows,n_cols} shadowing their base class counterparts (#4130) @yitao-li
    • Use the new RAPIDS.cmake to fetch rapids-cmake (#4102) @robertmaynard

    📖 Documentation

    • Expose train_test_split in API doc (#4234) @hcho3
    • Adding docs for .get_feature_names() inside TfidfVectorizer (#4226) @mayankanand007
    • Removing experimental flag from hdbscan description in docs (#4211) @cjnolet
    • updated build instructions (#4200) @shaneding
    • Forward-merge branch-21.08 to branch-21.10 (#4171) @jakirkham

    🚀 New Features

    • Experimental option to build libcuml++ only with FIL (#4225) @dantegd
    • FIL to import categorical models from treelite (#4173) @levsnv
    • Add hamming, jensen-shannon, kl-divergence, correlation and russellrao distance metrics (#4155) @mdoijade
    • Add Categorical Naive Bayes (#4150) @lowener
    • FIL to infer categorical forests and generate them in C++ tests (#4092) @levsnv
    • Add Gaussian Naive Bayes (#4079) @lowener
    • ARIMA - Add support for missing observations and padding (#4058) @Nyrio

    🛠️ Improvements

    • Pin max dask and distributed versions to 2021.09.1 (#4229) @galipremsagar
    • Fea/umap refine (#4228) @AjayThorve
    • Upgrade Treelite to 2.1.0 (#4220) @hcho3
    • Add option to clone RAFT even if it is in the environment (#4217) @dantegd
    • RF: python api behaviour refactor (#4207) @venkywonka
    • Pytest updates for Scikit-learn 0.24 (#4205) @dantegd
    • Faster glm ols-via-eigendecomposition algorithm (#4201) @achirkin
    • Implement vector leaf for random forest (#4191) @RAMitchell
    • Refactor kmeans sampling code (#4190) @Nanthini10
    • Gracefully accept 'n_jobs', a common sklearn parameter, in NearestNeighbors Estimator (#4178) @NV-jpt
    • Update with rapids cmake new features (#4175) @robertmaynard
    • Update to UCX-Py 0.22 (#4174) @pentschev
    • Random forest refactoring (#4166) @RAMitchell
    • Fix log level for dask tree_reduce (#4163) @lowener
    • Add CUDA 11.4 development environment (#4160) @dantegd
    • RF: Add Poisson deviance impurity criterion (#4156) @venkywonka
    • Split FIL infer_k into phases to speed up compilation (when a patch is applied) (#4148) @levsnv
    • RF node queue rewrite (#4125) @RAMitchell
    • Remove max version pin for dask & distributed on development branch (#4118) @galipremsagar
    • Correct name of a cmake function in get_spdlog.cmake (#4106) @robertmaynard
    • Apply modifications to account for RAFT changes (#4077) @viclafargue
    • Warnings are errors (#4075) @harrism
    • ENH Replace gpuci_conda_retry with gpuci_mamba_retry (#4065) @dillon-cullinan
    • Changes to NearestNeighbors to call 2d random ball cover (#4003) @cjnolet
    • support space in workspace (#3752) @jolorunyomi
    Source code(tar.gz)
    Source code(zip)
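    To put the Naive Bayes additions (#4079, #4150) in context, a small sketch assuming the Gaussian variant follows the scikit-learn API under cuml.naive_bayes:

    # Gaussian Naive Bayes sketch (API assumed to match scikit-learn)
    import cupy as cp
    from cuml.naive_bayes import GaussianNB

    X = cp.random.normal(size=(300, 4)).astype(cp.float32)
    y = cp.random.randint(0, 2, size=300).astype(cp.int32)

    gnb = GaussianNB()
    gnb.fit(X, y)
    print(gnb.predict(X[:5]))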
  • v21.08.02(Sep 16, 2021)

  • v21.08.01(Aug 6, 2021)

  • v21.08.00(Aug 4, 2021)

    🚨 Breaking Changes

    • Remove deprecated target_weights in UMAP (#4081) @lowener
    • Upgrade Treelite to 2.0.0 (#4072) @hcho3
    • RF/DT cleanup (#4005) @venkywonka
    • RF: memset and batch size optimization for computing splits (#4001) @venkywonka
    • Remove old RF backend (#3868) @RAMitchell
    • Enable warp-per-tree inference in FIL for regression and binary classification (#3760) @levsnv

    🐛 Bug Fixes

    • Disabling umap reproducibility tests for cuda 11.4 (#4128) @cjnolet
    • Fix for crash in RF when max_leaves parameter is specified (#4126) @vinaydes
    • Running umap mnmg test twice (#4112) @cjnolet
    • Minimal fix for SparseRandomProjection (#4100) @viclafargue
    • Creating copy of components in PCA transform and inverse transform (#4099) @divyegala
    • Fix SVM model parameter handling in case n_support=0 (#4097) @tfeher
    • Fix set_params for linear models (#4096) @lowener
    • Fix train test split pytest comparison (#4062) @dantegd
    • Fix fit_transform on KMeans (#4055) @lowener
    • Fixing -1 key access in 1nn reduce op in HDBSCAN (#4052) @divyegala
    • Disable installing gbench to avoid container permission issues (#4049) @dantegd
    • Fix double fit crash in preprocessing models (#4040) @viclafargue
    • Always add faiss library alias if it's missing (#4028) @trxcllnt
    • Fixing intermittent HBDSCAN pytest failure in CI (#4025) @divyegala
    • HDBSCAN bug on A100 (#4024) @divyegala
    • Add treelite include paths to treelite targets (#4023) @trxcllnt
    • Add Treelite_BINARY_DIR include to cuml++ build interface include paths (#4018) @trxcllnt
    • Small ARIMA-related bug fixes in Hessenberg reduction and make_arima (#4017) @Nyrio
    • Update setup.py (#4015) @ajschmidt8
    • Update treelite version in get_treelite.cmake (#4014) @ajschmidt8
    • Fix build with latest RAFT branch-21.08 (#4012) @trxcllnt
    • Skipping hdbscan pytests when gpu is a100 (#4007) @cjnolet
    • Using 64-bit array lengths to increase scale of pca & tsvd (#3983) @cjnolet
    • Fix MNMG test in Dask RF (#3964) @hcho3
    • Use nested include in destination of install headers to avoid docker permission issues (#3962) @dantegd
    • Fix automerge #3939 (#3952) @dantegd
    • Update UCX-Py version to 0.21 (#3950) @pentschev
    • Fix kernel and line info in cmake (#3941) @dantegd
    • Fix for multi GPU PCA compute failing bug after transform and added error handling when n_components is not passed (#3912) @akaanirban
    • Tolerate QN linesearch failures when it's harmless (#3791) @achirkin

    📖 Documentation

    • Improve docstrings for silhouette score metrics. (#4026) @bdice
    • Update CHANGELOG.md link (#3956) @Salonijain27
    • Update documentation build examples to be generator agnostic (#3909) @robertmaynard
    • Improve FIL code readability and documentation (#3056) @levsnv

    🚀 New Features

    • Add Multinomial and Bernoulli Naive Bayes variants (#4053) @lowener
    • Add weighted K-Means sampling for SHAP (#4051) @Nanthini10
    • Use chebyshev, canberra, hellinger and minkowski distance metrics (#3990) @mdoijade
    • Implement vector leaf prediction for fil. (#3917) @RAMitchell
    • change TargetEncoder's smooth argument from ratio to count (#3876) @daxiongshu
    • Enable warp-per-tree inference in FIL for regression and binary classification (#3760) @levsnv

    🛠️ Improvements

    • Remove clang/clang-tools from conda recipe (#4109) @dantegd
    • Pin dask version (#4108) @galipremsagar
    • ANN warnings/tests updates (#4101) @viclafargue
    • Removing local memory operations from computeSplitKernel and other optimizations (#4083) @vinaydes
    • Fix libfaiss dependency to not expressly depend on conda-forge (#4082) @Ethyling
    • Remove deprecated target_weights in UMAP (#4081) @lowener
    • Upgrade Treelite to 2.0.0 (#4072) @hcho3
    • Optimize dtype conversion for FIL (#4070) @dantegd
    • Adding quick notes to HDBSCAN public API docs as to why discrepancies may occur between cpu and gpu impls. (#4061) @cjnolet
    • Update conda environment name for CI (#4039) @ajschmidt8
    • Rewrite random forest gtests (#4038) @RAMitchell
    • Updating Clang Version to 11.0.0 (#4029) @codereport
    • Raise ARIMA parameter limits from 4 to 8 (#4022) @Nyrio
    • Testing extract clusters in HDBSCAN (#4009) @divyegala
    • ARIMA - Kalman loop rewrite: single megakernel instead of host loop (#4006) @Nyrio
    • RF/DT cleanup (#4005) @venkywonka
    • Exposing condensed hierarchy through cython for easier unit-level testing (#4004) @cjnolet
    • Use the 21.08 branch of rapids-cmake as rmm requires it (#4002) @robertmaynard
    • RF: memset and batch size optimization for computing splits (#4001) @venkywonka
    • Reducing cluster size to number of selected clusters. Returning stability scores (#3987) @cjnolet
    • HDBSCAN: Lazy-loading (and caching) condensed & single-linkage tree objects (#3986) @cjnolet
    • Fix 21.08 forward-merge conflicts (#3982) @ajschmidt8
    • Update Dask/Distributed version (#3978) @pentschev
    • Use clang-tools on x86 only (#3969) @jakirkham
    • Promote trustworthiness_score to public header, add missing includes, update dependencies (#3968) @trxcllnt
    • Moving FAISS ANN wrapper to raft (#3963) @cjnolet
    • Add MG weighted k-means (#3959) @lowener
    • Remove unused code in UMAP. (#3931) @trivialfis
    • Fix automerge #3900 and correct package versions in meta packages (#3918) @dantegd
    • Adaptive stress tests when GPU memory capacity is insufficient (#3916) @lowener
    • Fix merge conflicts (#3892) @ajschmidt8
    • Remove old RF backend (#3868) @RAMitchell
    • Refactor to extract random forest objectives (#3854) @RAMitchell
    Source code(tar.gz)
    Source code(zip)
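    For the Naive Bayes variants added in #4053, a hedged sketch assuming a scikit-learn-style BernoulliNB under cuml.naive_bayes:

    # Bernoulli Naive Bayes on binary features (sketch)
    import cupy as cp
    from cuml.naive_bayes import BernoulliNB

    X = (cp.random.uniform(size=(100, 6)) > 0.5).astype(cp.float32)
    y = cp.random.randint(0, 2, size=100).astype(cp.int32)

    bnb = BernoulliNB()
    bnb.fit(X, y)
    print(bnb.predict(X[:5]))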
  • v21.06.02(Jun 17, 2021)

  • v21.06.01(Jun 10, 2021)

  • v21.06.00(Jun 9, 2021)

    🚨 Breaking Changes

    • Remove Base.enable_rmm_pool method as it is no longer needed (#3875) @teju85
    • RF: Make experimental-backend default for regression tasks and deprecate old-backend. (#3872) @venkywonka
    • Deterministic UMAP with floating point rounding. (#3848) @trivialfis
    • Fix RF regression performance (#3845) @RAMitchell
    • Add feature to print forest shape in FIL upon importing (#3763) @levsnv
    • Remove 'seed' and 'output_type' deprecated features (#3739) @lowener

    🐛 Bug Fixes

    • Disable UMAP deterministic test on CTK11.2 (#3942) @trivialfis
    • Revert #3869 (#3933) @hcho3
    • RF: fix the bug in pdf_to_cdf device function that causes hang when n_bins > TPB && n_bins % TPB != 0 (#3921) @venkywonka
    • Fix number of permutations in pytest and getting handle for cuml models (#3920) @dantegd
    • Fix typo in umap target_weight parameter (#3914) @lowener
    • correct compilation of cuml c library (#3908) @robertmaynard
    • Correct install path for include folder to avoid double nesting (#3901) @dantegd
    • Add type check for y in train_test_split (#3886) @Nanthini10
    • Fix for MNMG test_rf_classification_dask_fil_predict_proba (#3831) @lowener
    • Fix MNMG test test_rf_regression_dask_fil (#3830) @hcho3
    • AgglomerativeClustering support single cluster and ignore only zero distances from self-loops (#3824) @cjnolet

    📖 Documentation

    • Small doc fixes for 21.06 release (#3936) @dantegd
    • Document ability to export cuML RF to predict on other machines (#3890) @hcho3

    🚀 New Features

    • Deterministic UMAP with floating point rounding. (#3848) @trivialfis
    • HDBSCAN (#3821) @cjnolet
    • Add feature to print forest shape in FIL upon importing (#3763) @levsnv

    🛠️ Improvements

    • Pin dask to 2021.5.1 for 21.06 release (#3937) @dantegd
    • Upgrade xgboost to 1.4.2 (#3925) @dantegd
    • Use UCX-Py 0.20 (#3911) @jakirkham
    • Upgrade NCCL to 2.9.9 (#3902) @dantegd
    • Update conda developer environments (#3898) @viclafargue
    • ARIMA: pre-allocation of temporary memory to reduce latencies (#3895) @Nyrio
    • Condense TSNE parameters into a struct (#3884) @lowener
    • Update CHANGELOG.md links for calver (#3883) @ajschmidt8
    • Make sure __init__ is called in graph callback. (#3881) @trivialfis
    • Update docs build script (#3877) @ajschmidt8
    • Remove Base.enable_rmm_pool method as it is no longer needed (#3875) @teju85
    • RF: Make experimental-backend default for regression tasks and deprecate old-backend. (#3872) @venkywonka
    • Enable probability output from RF binary classifier (alternative implementation) (#3869) @hcho3
    • CI test speed improvement (#3851) @lowener
    • Fix RF regression performance (#3845) @RAMitchell
    • Update to CMake 3.20 features, rapids-cmake and CPM (#3844) @dantegd
    • Support sparse input features in QN solvers and Logistic Regression (#3827) @achirkin
    • Trustworthiness score improvements (#3826) @viclafargue
    • Performance optimization of RF split kernels by removing empty cycles (#3818) @vinaydes
    • Correct deprecate positional args decorator for CalVer (#3784) @lowener
    • ColumnTransformer & FunctionTransformer (#3745) @viclafargue
    • Remove 'seed' and 'output_type' deprecated features (#3739) @lowener
    Source code(tar.gz)
    Source code(zip)
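    A minimal sketch of the HDBSCAN estimator introduced in #3821, assuming cuml.cluster.HDBSCAN exposes an interface similar to the hdbscan package:

    # Density-based clustering with HDBSCAN (sketch)
    import cupy as cp
    from cuml.cluster import HDBSCAN

    X = cp.random.normal(size=(500, 3)).astype(cp.float32)

    clusterer = HDBSCAN(min_samples=5, min_cluster_size=15)
    clusterer.fit(X)
    print(clusterer.labels_[:10])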
  • v0.19.0(Apr 21, 2021)

    🚨 Breaking Changes

    • Use the new RF backend by default for classification (#3686) @hcho3
    • Deprecating quantile-per-tree and removing three previously deprecated Random Forest parameters (#3667) @vinaydes
    • Update predict() / predict_proba() of RF to match sklearn (#3609) @hcho3
    • Upgrade FAISS to 1.7.x (#3509) @viclafargue
    • cuML's estimator Base class for preprocessing models (#3270) @viclafargue

    🐛 Bug Fixes

    • Fix brute force KNN distance metric issue (#3755) @viclafargue
    • Fix min_max_axis (#3735) @viclafargue
    • Fix NaN errors observed with ARIMA in CUDA 11.2 builds (#3730) @Nyrio
    • Fix random state generator (#3716) @viclafargue
    • Fixes the out of memory access issue for computeSplit kernels (#3715) @vinaydes
    • Fixing umap gtest failure under cuda 11.2. (#3696) @cjnolet
    • Fix irreproducibility issue in RF classification (#3693) @vinaydes
    • BUG fix BatchedLevelAlgo DtClsTest & DtRegTest failing tests (#3690) @venkywonka
    • Restore the functionality of RF score() (#3685) @hcho3
    • Use main build.sh to build docs in docs CI (#3681) @dantegd
    • Revert "Update conda recipes pinning of repo dependencies" (#3680) @raydouglass
    • Skip tests that fail on CUDA 11.2 (#3679) @dantegd
    • Dask KNN Cl&Re 1D labels (#3668) @viclafargue
    • Update conda recipes pinning of repo dependencies (#3666) @mike-wendt
    • OOB access in GLM SoftMax (#3642) @divyegala
    • SilhouetteScore C++ tests seed (#3640) @divyegala
    • SimpleImputer fix (#3624) @viclafargue
    • Silhouette Score make_monotonic for non-monotonic label set (#3619) @divyegala
    • Fixing support for empty rows in sparse Jaccard / Cosine (#3612) @cjnolet
    • Fix train_test_split with stratify option (#3611) @Nanthini10
    • Update predict() / predict_proba() of RF to match sklearn (#3609) @hcho3
    • Change dask and distributed branch to main (#3593) @dantegd
    • Fixes memory allocation for experimental backend and improves quantile computations (#3586) @vinaydes
    • Add ucx-proc package back that got lost during an auto merge conflict (#3550) @dantegd
    • Fix failing Hellinger gtest (#3549) @cjnolet
    • Directly invoke make for non-CMake docs target (#3534) @wphicks
    • Fix Codecov.io Coverage Upload for Branch Builds (#3524) @mdemoret-nv
    • Ensure global_output_type is thread-safe (#3497) @wphicks
    • List as input for SimpleImputer (#3489) @viclafargue

    📖 Documentation

    • Add sparse docstring comments (#3712) @JohnZed
    • FIL and Dask demo (#3698) @miroenev
    • Deprecating quantile-per-tree and removing three previously deprecated Random Forest parameters (#3667) @vinaydes
    • Fixing Indentation for Docstring Generators (#3650) @mdemoret-nv
    • Update doc to indicate ExtraTree support (#3635) @hcho3
    • Update doc, now that FIL supports multi-class classification (#3634) @hcho3
    • Document model_type='xgboost_json' in FIL (#3633) @hcho3
    • Including log loss metric to the documentation website (#3617) @lowener
    • Update the build doc regarding the use of GCC 7.5 (#3605) @hcho3
    • Update One-Hot Encoder doc (#3600) @lowener
    • Fix documentation of KMeans (#3595) @lowener

    🚀 New Features

    • Reduce the size of the cuml libraries (#3702) @robertmaynard
    • Use ninja as default CMake generator (#3664) @wphicks
    • Single-Linkage Hierarchical Clustering Python Wrapper (#3631) @cjnolet
    • Support for precomputed distance matrix in DBSCAN (#3585) @Nyrio
    • Adding haversine to brute force knn (#3579) @cjnolet
    • Support for sample_weight parameter in LogisticRegression (#3572) @viclafargue
    • Provide "--ccache" flag for build.sh (#3566) @wphicks
    • Eliminate unnecessary includes discovered by cppclean (#3564) @wphicks
    • Single-linkage Hierarchical Clustering C++ (#3545) @cjnolet
    • Expose sparse distances via semiring to Python API (#3516) @lowener
    • Use cmake --build in build.sh to facilitate switching build tools (#3487) @wphicks
    • Add cython hinge_loss (#3409) @Nanthini10
    • Adding CodeCov Info for Dask Tests (#3338) @mdemoret-nv
    • Add predict_proba() to XGBoost-style models in FIL C++ (#2894) @levsnv

    🛠️ Improvements

    • Updating docs, readme, and umap param tests for 0.19 (#3731) @cjnolet
    • Locking RAFT hash for 0.19 (#3721) @cjnolet
    • Upgrade to Treelite 1.1.0 (#3708) @hcho3
    • Update to XGBoost 1.4.0rc1 (#3699) @hcho3
    • Use the new RF backend by default for classification (#3686) @hcho3
    • Update LogisticRegression documentation (#3677) @viclafargue
    • Preprocessing out of experimental (#3676) @viclafargue
    • ENH Decision Tree new backend computeSplit*Kernel histogram calculation optimization (#3674) @venkywonka
    • Remove check_cupy8 (#3669) @viclafargue
    • Use custom conda build directory for ccache integration (#3658) @dillon-cullinan
    • Disable three flaky tests (#3657) @hcho3
    • CUDA 11.2 developer environment (#3648) @dantegd
    • Store data frequencies in tree nodes of RF (#3647) @hcho3
    • Row major Gram matrices (#3639) @tfeher
    • Converting all Estimator Constructors to Keyword Arguments (#3636) @mdemoret-nv
    • Adding make_pipeline + test score with pipeline (#3632) @viclafargue
    • ENH Decision Tree new backend computeSplitClassificationKernel histogram calculation and occupancy optimization (#3616) @venkywonka
    • Revert "ENH Fix stale GHA and prevent duplicates " (#3614) @mike-wendt
    • ENH Fix stale GHA and prevent duplicates (#3613) @mike-wendt
    • KNN from RAFT (#3603) @viclafargue
    • Update Changelog Link (#3601) @ajschmidt8
    • Move SHAP explainers out of experimental (#3596) @dantegd
    • Fixing compatibility issue with CUDA array interface (#3594) @lowener
    • Remove cutlass usage in row major input for euclidean exp/unexp, cosine and L1 distance matrix (#3589) @mdoijade
    • Test FIL probabilities with absolute error thresholds in python (#3582) @levsnv
    • Removing sparse prims and fused l2 nn prim from cuml (#3578) @cjnolet
    • Prepare Changelog for Automation (#3570) @ajschmidt8
    • Print debug message if SVM convergence is poor (#3562) @tfeher
    • Fix merge conflicts in 3552 (#3557) @ajschmidt8
    • Additional distance metrics for ANN (#3533) @viclafargue
    • Improve warning message when QN solver reaches max_iter (#3515) @tfeher
    • Fix merge conflicts in 3502 (#3513) @ajschmidt8
    • Upgrade FAISS to 1.7.x (#3509) @viclafargue
    • ENH Pass ccache variables to conda recipe & use Ninja in CI (#3508) @Ethyling
    • Fix forward-merger conflicts in #3502 (#3506) @dantegd
    • Sklearn meta-estimators into namespace (#3493) @viclafargue
    • Add flexibility to copyright checker (#3466) @lowener
    • Update sparse KNN to use rmm device buffer (#3460) @lowener
    • Fix forward-merger conflicts in #3444 (#3455) @ajschmidt8
    • Replace ML::MetricType with raft::distance::DistanceType (#3389) @lowener
    • RF param initialization cython and C++ layer cleanup (#3358) @venkywonka
    • MNMG RF broadcast feature (#3349) @viclafargue
    • cuML's estimator Base class for preprocessing models (#3270) @viclafargue
    • Make _get_tags a class/static method (#3257) @dantegd
    • NVTX Markers for RF and RF-backend (#3014) @venkywonka
    Source code(tar.gz)
    Source code(zip)
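    The precomputed-distance support for DBSCAN (#3585) can be sketched as follows; the metric="precomputed" spelling and the use of cuml.metrics.pairwise_distances are assumptions here.

    # DBSCAN on a precomputed distance matrix (sketch)
    import cupy as cp
    from cuml.cluster import DBSCAN
    from cuml.metrics import pairwise_distances

    X = cp.random.uniform(size=(100, 2)).astype(cp.float32)
    D = pairwise_distances(X, metric="euclidean")

    db = DBSCAN(eps=0.2, min_samples=5, metric="precomputed")
    db.fit(D)
    print(db.labels_[:10])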
  • v0.18.0(Feb 24, 2021)

    Breaking Changes 🚨

    • cuml.experimental SHAP improvements (#3433) @dantegd
    • Enable feature sampling for the experimental backend of Random Forest (#3364) @vinaydes
    • re-enable cuML's copyright checker script (#3363) @teju85
    • Batched Silhouette Score (#3362) @divyegala
    • Update failing MNMG tests (#3348) @viclafargue
    • Rename print_summary() of Dask RF to get_summary_text(); it now returns string to the client (#3341) @hcho3
    • Rename dump_as_json() -> get_json(); expose it from Dask RF (#3340) @hcho3
    • MNMG KNN consolidation (#3307) @viclafargue
    • Return confusion matrix as int unless float weights are used (#3275) @lowener
    • Approximate Nearest Neighbors (#2780) @viclafargue

    Bug Fixes 🐛

    • HOTFIX Add ucx-proc package back that got lost during an auto merge conflict (#3551) @dantegd
    • Non project-flash CI ml test 18.04 issue debugging and bugfixing (#3495) @dantegd
    • Temporarily xfail KBinsDiscretizer uniform tests (#3494) @wphicks
    • Fix illegal memory accesses when NITEMS > 1, and nrows % NITEMS != 0. (#3480) @canonizer
    • Update call to dask client persist (#3474) @dantegd
    • Adding warning for IVFPQ (#3472) @viclafargue
    • Fix failing sparse NN test in CI by allowing small number of index discrepancies (#3454) @cjnolet
    • Exempting thirdparty code from copyright checks (#3453) @lowener
    • Relaxing Batched SilhouetteScore Test Constraint (#3452) @divyegala
    • Mark kbinsdiscretizer quantile tests as xfail (#3450) @wphicks
    • Fixing documentation on SimpleImputer (#3447) @lowener
    • Skipping IVFPQ (#3429) @viclafargue
    • Adding tol to dask test_kmeans (#3426) @lowener
    • Fix memory bug for SVM with large n_rows (#3420) @tfeher
    • Allow linear regression with CUDA >=11.0 (#3417) @wphicks
    • Fix vectorizer tests by restoring sort behavior in groupby (#3416) @JohnZed
    • Ensure make_classification respects output type (#3415) @wphicks
    • Clean Up #include Dependencies (#3402) @mdemoret-nv
    • Fix Nearest Neighbor Stress Test (#3401) @lowener
    • Fix array_equal in tests (#3400) @viclafargue
    • Improving Copyright Check When Not Running in CI (#3398) @mdemoret-nv
    • Also xfail zlib errors when downloading newsgroups data (#3393) @JohnZed
    • Fix for ANN memory release bug (#3391) @viclafargue
    • XFail Holt Winters test where statsmodels has known issues with gcc 9.3.0 (#3385) @JohnZed
    • FIX Update cupy to >= 7.8 and remove unused build.sh script (#3378) @dantegd
    • re-enable cuML's copyright checker script (#3363) @teju85
    • Update failing MNMG tests (#3348) @viclafargue
    • Rename print_summary() of Dask RF to get_summary_text(); it now returns string to the client (#3341) @hcho3
    • Fixing make_blobs to Respect the Global Output Type (#3339) @mdemoret-nv
    • Fix permutation explainer (#3332) @RAMitchell
    • k-means bug fix in debug build (#3321) @akkamesh
    • Fix for default arguments of PCA (#3320) @lowener
    • Provide workaround for cupy.percentile bug (#3315) @wphicks
    • Fix SVR unit test parameter (#3294) @tfeher
    • Add xfail on fetching 20newsgroup dataset (test_naive_bayes) (#3291) @lowener
    • Remove unused keyword in PorterStemmer code (#3289) @wphicks
    • Remove static specifier in DecisionTree unit test for C++14 compliance (#3281) @wphicks
    • Correct pure virtual declaration in manifold_inputs_t (#3279) @wphicks

    Documentation 📖

    • Correct import path in docs for experimental preprocessing features (#3488) @wphicks
    • Minor doc updates for 0.18 (#3475) @JohnZed
    • Improve Python Docs with Default Role (#3445) @mdemoret-nv
    • Fixing Python Documentation Errors and Warnings (#3428) @mdemoret-nv
    • Remove outdated references to changelog in CONTRIBUTING.md (#3328) @wphicks
    • Adding highlighting to bibtex in readme (#3296) @cjnolet

    New Features 🚀

    • Improve runtime performance of RF to Treelite conversion (#3410) @wphicks
    • Parallelize Treelite to FIL conversion over trees (#3396) @wphicks
    • Parallelize RF to Treelite conversion over trees (#3395) @wphicks
    • Allow saving Dask RandomForest models immediately after training (fixes #3331) (#3388) @jameslamb
    • genetic programming initial structures (#3387) @teju85
    • MNMG DBSCAN (#3382) @Nyrio
    • FIL to use L1 cache when input columns don't fit into shared memory (#3370) @levsnv
    • Enable feature sampling for the experimental backend of Random Forest (#3364) @vinaydes
    • Batched Silhouette Score (#3362) @divyegala
    • Rename dump_as_json() -> get_json(); expose it from Dask RF (#3340) @hcho3
    • Exposing model_selection in a similar way to scikit-learn (#3329) @ptartan21
    • Promote IncrementalPCA from experimental in 0.18 release (#3327) @lowener
    • Create labeler.yml (#3324) @jolorunyomi
    • Add slow high-precision mode to KNN (#3304) @wphicks
    • Sparse TSNE (#3293) @divyegala
    • Sparse Generalized SPMV (semiring) Primitive (#3146) @cjnolet
    • Multiclass meta estimator wrappers and multiclass SVC (#3092) @tfeher
    • Approximate Nearest Neighbors (#2780) @viclafargue
    • Add KNN parameter to t-SNE (#2592) @aleksficek

    Improvements 🛠️

    • Update stale GHA with exemptions & new labels (#3507) @mike-wendt
    • Add GHA to mark issues/prs as stale/rotten (#3500) @Ethyling
    • Fix naive bayes inputs (#3448) @cjnolet
    • Prepare Changelog for Automation (#3442) @ajschmidt8
    • cuml.experimental SHAP improvements (#3433) @dantegd
    • Speed up knn tests (#3411) @JohnZed
    • Replacing sklearn functions with cuml in RF MNMG notebook (#3408) @lowener
    • Auto-label PRs based on their content (#3407) @jolorunyomi
    • Use stable 1.0.0 version of Treelite (#3394) @hcho3
    • API update to match RAFT PR #120 (#3386) @drobison00
    • Update linear models to use RMM memory allocation (#3365) @lowener
    • Updating dense pairwise distance enum names (#3352) @cjnolet
    • Upgrade Treelite module (#3316) @hcho3
    • Removed FIL node types with _t suffix (#3314) @canonizer
    • MNMG KNN consolidation (#3307) @viclafargue
    • Updating PyTests to Stay Below 4 Gb Limit (#3306) @mdemoret-nv
    • Refactoring: move internal FIL interface to a separate file (#3292) @canonizer
    • Return confusion matrix as int unless float weights are used (#3275) @lowener
    • 018 add unfitted error pca & tests on IPCA (#3272) @lowener
    • Linear models predict function consolidation (#3256) @dantegd
    • Preparing sparse primitives for movement to RAFT (#3157) @cjnolet
    Source code(tar.gz)
    Source code(zip)
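    Since #3329 exposes model_selection in a scikit-learn-like way, a short usage sketch of train_test_split; the exact keyword arguments shown are assumptions.

    # GPU train/test split with stratification (sketch)
    import cudf
    from cuml.model_selection import train_test_split

    df = cudf.DataFrame({"a": list(range(100)), "b": list(range(100))})
    y = cudf.Series([i % 2 for i in range(100)])

    X_train, X_test, y_train, y_test = train_test_split(
        df, y, test_size=0.2, random_state=42, stratify=y)
    print(len(X_train), len(X_test))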
  • v0.17.0(Dec 10, 2020)

  • v0.16.0(Oct 21, 2020)

  • v0.15.0(Sep 16, 2020)

  • v0.5.1(Feb 7, 2019)

  • v0.6.0.dev0(Jan 31, 2019)
