shap-hypetune

A Python package for simultaneous Hyperparameter Tuning and Feature Selection for Gradient Boosting Models.

shap-hypetune diagram

Overview

Hyperparameter tuning and feature selection are two common steps in every machine learning pipeline. Most of the time they are computed separately and independently, which may result in suboptimal performance and a more time-consuming process.

shap-hypetune aims to combine hyperparameter tuning and feature selection in a single pipeline, optimizing the number of selected features while searching for the best parameter configuration. Hyperparameter tuning and feature selection can also be carried out as standalone operations.

shap-hypetune main features:

  • designed for gradient boosting models, such as LGBModel or XGBModel;
  • developed to integrate with the scikit-learn ecosystem;
  • effective in both classification and regression tasks;
  • customizable training process, supporting early stopping and all the other fitting options available in the standard algorithms' API;
  • ranking-based feature selection algorithms: Recursive Feature Elimination (RFE), Recursive Feature Addition (RFA), or Boruta;
  • classical boosting-based feature importances or SHAP feature importances (the latter can also be computed on the eval_set);
  • grid search, random search, or Bayesian search (from hyperopt);
  • parallelized computations with joblib.

Installation

pip install --upgrade shap-hypetune

lightgbm and xgboost are not required dependencies. The module depends only on NumPy, shap, scikit-learn and hyperopt. Python 3.6 or above is supported.

Usage

from shaphypetune import BoostSearch, BoostRFE, BoostRFA, BoostBoruta

Hyperparameters Tuning

BoostSearch(
    estimator,                              # LGBModel or XGBModel
    param_grid=None,                        # parameters to be optimized
    greater_is_better=False,                # minimize or maximize the monitored score
    n_iter=None,                            # number of sampled parameter configurations
    sampling_seed=None,                     # the seed used for parameter sampling
    verbose=1,                              # verbosity mode
    n_jobs=None                             # number of jobs to run in parallel
)
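
A minimal fitting sketch (the estimator choice, grid values, and the X_train / y_train / X_valid / y_valid variables are placeholders, not from the package docs):

from lightgbm import LGBMClassifier
from shaphypetune import BoostSearch

# illustrative grid; any iterable of candidate values per parameter
param_grid = {
    'n_estimators': [150, 200, 250],
    'learning_rate': [0.1, 0.2, 0.3],
    'num_leaves': [25, 30, 35],
}

model = BoostSearch(LGBMClassifier(), param_grid=param_grid)
model.fit(X_train, y_train,
          eval_set=[(X_valid, y_valid)],
          early_stopping_rounds=6, verbose=0)

With a plain grid of lists the search is exhaustive; setting n_iter samples that many configurations at random instead, in line with the n_iter comment above.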

Feature Selection (RFE)

BoostRFE(  
    estimator,                              # LGBModel or XGBModel
    min_features_to_select=None,            # the minimum number of features to be selected  
    step=1,                                 # number of features to remove at each iteration  
    param_grid=None,                        # parameters to be optimized  
    greater_is_better=False,                # minimize or maximize the monitored score  
    importance_type='feature_importances',  # which importance measure to use: default or shap  
    train_importance=True,                  # where to compute the shap feature importance  
    n_iter=None,                            # number of sampled parameter configurations  
    sampling_seed=None,                     # the seed used for parameter sampling  
    verbose=1,                              # verbosity mode  
    n_jobs=None                             # number of jobs to run in parallel  
)  
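
A standalone selection sketch with SHAP importances computed on the validation set (the 'shap_importances' value mirrors the usage shown in the comments below; the scikit-learn-style transform at the end is assumed from the package's scikit-learn compatibility; data variables are placeholders):

from xgboost import XGBRegressor
from shaphypetune import BoostRFE

selector = BoostRFE(XGBRegressor(),
                    min_features_to_select=5, step=2,
                    importance_type='shap_importances',
                    train_importance=False)      # compute SHAP importances on the eval_set
selector.fit(X_train, y_train,
             eval_set=[(X_valid, y_valid)],
             early_stopping_rounds=6, verbose=0)

X_train_reduced = selector.transform(X_train)    # keep only the selected features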

Feature Selection (BORUTA)

BoostBoruta(
    estimator,                              # LGBModel or XGBModel
    perc=100,                               # threshold used to compare shadow and real features
    alpha=0.05,                             # p-value levels for feature rejection
    max_iter=100,                           # maximum Boruta iterations to perform
    early_stopping_boruta_rounds=None,      # maximum iterations without confirming a feature
    param_grid=None,                        # parameters to be optimized
    greater_is_better=False,                # minimize or maximize the monitored score
    importance_type='feature_importances',  # which importance measure to use: default or shap
    train_importance=True,                  # where to compute the shap feature importance
    n_iter=None,                            # number of sampled parameter configurations
    sampling_seed=None,                     # the seed used for parameter sampling
    verbose=1,                              # verbosity mode
    n_jobs=None                             # number of jobs to run in parallel
)
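
A Boruta sketch; n_features_ is the attribute used in the comments below to inspect how many features were confirmed (data variables are placeholders):

from lightgbm import LGBMClassifier
from shaphypetune import BoostBoruta

model = BoostBoruta(LGBMClassifier(),
                    max_iter=100, perc=100,
                    early_stopping_boruta_rounds=6)
model.fit(X_train, y_train,
          eval_set=[(X_valid, y_valid)],
          early_stopping_rounds=6, verbose=0)

print(model.n_features_)                 # number of confirmed features
X_train_confirmed = model.transform(X_train)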

Feature Selection (RFA)

BoostRFA(
    estimator,                              # LGBModel or XGBModel
    min_features_to_select=None,            # the minimum number of features to be selected
    step=1,                                 # number of features to remove at each iteration
    param_grid=None,                        # parameters to be optimized
    greater_is_better=False,                # minimize or maximize the monitored score
    importance_type='feature_importances',  # which importance measure to use: default or shap
    train_importance=True,                  # where to compute the shap feature importance
    n_iter=None,                            # number of sampled parameter configurations
    sampling_seed=None,                     # the seed used for parameter sampling
    verbose=1,                              # verbosity mode
    n_jobs=None                             # number of jobs to run in parallel
)
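
Tuning and selection can also run in a single search by passing a param_grid to any selector. A sketch with a hyperopt space, which triggers the Bayesian search listed among the features above (all names and values are illustrative):

from hyperopt import hp
from xgboost import XGBClassifier
from shaphypetune import BoostRFA

param_dist_hyperopt = {
    'n_estimators': hp.choice('n_estimators', [150, 200, 250]),
    'learning_rate': hp.uniform('learning_rate', 0.05, 0.3),
    'max_depth': hp.choice('max_depth', [4, 6, 8]),
}

model = BoostRFA(XGBClassifier(), param_grid=param_dist_hyperopt,
                 min_features_to_select=5, step=2,
                 n_iter=8, sampling_seed=0)
model.fit(X_train, y_train,
          eval_set=[(X_valid, y_valid)],
          early_stopping_rounds=6, verbose=0)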

Full examples in the notebooks folder.

Comments
  • Suppress warnings

    Hi,

    While running BoostBoruta according to the notebook tutorial I'm getting the following warnings, which I would like to suppress:

    'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. Pass 'early_stopping()' callback via 'callbacks' argument instead.
    'verbose' argument is deprecated and will be removed in a future release of LightGBM. Pass 'log_evaluation()' callback via 'callbacks' argument instead.
    

    Any ideas on how to do that?

    Thank you

    opened by Rane90 4
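
    A hedged workaround sketch, assuming fit keyword arguments are forwarded to the underlying LGBMModel.fit (as the customizable training process noted above suggests): either filter the deprecation warnings, or pass the callbacks that LightGBM itself recommends.

    import warnings
    import lightgbm as lgb

    # Option 1: silence the two deprecation messages quoted above
    warnings.filterwarnings('ignore', message=".*early_stopping_rounds.*")
    warnings.filterwarnings('ignore', message=".*'verbose' argument is deprecated.*")

    # Option 2: pass the callbacks LightGBM recommends instead of the deprecated fit arguments
    model.fit(X_train, y_train,
              eval_set=[(X_valid, y_valid)],
              callbacks=[lgb.early_stopping(6), lgb.log_evaluation(0)])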
  • Can BoostBoruta be used in a scikit-pipeline?

    Hello, First of all, thank you for this great repo. It looks very promising. I'd like to use BoostBoruta within a scikit-pipeline. Is it possible?

    For now, here is the code I've tried with no success:

    # imports needed to run this snippet (added for completeness);
    # clf_lgbm and param_dist_hyperopt are assumed to be defined elsewhere
    from sklearn.pipeline import make_pipeline
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import StandardScaler, OneHotEncoder
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate
    from shaphypetune import BoostBoruta

    # get the categorical and numeric column names
    num_cols = X_train.select_dtypes(exclude=['object']).columns.tolist()
    cat_cols = X_train.select_dtypes(include=['object']).columns.tolist()
    
    # pipeline for numerical columns
    num_pipe = make_pipeline(
        StandardScaler()
    )
    # pipeline for categorical columns
    cat_pipe = make_pipeline(
        OneHotEncoder(handle_unknown='ignore', sparse=False)
    )
    
    # combine both the pipelines
    full_pipe = ColumnTransformer([
        ('num', num_pipe, num_cols),
        ('cat', cat_pipe, cat_cols)
    ])
    
    model = BoostBoruta(
        clf_lgbm, param_grid=param_dist_hyperopt, n_iter=8, sampling_seed=0, importance_type="shap", train_importance=True,n_jobs=-1, verbose=2
    )
    
    pipeline_hypetune = make_pipeline(full_pipe, model)
    model_selection = RepeatedStratifiedKFold(n_splits=10, n_repeats=2, random_state=2022)
    
    results = cross_validate(pipeline_hypetune, X_train, y, scoring='accuracy', cv=model_selection, return_estimator=True)
    

    No exception is thrown but no model is learned either... Any ideas why?

    Thanks in advance

    opened by YoannPitarch 4
  • ExplainerError

    For my dataset I'm getting this error:

    ExplainerError: Additivity check failed in TreeExplainer! Please ensure the data matrix you passed to the explainer is the same
    shape that the model was trained on. If your data shape is correct then please report this on GitHub. Consider retrying with the 
    feature_perturbation='interventional' option. This check failed because for one of the samples the sum of the SHAP values was 
    -0.577556, while the model output was -0.540311. If this difference is acceptable you can set check_additivity=False to disable 
    this check.
    

    I'm using it like this:

    model = BoostRFE(regr_xgb, param_grid=param_dist,
                     min_features_to_select=10,
                     step=20,
                     importance_type='shap_importances',
                     n_iter=5)
    

    Any suggestion how to solve this?

    opened by hasan-sayeed 3
  • great software! wonder if it supports for custom CVs?

    Hello,

    Great package! very easy to use, and it is very effective! :)

    I was wondering if it is possible to use custom CVs for random search + feature selection

    Thanks!

    opened by GalaxyNight-day 3
  • Feature Importance chart with selected feature names and scores

    Hi @cerlymarco ,

    1. How do I get feature names with scores like this? (Traditional Xg-Boost)
    2. And what will be the X-axis scoring scale for that?

    (image: https://user-images.githubusercontent.com/42869040/162376574-03869b81-f11e-4d1f-8bea-eddb714d39b0.png)

    Thanks

    Originally posted by @VinayChaudhari1996 in https://github.com/cerlymarco/shap-hypetune/issues/4#issuecomment-1092483163

    opened by VinayChaudhari1996 2
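
    For the traditional XGBoost part of the question, a hedged sketch using xgboost's own plotting helper (clf_xgb stands for an already fitted XGBoost model; with a pandas DataFrame as training input the original column names show up as labels):

    import matplotlib.pyplot as plt
    from xgboost import plot_importance

    # importance_type sets the x-axis scale:
    # 'weight' = number of splits using the feature, 'gain' = average gain of those splits
    plot_importance(clf_xgb, importance_type='gain', max_num_features=20)
    plt.show()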
  • List of the important features?

    Hi, I apologize if this is a dumb question ,but I can't find where to get the list of important features from the trained model? Thanks for any pointers.

    opened by jmrichardson 2
  • Support for state of art hyperparameter optimization packages

    It would be nice to have options to select some state-of-the-art technique for hyperparameter optimization, such as https://scikit-optimize.github.io/stable/ or https://github.com/optuna/optuna, or maybe the best option (a drop-in replacement for scikit-learn Grid/Random search that supports the advanced techniques from the packages above): https://github.com/ray-project/tune-sklearn

    opened by oldrichsmejkal 2
  • Erratic behaviour

    Hi,

    I am still running a series of experiments with shap-hypetune: some sort of cross-validation with a number of stratified K-fold splits.

    For each split, I generate random seeds like this: np.random.randint(4294967295).

    A typical run goes like this (there is one for each split):

    11 trials detected for ('num_leaves', 'n_estimators', 'max_depth', 'learning_rate')
    
    trial: 0001 ### iterations: 00008 ### eval_score: 0.94737
    trial: 0002 ### iterations: 00018 ### eval_score: 0.92481
    trial: 0003 ### iterations: 00020 ### eval_score: 0.99248
    trial: 0004 ### iterations: 00017 ### eval_score: 0.97744
    trial: 0005 ### iterations: 00025 ### eval_score: 0.98496
    trial: 0006 ### iterations: 00012 ### eval_score: 0.97744
    trial: 0007 ### iterations: 00020 ### eval_score: 0.99248
    trial: 0008 ### iterations: 00012 ### eval_score: 0.98496
    trial: 0009 ### iterations: 00021 ### eval_score: 0.98496
    trial: 0010 ### iterations: 00018 ### eval_score: 0.98496
    trial: 0011 ### iterations: 00025 ### eval_score: 0.98496
    
    11 trials detected for ('num_leaves', 'n_estimators', 'max_depth', 'learning_rate')
    
    trial: 0001 ### iterations: 00025 ### eval_score: 0.96241
    trial: 0002 ### iterations: 00038 ### eval_score: 0.97744
    trial: 0003 ### iterations: 00037 ### eval_score: 0.97744
    trial: 0004 ### iterations: 00015 ### eval_score: 0.96241
    trial: 0005 ### iterations: 00002 ### eval_score: 0.81203
    trial: 0006 ### iterations: 00018 ### eval_score: 0.96241
    trial: 0007 ### iterations: 00016 ### eval_score: 0.96241
    trial: 0008 ### iterations: 00011 ### eval_score: 0.91729
    trial: 0009 ### iterations: 00038 ### eval_score: 0.97744
    trial: 0010 ### iterations: 00022 ### eval_score: 0.96241
    trial: 0011 ### iterations: 00021 ### eval_score: 0.96992
    

    However, sometimes the eval_score drops dramatically.

    But this does not seem to be your typical stochastic behaviour.

    For instance, if it drops for one split it will normally drop for all the subsequent splits, in spite of the fact that a new seed is (pseudo-)randomly generated for each split at each stage:

    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=np.random.randint(4294967295))

    clf_lgbm = LGBMClassifier(boosting_type='rf',
                              random_state=np.random.randint(4294967295),

    model = BoostRFA(
        sampling_seed=np.random.randint(4294967295),

    In other cases the number of iterations stays constant for each run:

    11 trials detected for ('num_leaves', 'n_estimators', 'max_depth', 'learning_rate')
    
    trial: 0001 ### iterations: 00001 ### eval_score: 0.69173
    trial: 0002 ### iterations: 00001 ### eval_score: 0.7594
    trial: 0003 ### iterations: 00001 ### eval_score: 0.69173
    trial: 0004 ### iterations: 00001 ### eval_score: 0.69173
    trial: 0005 ### iterations: 00001 ### eval_score: 0.79699
    trial: 0006 ### iterations: 00001 ### eval_score: 0.69173
    trial: 0007 ### iterations: 00001 ### eval_score: 0.69173
    trial: 0008 ### iterations: 00001 ### eval_score: 0.7594
    trial: 0009 ### iterations: 00001 ### eval_score: 0.69173
    trial: 0010 ### iterations: 00001 ### eval_score: 0.69173
    trial: 0011 ### iterations: 00001 ### eval_score: 0.69173
    
    11 trials detected for ('num_leaves', 'n_estimators', 'max_depth', 'learning_rate')
    
    trial: 0001 ### iterations: 00001 ### eval_score: 0.82707
    trial: 0002 ### iterations: 00001 ### eval_score: 0.82707
    trial: 0003 ### iterations: 00001 ### eval_score: 0.82707
    trial: 0004 ### iterations: 00001 ### eval_score: 0.82707
    trial: 0005 ### iterations: 00001 ### eval_score: 0.81955
    trial: 0006 ### iterations: 00001 ### eval_score: 0.82707
    trial: 0007 ### iterations: 00001 ### eval_score: 0.81955
    trial: 0008 ### iterations: 00001 ### eval_score: 0.81955
    trial: 0009 ### iterations: 00001 ### eval_score: 0.82707
    trial: 0010 ### iterations: 00001 ### eval_score: 0.82707
    trial: 0011 ### iterations: 00001 ### eval_score: 0.82707
    

    If you re-run the script, you typically observe the normal behaviour again.

    opened by mirix 1
  • Issue with custom scorer

    Hello,

    I have an unbalanced dataset and I am trying to create a custom scorer that finds the best possible recall above a given precision for the minority class.

    The opposite seems to work well. When I feed the following scorer to shap-hypetune, it produces consistent results for the precision:

    def precision_at_recall(y_true, y_hat):
    	precision, recall, thresholds = precision_recall_curve(y_true, y_hat, pos_label=1)
    	ix = np.argmax(precision[recall >= .9])
    	return 'precision_at_recall', precision[ix], True
    

    The recall and precision for the minority class at a threshold of 0.5 are both around 0.85. If we set a recall above 0.9, the precision decreases accordingly, as expected.

    However, the following does not work:

    def recall_at_precision(y_true, y_hat):
    	precision, recall, thresholds = precision_recall_curve(y_true, y_hat, pos_label=1)
    	ix = np.argmax(recall[precision >= .9])
    	return 'recall_at_precision', recall[ix], True
    

    It always produces a perfect recall (1), regardless of the precision, even if the precision is set to 1.

    opened by mirix 1
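
    A note on the second snippet above (a reading of the code, sketched, not an official fix): np.argmax runs over the masked copy recall[precision >= .9], so the position it returns refers to that copy, while recall[ix] then indexes the full array, whose first entries are the near-1 recalls of the lowest thresholds. An index-consistent variant:

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    def recall_at_precision(y_true, y_hat):
        precision, recall, thresholds = precision_recall_curve(y_true, y_hat, pos_label=1)
        mask = precision >= .9
        # take the best recall directly from the masked copy instead of
        # re-indexing the full array with a position taken from the masked one
        best_recall = np.max(recall[mask]) if mask.any() else 0.0
        return 'recall_at_precision', best_recall, True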
  • Error in BoostBoruta

    Hi, I am getting an error while running BoostBoruta for a binary classification task.

    Size of the data:

    print(X_clf_train.shape, y_clf_train.shape)
    print(X_clf_valid.shape, y_clf_valid.shape)
    # (102, 32) (102,)
    # (12, 32) (12,)

    Here is the code I use:

    ### BORUTA ###
    model = BoostBoruta(
        clf_xgb, max_iter=200, perc=100, sampling_seed=0, verbose=3, n_jobs=-1,
    )
    model.fit(X_clf_train, y_clf_train,
              eval_set=[(X_clf_valid, y_clf_valid)],
              early_stopping_rounds=6, verbose=3)
    print(model.n_features_)

    and the error:

    Iterantion: 1 / 200

    XGBoostError                              Traceback (most recent call last)
    /tmp/ipykernel_4016678/3155018104.py in <cell line: 11>()
          9     n_jobs=-1,
         10 )
    ---> 11 model.fit(X_clf_train,
         12           y_clf_train,
         13           eval_set=[(X_clf_valid, y_clf_valid)],

    ~/myvenv/mykears3.9/lib/python3.9/site-packages/shaphypetune/_classes.py in fit(self, X, y, trials, **fit_params)
        163
        164         if self.param_grid is None:
    --> 165             results = self._fit(X, y, fit_params)
        166
        167         for v in vars(results['model']):

    ~/myvenv/mykears3.9/lib/python3.9/site-packages/shaphypetune/_classes.py in _fit(self, X, y, fit_params, params)
         66         model = self._build_model(params)
         67         if isinstance(model, _BoostSelector):
    ---> 68             model.fit(X=X, y=y, **fit_params)
         69         else:
         70             with contextlib.redirect_stdout(io.StringIO()):

    ~/myvenv/mykears3.9/lib/python3.9/site-packages/shaphypetune/_classes.py in fit(self, X, y, **fit_params)
        521             _X = self._create_X(X, feat_id_real)
        522             with contextlib.redirect_stdout(io.StringIO()):
    --> 523                 estimator.fit(_X, y, **_fit_params)
        524
        525         # get coefs

    ~/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/core.py in inner_f(*args, **kwargs)
        434         for k, arg in zip(sig.parameters, args):
        435             kwargs[k] = arg
    --> 436         return f(**kwargs)
        437
        438     return inner_f

    ~/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/sklearn.py in fit(self, X, y, sample_weight, base_margin, eval_set, eval_metric, early_stopping_rounds, verbose, xgb_model, sample_weight_eval_set, base_margin_eval_set, feature_weights, callbacks)
       1174         )
       1175
    -> 1176         self._Booster = train(
       1177             params,
       1178             train_dmatrix,

    ~/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/training.py in train(params, dtrain, num_boost_round, evals, obj, feval, maximize, early_stopping_rounds, evals_result, verbose_eval, xgb_model, callbacks)
        187     Booster : a trained booster model
        188     """
    --> 189     bst = _train_internal(params, dtrain,
        190                           num_boost_round=num_boost_round,
        191                           evals=evals,

    ~/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/training.py in _train_internal(params, dtrain, num_boost_round, evals, obj, feval, xgb_model, callbacks, evals_result, maximize, verbose_eval, early_stopping_rounds)
         79         if callbacks.before_iteration(bst, i, dtrain, evals):
         80             break
    ---> 81         bst.update(dtrain, i, obj)
         82         if callbacks.after_iteration(bst, i, dtrain, evals):
         83             break

    ~/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/core.py in update(self, dtrain, iteration, fobj)
       1497
       1498         if fobj is None:
    -> 1499             _check_call(_LIB.XGBoosterUpdateOneIter(self.handle,
       1500                                                     ctypes.c_int(iteration),
       1501                                                     dtrain.handle))

    ~/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/core.py in _check_call(ret)
        208     """
        209     if ret != 0:
    --> 210         raise XGBoostError(py_str(_LIB.XGBGetLastError()))
        211
        212

    XGBoostError: [12:37:22] ../src/data/data.cc:583: Check failed: labels_.Size() == num_row_ (102 vs. 160) : Size of labels must equal to number of rows.
    Stack trace:
      [bt] (0) /home/zeydabadi/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/lib/libxgboost.so(+0x9133f) [0x7fe8df99b33f]
      [bt] (1) /home/zeydabadi/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/lib/libxgboost.so(+0x110fcc) [0x7fe8dfa1afcc]
      [bt] (2) /home/zeydabadi/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/lib/libxgboost.so(+0x1b90e7) [0x7fe8dfac30e7]
      [bt] (3) /home/zeydabadi/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/lib/libxgboost.so(+0x1b99bc) [0x7fe8dfac39bc]
      [bt] (4) /home/zeydabadi/myvenv/mykears3.9/lib/python3.9/site-packages/xgboost/lib/libxgboost.so(XGBoosterUpdateOneIter+0x50) [0x7fe8df98aed0]
      [bt] (5) /lib64/libffi.so.6(ffi_call_unix64+0x4c) [0x7febb4ca610e]
      [bt] (6) /lib64/libffi.so.6(ffi_call+0x36f) [0x7febb4ca5abf]
      [bt] (7) /home/zeydabadi/mykeras/bin/usr/local/lib/python3.9/lib-dynload/_ctypes.cpython-39-x86_64-linux-gnu.so(+0x11235) [0x7febb4eba235]
      [bt] (8) /home/zeydabadi/mykeras/bin/usr/local/lib/python3.9/lib-dynload/_ctypes.cpython-39-x86_64-linux-gnu.so(+0xaa66) [0x7febb4eb3a66]

    opened by zeydabadi 1
  • only 10 features show in the BoostBoruta, without any feature labels/ranks/indexes

    I have a dataset with >2000 features. After I run BoostBoruta, 10 features show up in the results, but they have no label/index information. How can I retrieve feature importances and the indices/labels mapping back to the original feature set?

    opened by raqueldias 1
  • Eval Metric directionality?

    Hi,

    If I use a custom metric like the brier score where lower is better, does this package support looking to minimize the eval metric? or is it by default trying to maximize?

    Thank You

    opened by ericvoots 0
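
    A short sketch based on the greater_is_better flag documented in the Usage section above (estimator and grid names are placeholders): leaving it at False asks the search to minimize the monitored score, which fits a lower-is-better metric like the Brier score.

    model = BoostSearch(clf_lgbm, param_grid=param_grid,
                        greater_is_better=False)   # minimize the monitored eval metric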
  • Any plan to write a publication or preprint?

    This is an excellent repo for hyperparameter tuning, with an approach that uses SHAP measurements. A publication or preprint would help practitioners understand the repo more deeply. Thus, I'm curious: do you have any plan to do this?

    opened by JiaxiangBU 0
Releases: v0.2.6
Owner
Marco Cerliani
Statistician Hacker & Data Scientist