Python implementation of Weng-Lin Bayesian ranking, a better, license-free alternative to TrueSkill

Overview


This is a port of the amazing openskill.js package.

Installation

pip install openskill

Usage

>>> from openskill import Rating, rate
>>> a1 = Rating()
>>> a1
Rating(mu=25, sigma=8.333333333333334)
>>> a2 = Rating(mu=32.444, sigma=5.123)
>>> a2
Rating(mu=32.444, sigma=5.123)
>>> b1 = Rating(43.381, 2.421)
>>> b1
Rating(mu=43.381, sigma=2.421)
>>> b2 = Rating(mu=25.188, sigma=6.211)
>>> b2
Rating(mu=25.188, sigma=6.211)

If a1 and a2 are on a team, and they win against the team of b1 and b2, send this into rate:

>>> [[x1, x2], [y1, y2]] = rate([[a1, a2], [b1, b2]])
>>> x1, x2, y1, y2
([28.669648436582808, 8.071520788025197], [33.83086971107981, 5.062772998705765], [43.071274808241974, 2.4166900452721256], [23.149503312339064, 6.1378606973362135])

You can also create Rating objects by importing create_rating:

>>> from openskill import create_rating
>>> x1 = create_rating(x1)
>>> x1
Rating(mu=28.669648436582808, sigma=8.071520788025197)
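
For instance, a full match update can be round-tripped: rate the teams, then convert the returned [mu, sigma] pairs back into Rating objects before the next match. This is only a sketch reusing the ratings from above; the team_a and team_b names are purely illustrative.

>>> from openskill import Rating, rate, create_rating
>>> team_a = [Rating(), Rating(mu=32.444, sigma=5.123)]
>>> team_b = [Rating(43.381, 2.421), Rating(mu=25.188, sigma=6.211)]
>>> result_a, result_b = rate([team_a, team_b])
>>> # rate() returns plain [mu, sigma] lists, so convert them back
>>> # into Rating objects before feeding them into the next match.
>>> team_a = [create_rating(player) for player in result_a]
>>> team_b = [create_rating(player) for player in result_b]
>>> team_a[0]
Rating(mu=28.669648436582808, sigma=8.071520788025197)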

Ranks

When displaying a rating, or sorting a list of ratings, you can use ordinal:

>>> from openskill import ordinal
>>> ordinal(mu=43.07, sigma=2.42)
35.81

By default, this returns mu - 3 * sigma, a conservative rating for which there is a 99.7% likelihood that the player's true rating is higher. As a result, over a player's early games their ordinal rating will usually go up, and it can go up even when that player loses.
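
As a rough sketch of how that might look in practice, here is a tiny leaderboard sorted by ordinal. It assumes Rating objects expose mu and sigma attributes (as their repr suggests), and the player names are made up.

>>> from openskill import Rating, ordinal
>>> leaderboard = {
...     "alice": Rating(mu=43.07, sigma=2.42),
...     "bob": Rating(),
... }
>>> # Higher ordinal first: alice's conservative rating (about 35.81)
>>> # beats bob's default rating (25 - 3 * 25/3 = 0).
>>> sorted(leaderboard, key=lambda name: ordinal(mu=leaderboard[name].mu, sigma=leaderboard[name].sigma), reverse=True)
['alice', 'bob']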

Artificial Ranks

If your teams are listed in one order but your ranking is in a different order, for convenience you can specify a rank option, such as:

>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2])
>>> result
[[[20.96265504062538, 8.083731307186588]], [[27.795084971874736, 8.263160757613477]], [[24.68943500312503, 8.083731307186588]], [[26.552824984374855, 8.179213704945203]]]

Lower ranks are assumed to be better (wins), while higher ranks are worse (losses). You can provide a score instead, where lower is worse and higher is better. These can simply be raw scores from the game, if you want.

Ties should have either equivalent rank or score.

>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], score=[37, 19, 37, 42])
>>> result
[[[24.68943500312503, 8.179213704945203]], [[22.826045021875203, 8.179213704945203]], [[24.68943500312503, 8.179213704945203]], [[27.795084971874736, 8.263160757613477]]]

Choosing Models

The default model is PlackettLuce. You can import alternate models from openskill.models like so:

>>> from openskill.models import BradleyTerryFull
>>> a1 = b1 = c1 = d1 = Rating()
>>> rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2], model=BradleyTerryFull)
[[[17.09430584957905, 7.5012190693964005]], [[32.90569415042095, 7.5012190693964005]], [[22.36476861652635, 7.5012190693964005]], [[27.63523138347365, 7.5012190693964005]]]

Predicting Winners

You can compare two or more teams to get the probabilities of each team winning.

>>> from openskill import predict_win
>>> a1 = Rating()
>>> a2 = Rating(mu=33.564, sigma=1.123)
>>> predictions = predict_win(teams=[[a1], [a2]])
>>> predictions
[0.45110901512761536, 0.5488909848723846]
>>> sum(predictions)
1.0
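
predict_win should also work for teams with more than one player, assuming teams are passed the same way they are to rate. A hedged sketch reusing the ratings from the Usage section; the exact probabilities are not shown here, only that they sum to 1:

>>> from openskill import Rating, predict_win
>>> team_a = [Rating(), Rating(mu=32.444, sigma=5.123)]
>>> team_b = [Rating(43.381, 2.421), Rating(mu=25.188, sigma=6.211)]
>>> probabilities = predict_win(teams=[team_a, team_b])
>>> round(sum(probabilities), 10)
1.0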

Available Models

  • BradleyTerryFull: Full Pairing for Bradley-Terry
  • BradleyTerryPart: Partial Pairing for Bradley-Terry
  • PlackettLuce: Generalized Bradley-Terry
  • ThurstoneMostellerFull: Full Pairing for Thurstone-Mosteller
  • ThurstoneMostellerPart: Partial Pairing for Thurstone-Mosteller

Which Model Do I Want?

  • Bradley-Terry rating models follow a logistic distribution over a player's skill, similar to Glicko.
  • Thurstone-Mosteller rating models follow a Gaussian distribution, similar to TrueSkill. Gaussian CDF/PDF functions differ in implementation from system to system (they're all just Chebyshev approximations anyway). This model's accuracy usually isn't as good either, but tuning it with an alternative gamma function can improve accuracy if you really want to get into it.
  • Full pairing should give more accurate ratings than partial pairing; however, in high-k games (like a 100+ person marathon race), the Bradley-Terry and Thurstone-Mosteller models need to compute a joint probability that involves a (k-1)-dimensional integration, which is computationally expensive. Use partial pairing in this case, where players are only updated based on their neighbors (see the sketch after this list).
  • Plackett-Luce (default) is a generalized Bradley-Terry model for k ≥ 3 teams. It scales best.
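
As a sketch of that advice, a large free-for-all can be rated with a partial-pairing model. This assumes rank takes one rank per team, as in the earlier examples, and treats every player as a one-person team:

>>> from openskill import Rating, rate
>>> from openskill.models import BradleyTerryPart
>>> # 100 solo players finishing in order 1..100; partial pairing keeps each
>>> # update local to a player's neighbors, which is far cheaper for large k.
>>> players = [[Rating()] for _ in range(100)]
>>> results = rate(players, rank=list(range(1, 101)), model=BradleyTerryPart)
>>> len(results)
100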

Comments
  • Support for partial play/weighting/player performance

    Would it be possible to add some sort of system to weight player performance, like the official trueskill module does? I'm trying to create a system that weights a player's overall performance relative to their team's to get a more accurate skill rating.

    enhancement help wanted 
    opened by spookybear0 7
  • Support for Python 3.7+

    I've been using openskill in my local Python 3.9 environment for a while now. I've been liking it a lot and I wanted to add it to one of my projects for use in production. I was surprised to find that the latest version on pypi said it only supported 3.10+, which is pretty rough from a compatibility perspective. I wanted to check how much it actually depended on features in 3.10, so I switched to a 3.6 virtual env and tried running the tests. They failed, and indicated that isinstance(var, Union[T1, T2]) calls were an issue. This is a fairly easy thing to express in older pythons with: isinstance(var, (T1, T2)). I did a search and replace on these, and then the tests passed. That was the only incompatibility I could find.

    If the maintainers are open to supporting older Pythons for a new release, that would be very helpful to me. I would also be OK with the minimum version being something like 3.7 or 3.8.

    I haven't used towncrier before, and it wasn't immediately obvious how to impute the changelog, so I figure I'll wait until a maintainer greenlights this before I continue any work on details like that.

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: Erotemic

    enhancement 
    opened by Erotemic 6
  • let score difference be reflected in rating

    When you enter scores into rate(), the difference between the scores has no effect on the rating; that is, rate([team1, team2], score=[1, 0]) == rate([team1, team2], score=[100, 0]) is true. They have exactly the same rating effect on team1 and team2.

    I don't know whether it is mathematically possible or how it would look, but it would be great if the difference could somehow be factored into the calculation, since (if your game has a score) it is quite an important data point for skill evaluation.

    enhancement 
    opened by jonathan-scholz 4
  • Are `predict_win` and `predict_draw` functions accidentally using Thurstone-Mosteller specific calculations?

    If I understand it correctly, those two functions seem to perform calculations using the equations numbered (65) in the paper. However, those equations seem to be specific to the Thurstone-Mosteller model, and as far as I can tell, the proper way to calculate probabilities for the Bradley-Terry model would be to use equations (48) and (51) (also seen as p_iq in equation (49)). Is this intended? Or am I misunderstanding either the paper or the code of these functions?

    question 
    opened by asyncth 2
  • Update link

    Description of Changes

    • [ ] Wrote at least one-line docstrings (for any new functions)
    • [ ] Added test(s) covering the changes (if testable)

    Updated link to repository

    Issue(s) Resolved

    Fixes #

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: bstummer

    opened by bstummer 2
  • Unable to install on Google Colab

    Describe the bug: I can't install openskill on Google Colab via pip. What should I do?

    Screenshots: (screenshot attached, dated 2022-03-31)

    Platform Information

    • Google Colab
    • Python Version: 3.7.13
    wontfix 
    opened by toshi71 2
  • Implement additive dynamics factor 'tau'.

    Description of Changes

    • [x] Wrote at least one-line docstrings (for any new functions)
    • [x] Added test(s) covering the changes (if testable)

    Ports https://github.com/philihp/openskill.js/pull/233

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: daegontaven

    opened by daegontaven 2
  • Faster runtime of predict_win and predict_draw

    Description of Changes

    • [x] Adds some unit tests that I have for the Javascript library, to prevent any regressions.
    • [x] Tells itertools.permutations to give permutation pairs, rather than all full permutations
    • [x] Additionally, the length of permutation_pairs is shorter, and that simplifies the denominator to just a triangular number.

    I noticed that for teams of ABCD, your code as written was finding all permutations (ABCD, ABDC, ACBD, ACDB, ...) and then only using the first two from each permutation, which, for n >= 4 teams, causes the same pairings to be calculated multiple times.

    This should reduce runtime from O(n^n) to O(n^2).

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: Philihp Busby

    opened by philihp 2
  • Bump prompt-toolkit from 3.0.26 to 3.0.27

    Bumps prompt-toolkit from 3.0.26 to 3.0.27.

    Changelog

    Sourced from prompt-toolkit's changelog.

    3.0.27: 2022-02-07

    New features:

    • Support for cursor shapes. The cursor shape for prompts/applications can now be configured, either as a fixed cursor shape, or in case of Vi input mode, according to the current input mode.
    • Handle "cursor forward" command in ANSI formatted text. This makes it possible to render many kinds of generated ANSI art.
    • Accept align attribute in Label widget.
    • Added PlainTextOutput: an output implementation that doesn't render any ANSI escape sequences. This will be used by default when redirecting stdout to a file.
    • Added create_app_session_from_tty: a context manager that enforces input/output to go to the current TTY, even if stdin/stdout are attached to pipes.
    • Added to_plain_text utility for converting formatted text into plain text.

    Fixes:

    • Don't automatically use sys.stderr for output when sys.stdout is not a TTY, but sys.stderr is. The previous behavior was confusing, especially when rendering formatted text to the output, and we expect it to follow redirection.
    Commits
    • 6ac867a Release 3.0.27
    • 96ec6fb Removed unused imports.
    • b4d728e Added support for cursor shapes.
    • 4a66820 Added to_plain_text utility. A function to turn formatted text into a string.
    • 7dd8435 Added create_app_session_from_tty.
    • 2c96fe2 Stop preferring a TTY output when creating the default output.
    • 402b6a3 Added PlainTextOutput: an output that doesn't write ANSI escape sequences to ...
    • 57b42c4 Added ansi-art-and-textarea.py example.
    • a71de3c Accept 'align' attribute in Label widget.
    • 64d870a Added ptk-logo-ansi-art.py example.
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • Bump scipy from 1.7.3 to 1.8.0

    Bumps scipy from 1.7.3 to 1.8.0.

    Release notes

    Sourced from scipy's releases.

    SciPy 1.8.0 Release Notes

    SciPy 1.8.0 is the culmination of 6 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Before upgrading, we recommend that users check that their own code does not use deprecated SciPy functionality (to do so, run your code with python -Wd and check for DeprecationWarnings). Our development attention will now shift to bug-fix releases on the 1.8.x branch, and on adding new features on the master branch.

    This release requires Python 3.8+ and NumPy 1.17.3 or greater.

    For running on PyPy, PyPy3 6.0+ is required.

    Highlights of this release

    • A sparse array API has been added for early testing and feedback; this work is ongoing, and users should expect minor API refinements over the next few releases.
    • The sparse SVD library PROPACK is now vendored with SciPy, and an interface is exposed via scipy.sparse.svds with solver='PROPACK'. It is currently default-off due to potential issues on Windows that we aim to resolve in the next release, but can be optionally enabled at runtime for friendly testing with an environment variable setting of USE_PROPACK=1.
    • A new scipy.stats.sampling submodule that leverages the UNU.RAN C library to sample from arbitrary univariate non-uniform continuous and discrete distributions
    • All namespaces that were private but happened to miss underscores in their names have been deprecated.

    New features

    scipy.fft improvements

    Added an orthogonalize=None parameter to the real transforms in scipy.fft which controls whether the modified definition of DCT/DST is used without changing the overall scaling.

    scipy.fft backend registration is now smoother, operating with a single

    ... (truncated)

    Commits
    • b5d8bab REL: 1.8.0 release commit.
    • d84f731 Merge pull request #15521 from tylerjereddy/treddy_prep_180_final
    • 315dd53 DOC: update 1.8.0 relnotes.
    • b54b7ae MAINT: fix broken link and remove CI badges
    • 920e27b REL: 1.8.0 unreleased.
    • ea004bd REL: 1.8.0rc4 released.
    • 4f3969d Merge pull request #15479 from tylerjereddy/treddy_180rc4
    • 8ed6aa9 DOC: update 1.8.0 relnotes.
    • efe4ca5 MAINT: PR 15479 revisions
    • 1803913 MAINT: remove non-default settings (except shallow) in .gitmodules
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • Bump twine from 3.7.1 to 3.8.0

    Bumps twine from 3.7.1 to 3.8.0.

    Release notes

    Sourced from twine's releases.

    3.8.0

    https://pypi.org/project/twine/3.8.0/

    Changelog

    Sourced from twine's changelog.

    Twine 3.8.0 (2022-02-02)

    Features

    • Add --verbose logging for querying keyring credentials. ([#849](https://github.com/pypa/twine/issues/849))
    • Log all upload responses with --verbose. ([#859](https://github.com/pypa/twine/issues/859))
    • Show more helpful error message for invalid metadata. ([#861](https://github.com/pypa/twine/issues/861))

    Bugfixes

    • Require a recent version of urllib3. ([#858](https://github.com/pypa/twine/issues/858))
    Commits

    dependencies 
    opened by dependabot[bot] 2
  • Tournament Interface

    Is your feature request related to a problem? Please describe: Creating models of tournaments is hard, since you have to parse the data using another library (depending on the format) and then pass everything into rate and predict manually. It's a lot of effort to easily predict the entire outcome of, say, the 2022 FIFA World Cup.

    Describe the solution you'd like: It would be nice if there were a tournament class of some kind that allowed us to pass in rounds, which themselves contained matches. Then, using an exhaustive approach, predict winners and move them along each bracket/round. Especially now that #74 has landed, it would be easier to predict whole matches and in turn tournaments.

    The classes should be customizable to allow our own logic, for instance using the Munkres algorithm and other such methods.

    Describe alternatives you've considered: I don't know of any other libraries that do this already.

    enhancement help wanted 
    opened by daegontaven 0
  • Improve win predictions for 1v1 teams

    First of all, congrats and thanks for the great repo!

    In a scenario where Player A has 2x the rating of Player B, the predicted win probability is 60% vs 40%. This seems strange.

    >>> from openskill import Rating, predict_win
    >>> players = [[Rating(50)], [Rating(25)]]
    >>> predict_win(teams=players)
    [0.6002914159316424, 0.39970858406835763]


    If I use this function implementation, I get 97% vs 3%, which sounds more reasonable to me.

    Maybe the predict_win function has some flaw?

    enhancement 
    opened by nthypes 1
Releases: v4.0.0

Owner: Open Debates Project (Debate the way it's meant to be.)