A research toolkit for particle swarm optimization in Python

Overview

PySwarms Logo

PyPI version Build Status Documentation Status License: MIT DOI Code style: black Gitter Chat

PySwarms is an extensible research toolkit for particle swarm optimization (PSO) in Python.

It is intended for swarm intelligence researchers, practitioners, and students who prefer a high-level declarative interface for implementing PSO in their problems. PySwarms provides basic PSO optimization out of the box, along with tools for interacting with swarm optimizations. Check out more features below!

Features

  • High-level module for Particle Swarm Optimization. For a list of all optimizers, check this link.
  • Built-in objective functions to test optimization algorithms.
  • Plotting environment for cost histories and particle movement.
  • Hyperparameter search tools to optimize swarm behaviour.
  • (For Devs and Researchers): Highly-extensible API for implementing your own techniques.

Installation

To install PySwarms, run this command in your terminal:

$ pip install pyswarms

This is the preferred method to install PySwarms, as it will always install the most recent stable release.

In case you want to install the bleeding-edge version, clone this repo:

$ git clone -b development https://github.com/ljvmiranda921/pyswarms.git

and then run

$ cd pyswarms
$ python setup.py install

Running in a Vagrant Box

To run PySwarms in a Vagrant Box, install Vagrant by downloading the appropriate package for your platform from https://www.vagrantup.com/downloads.html on the Hashicorp website.

Afterward, run the following command in the project directory:

$ vagrant up
$ vagrant provision
$ vagrant ssh

Now you're ready to develop your contributions in a premade virtual environment.

Basic Usage

PySwarms provides a high-level implementation of various particle swarm optimization algorithms. Thus, it aims to be user-friendly and customizable. In addition, supporting modules can be used to help you in your optimization problem.

Optimizing a sphere function

You can import PySwarms as any other Python module,

import pyswarms as ps

Suppose we want to find the minimum of f(x) = x^2 using global best PSO. Simply import the built-in sphere function, pyswarms.utils.functions.single_obj.sphere, and the necessary optimizer:

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
# Set-up hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options)
# Perform optimization
best_cost, best_pos = optimizer.optimize(fx.sphere, iters=100)

Sphere Optimization

This will run the optimizer for 100 iterations and return the best cost and best position found by the swarm. In addition, you can access various histories through properties of the class:

# Obtain the cost history
optimizer.cost_history
# Obtain the position history
optimizer.pos_history
# Obtain the velocity history
optimizer.velocity_history

For local best PSO implementations, you can also obtain the mean personal best and mean neighbor histories by calling optimizer.mean_pbest_history and optimizer.mean_neighbor_history respectively.

Hyperparameter search tools

PySwarms implements grid search and random search techniques to find the best parameters for your optimizer. Setting them up is easy. In this example, let's use pyswarms.utils.search.RandomSearch to find the optimal parameters for the LocalBestPSO optimizer.

Here, we input a range, enclosed in tuples, to define the space in which the parameters will be found. Thus, (1,5) pertains to a range from 1 to 5.

import numpy as np
import pyswarms as ps
from pyswarms.utils.search import RandomSearch
from pyswarms.utils.functions import single_obj as fx

# Set-up choices for the parameters
options = {
    'c1': (1,5),
    'c2': (6,10),
    'w': (2,5),
    'k': (11, 15),
    'p': 1
}

# Create a RandomSearch object
# n_selection_iters is the number of iterations to run the searcher
# iters is the number of iterations to run the optimizer
g = RandomSearch(ps.single.LocalBestPSO, n_particles=40,
            dimensions=20, options=options, objective_func=fx.sphere,
            iters=10, n_selection_iters=100)

best_score, best_options = g.search()

This returns the best score found during the search, together with the hyperparameter options that produced it.

>>> best_score
1.41978545901
>>> best_options['c1']
1.543556887693
>>> best_options['c2']
9.504769054771

Swarm visualization

It is also possible to plot optimizer performance. The plotters module is built on top of matplotlib, making it highly customizable.

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
from pyswarms.utils.plotters import plot_cost_history, plot_contour, plot_surface
import matplotlib.pyplot as plt
# Set-up optimizer
options = {'c1':0.5, 'c2':0.3, 'w':0.9}
optimizer = ps.single.GlobalBestPSO(n_particles=50, dimensions=2, options=options)
optimizer.optimize(fx.sphere, iters=100)
# Plot the cost
plot_cost_history(optimizer.cost_history)
plt.show()

CostHistory

We can also plot the animation...

from pyswarms.utils.plotters.formatters import Mesher, Designer
# Plot the sphere function's mesh for better plots
m = Mesher(func=fx.sphere,
           limits=[(-1,1), (-1,1)])
# Adjust figure limits
d = Designer(limits=[(-1,1), (-1,1), (-0.1,1)],
             label=['x-axis', 'y-axis', 'z-axis'])

In 2D,

plot_contour(pos_history=optimizer.pos_history, mesher=m, designer=d, mark=(0,0))

Contour

Or in 3D!

pos_history_3d = m.compute_history_3d(optimizer.pos_history) # preprocessing
animation3d = plot_surface(pos_history=pos_history_3d,
                           mesher=m, designer=d,
                           mark=(0,0,0))    

Surface

Contributing

PySwarms is currently maintained by a small yet dedicated team:

And we would appreciate it if you can lend a hand with the following:

  • Find bugs and fix them
  • Update documentation in docstrings
  • Implement new optimizers to our collection
  • Make utility functions more robust.

We would also like to acknowledge all our contributors, past and present, for making this project successful!

If you wish to contribute, check out our contributing guide. Moreover, you can also see the list of features that need some help in our Issues page.

Most importantly, first-time contributors are welcome to join! I try my best to help you get started and enable you to make your first Pull Request! Let's learn from each other!

Credits

This project was inspired by the pyswarm module that performs PSO with constrained support. The package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

Cite us

Are you using PySwarms in your project or research? Please cite us!

  • Miranda L.J., (2018). PySwarms: a research toolkit for Particle Swarm Optimization in Python. Journal of Open Source Software, 3(21), 433, https://doi.org/10.21105/joss.00433
@article{pyswarmsJOSS2018,
    author  = {Lester James V. Miranda},
    title   = "{P}y{S}warms, a research-toolkit for {P}article {S}warm {O}ptimization in {P}ython",
    journal = {Journal of Open Source Software},
    year    = {2018},
    volume  = {3},
    issue   = {21},
    doi     = {10.21105/joss.00433},
    url     = {https://doi.org/10.21105/joss.00433}
}

Projects citing PySwarms

Not on the list? Ping us in the Issue Tracker!

  • Gousios, Georgios. Lecture notes for the TU Delft TI3110TU course Algorithms and Data Structures. Accessed May 22, 2018. http://gousios.org/courses/algo-ds/book/string-distance.html#sop-example-using-pyswarms.
  • Nandy, Abhishek, and Manisha Biswas., "Applying Python to Reinforcement Learning." Reinforcement Learning. Apress, Berkeley, CA, 2018. 89-128.
  • Benedetti, Marcello, et al., "A generative modeling approach for benchmarking and training shallow quantum circuits." arXiv preprint arXiv:1801.07686 (2018).
  • Vrbančič et al., "NiaPy: Python microframework for building nature-inspired algorithms." Journal of Open Source Software, 3(23), 613, https://doi.org/10.21105/joss.00613
  • Häse, Florian, et al. "Phoenics: A Bayesian optimizer for chemistry." ACS Central Science. 4.9 (2018): 1134-1145.
  • Szynkiewicz, Pawel. "A Comparative Study of PSO and CMA-ES Algorithms on Black-box Optimization Benchmarks." Journal of Telecommunications and Information Technology 4 (2018): 5.
  • Mistry, Miten, et al. "Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded." Imperial College London (2018).
  • Vishwakarma, Gaurav. Machine Learning Model Selection for Predicting Properties of High Refractive Index Polymers Dissertation. State University of New York at Buffalo, 2018.
  • Uluturk Ismail, et al. "Efficient 3D Placement of Access Points in an Aerial Wireless Network." 2019 16th IEEE Annual Consumer Communications and Networking Conference (CCNC) IEEE (2019): 1-7.
  • Downey A., Theisen C., et al. "Cam-based passive variable friction device for structural control." Engineering Structures Elsevier (2019): 430-439.
  • Thaler S., Paehler L., Adams, N.A. "Sparse identification of truncation errors." Journal of Computational Physics Elsevier (2019): vol. 397
  • Lin, Y.H., He, D., Wang, Y., Lee, L.J. "Last-mile Delivery: Optimal Locker Location under Multinomial Logit Choice Model" https://arxiv.org/abs/2002.10153
  • Park J., Kim S., Lee, J. "Supplemental Material for Ultimate Light trapping in free-form plasmonic waveguide" KAIST, University of Cambridge, and Cornell University http://www.jlab.or.kr/documents/publications/2019PRApplied_SI.pdf
  • Pasha A., Latha P.H., "Bio-inspired dimensionality reduction for Parkinson's Disease Classification," Health Information Science and Systems, Springer (2020).
  • Carmichael Z., Syed, H., et al. "Analysis of Wide and Deep Echo State Networks for Multiscale Spatiotemporal Time-Series Forecasting," Proceedings of the 7th Annual Neuro-inspired Computational Elements ACM (2019), nb. 7: 1-10 https://doi.org/10.1145/3320288.3320303
  • Klonowski, J. "Optimizing Message to Virtual Link Assignment in Avionics Full-Duplex Switched Ethernet Networks" Proquest
  • Haidar, A., Jan, ZM. "Evolving One-Dimensional Deep Convolutional Neural Network: A Swarm-based Approach," IEEE Congress on Evolutionary Computation (2019) https://doi.org/10.1109/CEC.2019.8790036
  • Shang, Z. "Performance Evaluation of the Control Plane in OpenFlow Networks," Freie Universität Berlin (2020)
  • Linker, F. "Industrial Benchmark for Fuzzy Particle Swarm Reinforcement Learning," Leipzig University (2020)
  • Vetter, A. Yan, C. et al. "Computational rule-based approach for corner correction of non-Manhattan geometries in mask aligner photolithography," Optics (2019). vol. 27, issue 22: 32523-32535 https://doi.org/10.1364/OE.27.032523
  • Wang, Q., Megherbi, N., Breckon, T.P., "A Reference Architecture for Plausible Threat Image Projection (TIP) Within 3D X-ray Computed Tomography Volumes" https://arxiv.org/abs/2001.05459
  • Menke, Tim, Hase, Florian, et al. "Automated discovery of superconducting circuits and its application to 4-local coupler design," arxiv preprint: https://arxiv.org/abs/1912.03322

Others

Like it? Love it? Leave us a star on GitHub to show your appreciation!

Contributors

Thanks goes to these wonderful people (emoji key):


Aaron

🚧 💻 📖 ⚠️ 🤔 👀

Carl-K

💻 ⚠️

Siobhán K Cronin

💻 🚧 🤔

Andrew Jarcho

⚠️ 💻

Mamady

💻

Jay Speidell

💻

Eric

🐛 💻

CPapadim

🐛 💻

JiangHui

💻

Jericho Arcelao

💻

James D. Bohrman

💻

bradahoward

💻

ThomasCES

💻

Daniel Correia

🐛 💻

fluencer

💡 📖

miguelcocruz

📖 💡

Steven Beardwell

💻 🚧 📖 🤔

Nathaniel Ngo

📖

Aneal Sharma

📖

Chris McClure

📖 💡

Christopher Angell

📖

Kutim

🐛

Jake Souter

🐛 💻

Ian Zhang

📖 💡

Zach

📖

Michel Lavoie

🐛

ewekam

📖

Ivyna Santino

📖 💡

Muhammad Yasirroni

📖

Christian Kastner

📖 📦

Nishant Rodrigues

💻

msat59

💻 🐛

Diego

📖

Shaad Alaka

📖

Krzysztof Błażewicz

🐛

Jorge Castillo

📖

Philipp Danner

💻

Nikhil Sethi

💻 📖

This project follows the all-contributors specification. Contributions of any kind welcome!

Comments
  • Early stopping (ftol) if cost isn't reduced

    Early stopping (ftol) if cost isn't reduced

    Describe the bug When ftol is used and the best cost is equal in two consecutive iterations, the search is stopped even if the cost is still high.

    To Reproduce

    options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
    optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=2, options=options, ftol=1e-10)
    cost, pos = optimizer.optimize(fx.sphere, iters=1000, n_processes=None, verbose=True)

    Output is something like this. Position: [0.06410654 0.02934344], Cost: 0.0049706854704889645

    As can be seen, x1 and x2 are far from zero and the cost is relatively high, much higher than 1e-10. The reason is that the algorithm couldn't find a lower cost, so swarm.best_cost equals best_cost_yet_found, and the search is stopped even though the cost remains high.

    Environment (please complete the following information):

    • PySwarms Version 1.1.0

    How to fix In the optimizer code, an additional check should be performed to skip the stop check when swarm.best_cost == best_cost_yet_found. After applying the fix, the output will be:

    Position: [-8.05105641e-06 -6.63608458e-07], Cost: 6.525988549305629e-11
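The proposed guard can be sketched in isolation (function and variable names here are illustrative, not the actual pyswarms source):

```python
def should_stop(best_cost, best_cost_yet_found, ftol):
    """Return True only when the cost genuinely improved to within ftol."""
    # Ignore iterations where no improvement happened at all:
    # an unchanged best cost is not evidence of convergence.
    if best_cost == best_cost_yet_found:
        return False
    # Stop when the improvement brought the change within the ftol band
    return abs(best_cost - best_cost_yet_found) < ftol
```

With this guard, a run where the cost stays at 0.005 for two iterations keeps searching instead of terminating early.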

    bug stale question 
    opened by msat59 36
  • Fix for Issue #397

    Fix for Issue #397

    Description

    Early unwanted stopping if ftol is used, no matter what threshold is set, because of no change in best_cost in two consecutive iterations.

    Related Issue

    • #397

    • The logger was also improved to show the termination condition, and a bug in showing the progress bar was fixed too.

    • Functions docstrings were also updated.

    Motivation and Context

    It avoids having an unwanted stop.

    How Has This Been Tested?

    GlobalBestPSO, LocalBestPSO, and GeneralOptimizerPSO were all tested on the sphere function with the following criteria:

    • n_particles=50, dimensions=2, iters=300
    • without ftol («iters=300» reached)
    • ftol=1e-60 («iters=300» reached)
    • ftol=1e-3 («ftol=0.001» reached)
    • ftol=1e-10, ftol_iter=5 («ftol_iter=5» reached)
    • ftol=1e-60, ftol_iter=50 («iters=300» reached)

    Screenshots (if appropriate):

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [x] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [x] All new and existing tests passed.
    stale 
    opened by msat59 35
  • Update example notebooks

    Update example notebooks

    After releasing v0.4.0, it seems that we already had some API changes that were not reflected in the notebook. What you need to do is just rerun all the notebooks in this link and update the code as necessary to account for API changes :+1:

    first-timers-only 
    opened by ljvmiranda921 32
  • Implement assertions() in search methods

    Implement assertions() in search methods

    Thanks to @SioKCronin, we're now able to implement hyperparameter search tools such as GridSearch and RandomSearch. These are awesome implementations, but we can maximize their robustness by adding an assertions() method in our classes. Would you like to try?

    What you'll do

    Simply put, you will just add an assertions() method in the SearchBase class. This should contain various checks when instantiating any class that inherits from it. You can check the docstrings for each class and create a proper method call for it.

    If you want to check previous implementations, see the SwarmBase.assertions method, and check how it is being implemented in GlobalBestPSO (it is not called explicitly because it just inherits from the base) or in LocalBestPSO (additional lines of code were added after calling the super() because of extra attributes).
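As a rough illustration of the idea (the class and attribute names below are hypothetical; check the actual SearchBase docstrings for the real ones), such a method boils down to type and range checks on the constructor arguments:

```python
class SearchBaseSketch:
    """Toy stand-in for SearchBase, showing only the assertions() idea."""

    def __init__(self, n_particles, dimensions, options):
        self.n_particles = n_particles
        self.dimensions = dimensions
        self.options = options
        self.assertions()  # fail fast at instantiation

    def assertions(self):
        # Reject invalid arguments early, so errors surface at
        # instantiation instead of in the middle of a search.
        if not isinstance(self.n_particles, int) or self.n_particles <= 0:
            raise ValueError("n_particles must be a positive integer")
        if not isinstance(self.dimensions, int) or self.dimensions <= 0:
            raise ValueError("dimensions must be a positive integer")
        if not all(key in self.options for key in ("c1", "c2", "w")):
            raise KeyError("options must contain c1, c2, and w")
```

A unit test would then feed invalid arguments (wrong type, out-of-bounds) and assert that the right exception is raised.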

    Go the extra mile?

    If you want to go the extra mile, you can add tests to check for edge cases and see if your assertions are working when passed with an invalid input (maybe an invalid type, out-of-bounds, etc.). For this, you just need to add an Instantiation class in the test.utils.search.test_gridsearch and test.utils.search.test_randomsearch modules.

    Again, if you want a template, you can check the Instantiation classes in test.optimizers.test_global_best and others. Notice how we try to feed the class with invalid arguments/edge cases and have the unit test capture them. It's a very fun activity!

    If you wish to take this on, then feel free to drop a comment here! I'd be glad to help you!

    Update: 9/17/2017

    Follow the instructions in this quick-start guide to get you started!

    Commit Guidelines

    I'd appreciate it if we keep the number of commits per issue to 1-2. Of course, you can commit often, but before merging, I would ask you to rebase them depending on the changes you've made. The commit format we follow is shown below:

    Short and sweet imperative title  (#27)
    
    Description of the commit. The commit title should be short
    and concise, and must be in imperative form. It could be as
    simple as `Add tests for optimizer` or `Implement foo`. It describes
    the "what" of the commit. You must also reference the issue, if any,
    in the title. Here, in the description, we explain the
    "why" of the commit. This is more or less free-for-all. So you can
    describe as detailed as possible, in whatever tense/form you like.
    
    Author: <your-github-username>
    Email: <your-email>
    
    help wanted first-timers-only 
    opened by ljvmiranda921 32
  • Add BoundaryHandler and VelocityHandler

    Add BoundaryHandler and VelocityHandler

    Description

    This PR implements the new BoundaryHandler and VelocityHandler classes that deal with the issue of particles that don't move in optimisations with boundary conditions. This is achieved by using different strategies that have been described in Hel10 (P. 28). These strategies include:

    • Nearest bound: Reset particle to the nearest boundary
    • Random: Reset the particle to a random position within the bounds
    • Shrink: Shrink the velocity such that the particle touches the boundary instead of surpassing it
    • Intermediate: Reset the particle to a value between its former location and the bound
    • Resample: Calculate new velocities until the next particle position is within the bounds
    • Periodic: Tile the space with the same search space over and over

    There is one more:

    • Reflective: Mirror the position of the particle at the surpassed boundary

    I won't include the last one in this PR as it is probably provided by @jolayfield. After they've made their PR I'm going to embed their code in the BoundaryHandler class. The strategies for the VelocityHandler include:

    • Unmodified: The velocity remains without modification
    • Adjust: The velocity is adjusted to be exactly the distance vector between the previous and the current position.
    • Invert: The velocity is inverted.
    • Zero: The velocities of particles that go out-of-bounds are set to zero.
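As a rough NumPy illustration of two of these strategies (nearest-bound repositioning and zeroed velocities), mirroring the idea only and not the PR's actual code:

```python
import numpy as np

def nearest_bound(positions, lower, upper):
    """Clip every out-of-bounds coordinate to the nearest boundary."""
    return np.clip(positions, lower, upper)

def zero_velocity(velocities, positions, lower, upper):
    """Zero the velocity of any particle that left the search space."""
    out = ((positions < lower) | (positions > upper)).any(axis=1)
    velocities = velocities.copy()
    velocities[out] = 0.0
    return velocities

pos = np.array([[0.5, 1.7], [-0.2, 0.3]])   # particle 0 exceeds the upper bound
vel = np.ones_like(pos)
pos_fixed = nearest_bound(pos, -1.0, 1.0)
vel_fixed = zero_velocity(vel, pos, -1.0, 1.0)
```

The other strategies differ only in how the new position or velocity is computed (random resampling, shrinking, reflection, periodic tiling), not in this overall shape.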

    Related Issue

    Resolves: #237 See also: #150, #234

    Motivation and Context

    In #234 we encountered a weird phenomenon when optimizing with boundary conditions: a lot of particles stopped moving while the optimization was still running. This is caused by skipping the position update in the compute_position() function when a particle leaves the boundaries. To fix this issue we introduce a new class called BoundaryHandler, as well as a VelocityHandler, which reposition the particle instead of leaving it un-updated.

    How Has This Been Tested?

    Types of changes

    • [ ] Bug fix (a non-breaking change which fixes an issue)
    • [X] New feature (a non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [X] My code follows the code style of this project.
    • [X] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [X] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests pass.

    TODO:

    • [x] Add BoundaryHandler
    • [ ] Write tests for the BoundaryHandler strategies
    • [x] Integrate the BoundaryHandler into the operators
    • [x] Add VelocityHandler
    • [ ] Write tests for the VelocityHandler strategies
    • [x] Integrate the VelocityHandler into the operators
    • [ ] Clean up documentation
    • [x] Add functions for the strategies in LaTeX
    • [ ] Check doc build for any LaTeX errors
    • [ ] Clean up PR
    enhancement documentation unit tests v.1.1.0 
    opened by whzup 31
  • Parallel particle evaluation

    Parallel particle evaluation

    Is your feature request related to a problem? Please describe. I have an expensive function to evaluate; I am doing a simulation-based optimization of a hyperparameter space. I am planning to use an HPC environment because of this. Can particles be evaluated in parallel, synchronously or even asynchronously? I thought an approach similar to this would be interesting (let me know if the article is inaccessible).

    Describe the solution you'd like A switch to enable threads for particle evaluations. In particular, I am fine with firing off os.system() commands and waiting for return.

    Describe alternatives you've considered I suppose I could write a sub-script that executes the swarm in parallel, evaluates the target function for each particle, and returns those values to the optimizer. This wouldn't allow for async optimization. Alternatively, I could try to write another optimizer, one that takes an option for the number of parallel particles.

    What is the best way to go here? Still digging into this API currently.

    enhancement help wanted 
    opened by soamaven 27
  • Make a Jupyter notebook tutorial for the analysis of a simple circuit

    Make a Jupyter notebook tutorial for the analysis of a simple circuit

    Goal

    PSO can be utilized in a wide variety of fields. To broaden our collection of tutorials, we'd like to have another example where we analyse a simple circuit with PSO. For some inspiration, you can visit the example section. I propose that, for the beginning, we start by analysing the circuit shown below. It has a resistor and a diode. The end-goal is to have a nice Jupyter notebook that goes through the whole process of writing this optimization program.

    Method

    circuit

    As there are many models for diodes, let us use a more realistic one (a simplified Shockley equation) for this tutorial:

    I_D = I_s * (exp(V_D / V_t) - 1)

    where:

    • I_D: diode current
    • I_s: reverse bias saturation current
    • V_D: diode voltage
    • V_t: thermal voltage (use 25.3 mV in this tutorial)

    To use it in the tutorial, I'd recommend solving for V_D:

    V_D = V_t * ln(I_D / I_s + 1)

    Using the Kirchhoff voltage law we get this:

    U - V_D - V_R = 0

    where V_R denotes the voltage over the resistor. We can restructure it to be our cost function (optimally it is 0) for the optimization:

    c = |U - V_D - V_R|

    The absolute value is necessary because we don't want to obtain negative currents. If we write this more verbosely, we see that the current I_D is the parameter we want to optimize:

    c = |U - V_t * ln(I_D / I_s + 1) - R * I_D|

    These are some sample values for the other parameters:

    • U: 10 V
    • I_s: 9.4 pA
    • R: 100 Ohm
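The cost function described in this issue can be sketched in plain NumPy and minimized by a coarse scan (variable names are illustrative; a 1-D PSO run over the same interval is the intended exercise):

```python
import numpy as np

U = 10.0        # source voltage [V]
I_s = 9.4e-12   # reverse bias saturation current [A]
V_t = 25.3e-3   # thermal voltage [V]
R = 100.0       # resistance [Ohm]

def cost(I):
    """|U - V_D - R*I|, with V_D solved from the simplified Shockley equation."""
    V_D = V_t * np.log(I / I_s + 1.0)  # diode voltage at current I
    return np.abs(U - V_D - R * I)

# Coarse scan over candidate currents; PSO would search this space instead
candidates = np.linspace(1e-3, 0.2, 2000)
best_I = candidates[np.argmin(cost(candidates))]
```

At the optimum the cost approaches zero, so this doubles as a sanity check for the notebook.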

    Of course, if you know an alternative way to solve this or you have any questions about the issue don't hesitate to write a comment! I'm looking forward to seeing a first-timer writing this Jupyter notebook! 💯

    Notes

    Please work on the development branch. You can find a good StackOverflow question about forking here. For a more advanced beginner guide to the GitHub workflow, there is this cheatsheet available. It is quite detailed and gives a simple overview.

    help wanted first-timers-only v.1.1.0 
    opened by whzup 25
  • Prepare v.0.3.0 release

    Prepare v.0.3.0 release

    Tasklist (Deadline: August 10, 2018)

    • [x] Run black and flake8 for a clean format (drop)
    • [x] Tests and minor changes for topologies (c/o @whzup )
    • [x] Make a PyPI friendly README by following this link
    • [x] Boundary conditions #150 (c/o @jolayfield) Moved to v.0.4.0 or on dev branch
    • [x] Resolve issue with PyYaml, you can find a related issue in yaml/pyyaml#207
    • [x] Update README (update authors and maintainers)
    • [x] Merge all dependency updates in #162, #166, #173, and #178
    • [x] Resolve all merge conflicts
    • [x] Update HISTORY.rst
    • [x] Write Release Notes
    • [x] Bump version and update setup.py
    • [x] Release: merge to master, make a Release, and upload to PyPI

    Please write the changelog below for our release notes. It would be nice to follow this link.


    Release v.0.3.0

    We're proud to present the release of PySwarms version 0.3.0! Coinciding with this, we would like to welcome Aaron Moser (@whzup) as one of the project's maintainers! v.0.3.0 includes new topologies, a static option to configure a particle's neighbor/s, and a revamped plotters module. We would like to thank our contributors for helping us with this release.

    Release notes

    • NEW: More basic particle topologies in the pyswarms.backend module - #142, #151, #155, #177
    • NEW: Ability to make topologies static or dynamic - #164
    • NEW: A GeneralOptimizerPSO class. The GeneralOptimizerPSO class has an additional attribute for the topology used in the optimization - #151
    • NEW: A plotters module for swarm visualization. The environments module is now deprecated - #135, #172
    • FIX: Bugfix for optimizations not returning the best cost - #176
    • FIX: Bugfix for setup.py not running on Windows - #175
    • IMPROVED: Objective functions can now be parametrized. Helpful for your custom-objective functions - #144. Thanks, @bradahoward!
    • IMPROVED: New single-objective functions - #168. Awesome work, @jayspeidell!

    New Topologies and the GeneralOptimizerPSO Class

    New topologies were added to improve the ability to customize how a swarm behaves during optimization. In addition, a GeneralOptimizerPSO class was added to enable switching-out various topologies. Check out the description below!

    New Topology classes and the static attribute

    The newly added topologies expand on the existing ones (Star and Ring topology) and increase the built-in variety of possibilities for users who want to build their custom swarm implementation from the pyswarms.backend module. The new topologies include:

    • Pyramid topology: Computes the neighbours using a Delaunay triangulation of the particles.
    • Random topology: Computes the neighbours randomly, but systematically.
    • VonNeumann topology: Computes the neighbours using a Von Neumann topology (inherited from the Ring topology).

    With these new topologies, the ability to change the behaviour of the topologies was added in the form of a static argument that is passed when initializing a Topology class. The static parameter is a boolean that decides whether the neighbours are computed every iteration (static=False) or only in the first one (static=True). It is passed as a parameter at the initialization of the topology and is False by default. Additionally, the LocalBestPSO now also takes a static parameter to pass this information to its Ring topology. For an example, see below.

    The GeneralOptimizerPSO class

    The new topologies can also be easily used in the new GeneralOptimizerPSO class which extends the collection of optimizers. In addition to the parameters used in the GlobalBestPSO and LocalBestPSO classes, the GeneralOptimizerPSO uses a topology argument. This argument passes a Topology class to the GeneralOptimizerPSO.

    from pyswarms.single import GeneralOptimizerPSO
    from pyswarms.backend.topology import Random

    options = {"w": 1, "c1": 0.4, "c2": 0.5, "k": 3}
    topology = Random(static=True)
    optimizer = GeneralOptimizerPSO(n_particles=20, dimensions=4, options=options, topology=topology)
    

    The plotters module

    The environments module is now deprecated. Instead, we have a plotters module that takes a property of the optimizer and plots it with minimal effort. The whole module is built on top of matplotlib.

     import pyswarms as ps
     from pyswarms.utils.functions import single_obj as fx
     from pyswarms.utils.plotters import plot_cost_history
    
     # Set-up optimizer
     options = {'c1':0.5, 'c2':0.3, 'w':0.9}
     optimizer = ps.single.GlobalBestPSO(n_particles=50, dimensions=2, options=options)
     optimizer.optimize(fx.sphere_func, iters=100)
    
     # Plot the cost
     plot_cost_history(optimizer.cost_history)
     plt.show()
    

    Imgur

    We can also plot the animation...

    from pyswarms.utils.plotters.formatters import Mesher
    from pyswarms.utils.plotters.formatters import Designer
    from pyswarms.utils.plotters import plot_contour, plot_surface
    
    # Plot the sphere function's mesh for better plots
    m = Mesher(func=fx.sphere_func)
    
    # Adjust figure limits
    d = Designer(limits=[(-1,1), (-1,1), (-0.1,1)],
                 label=['x-axis', 'y-axis', 'z-axis'])
    

    In 2D,

    plot_contour(pos_history=optimizer.pos_history, mesher=m, mark=(0,0))
    

    Contour

    Or in 3D!

    pos_history_3d = m.compute_history_3d(optimizer.pos_history) # preprocessing
    animation3d = plot_surface(pos_history=pos_history_3d,
                               mesher=m, designer=d,
                               mark=(0,0,0))    
    

    Surface

    admin 
    opened by ljvmiranda921 24
  • IndexError: index 5309 is out of bounds for axis 1 with size 1

    IndexError: index 5309 is out of bounds for axis 1 with size 1

    Sir, I'm having problems with the out of bounds error. '5309' is the first element in my y. Here is the complete traceback:

    IndexError                                Traceback (most recent call last)
    <ipython-input-66-107cafabd9c0> in <module>()
         76 
         77 # Perform optimization
    ---> 78 cost, pos = optimizer.optimize(f, print_step=100, iters=1000, verbose=3)
         79 
         80 def predict(X_train, pos):
    
    ~\Anaconda3\envs\tensorflow\lib\site-packages\pyswarms\single\global_best.py in optimize(self, objective_func, iters, print_step, verbose)
        131         for i in xrange(iters):
        132             # Compute cost for current position and personal best
    --> 133             current_cost = objective_func(self.pos)
        134             pbest_cost = objective_func(self.personal_best_pos)
        135 
    
    <ipython-input-66-107cafabd9c0> in f(X_train)
         65 def f(X_train):
         66     n_particles = 100
    ---> 67     j = [forward_prop(X_train[i]) for i in range(n_particles)]
         68     return np.array(j)
         69 
    
    <ipython-input-66-107cafabd9c0> in <listcomp>(.0)
         65 def f(X_train):
         66     n_particles = 100
    ---> 67     j = [forward_prop(X_train[i]) for i in range(n_particles)]
         68     return np.array(j)
         69 
    
    <ipython-input-66-107cafabd9c0> in forward_prop(params)
         59     # Compute for the negative log likelihood
         60     N = 84 # Number of samples
    ---> 61     correct_logprobs = -np.log(probs[range(N), Y_train])
         62     loss = np.sum(correct_logprobs) / N
         63     return loss
    
    IndexError: index 5309 is out of bounds for axis 1 with size 1
    
    opened by javej 23
  • Memory issues

    Memory issues

    Describe the bug

    When running optimize in a loop, memory consumption grows without limit. If I specify the n_processes parameter the growth is even faster, but the issue persists even without this parameter.

    To Reproduce

    Steps to reproduce the behavior:

    import pyswarms as ps
    from pyswarms.utils.functions import single_obj as fx
    
    options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
    # Call instance of PSO
    optimizer = ps.single.GlobalBestPSO(n_particles=20, dimensions=24, options=options)
    while True:
        best_cost, best_pos = optimizer.optimize(fx.sphere, iters=50, n_processes=4)
    

    Expected behavior

    No excessive memory consumption.

    Environment (please complete the following information):

    • OS: Linux
    • Version Ubuntu 18.04
    • PySwarms Version 1.1.0
    • Python Version 3.6.8

    Additional context

    Running in a loop is important because I may need to run the optimizer repeatedly, e.g. when the optimization target changes and I need updated results.

    bug help wanted 
    opened by Jendker 22
  • wrong implementation/definition of ftol_iter

    wrong implementation/definition of ftol_iter

    Description

    When ftol is used, for example 1e-10, PySwarms stops when swarm.best_cost has been lower than ftol for ftol_iter consecutive iterations. However, the correct name for ftol_iter would be stall_iter: the search should stop when best_cost has not improved for that many iterations, or as soon as it falls below ftol (which only needs to happen once).

    The current naming and implementation are confusing. Because of it, I thought I had to set ftol_iter=1, which stopped the search after the first iteration and returned a premature solution ([0.03840509 0.00329153] instead of [-9 -9]).

    In this regard, the default value of ftol_iter should be large enough (at least 100) to avoid premature solutions; ftol_iter=1 is unusual and very small.

    This is what I got with ftol=1e-10 and ftol_iter=100. As can be seen, it must stop after iteration 199.

    196 = {float64} 1.543800321250907e-10
    197 = {float64} 1.543800321250907e-10
    198 = {float64} 1.543800321250907e-10
    199 = {float64} 1.2788497031507338e-12   <-- stop here
    200 = {float64} 1.2788497031507338e-12
    201 = {float64} 1.2788497031507338e-12
    202 = {float64} 1.2788497031507338e-12
    203 = {float64} 1.2788497031507338e-12
    204 = {float64} 1.2788497031507338e-12
    

    Sample code

    A sample script is attached. The function to be optimized is a modified sphere function centered at [-9, -9]; the true minimum cost is zero.

    ftol_test.zip

    Expected behavior

    As I said, I expect the optimizer to stop after ftol_iter iterations with no improvement (best_cost stalls), or to stop immediately once best_cost drops below ftol. With the current implementation, the optimizer only stops around iteration 300, after stalling for 100 iterations, but with ftol around 2e-16, which is much lower than the given ftol. It should stop somewhere around iteration 200.
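    The stopping rule the reporter describes can be sketched in plain Python. This is a sketch of the *proposed* semantics (the name `stall_iter` is the reporter's suggestion, not the current PySwarms API):

    ```python
    def should_stop(best_costs, ftol=1e-10, stall_iter=100):
        """best_costs: per-iteration best cost (monotonically non-increasing).

        Stop as soon as the cost drops below ftol once, or when it has not
        improved for stall_iter consecutive iterations.
        """
        if best_costs and best_costs[-1] < ftol:
            return True  # absolute threshold reached (only needs to happen once)
        if len(best_costs) > stall_iter:
            # the best cost stall_iter iterations ago is no better than now
            return best_costs[-1] >= best_costs[-stall_iter - 1]
        return False
    ```

    With the cost values from the report above, this rule would trigger right at the iteration where the cost first falls below 1e-10, instead of waiting for a further 100-iteration stall.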

    Environment (please complete the following information):

    • PySwarms Version 1.3.0

    Thank you. DATA-BOY

    stale question 
    opened by D4T4-80Y 21
  • How to define a discrete design space when using pyswarms.discrete.binary module

    How to define a discrete design space when using pyswarms.discrete.binary module

    Description

    Hi all,

    I've just started using pyswarms and am stuck on defining a discrete design space when using the pyswarms.discrete.binary module.

    I have got it running when using the ps.single.GlobalBestPSO module but I want to have points from 0 to 90 at 0.5 increments for my candidate solutions for each of my design variables (2 dimensional).

    Any help would be much appreciated.

    Cheers, Aidan

    opened by AidanHawk 0
  • Problem with conditional functions in multi-dimension particles

    Problem with conditional functions in multi-dimension particles

    I believe the package cannot handle a few complex-conditioned functions. This is an example.

    The function defined in the tutorial for "Basic Optimization with Arguments" is a simple non-conditional function.

    Imagine we change that function to a more complex one containing a few if statements, and we request more than one particle (n_particles = 10).

    Because the input variables are passed to the function as NumPy arrays, the package fails on such conditionals: a comparison like x[:, 0] > x[:, 1] is applied to the whole arrays at once and yields a boolean array rather than a single truth value, so it cannot be used in an if statement.

    This is the error it reports:

    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

    Is there any way to use this package for more complex functions?
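    One common way around this (an editorial sketch, not part of the original report) is to vectorize the condition with np.where, so the branch is applied element-wise per particle instead of to the whole array at once:

    ```python
    import numpy as np

    def rosenbrock_conditional(x, a=1, b=100, c=0):
        """Element-wise conditional objective: apply the Rosenbrock term only
        where x[:, 0] > x[:, 1]; return 0 for the other particles."""
        tmp0, tmp1 = x[:, 0], x[:, 1]
        rosen = (a - tmp0) ** 2 + b * (tmp1 - tmp0 ** 2) ** 2 + c
        # np.where picks, per particle, between the two branches
        return np.where(tmp0 > tmp1, rosen, 0.0)
    ```

    The returned array has shape (n_particles,), which is what the optimizer expects from an objective function.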

    Here is a minimal working example that reproduces the error. In case it matters, I am on Windows with Python 3.10+.

    # import modules
    import numpy as np
    
    # create a parameterized version of the classic Rosenbrock unconstrained optimization function
    def rosenbrock_with_args(x, a, b, c=0):
    
        tmp0 = x[:, 0]
        tmp1 = x[:, 1]
    
        if tmp0 > tmp1:
            f = (a - tmp0) ** 2 + b * (tmp1 - tmp0 ** 2) ** 2 + c
        else:
            f = 0
        return f
    
    from pyswarms.single.global_best import GlobalBestPSO
    
    # instantiate the optimizer
    x_max = 10 * np.ones(2)
    x_min = -1 * x_max
    bounds = (x_min, x_max)
    options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
    optimizer = GlobalBestPSO(n_particles=10, dimensions=2, options=options, bounds=bounds)
    
    # now run the optimization, pass a=1 and b=100 as a tuple assigned to args
    
    cost, pos = optimizer.optimize(rosenbrock_with_args, 1000, a=1, b=100, c=0)
    
    opened by someparsa 0
  • compute_Pbest - compute gbest weird behavior

    compute_Pbest - compute gbest weird behavior

    Hello, and thanks for your work. I ran into an issue: I wrote my own optimization loop, and the global-best computation returns a value that is not the minimum.

    I also tried using GlobalBestPSO or GeneralOptimizerPSO directly, and both return this error at the end of the first iteration's run: "mask_cost = swarm.current_cost < swarm.pbest_cost TypeError: '>' not supported between instances of 'float' and 'NoneType'"

    The code then stops. It looks like swarm.pbest_cost is not initialized on the first run.

    I am on Windows 10, Python 3.9.6, PySwarms 1.3.0.

    opened by K1one44 0
  • Passing verbose and n_processes to hyperparameter tuning algorithms

    Passing verbose and n_processes to hyperparameter tuning algorithms

    I don't think this needs much explanation: it would be great if users could pass verbose and n_processes to the hyperparameter tuning algorithms like RandomSearch or GridSearch, along with other parameters (e.g., bounds). Great job putting this together.

    opened by amirhszd 0
  • Implemented options to hotstart training

    Implemented options to hotstart training

    Description

    This PR implements an option to provide the swarm position, velocity, and global best (or any particle) to the optimizer. This was done by adding the following optional arguments to the optimizer classes:

    • init_vel
    • init_best

    The arguments were added to GlobalBestPSO, GeneralOptimizerPSO and LocalBestPSO as well as to the abstract base classes SwarmOptimizer and DiscreteSwarmOptimizer.

    The optimizer can now be called as follows:

    # Call instance of PSO 
    optimizer = pyswarms.single.GlobalBestPSO(
                        n_particles=100, dimensions=dim, options=options, bounds=bounds, 
                        init_pos=init_pos,init_vel=init_vel,init_best=init_best)
    

    Related Issue

    Motivation and Context

    The current implementation does not allow properly resuming a previous optimization process. This PR makes it possible while maintaining full backward compatibility.

    How Has This Been Tested?

    This has been tested in a limited manner so far, only with GlobalBestPSO on Ubuntu 18.04 and Python 3.6.

    Screenshots (if appropriate):

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly (modified docstrings in the code).
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    opened by LucasWaelti 0
Releases(v.1.2.0)
  • v.1.2.0(Nov 14, 2020)

    This minor release contains multiple documentation and CI/CD improvements. Thank you to everyone who helped out on this version! I apologize for the very late release; life happened:

    • NEW: Modernize CI/CD Pipeline using Azure Pipelines - #433
    • IMPROVED: Documentation updates on the Jupyter notebooks and modules - #430, #404, #409, #399, #384, #379. Thank you @diegoroman17, @Archer6621, @yasirroni, @ivynasantino and @a310883
    • FIX: Fix missing PyYAML in requirements - #421. Thank you @blazewicz
    • FIX: Verbose behaviour - #408. Thank you @nishnash54 for the good discussions!
    • IMPROVED: Decouple topologies and operators - #403. Thank you @whzup as always!
    • IMPROVED: Add tolerance parameters - #402. Thank you @nishnash54!
    • IMPROVED: Add verbose switch and fix unclosed pools - #395. Thank you for the good discussion @msat59!
    Source code(tar.gz)
    Source code(zip)
  • v.1.1.0(May 18, 2019)

    This new version adds support for parallel particle evaluation, better documentation, multiple fixes, and updated build dependencies.

    • NEW: Updated API documentation - #344
    • NEW: Relaxed dependencies when installing pyswarms - #345
    • NEW: We're now using Azure Pipelines for our builds! - #327
    • NEW: Add notebook for electric circuits - #288. Thank you @miguelcocruz!
    • NEW: Parallel particle evaluation - #312. Thank you once more @danielcorreia96!
    • FIX: Fix optimise methods returning incorrect best_pos - #322. Thank you @ichbinjakes!
    • FIX: Fix SearchBase parameter - #328. Thank you @Kutim!
    • FIX: Fix basic optimization example - #329. Thank you @IanBoyanZhang!
    • FIX: Fix global best velocity equation - #330. Thank you @craymichael!
    • FIX: Update sample code to new API - #296. Thank you @ndngo!
    Source code(tar.gz)
    Source code(zip)
  • v.1.0.2(Feb 18, 2019)

  • v.1.0.1(Feb 14, 2019)

    • FIX: Handlers memory management so that it works all the time - #286. Thanks for this @whzup!
    • FIX: Re-introduce fix for multiple optimization function calls - #290. Thank you once more @danielcorreia96!
    Source code(tar.gz)
    Source code(zip)
  • v.1.0.0(Feb 9, 2019)

    This is the first major release of PySwarms. Starting today, we will be adhering to better semantic versioning guidelines, and we will be updating the project wikis shortly. The maintainers believe that PySwarms is mature enough to merit a version 1; this will also help us release more often (mostly minor releases) and create patch releases as soon as possible.

    Also, we will be maintaining a quarterly release cycle, with the next minor release (v.1.1.0) due in June. All enhancements and new features will be staged on the development branch, then merged back to the master branch at the end of the cycle. Bug fixes and documentation errors, however, will merit a patch release and will be merged to master immediately.

    • NEW: Boundary and velocity handlers to resolve stuck particles - #238. All thanks to our maintainer, @whzup!
    • FIX: Duplicate function calls during optimization; hopefully your long-running objective functions won't take doubly long - #266. Thank you @danielcorreia96!
    Source code(tar.gz)
    Source code(zip)
  • v.0.4.0(Jan 29, 2019)

    • NEW: The console output is now generated by the Reporter module - #227
    • NEW: A @cost decorator which automatically scales to the whole swarm - #226
    • FIX: A bug in the topologies where the best position in some topologies was not calculated using the nearest neighbours - #253
    • IMPROVED: Better naming for benchmarking functions - #222. Thanks @nik1082!
    • IMPROVED: Error handling in the Optimizers - #232
    • IMPROVED: New management method for dependencies - #262
    • REMOVED: The environments module was removed - #217
    Source code(tar.gz)
    Source code(zip)
  • v.0.3.1(Aug 13, 2018)

    • NEW: Collaboration tool using Vagrantfiles - #193. Thanks @jdbohrman!
    • NEW: Add configuration file for pyup.io - #210
    • FIX: Fix for incomplete documentation in ReadTheDocs - #208
    • IMPROVED: Update dependencies via pyup - #204
    Source code(tar.gz)
    Source code(zip)
  • v.0.3.0(Aug 10, 2018)

    We're proud to present the release of PySwarms version 0.3.0! Coinciding with this, we would like to welcome Aaron Moser (@whzup) as one of the project's maintainers! v.0.3.0 includes new topologies, a static option to configure a particle's neighbor/s, and a revamped plotters module. We would like to thank our contributors for helping us with this release.

    Release notes

    • NEW: More basic particle topologies in the pyswarms.backend module - #142, #151, #155, #177
    • NEW: Ability to make topologies static or dynamic - #164
    • NEW: A GeneralOptimizerPSO class. The GeneralOptimizerPSO class has an additional attribute for the topology used in the optimization - #151
    • NEW: A plotters module for swarm visualization. The environments module is now deprecated - #135, #172
    • FIX: Bugfix for optimizations not returning the best cost - #176
    • FIX: Bugfix for setup.py not running on Windows - #175
    • IMPROVED: Objective functions can now be parametrized. Helpful for your custom-objective functions - #144. Thanks, @bradahoward!
    • IMPROVED: New single-objective functions - #168. Awesome work, @jayspeidell!

    New Topologies and the GeneralOptimizerPSO Class

    New topologies were added to improve the ability to customize how a swarm behaves during optimization. In addition, a GeneralOptimizerPSO class was added to enable switching-out various topologies. Check out the description below!

    New Topology classes and the static attribute

    The newly added topologies expand on the existing ones (Star and Ring) and increase the built-in variety for users who want to build their own swarm implementation from the pyswarms.backend module. The new topologies are:

    • Pyramid topology: computes the neighbours using a Delaunay triangulation of the particles.
    • Random topology: computes the neighbours randomly, but systematically.
    • VonNeumann topology: computes the neighbours using a Von Neumann topology (inherited from the Ring topology).

    With these new topologies comes the ability to change their behaviour through a static argument passed when initializing a Topology class. The static parameter is a boolean that decides whether the neighbours are recomputed every iteration (static=False) or only in the first one (static=True). It is passed at initialization and is False by default. Additionally, LocalBestPSO now also takes a static parameter, which it passes to its Ring topology. For an example see below.

    The GeneralOptimizerPSO class

    The new topologies can also be easily used in the new GeneralOptimizerPSO class which extends the collection of optimizers. In addition to the parameters used in the GlobalBestPSO and LocalBestPSO classes, the GeneralOptimizerPSO uses a topology argument. This argument passes a Topology class to the GeneralOptimizerPSO.

    import numpy as np
    from pyswarms.single import GeneralOptimizerPSO
    from pyswarms.backend.topology import Random

    options = {"w": 1, "c1": 0.4, "c2": 0.5, "k": 3}
    bounds = (-1 * np.ones(4), np.ones(4))  # example bounds for the 4-d search space
    topology = Random(static=True)
    optimizer = GeneralOptimizerPSO(n_particles=20, dimensions=4, options=options,
                                    bounds=bounds, topology=topology)
    

    The plotters module

    The environments module is now deprecated. Instead, we have a plotters module that takes a property of the optimizer and plots it with minimal effort. The whole module is built on top of matplotlib.

    import matplotlib.pyplot as plt
    import pyswarms as ps
    from pyswarms.utils.functions import single_obj as fx
    from pyswarms.utils.plotters import plot_cost_history

    # Set-up optimizer
    options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
    optimizer = ps.single.GlobalBestPSO(n_particles=50, dimensions=2, options=options)
    optimizer.optimize(fx.sphere_func, iters=100)

    # Plot the cost
    plot_cost_history(optimizer.cost_history)
    plt.show()
    

    Imgur

    We can also plot the animation...

    from pyswarms.utils.plotters.formatters import Mesher
    from pyswarms.utils.plotters.formatters import Designer
    from pyswarms.utils.plotters import plot_contour, plot_surface
    
    # Plot the sphere function's mesh for better plots
    m = Mesher(func=fx.sphere_func)
    
    # Adjust figure limits
    d = Designer(limits=[(-1,1), (-1,1), (-0.1,1)],
                 label=['x-axis', 'y-axis', 'z-axis'])
    

    In 2D,

    plot_contour(pos_history=optimizer.pos_history, mesher=m, mark=(0,0))
    

    Contour

    Or in 3D!

    pos_history_3d = m.compute_history_3d(optimizer.pos_history) # preprocessing
    animation3d = plot_surface(pos_history=pos_history_3d,
                               mesher=m, designer=d,
                               mark=(0,0,0))    
    

    Surface

    Source code(tar.gz)
    Source code(zip)
  • v.0.2.1(Jun 27, 2018)

  • v.0.2.0(Jun 11, 2018)

    Release notes

    • NEW: pyswarms.backend module for custom swarm algorithms. Users can now use some primitives provided in this module to write their own optimization loop, providing a more "white-box" approach in swarm intelligence - #119, #115, #116, #117
    • IMPROVED: Unit tests ported to pytest. We're now dropping the unittest module. Pytest's parameterized tests enable our test cases to scale much better - #114
    • IMPROVED: Python 2.7 support is dropped. Given the imminent end-of-life of Python 2, we'll be fully supporting Python 3.4 and above - #113
    • IMPROVED: PSO algorithms ported to the new PySwarms backend - #115
    • IMPROVED: Updated documentation in ReadTheDocs and new Jupyter notebook example - #124

    The PySwarms Backend module

    The new backend module exposes some swarm optimization primitives so that users can create their custom swarm implementations without relying too much on our base classes. There are two main components for the backend, the Swarm class and the Topology base class. Using these classes, you can construct your own optimization loop like the one below:

    optimization_loop

    The Swarm class

    This class acts as a data class that holds all necessary attributes in a given swarm. The idea is to continually update the attributes located there. You can easily initialize this class by providing the initial position and velocity matrices.

    The Topology class

    The topology class abstracts away common operations in swarm optimization: (1) determining the best particle in the swarm, (2) computing the next position, and (3) computing the velocity matrix. As of now, we only have the Ring and Star topologies implemented. Hopefully, we can add more in the future.
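    As a rough illustration of the loop these primitives support, here is a minimal global-best PSO loop in plain NumPy. This is an editorial sketch of the algorithm only; the actual backend exposes its own classes and operators with different names and signatures:

    ```python
    import numpy as np

    def pso_loop(objective, n_particles=30, dims=2, iters=100,
                 w=0.9, c1=0.5, c2=0.3, seed=0):
        """Minimal global-best PSO loop: evaluate the swarm, update the
        personal and global bests, then update velocities and positions."""
        rng = np.random.default_rng(seed)
        pos = rng.uniform(-1, 1, (n_particles, dims))
        vel = np.zeros((n_particles, dims))
        pbest_pos, pbest_cost = pos.copy(), objective(pos)
        for _ in range(iters):
            cost = objective(pos)
            improved = cost < pbest_cost            # (1) personal bests
            pbest_pos[improved] = pos[improved]
            pbest_cost[improved] = cost[improved]
            gbest = pbest_pos[pbest_cost.argmin()]  # (2) best particle in swarm
            r1 = rng.random(pos.shape)
            r2 = rng.random(pos.shape)
            # (3) velocity and position updates
            vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
        return pbest_cost.min(), pbest_pos[pbest_cost.argmin()]
    ```

    For example, `pso_loop(lambda x: (x ** 2).sum(axis=1))` minimizes the sphere function; in PySwarms, the Swarm class would hold pos/vel/pbest state and a Topology would compute the best neighbour in step (2).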

    pyswarms_api

    Source code(tar.gz)
    Source code(zip)
  • v.0.1.9(Apr 20, 2018)

    After three months, we are happy to present our next development release, version v.0.1.9! This release introduces non-breaking changes in the API and minor fixes adopting pylint's and flake8's strict conventions. This release would not have been possible without the help of @mamadyonline and our new Collaborator Siobhan K. Cronin! Thank you for all your help and support in maintaining PySwarms!

    Release notes

    • NEW: Ability to set the initial position of the swarm - #93
    • NEW: Ability to set a tolerance value to break the iteration - #93, #100
    • FIX: Fix for the Rosenbrock function returning the incorrect shape - #98

    Initial Position and Tolerance Value

    Before, the swarm particles were generated randomly with respect to a lower and upper bound that we set during initialization. Now, we have the ability to initialize our swarm particles around a particular location, just in case we have applications that require that feature.

    Additionally, we added a tolerance value to decrease optimization time. Usually, we just wait a given number of iterations until the optimization finishes. We have now improved this with an ftol parameter that serves as a threshold for stopping once the difference in costs is no longer significant.
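    Generating an initial swarm around a chosen location can be sketched in plain NumPy (the helper name below is illustrative, not the PySwarms API; the optimizers accept the resulting array through their initial-position parameter):

    ```python
    import numpy as np

    def init_positions_around(center, n_particles, spread=0.1, bounds=None, seed=0):
        """Sample particle positions uniformly in a box of half-width `spread`
        around `center`, optionally clipped to (lower, upper) bounds."""
        rng = np.random.default_rng(seed)
        center = np.asarray(center, dtype=float)
        pos = center + rng.uniform(-spread, spread, (n_particles, center.size))
        if bounds is not None:
            lower, upper = bounds
            pos = np.clip(pos, lower, upper)
        return pos
    ```

    This concentrates the initial search effort near a point of interest instead of scattering particles uniformly between the bounds.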

    Fix for the Rosenbrock function

    It turns out there was a bug in our Rosenbrock function: it did not return a vector of shape (n_particles,). Don't worry, we have fixed that!

    Source code(tar.gz)
    Source code(zip)
  • v.0.1.8(Jan 10, 2018)

    Special Release

    This release reflects most of the changes requested by the Journal of Open Source Software (JOSS) reviewers. We are now published in JOSS! You can check the review thread here and the actual paper in this link.

    Source code(tar.gz)
    Source code(zip)
  • v.0.1.7(Sep 25, 2017)

    Release notes

    • FIX: Bugfix for local_best.py and binary.py not returning the best cost they have encountered in the optimization process - #34
    • IMPROVED: Git now ignores IPython notebook checkpoints
    Source code(tar.gz)
    Source code(zip)
  • v.0.1.6(Sep 24, 2017)

    Release notes

    • NEW: Hyperparameter search tools - #20, #25, #28
    • IMPROVED: Updated structure of Base classes for higher extensibility
    • IMPROVED: More robust tests for PlotEnvironment

    Hyperparameter Search Tools

    PySwarms now implements a native version of GridSearch and RandomSearch to help you find the best hyperparameters in your swarm. To use this feature, simply call the RandomSearch and GridSearch classes from the pyswarms.utils.search module.

    import numpy as np
    import pyswarms as ps
    from pyswarms.utils.search import RandomSearch
    from pyswarms.utils.functions import single_obj as fx
    
    # Set-up choices for the parameters
    options = {
        'c1': (1,5),
        'c2': (6,10),
        'w': (2,5),
        'k': (11, 15),
        'p': 1
    }
    
    # Create a RandomSearch object
    # n_selection_iters is the number of iterations to run the searcher
    # iters is the number of iterations to run the optimizer
    
    g = RandomSearch(ps.single.LocalBestPSO, n_particles=40,
                dimensions=20, options=options, objective_func=fx.sphere_func,
                iters=10, n_selection_iters=100)
    
    best_score, best_options = g.search()
    

    This then returns the best score found during optimization and the hyperparameter options that enabled it.

    >>> best_score
    1.41978545901
    >>> best_options['c1']
    1.543556887693
    >>> best_options['c2']
    9.504769054771
    

    Improved Library API

    Most of the swarm classes now inherit the base class in order to demonstrate its extensibility. If you are a developer or a swarm researcher planning to implement your own algorithms, simply inherit from these Base Classes and implement the optimize() method.
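    The pattern described above can be sketched with a hypothetical base class (the names are illustrative, not PySwarms' actual base classes):

    ```python
    from abc import ABC, abstractmethod

    import numpy as np

    class SwarmOptimizerBase(ABC):
        """Hypothetical base class: shared state lives here, and every
        concrete optimizer must implement optimize()."""

        def __init__(self, n_particles, dimensions, options):
            self.n_particles = n_particles
            self.dimensions = dimensions
            self.options = options

        @abstractmethod
        def optimize(self, objective_func, iters):
            """Run the optimization and return (best_cost, best_pos)."""

    class MyPSO(SwarmOptimizerBase):
        """Toy subclass that just evaluates the origin once, to show
        where a real optimization loop would go."""

        def optimize(self, objective_func, iters):
            pos = np.zeros((1, self.dimensions))
            return float(objective_func(pos)[0]), pos[0]
    ```

    A real subclass would run its swarm loop inside optimize(); the point is that only that method needs to be provided.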

    Source code(tar.gz)
    Source code(zip)
  • v.0.1.5(Aug 11, 2017)

    Release notes

    • NEW: Easy graphics environment - #30, #31

    Graphics Environment

    This new plotting environment makes it easier to plot the costs and swarm movement in 2-d or 3-d planes. The PlotEnvironment class takes in the optimizer and its parameters as arguments. It then performs a fresh run to plot the cost and to create animations.

    An example of usage can be seen below:

    import matplotlib.pyplot as plt
    import pyswarms as ps
    from pyswarms.utils.functions import single_obj as fx
    from pyswarms.utils.environments import PlotEnvironment

    # Set-up optimizer
    options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
    optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=3, options=options)

    # Initialize plot environment
    plt_env = PlotEnvironment(optimizer, fx.sphere_func, 1000)

    # Plot the cost
    plt_env.plot_cost(figsize=(8,6));
    plt.show()
    
    Source code(tar.gz)
    Source code(zip)
Owner
Lj Miranda
Machine Learning Researcher at @thinkingmachines