Pymoo Learning (7): Parallelization
2022-07-25 18:59:00 【因吉】
1 Introduction
In practice, parallelization can significantly speed up an optimization. For population-based algorithms, a whole set of solutions can be evaluated at once by parallelizing the evaluation itself.
2 Vectorized Matrix Operations
One option is to use NumPy matrix operations, which is how almost all of the test problems implemented in Pymoo are evaluated. By default, elementwise_evaluation is set to False, which means that _evaluate receives a whole set of solutions at once. Each row of the input matrix x is therefore one individual, and each column one decision variable:
import numpy as np

from pymoo.core.problem import Problem
from pymoo.algorithms.soo.nonconvex.ga import GA
from pymoo.optimize import minimize


class MyProblem(Problem):

    def __init__(self, **kwargs):
        super().__init__(n_var=10, n_obj=1, n_constr=0, xl=-5, xu=5, **kwargs)

    def _evaluate(self, x, out, *args, **kwargs):
        # x has one individual per row; sum the squares of each row in one vectorized call
        out["F"] = np.sum(x ** 2, axis=1)


res = minimize(MyProblem(), GA())
print('Threads:', res.exec_time)
Output:
Threads: 1.416006326675415
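To make the row-per-individual convention concrete, the vectorized problem can also be evaluated directly on a random population, outside of any algorithm. This is only a small verification sketch, assuming the MyProblem class defined above (the population size of 100 is an arbitrary choice):

import numpy as np

# assumes the vectorized MyProblem from the snippet above
problem = MyProblem()

# 100 individuals (rows) with n_var=10 variables (columns), sampled inside [-5, 5]
X = np.random.random((100, problem.n_var)) * 10 - 5

F = problem.evaluate(X)
print(F.shape)   # should be (100, 1): one objective value per individual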
3 Starmap Interface
The starmap interface is provided by the Python standard library (multiprocessing.Pool.starmap) and makes parallelization straightforward. It requires elementwise_evaluation=True, meaning that each call to _evaluate assesses exactly one solution.
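Independent of Pymoo, the starmap pattern itself is simple: the pool calls a function once per argument tuple and returns the results in the original order. A self-contained toy sketch using only the standard library (square_sum is just a stand-in for an objective evaluation):

from multiprocessing.pool import ThreadPool

def square_sum(a, b):
    # toy stand-in for the evaluation of one solution
    return a ** 2 + b ** 2

with ThreadPool(4) as pool:
    # one tuple per call; results come back in the same order as the inputs
    results = pool.starmap(square_sum, [(1, 2), (3, 4), (5, 6)])

print(results)   # [5, 25, 61]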
3.1 Threads
import numpy as np

from pymoo.core.problem import Problem
from pymoo.core.problem import starmap_parallelized_eval
from pymoo.algorithms.soo.nonconvex.pso import PSO
from pymoo.optimize import minimize
from multiprocessing.pool import ThreadPool


class MyProblem(Problem):

    def __init__(self, **kwargs):
        # elementwise_evaluation=True: each call to _evaluate receives exactly one solution
        super().__init__(n_var=10, n_obj=1, n_constr=0, xl=-5, xu=5,
                         elementwise_evaluation=True, **kwargs)

    def _evaluate(self, x, out, *args, **kwargs):
        # x is a single solution vector here
        out["F"] = (x ** 2).sum()


# hand the thread pool's starmap to the problem as the parallelization runner
n_threads = 8
pool = ThreadPool(n_threads)

problem = MyProblem(runner=pool.starmap, func_eval=starmap_parallelized_eval)

res = minimize(problem, PSO(), seed=1, n_gen=100)
print('Threads:', res.exec_time)
Output:
Threads: 0.5501224994659424
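Note that minimize does not shut the pool down, so if the script continues afterwards it is worth releasing the worker threads explicitly. A small sketch, assuming the pool object from the snippet above:

# release the worker threads once the optimization is finished
pool.close()
pool.join()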
3.2 Processes
import multiprocessing

# same problem definition as above, but evaluated on a pool of worker processes
n_processes = 8
pool = multiprocessing.Pool(n_processes)

problem = MyProblem(runner=pool.starmap, func_eval=starmap_parallelized_eval)

res = minimize(problem, PSO(), seed=1, n_gen=100)
print('Processes:', res.exec_time)
Output:
Processes: 1.1640357971191406
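One caveat with processes: on platforms that spawn rather than fork new interpreters (e.g. Windows), every worker re-imports the script, so the pool creation and the optimization should sit under a main guard. A sketch under that assumption, reusing the same MyProblem and imports as above:

if __name__ == '__main__':
    # only the main process creates the pool and drives the optimization
    pool = multiprocessing.Pool(8)

    problem = MyProblem(runner=pool.starmap, func_eval=starmap_parallelized_eval)

    res = minimize(problem, PSO(), seed=1, n_gen=100)
    print('Processes:', res.exec_time)

    pool.close()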
3.3 Dask
A more advanced option is to distribute the evaluation function across several workers. For Pymoo, the recommended framework is Dask.
Note: the following libraries may need to be installed:
pip install dask distributed
The code is as follows:
import numpy as np

from dask.distributed import Client

from pymoo.core.problem import dask_parallelized_eval
from pymoo.core.problem import Problem
from pymoo.algorithms.soo.nonconvex.pso import PSO
from pymoo.optimize import minimize


class MyProblem(Problem):

    def __init__(self, **kwargs):
        # element-wise evaluation, so each solution can be shipped to a Dask worker
        super().__init__(n_var=10, n_obj=1, n_constr=0, xl=-5, xu=5,
                         elementwise_evaluation=True, **kwargs)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = (x ** 2).sum()


if __name__ == '__main__':
    # start a local Dask cluster and use its client as the runner
    client = Client()
    client.restart()
    print("STARTED")

    problem = MyProblem(runner=client, func_eval=dask_parallelized_eval)

    res = minimize(problem, PSO(), seed=1, n_gen=100)
    print('Dask:', res.exec_time)

    client.close()
Output:
STARTED
Dask: 1.30446195602417
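Client() above starts a local cluster on the current machine, which is convenient for testing but not truly distributed. For a real cluster, the client would instead be pointed at an already running Dask scheduler; a sketch, assuming the MyProblem and dask_parallelized_eval import from the snippet above, where the address is only a placeholder for your own scheduler (8786 is Dask's default scheduler port):

from dask.distributed import Client

# connect to an existing scheduler instead of spinning up a local cluster
# (the address is a placeholder)
client = Client("tcp://192.0.2.10:8786")

# then hand it to the problem exactly as above
problem = MyProblem(runner=client, func_eval=dask_parallelized_eval)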
4 Custom Parallelization
4.1 Threads
import numpy as np
from multiprocessing.pool import ThreadPool

from pymoo.core.problem import Problem
from pymoo.algorithms.soo.nonconvex.pso import PSO
from pymoo.optimize import minimize


class MyProblem(Problem):

    def __init__(self, **kwargs):
        super().__init__(n_var=10, n_obj=1, n_constr=0, xl=-5, xu=5, **kwargs)

    def _evaluate(self, X, out, *args, **kwargs):
        # objective of a single solution
        def my_eval(x):
            return (x ** 2).sum()

        # one argument list per row of the population matrix
        params = [[X[k]] for k in range(len(X))]

        # evaluate all rows in parallel on the global thread pool
        F = pool.starmap(my_eval, params)

        out["F"] = np.array(F)


if __name__ == '__main__':
    pool = ThreadPool(8)

    problem = MyProblem()

    res = minimize(problem, PSO(), seed=1, n_gen=100)
    print('Threads:', res.exec_time)
Output:
Threads: 1.0212376117706299
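The same hand-rolled pattern works with a pool of processes instead of threads, with one caveat: the evaluated function must be picklable, so it has to live at module level rather than inside _evaluate. A sketch under that assumption (the names mirror the thread example above):

import multiprocessing

import numpy as np

from pymoo.core.problem import Problem
from pymoo.algorithms.soo.nonconvex.pso import PSO
from pymoo.optimize import minimize


def my_eval(x):
    # defined at module level so worker processes can pickle it
    return (x ** 2).sum()


class MyProblem(Problem):

    def __init__(self, **kwargs):
        super().__init__(n_var=10, n_obj=1, n_constr=0, xl=-5, xu=5, **kwargs)

    def _evaluate(self, X, out, *args, **kwargs):
        # farm out one task per row of the population matrix
        params = [[X[k]] for k in range(len(X))]
        out["F"] = np.array(pool.starmap(my_eval, params))


if __name__ == '__main__':
    pool = multiprocessing.Pool(8)

    res = minimize(MyProblem(), PSO(), seed=1, n_gen=100)
    print('Processes:', res.exec_time)

    pool.close()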
4.2 Dask
import numpy as np

from dask.distributed import Client

from pymoo.algorithms.soo.nonconvex.pso import PSO
from pymoo.core.problem import Problem
from pymoo.optimize import minimize


class MyProblem(Problem):

    def __init__(self, *args, **kwargs):
        # the problem receives the whole population and distributes it itself
        super().__init__(n_var=10, n_obj=1, n_constr=0, xl=-5, xu=5,
                         elementwise_evaluation=False, *args, **kwargs)

    def _evaluate(self, X, out, *args, **kwargs):
        def fun(x):
            return np.sum(x ** 2)

        # submit one Dask task per solution and collect the results in order
        jobs = [client.submit(fun, x) for x in X]
        out["F"] = np.row_stack([job.result() for job in jobs])


if __name__ == '__main__':
    client = Client(processes=False)

    problem = MyProblem()

    res = minimize(problem, PSO(), seed=1, n_gen=100)
    print('Dask:', res.exec_time)

    client.close()
Output:
Dask: 19.102460861206055
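The measured time here is much slower than the thread-based runs, which is typical when each evaluation is tiny compared to Dask's per-task overhead. If the hand-rolled Dask route is still wanted, submitting the whole generation with client.map and collecting it with client.gather may cut some of the round-trips; a self-contained sketch of that pattern, mirroring what _evaluate above does once per generation:

import numpy as np
from dask.distributed import Client

if __name__ == '__main__':
    client = Client(processes=False)

    def fun(x):
        return np.sum(x ** 2)

    # a stand-in population: 100 individuals with 10 variables each
    X = np.random.random((100, 10)) * 10 - 5

    # one map/gather round trip instead of 100 individual submit()/result() calls
    futures = client.map(fun, list(X))
    F = np.vstack(client.gather(futures))

    print(F.shape)   # (100, 1)
    client.close()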
References
【1】https://pymoo.org/problems/parallelization.html
【2】https://blog.csdn.net/u013066730/article/details/105821888