Pyomo is an object-oriented algebraic modeling language in Python for structured optimization problems.

Overview

Project Status: Active - The project has reached a stable, usable state and is being actively developed.

a COIN-OR project

Pyomo Overview

Pyomo is a Python-based open-source software package that supports a diverse set of optimization capabilities for formulating and analyzing optimization models. Pyomo can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. Pyomo supports a wide range of problem types, including:

  • Linear programming
  • Quadratic programming
  • Nonlinear programming
  • Mixed-integer linear programming
  • Mixed-integer quadratic programming
  • Mixed-integer nonlinear programming
  • Mixed-integer stochastic programming
  • Generalized disjunctive programming
  • Differential algebraic equations
  • Mathematical programming with equilibrium constraints

Pyomo supports analysis and scripting within a full-featured programming language. Further, Pyomo has also proven an effective framework for developing high-level optimization and analysis tools. For example, the mpi-sppy package provides generic solvers for stochastic programming. mpi-sppy leverages the fact that Pyomo's modeling objects are embedded within a full-featured high-level programming language, which allows for transparent parallelization of subproblems using Python parallel communication libraries.
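
As a minimal illustration of such scripting (a toy model, not from this README; 'glpk' stands in for whatever LP solver is installed):

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var(within=pyo.NonNegativeReals)
m.y = pyo.Var(within=pyo.NonNegativeReals)
m.obj = pyo.Objective(expr=2 * m.x + 3 * m.y, sense=pyo.maximize)
m.con = pyo.Constraint(expr=3 * m.x + 4 * m.y <= 12)

# any installed LP solver works here; 'glpk' is just an example
pyo.SolverFactory('glpk').solve(m)
print(pyo.value(m.x), pyo.value(m.y))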

Pyomo was formerly released as the Coopr software library.

Pyomo is available under the BSD License; see the LICENSE.txt file.

Pyomo is currently tested with the following Python implementations:

  • CPython: 3.6, 3.7, 3.8, 3.9
  • PyPy: 3

Installation

PyPI

pip install pyomo

Anaconda

conda install -c conda-forge pyomo
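
Either installation can be verified from the command line:

pyomo --version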

Tutorials and Examples

Getting Help

To get help from the Pyomo community, ask a question on one of the following:

Developers

Pyomo development moved to this repository from Sandia National Laboratories in June 2016. Developer discussions are hosted on Google Groups.

By contributing to this software project, you are agreeing to the following terms and conditions for your contributions:

  1. You agree your contributions are submitted under the BSD license.
  2. You represent you are authorized to make the contributions and grant the license. If your employer has rights to intellectual property that includes your contributions, you represent that you have received permission to make contributions and grant the required license on behalf of that employer.

Related Packages

See https://pyomo.readthedocs.io/en/latest/related_packages.html.

Comments
  • Add kaug dsdp mode into sens.py

    Summary/Motivation:

    The current sens.py only uses sipopt; the k_aug dsdp mode has been added as another option for sensitivity analysis.

    Changes proposed in this PR:

    • Add an ipopt solver option as an input, optarg=None (lines 229, 375-376).
    • k_aug requires variable initialization (lines 311-316).
    • k_aug does not support inequalities; raise an exception (line 363).
    • This function requires ipopt, k_aug, and dotsens (lines 374-384).
    • Declare Suffixes (lines 386-412); see the sketch after this list.
    • ipopt.solve -> kaug.solve -> dotsens.solve (lines 415-428).
    • fixes #2047
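
    As a rough sketch of the suffix bookkeeping this PR describes (the tiny model is illustrative; the suffix names are the standard ipopt/k_aug ones, and the PR's actual code may differ):

    import pyomo.environ as pyo

    m = pyo.ConcreteModel()
    m.p = pyo.Param(initialize=1.0, mutable=True)  # parameter to perturb
    m.x = pyo.Var(initialize=1.0)
    m.c = pyo.Constraint(expr=m.x == 2 * m.p)
    m.obj = pyo.Objective(expr=(m.x - 1) ** 2)

    # suffixes that ipopt exports and k_aug consumes
    m.dual = pyo.Suffix(direction=pyo.Suffix.IMPORT_EXPORT)
    m.ipopt_zL_out = pyo.Suffix(direction=pyo.Suffix.IMPORT)
    m.ipopt_zU_out = pyo.Suffix(direction=pyo.Suffix.IMPORT)
    m.ipopt_zL_in = pyo.Suffix(direction=pyo.Suffix.EXPORT)
    m.ipopt_zU_in = pyo.Suffix(direction=pyo.Suffix.EXPORT)

    # first link in the ipopt -> kaug -> dotsens chain listed above
    pyo.SolverFactory('ipopt').solve(m)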

    Legal Acknowledgement

    By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by JanghoPark-LBL 48
  • APPSI: 'Could not import gurobipy'

    Dear all,

    I tried to use APPSI with Gurobi as the solver, but I got the error 'Could not import gurobipy' when running this example:

    from gurobipy import GRB 
    import pyomo.environ as pe
    from pyomo.core.expr.taylor_series import taylor_series_expansion
    from pyomo.contrib import appsi
    
    m = pe.ConcreteModel()
    m.x = pe.Var(bounds=(0, 4))
    m.y = pe.Var(within=pe.Integers, bounds=(0, None))
    m.obj = pe.Objective(expr=2*m.x + m.y)
    m.cons = pe.ConstraintList()  # for the cutting planes
    
    def _add_cut(xval):
        # a function to generate the cut
        m.x.value = xval
        return m.cons.add(m.y >= taylor_series_expansion((m.x - 2)**2))
    
    _c = _add_cut(0)  # start with 2 cuts at the bounds of x
    _c = _add_cut(4)  # this is an arbitrary choice
    
    opt = appsi.solvers.Gurobi()
    opt.config.stream_solver = True
    opt.set_instance(m) 
    opt.gurobi_options['PreCrush'] = 1
    opt.gurobi_options['LazyConstraints'] = 1
    
    def my_callback(cb_m, cb_opt, cb_where):
        if cb_where == GRB.Callback.MIPSOL:
            cb_opt.cbGetSolution(vars=[m.x, m.y])
            if m.y.value < (m.x.value - 2)**2 - 1e-6:
                cb_opt.cbLazy(_add_cut(m.x.value))
    
    opt.set_callback(my_callback)
    res = opt.solve(m) 
    

    I have the packages:

    • pyomo 6.3.0
    • gurobipy 9.5.1
    • Gurobi 9.5.1 with its respective license
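
    One quick diagnostic (not from this thread) is to confirm that gurobipy imports in the same interpreter Pyomo runs under, since APPSI raises this message when that import fails:

    import sys
    print(sys.executable)  # which Python environment is active
    import gurobipy        # raises ImportError if the env lacks gurobipy
    print(gurobipy.gurobi.version())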

    Please let me know if I did something wrong.

    Best regards, Erik

    bug pyomo.contrib 
    opened by erikfilias 45
  • Pyomo Network

    Summary/Motivation:

    This originated out of the desire to have IDAES Streams inherit from a Pyomo component, in order to genericize the sequential modular simulator I'm working on. However, it is also a useful Pyomo component in its own right: it provides a simple API for equating everything in two Connectors, and expanding Connections is less expensive than the current ConnectorExpander, since it can search the model for the specific Connection ctype.

    Basically a Connection is a component on which you can define either a source/destination pair for a directed Connection or simply pass a list/tuple of two Connectors for an undirected Connection. After expanding, simple equality constraints are added onto a new block and the connection is deactivated.

    Except it's all called Ports and Arcs now, it lives in a new package, and the relationship doesn't have to be an equality.
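
    In the renamed API, that looks roughly like this (a minimal sketch; the blocks b1/b2 and variable x are illustrative):

    import pyomo.environ as pyo
    from pyomo.network import Port, Arc

    m = pyo.ConcreteModel()
    m.b1 = pyo.Block()
    m.b1.x = pyo.Var()
    m.b1.outlet = Port(initialize={'x': m.b1.x})
    m.b2 = pyo.Block()
    m.b2.x = pyo.Var()
    m.b2.inlet = Port(initialize={'x': m.b2.x})

    # directed arc via a source/destination pair, as described above
    m.a = Arc(source=m.b1.outlet, destination=m.b2.inlet)

    # expansion adds equality constraints on a new block and
    # deactivates the arc
    pyo.TransformationFactory('network.expand_arcs').apply_to(m)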

    Changes proposed in this PR:

    • Introduce Pyomo network package
      • Ports and Arcs

    Legal Acknowledgement

    By contributing to this software project, I agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    IDAES related 
    opened by gseastream 28
  • Dropping Support for Python 2.7

    This may seem premature, but many major packages in the Python community are planning to drop support for Python 2.7 in or before 2020. In fact, many major projects plan to move to bug-fix-only support even sooner:

    • http://python3statement.org/

    Since we often plan releases in early/mid-fall, I think it's reasonable to plan our last major release supporting Python 2.7 in the fall of 2019.

    design discussions testing_and_ci 
    opened by whart222 26
  • Optimizations to minimize use of NumericConstant objects

    Fixes N/A.

    Summary/Motivation:

    Clean up the use of as_numeric and re-define it with more restrictive semantics.

    Changes proposed in this PR:

    The as_numeric() function is used to create NumericConstant objects, which are used to wrap numeric values in Pyomo components.

    This PR also changes the caching mechanism. Values are not coerced to floats, but instead they are cached separately for each type.
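
    For context, the wrapping behavior in question (a minimal sketch; import path per recent Pyomo releases):

    from pyomo.core.expr.numvalue import as_numeric

    c = as_numeric(3.5)               # wraps the float in a NumericConstant
    print(type(c).__name__, c.value)  # NumericConstant 3.5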

    Legal Acknowledgement

    By contributing to this software project, I agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by whart222 24
  • Merge Expression Branch

    This PR can be referenced to discuss issues that need to be resolved to merge the expressions branch: expr_dev.

    Dependencies:

    • [x] Resolve #276 (Kernel subclasses)
    • [x] Resolve #212 (GDP Rework) and merge into this branch
    IDAES related pyomo.core 
    opened by whart222 23
  • [IDEA] Generation of problem files with mixed-representation index sets

    Fixes #567 (partial).

    Summary/Motivation:

    We recently made a small change that makes the default representation of Sets in Pyomo rely on insertion order. This PR completes that activity by resolving the simple test failures outlined in #567. Specifically, these changes allow mixed-representation sets to be used as index sets. The result is that problem files can be generated with determinism=0, which does no sorting of index values.

    However, these changes only work with CPython (3.6, 3.7) and PyPy. Starting with Python 3.6, the CPython and PyPy implementations preserve insertion order in their dictionary representations, and as of Python 3.7 this behavior is part of the Python language specification. This feature is exploited to provide deterministic file generation without sorting index values.

    NOTE: This is a partial fix of #567, since it only applies to more recent versions of Python; there is no clear motivation for extending the fix to older versions. The PR is also motivated by the fact that no sorting is done during file generation, so files are generated more quickly (especially for models with constraints that have large index sets).

    NOTE: Sorted ordering of mixed-representation sets often works for Python 2.7 (using determinism=1), since that version allows for comparison of more data types.

    NOTE: This PR does not simply use the sorted_robust() function to sort when generating problem files. The sorted_robust() function is significantly slower than sorted() with mixed-representation data. Thus using sorted_robust() with Python 3.x would make it appear that Python3 is slower than Python2, when in fact there are faster alternatives.
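
    Plain Python illustrates both points (an aside, not from the PR):

    # dict key order is insertion order (a CPython 3.6 implementation
    # detail, guaranteed by the language from 3.7)
    d = {3: None, 'a': None, 1: None}
    print(list(d))  # [3, 'a', 1]

    # sorted() refuses mixed-type comparisons in Python 3
    try:
        sorted([3, 'a', 1])
    except TypeError as exc:
        print(exc)  # '<' not supported between instances of 'str' and 'int'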

    Changes proposed in this PR:

    • Changing iteration in Set objects to use the insertion order by default.
    • Adding tests that confirm that models with mixed-representation sets can be solved.

    Legal Acknowledgement

    By contributing to this software project, I agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    AT: STALE 
    opened by whart222 21
  • New ShortNameLabeler, used to limit GAMS symbol names

    Resolves #488.

    Create a new labeler called ShortNameLabeler, which can take a size limit, a prefix, a start, and even a custom labeler. If no custom labeler is provided, AlphaNumericTextLabeler is used, and the final labels are shortened from its output. This was applied to the GAMS writer in order to enforce a symbol name limit. One caveat I thought of so far is the rare possibility of a name conflict, which would only happen if the user has a 63-character component name that happens to look like mycomponent_17, plus another component over 63 characters, similarly named mycomponentbutdifferent, that happens to be the 17th component with a long name and so gets shortened to mycomponent_17. This feels very rare to me, and even if it happened, the user could create their own ShortNameLabeler with a different prefix and pass it to the writer via the labeler keyword.

    Also let me know if the tests I added are appropriate or if something else should be done to test the new functionality. I ran it as a solver test to make sure that, by default, the GAMS writer produces output that can be successfully run by GAMS.
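
    A usage sketch inferred from the description above (the import path and constructor arguments here are assumptions, not verified against the final merge):

    # hypothetical usage based on the PR description
    import pyomo.environ as pyo
    from pyomo.core.base.label import ShortNameLabeler

    m = pyo.ConcreteModel()
    m.a_component_with_a_rather_long_name = pyo.Var(bounds=(0, 1))
    m.obj = pyo.Objective(expr=m.a_component_with_a_rather_long_name)

    labeler = ShortNameLabeler(63, '_')  # assumed (limit, prefix) signature
    m.write('model.gms', io_options={'labeler': labeler})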

    opened by gseastream 19
  • Fix so that generate_cuid_names descends into Disjunct objects as well

    This is a quick fix to allow generate_cuid_names to descend into "Block-like" components. I am not sure why generate_cuid_names doesn't use the _tree_iterator approach that things like block_data_objects use, but I wasn't comfortable enough with the iteration methods to go ahead and refactor the function that way.

    opened by qtothec 19
  • Document PR Process

    Summary/Motivation:

    Add documentation of pull request expectations and conventions.

    This fixes #304 and fixes #267.

    Legal Acknowledgement

    By contributing to this software project, I agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by qtothec 18
  • add logfile option to GAMS solver

    This would add an option for specifying a custom logfile for the GAMS command-line solver. The other solvers seem to support such an option, and it is quite useful if one wants more control over the logfile's destination.
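
    Presumably usable along these lines (a sketch; the keyword name mirrors other solver interfaces and assumes a working GAMS installation):

    import pyomo.environ as pyo

    m = pyo.ConcreteModel()
    m.x = pyo.Var(bounds=(0, 1))
    m.obj = pyo.Objective(expr=m.x, sense=pyo.maximize)

    # 'logfile' is the option this PR proposes for the GAMS shell solver
    pyo.SolverFactory('gams').solve(m, logfile='gams_run.log')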

    opened by daviskirk 18
  • Unit test for QCQO

    Change

    • Added a small unit test for QCQO problems for Pyomo-MOSEK.

    (I did not want to obfuscate the purpose of #2647, so I made this a separate PR)

    Summary/Motivation:

    A while ago there was a bug caused by mosek_direct passing upper-triangular elements when setting Q matrix entries. That was fixed, but no unit test was added for it at the time; this PR adds one.
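
    For reference, a QCQO with off-diagonal Q entries of the kind such a test exercises (an illustrative model, not the PR's actual test):

    import pyomo.environ as pyo

    m = pyo.ConcreteModel()
    m.x = pyo.Var(range(2))
    # cross terms put entries in the off-diagonal of Q
    m.obj = pyo.Objective(expr=m.x[0] ** 2 + m.x[0] * m.x[1] + m.x[1] ** 2)
    m.q = pyo.Constraint(
        expr=m.x[0] ** 2 + 0.5 * m.x[0] * m.x[1] + m.x[1] ** 2 <= 1.0)
    m.lin = pyo.Constraint(expr=m.x[0] + m.x[1] >= 0.1)

    pyo.SolverFactory('mosek_direct').solve(m)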

    Changes proposed in this PR:

    • Unit test (small qcqo problem).

    Legal Acknowledgement

    By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by Utkarsh-Detha 0
  • Manage Gurobi environments in GurobiDirect

    Fixes #2408

    Summary/Motivation:

    There are currently several limitations in the GurobiDirect interface:

    1. Some Gurobi parameters cannot be used with the current approach, as they require explicitly creating a gurobipy Env object. This includes connection parameters for compute servers, token servers, and Instant Cloud (these can be worked around via a license file, but that isn't always ideal), as well as special parameters such as MemLimit.
    2. There is no clean way to close Gurobi models and environments, which leaves license tokens in use and compute server connections open longer than a user needs them.
    3. A user cannot retry acquiring a Gurobi license token (important in shared license environments), since the GurobiDirect class caches errors in global state.

    Changes proposed in this PR:

    Introduces a constructor flag manage_env for GurobiDirect (defaults to False), and two public methods .close() and .close_global(). If users set manage_env=True:

    • GurobiDirect explicitly creates a Gurobi environment bound to the solver instance. This enables Gurobi resources to be properly freed by the solver object:
    with SolverFactory('gurobi', solver_io='python', manage_env=True) as opt:
        opt.solve(model)
    # All Gurobi models and environments are freed
    
    • Calling .close() achieves the same result as the context manager:
    opt = SolverFactory('gurobi', solver_io='python', manage_env=True)
    try:
        opt.solve(model)
    finally:
        opt.close()
    # All Gurobi models and environments are freed
    
    • Internally, solver options are passed to the Env constructor (instead of the Model, as is currently done) to allow environment-level connection parameters to be used:
    options = {
        "CSManager": "<url>",
        "CSAPIAccessID": "<access-id>",
        "CSAPISecret": "<api-key>",
    }
    with SolverFactory('gurobi', solver_io='python', manage_env=True, options=options) as opt:
        opt.solve(model)  # Solved on compute server
    # Compute server connection terminated
    

    If manage_env=False (the default) is set, then users will get the old behaviour, which uses the Gurobi default/global environment. There are some minor changes:

    • Calling .close(), or exiting the context, properly disposes of all models created by the solver:
    with SolverFactory('gurobi', solver_io='python') as opt:
        opt.solve(model)
    # Gurobi models created by `opt` are freed; the default/global Gurobi environment is still active
    
    • Calling .close_global() disposes of models created by the solver and disposes of the default/global Gurobi environment. This will free all Gurobi resources, assuming the user did not create any other models (e.g. via another GurobiDirect object with manage_env=False):
    opt = SolverFactory('gurobi', solver_io='python')
    try:
        opt.solve(model)
    finally:
        opt.close_global()
    # Gurobi models created by `opt` are freed, the default/global Gurobi environment is closed
    

    Finally, the available() call no longer stores errors globally and repeats them back if users retry the check. So users can do the following to queue requests if they are using a shared license (regardless of whether manage_env is set to True or False):

    with SolverFactory('gurobi', solver_io='python') as opt:
        while not opt.available(exception_flag=False):
            time.sleep(1)
        opt.solve(model)
    

    Legal Acknowledgement

    By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by simonbowly 0
  • Fixing some bugs in scaling transformation

    Fixes None.

    Summary/Motivation:

    In testing the Pyomo scaling transformation on some IDAES models, a couple of bugs were encountered that this PR aims to address.

    1. The scaling transformation tool has a check to see if there is a scaling suffix defined on the top-level block to be scaled. However, after the changes to suffix behavior proposed in #2641 and implemented for scaling in #2619, this is no longer required (the suffix-driven workflow is sketched after this list).
    2. When testing the scaling transformation on models with References, it was discovered that these were not being properly remapped by the rename_components function: only the Reference was being renamed, while the _data dict was left pointing to the old data objects (which had since been deleted).
    3. Further, the scaling factors were being applied to all components in the model, resulting in potential duplication of effort and confusion of scaling factors when References were present.
    4. The propagate_solution method checks that the model has exactly one active objective function; however, this is only required when calculating duals or reduced costs, and it precludes using the method on models with no objective function. There was also a bug in the code that raises an exception if there is not exactly one objective.
    5. I also found an unrelated edge case in calculate_variable_from_constraint where assuming the function was linear led to an OverflowError when evaluating the function, causing the method to fail.
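
    For orientation, the suffix-driven workflow these fixes target (a minimal sketch using the core.scale_model transformation):

    import pyomo.environ as pyo

    m = pyo.ConcreteModel()
    m.x = pyo.Var(initialize=1e6)
    m.c = pyo.Constraint(expr=m.x >= 5e5)
    m.obj = pyo.Objective(expr=(m.x - 1e6) ** 2)

    # scaling factors are supplied through a Suffix
    m.scaling_factor = pyo.Suffix(direction=pyo.Suffix.EXPORT)
    m.scaling_factor[m.x] = 1e-6
    m.scaling_factor[m.c] = 1e-6

    scaling = pyo.TransformationFactory('core.scale_model')
    scaled = scaling.create_using(m)
    # ... solve `scaled` with any NLP solver ...
    scaling.propagate_solution(scaled, m)  # map the solution back to m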

    Changes proposed in this PR:

    • Remove check for top-level scaling suffix in scaling transformation.
    • Update rename_components method to collect and remap References during renaming.
    • Add check to skip References when applying scaling factors to models.
    • Update propagate_solution method to only check for the number of active objective functions if a dual or reduced cost suffix is present.
    • Add some additional tests to cover the fixes.
    • Add a try/except to calculate_variable_from_constraint to catch OverflowErrors in the linear stage and move on to the non-linear stage.

    Legal Acknowledgement

    By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by andrewlee94 0
  • `Bunch.__delattr__` does not (necessarily) remove attribute from `Bunch` object

    Summary

    When attempting attribute deletion on a Bunch object (from pyomo.common.collections), such as through delattr or del, the attribute does not seem to have been removed from the underlying dict structure. Complete removal of the attribute seems to require calls to both Bunch.__delattr__ and Bunch.__delitem__ successively.

    Steps to reproduce the issue

    # example.py
    from pyomo.common.collections import Bunch
    
    
    def display_bunch_attr_val(bunch_obj, attr):
        # attribute value remains unchanged after delattr
        print("-" * 30)
        print("Attribute value check:")
        print("Getattr:", getattr(bunch_obj, attr, None))
        print("Getitem:", bunch_obj[attr])
    
    
    def display_bunch_attr_in_keys(bunch_obj):
        print("-" * 30)
        print("Bunch keys check:")
        print(bunch_obj.keys())  # remains unchanged after delattr
        print(f"Attribute in Bunch keys: {attr_name in bunch_obj.keys()}")
        print(
            "Attribute in Bunch.__dict__ keys: "
            f"{attr_name in bunch_obj.__dict__}"
        )
    
    
    bunch = Bunch()
    attr_name = "example"
    val = 300
    
    print(f"Setting attribute name {attr_name!r} to value {val}")
    setattr(bunch, attr_name, val)
    print(f"Bunch keys: {bunch.keys()}")
    
    print("-" * 30)
    print("Invoking delattr")
    delattr(bunch, attr_name)
    
    # attribute value remains unchanged after delattr
    display_bunch_attr_val(bunch, attr_name)
    
    # bunch keys check
    display_bunch_attr_in_keys(bunch)
    
    print("-" * 30)
    print("Invoking delitem")
    del bunch[attr_name]
    
    # keys and the attribute value
    display_bunch_attr_val(bunch, attr_name)
    
    # bunch keys check
    display_bunch_attr_in_keys(bunch)
    

    Error Message

    $ python example.py
    Setting attribute name 'example' to value 300
    Bunch keys: dict_keys(['example'])
    ------------------------------
    Invoking delattr
    ------------------------------
    Attribute value check:
    Getattr: 300
    Getitem: 300
    ------------------------------
    Bunch keys check:
    dict_keys(['example'])
    Attribute in Bunch keys: True
    Attribute in Bunch.__dict__ keys: False
    ------------------------------
    Invoking delitem
    ------------------------------
    Attribute value check:
    Getattr: None
    Getitem: None
    ------------------------------
    Bunch keys check:
    dict_keys([])
    Attribute in Bunch keys: False
    Attribute in Bunch.__dict__ keys: False
    

    Information on your system

    Pyomo version: 6.4.5dev0
    Python version: 3.9.13
    Operating system: Ubuntu 20.04
    How Pyomo was installed (PyPI, conda, source): source
    Solver (if applicable): N/A

    Additional information

    bug 
    opened by shermanjasonaf 0
  • Switch default NL writer to nlv2

    Fixes # .

    Summary/Motivation:

    This switches the default NL writer to the new "NLv2" writer.
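
    Selecting a writer version explicitly looks roughly like this (the format names 'nl_v1'/'nl_v2' are an assumption based on this PR's naming):

    import pyomo.environ as pyo

    m = pyo.ConcreteModel()
    m.x = pyo.Var(initialize=1.0)
    m.obj = pyo.Objective(expr=(m.x - 2) ** 2)

    # after this PR, format='nl' resolves to the v2 writer; the
    # explicit name below is an assumption
    m.write('model_v2.nl', format='nl_v2')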

    Changes proposed in this PR:

    • Change default NL writer

    Legal Acknowledgement

    By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by jsiirola 1
  • Address `AttributeError` raised when `Constraint` with `SumExpression` is declared after `replace_expressions`

    Summary

    In the event a Constraint expression contains a SumExpression obtained through the core.util.replace_expressions function, an AttributeError may be raised, as the args attribute of the sum expression is set to a tuple rather than a list.

    Steps to reproduce the issue

    import pyomo.environ as pyo
    from pyomo.core.expr.visitor import replace_expressions
    
    m = pyo.ConcreteModel()
    
    m.p = pyo.Param(range(3), initialize=1, mutable=True)
    
    m.x = pyo.Var()
    m.v = pyo.Var(range(3), initialize=1)
    
    # note: the lower bound is a SumExpression here
    m.c = pyo.Constraint(expr=2 + 3 * m.p[2] == m.x)
    
    lower_expr = replace_expressions(m.c.lower, {id(m.p[2]): m.v[2]})
    body_expr = replace_expressions(m.c.body, {id(m.p[2]): m.v[2]})
    
    # build constraint with v[2] substituted for p[2].
    # causes error, as the `args` attribute of a SumExpression
    # somewhere is a tuple (not list)
    m.c2 = pyo.Constraint(expr=lower_expr == body_expr)
    

    Error Message

    $
    ERROR: Rule failed when generating expression for Constraint c2 with index
        None: AttributeError: 'tuple' object has no attribute 'append'
    ERROR: Constructing component 'c2' from data=None failed: AttributeError:
        'tuple' object has no attribute 'append'
    Traceback (most recent call last):
      File "/home/jasherma/Documents/vim_example/pyomo_features_examples/test_err_constraint_add.py", line 34, in <module>
        m.c2 = pyo.Constraint(expr=lower_expr == body_expr)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/block.py", line 649, in __setattr__
        self.add_component(name, val)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/block.py", line 1219, in add_component
        val.construct(data)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/disable_methods.py", line 116, in construct
        return base.construct(self, data)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/constraint.py", line 763, in construct
        self._setitem_when_not_present(index, rule(block, index))
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/indexed_component.py", line 1005, in _setitem_when_not_present
        obj.set_value(value)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/constraint.py", line 922, in set_value
        return super(ScalarConstraint, self).set_value(expr)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/constraint.py", line 589, in set_value
        self._body = args[0] - args[1]
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/expr/numvalue.py", line 673, in __sub__
        return _generate_sum_expression(_sub,self,other)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/expr/numeric_expr.py", line 1335, in _generate_sum_expression
        return _self.add(-_other)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/expr/numeric_expr.py", line 642, in add
        self._args_.append(new_arg)
    AttributeError: 'tuple' object has no attribute 'append'
    

    Information on your system

    Pyomo version: 6.4.3dev0
    Python version: 3.9.13
    Operating system: Ubuntu 20.04
    How Pyomo was installed (PyPI, conda, source): source
    Solver (if applicable): N/A

    Additional information

    • This exception is also raised in the event the expression is added to a ConstraintList (such as through ConstraintList.add). The PyROS solver (contrib.pyros) adds constraints to subproblems of two-stage RO models in this way, so PyROS users may be affected.
    bug 
    opened by shermanjasonaf 0