[WIP] With References check for equivalent sets instead of identical Set objects #1243


Closed
2 changes: 1 addition & 1 deletion pyomo/core/base/reference.py
@@ -410,7 +410,7 @@ def _identify_wildcard_sets(iter_stack, index):
     if len(index[i]) != len(level):
         return None
     # if any subset differs
-    if any(index[i].get(j,None) is not _set for j,_set in iteritems(level)):
+    if any(index[i].get(j,None) != _set for j,_set in iteritems(level)):
Member
Can you comment on the need for this change? The new code will be slower (`is` is an O(1) operation, whereas `!=` is in general O(n)). Also, by not using `is`, something like the following:

m.b = Block([1,2])
m.I = Set(initialize=[1,2,3])
m.J = Set(initialize=[1,2,3])
m.b[1].x = Var(m.I)
m.b[2].x = Var(m.J)
s = m.b[:].x[:]

will identify wildcard sets for s as [m.b_index, m.I], even though m.b[2].x is indexed by m.J and not m.I. When we originally wrote this logic, the idea was to only return wildcard sets if the indexing sets were demonstrably "known".
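The distinction above can be illustrated with plain Python sets standing in for Pyomo Set objects (this is a minimal sketch, not Pyomo code): two sets built from the same values are equal but not identical, an identity check is a pointer comparison while equality walks the elements, and only the identity check distinguishes them.

```python
# Two sets built from the same values: equal (==) but not identical (is).
I = set([1, 2, 3])
J = set([1, 2, 3])

assert I == J        # element-wise equality holds, costs O(n)
assert I is not J    # but they are distinct objects; `is` is O(1)

# Mimicking the check in _identify_wildcard_sets: `is not` flags the
# distinct-but-equal set, while `!=` silently treats it as a match.
level = {0: I, 1: J}
index = {0: I, 1: I}
mismatch_by_identity = any(index.get(j) is not s for j, s in level.items())
mismatch_by_equality = any(index.get(j) != s for j, s in level.items())
assert mismatch_by_identity is True    # J is a different object than I
assert mismatch_by_equality is False   # but J compares equal to I
```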

Member Author

This fixes the toy model in issue #905, where a model containing a Block constructed with a rule like the one below had to apply the discretization transformation before the arc_expansion transformation.

from pyomo.environ import ConcreteModel, Block, Var, Reference
from pyomo.dae import ContinuousSet
from pyomo.network import Port

m = ConcreteModel()

m.s = ContinuousSet(initialize=[0,10])

def block_rule(b, i):
    b.v = Var(['a','b'], initialize=1)
    return

m.b1 = Block(m.s, rule=block_rule)
m.b2 = Block(m.s, rule=block_rule)

m.p1 = Port()
m.p2 = Port()

m.r1 = Reference(m.b1[:].v[:])

The problem is that b1[0].v and b1[10].v are not identified as having the same indexing set, because a new Set is implicitly created from the values of the Python list for each Var. This obscures the ContinuousSet index in the Reference component r1, so the indexed equality constraint eventually added by the arc_expansion transformation is not explicitly indexed by a ContinuousSet and is therefore not expanded by the discretization transformation.

I realize this change violates the original logic, which requires the user to be much more explicit when declaring their indexing sets. However, I also think the above implementation is pretty common, and most users would expect b1[0].v and b1[10].v to be identified as having the same indexing set (they shouldn't have to know that a Set object is being created for them implicitly).
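The failure mode can be sketched in plain Python (ImplicitSet and declare_var are hypothetical stand-ins, not Pyomo types): each declaration wraps the initializer list in a brand-new set object, so two Vars built from the same list get equal but non-identical index sets.

```python
class ImplicitSet:
    """Stand-in for the Set Pyomo implicitly creates from a list."""
    def __init__(self, values):
        self.values = tuple(values)
    def __eq__(self, other):
        return isinstance(other, ImplicitSet) and self.values == other.values
    def __hash__(self):
        return hash(self.values)

def declare_var(init):
    # Mimics Var(['a', 'b']): a fresh index set is built per declaration
    return ImplicitSet(init)

v0_index = declare_var(['a', 'b'])    # plays the role of b1[0].v's index
v10_index = declare_var(['a', 'b'])   # plays the role of b1[10].v's index

assert v0_index == v10_index       # same elements...
assert v0_index is not v10_index   # ...but distinct objects, so `is` fails
```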

It's possible that there are edge cases here where, if we were creating References for Params or Constraints (which aren't necessarily dense), checking `is not` vs. `!=` leads to more or less intuitive behavior. But for Vars, I think it makes sense to check whether all the elements of the sets are the same.

A couple other ideas for solving this problem of transformation ordering are:

  1. In the discretization transformation, check component indexing sets for the SetOf type and automatically try to expand the component if it is found (since it might contain an obscured ContinuousSet) - I'm worried this could lead to unexpected side-effects if the user adds elements to non-ContinuousSets

  2. Make References or SetOf aware of ContinuousSets - Seems weird to make core code aware of something in Pyomo.DAE

  3. Modify the wildcard identification functionality to preserve any indexing sets that are identified as the same even if it encounters ones that aren't the same at a different level - I'm not sure how hard this would be to implement
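Option 3 could be sketched roughly as follows (a hypothetical illustration in plain Python, not the actual Pyomo implementation): keep whichever per-level index sets are demonstrably the same, and mark only the mismatched levels as unknown instead of discarding the whole identification.

```python
def partial_identify(levels_a, levels_b):
    """Compare index sets level by level; keep matches, mark mismatches.

    levels_a / levels_b are lists of index-set objects, one per wildcard
    level. None in the result means "not demonstrably known".
    """
    return [a if a is b else None for a, b in zip(levels_a, levels_b)]

# Stand-ins for real Pyomo sets:
m_b_index = object()
m_I = object()
m_J = object()

# b[1].x is indexed by I, b[2].x by J: the block index still matches,
# so only the second level is marked unknown.
assert partial_identify([m_b_index, m_I], [m_b_index, m_J]) == [m_b_index, None]
```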

Member (@jsiirola, Jan 7, 2020)

Let me add another option:

  4. Modify the wildcard identification so that it handles SetOf differently (that is, use `is` for Set and `==` for SetOf).

That said, I think I like option 3. Looking at the wildcard identification, I suspect that we could support a partial identification without too much hassle. The challenge is that we would be potentially creating several pseudoset objects, and those objects would not be attached to the model. Given that I am in favor of moving toward not requiring all set objects to be attached to the model (see #45), I don't see too much of a problem going down that path. However, I think we can only handle "floating sets" correctly after we switch over to using the set rewrite (#326) by default.
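The hybrid rule in option 4 might look like this (class names here are hypothetical stand-ins, not Pyomo's actual types): explicitly declared sets must be the same object, while implicitly created SetOf wrappers may match by contents.

```python
class DeclaredSet:
    """Stand-in for an explicitly declared Set: identity semantics."""
    def __init__(self, values):
        self.values = frozenset(values)

class SetOf:
    """Stand-in for an implicitly created SetOf: value semantics."""
    def __init__(self, values):
        self.values = frozenset(values)
    def __eq__(self, other):
        return isinstance(other, SetOf) and self.values == other.values
    def __hash__(self):
        return hash(self.values)

def sets_match(a, b):
    if isinstance(a, SetOf) and isinstance(b, SetOf):
        return a == b   # implicit sets: compare by contents
    return a is b       # declared sets: require the very same object

I = DeclaredSet([1, 2, 3])
J = DeclaredSet([1, 2, 3])
assert sets_match(I, I)
assert not sets_match(I, J)    # distinct declared sets never match

s1, s2 = SetOf(['a', 'b']), SetOf(['a', 'b'])
assert sets_match(s1, s2)      # equal implicit sets do match
```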

         return None
     return index

2 changes: 1 addition & 1 deletion pyomo/core/tests/unit/test_reference.py
@@ -540,7 +540,7 @@ def b(b,i):
m.r = Reference(m.b[:].x[:])

self.assertIs(m.r.type(), Var)
-        self.assertIs(type(m.r.index_set()), SetOf)
+        self.assertIs(type(m.r.index_set()), _SetProduct)
self.assertEqual(len(m.r), 2*2)
self.assertEqual(m.r[1,3].lb, 1)
self.assertEqual(m.r[2,4].lb, 2)