Inverted steps 2 and 3 of the Solidago pipeline #2043
Conversation
amatissart
left a comment
Looks good to me, with minor comments.
I would also bump the package version from v0.3.1 to v0.4.0 in solidago/src/solidago/__version__.py, as the change in the pipeline arguments can be considered a breaking change.
```python
self,
trust_propagation: TrustPropagation = DefaultPipeline.trust_propagation,
```
We can make sure no client unexpectedly relies on the order of the arguments by requiring named arguments here:
```diff
 self,
+*,
 trust_propagation: TrustPropagation = DefaultPipeline.trust_propagation,
```
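For illustration, a bare `*` in a signature makes every following parameter keyword-only, so clients that relied on positional argument order fail loudly instead of silently receiving reordered values. A minimal sketch with hypothetical placeholder defaults (not the actual Solidago defaults):

```python
class Pipeline:
    def __init__(
        self,
        *,  # every argument after this must be passed by keyword
        trust_propagation="default_trust_propagation",  # placeholder default
        voting_rights="default_voting_rights",          # placeholder default
    ):
        self.trust_propagation = trust_propagation
        self.voting_rights = voting_rights

# A keyword call works as before:
p = Pipeline(trust_propagation="custom")

# A positional call now raises TypeError, surfacing clients that
# depended on the (changed) argument order:
try:
    Pipeline("custom")
except TypeError:
    print("positional arguments rejected")
```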
```diff
 logger.info(f"Pipeline 3. Computing voting rights with {str(self.voting_rights)}")
 # WARNING: `privacy` may contain (user, entity) even if user has expressed no judgement
 # about the entity. These users should not be given a voting right on the entity.
 # For now, irrelevant privacy values are excluded in `input.get_pipeline_kwargs()`
-voting_rights, entities = self.voting_rights(users, entities, vouches, privacy)
 start_step3 = timeit.default_timer()
 logger.info(f"Pipeline 2. Terminated in {np.round(start_step3 - start_step2, 2)} seconds")

 logger.info(f"Pipeline 3. Learning preferences with {str(self.preference_learning)}")
 user_models = self.preference_learning(judgments, users, entities, init_user_models)
+voting_rights, entities = self.voting_rights(users, entities, vouches, privacy, user_models)
```
This PR could be an opportunity to solve this warning for good: maybe we could specify that any `VotingRightsAssignment` implementation should ensure that no voting right is assigned to a (user, entity) pair that has no score in `user_models`?
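As a hypothetical sketch of that contract (names `assign_voting_rights`, `privacy_pairs`, and the dict-based `user_models` shape are illustrative, not the actual Solidago API), an implementation could simply skip any pair for which the user has no score:

```python
def assign_voting_rights(privacy_pairs, user_models):
    """Assign a voting right only where the user actually scored the entity.

    privacy_pairs: iterable of (user, entity) pairs, possibly including
                   pairs where the user expressed no judgement.
    user_models:   mapping user -> {entity: score}.
    """
    rights = {}
    for user, entity in privacy_pairs:
        scores = user_models.get(user, {})
        if entity in scores:  # no score in user_models -> no voting right
            rights[(user, entity)] = 1.0
    return rights

# (user, entity) pairs without a score are silently dropped:
user_models = {"alice": {"e1": 0.7}, "bob": {}}
pairs = [("alice", "e1"), ("alice", "e2"), ("bob", "e1")]
print(assign_voting_rights(pairs, user_models))  # {('alice', 'e1'): 1.0}
```

Making this filtering part of the `VotingRightsAssignment` contract would remove the need for the `WARNING` comment and the pre-filtering in `input.get_pipeline_kwargs()`.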
|
@lenhoanglnh Should we proceed with merging this PR? My comments can be addressed separately.
A step towards Liquid Democracy #2042
Description
Steps 2 (voting rights assignment) and 3 (individual preference learning) of Solidago were so far independent, and thus commutative.
Here, we invert them (which does not affect correctness), and we add the possibility for voting rights assignment to depend on the learned individual preferences.
This is important because a user may typically want to delegate their voting rights on an entity if the uncertainty on their score for this entity is too large.
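To make the motivation concrete, here is a toy illustration (not the actual Solidago logic; `voting_right`, `score_uncertainty`, and `max_uncertainty` are hypothetical names) of how a voting right could depend on the uncertainty of a learned score:

```python
def voting_right(score_uncertainty, max_uncertainty=1.0):
    """Keep the full voting right for confident scores; delegate it otherwise.

    Returns 1.0 when the user's score uncertainty is within the threshold,
    and 0.0 when it is too large, in which case the right could be
    delegated or redistributed.
    """
    return 1.0 if score_uncertainty <= max_uncertainty else 0.0

print(voting_right(0.3))  # 1.0 -> user keeps their voting right
print(voting_right(2.5))  # 0.0 -> right can be delegated
```

Since voting rights assignment now runs after preference learning, such a rule can read the learned uncertainties directly from `user_models`.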
Checklist
❤️ Thank you for your contribution!