Graphql refactor + rolling/expanding #2090
Conversation
# Conflicts:
#	raphtory-graphql/src/model/graph/graph.rs
#	raphtory-graphql/src/model/graph/nodes.rs
#	raphtory/src/core/utils/errors.rs
# Conflicts:
#	python/tests/test_base_install/test_filters/semantics/test_edge_property_filter_semantics.py
#	python/tests/test_base_install/test_filters/semantics/test_node_property_filter_semantics.py
#	python/tests/test_base_install/test_filters/test_edge_composite_filter.py
#	python/tests/test_base_install/test_filters/test_edge_filter.py
#	python/tests/test_base_install/test_filters/test_edge_property_filter.py
#	python/tests/test_base_install/test_filters/test_node_composite_filter.py
#	python/tests/test_base_install/test_filters/test_node_filter.py
#	python/tests/test_base_install/test_filters/test_node_property_filter.py
…as bigger than the window
Some things to tidy up
python/tests/test_base_install/test_graphql/test_rolling_expanding.py
Some things we can fix later; this doesn't really introduce new issues.
```rust
self_clone.graph.write_updates()?;
let self_clone_2 = self.clone();

let nodes: Vec<Result<NodeView<GraphWithVectors>, GraphError>> =
```
We should just change the type here to a HashSet instead to fix the FIXME below :)
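For reference, a minimal sketch of what switching the collection type buys us (illustrative names, not the real raphtory code): collecting into a `HashSet` deduplicates on insert, so no manual dedup pass is needed afterwards.

```rust
use std::collections::HashSet;

// Hypothetical update IDs standing in for the node updates in the PR.
fn dedup_updates(updates: &[&'static str]) -> HashSet<&'static str> {
    // A HashSet discards duplicates on insert, so no manual dedup pass is needed.
    updates.iter().copied().collect()
}

fn main() {
    let updates = ["a", "b", "a", "c", "b"];
    let unique = dedup_updates(&updates);
    assert_eq!(unique.len(), 3); // duplicates "a" and "b" collapsed
    println!("unique updates: {}", unique.len());
}
```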
Also, should we swap the Result and Vec to get early failure? Right now it will run through all the updates even if there are failures and add everything that works (maybe that is what we want?), but then the embedding part will fail early on the first broken update, which doesn't make sense if we did actually want the current behaviour. We probably actually want to return the list of all failures in this case.
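The three options being discussed can be sketched like this (using `u32`/`String` as stand-ins for `NodeView<GraphWithVectors>`/`GraphError`; the function names are illustrative):

```rust
// Sketch of the error-handling strategies discussed above.
fn apply_update(x: u32) -> Result<u32, String> {
    if x % 2 == 0 { Ok(x) } else { Err(format!("update {x} failed")) }
}

// Current shape: Vec<Result<..>> runs every update and keeps the successes.
fn collect_all(input: &[u32]) -> Vec<Result<u32, String>> {
    input.iter().map(|&x| apply_update(x)).collect()
}

// Swapped shape: Result<Vec<..>> stops at the first failure.
fn fail_fast(input: &[u32]) -> Result<Vec<u32>, String> {
    input.iter().map(|&x| apply_update(x)).collect()
}

// Third option: keep all successes AND report the full list of failures.
fn partitioned(input: &[u32]) -> (Vec<u32>, Vec<String>) {
    let (ok, err): (Vec<_>, Vec<_>) = input
        .iter()
        .map(|&x| apply_update(x))
        .partition(Result::is_ok);
    (
        ok.into_iter().map(Result::unwrap).collect(),
        err.into_iter().map(Result::unwrap_err).collect(),
    )
}

fn main() {
    let input = [2, 3, 4];
    assert_eq!(collect_all(&input).len(), 3); // every update attempted
    assert!(fail_fast(&input).is_err()); // stops at the first failure (3)
    let (ok, err) = partitioned(&input);
    assert_eq!((ok.len(), err.len()), (2, 1)); // successes kept, failures reported
    println!("ok");
}
```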
```rust
impl<'graph, T: TimeOps<'graph> + Clone + 'graph> ExactSizeIterator for WindowSet<'graph, T> {
    // unfortunately, because Interval can change size, there is no nice divide option
    fn len(&self) -> usize {
        let mut cursor = self.cursor;
        let mut count = 0;
        while cursor < self.end + self.step {
            let window_start = self.window.map(|w| cursor - w);
            if let Some(start) = window_start {
                if start >= self.end {
                    break;
                }
            }
            count += 1;
            cursor = cursor + self.step;
        }
        count
    }
}
```
If we can't do better, we shouldn't implement this, as it is no better than count on the iterator. But I think we could do better (the months would need some special handling). At least implement the fast option when the interval is simple?
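For the simple case (a fixed-size step, no calendar months), the loop above has a closed form: the loop stops either at `end + step` or, when a window `w` is set, as soon as `cursor - w >= end`, so the count is just a ceiling division up to whichever bound is tighter. A standalone sketch with `i64` standing in for simple intervals (names are illustrative, not the real `WindowSet` API), cross-checked against the original loop:

```rust
// Closed-form len() for fixed-size steps: count cursor values in
// [cursor, bound) where bound is the tighter of end+step and end+window.
fn window_set_len_fast(cursor: i64, end: i64, step: i64, window: Option<i64>) -> usize {
    let bound = match window {
        Some(w) => (end + step).min(end + w),
        None => end + step,
    };
    if cursor >= bound {
        0
    } else {
        // ceiling division: number of steps of size `step` in [cursor, bound)
        ((bound - cursor + step - 1) / step) as usize
    }
}

// The original loop, kept here only to cross-check the closed form.
fn window_set_len_loop(mut cursor: i64, end: i64, step: i64, window: Option<i64>) -> usize {
    let mut count = 0;
    while cursor < end + step {
        if let Some(w) = window {
            if cursor - w >= end {
                break;
            }
        }
        count += 1;
        cursor += step;
    }
    count
}

fn main() {
    // Exhaustively compare both versions on a small grid of parameters.
    for cursor in -5..10 {
        for step in 1..5 {
            for end in -5..10 {
                for w in [None, Some(1), Some(3), Some(7)] {
                    assert_eq!(
                        window_set_len_fast(cursor, end, step, w),
                        window_set_len_loop(cursor, end, step, w),
                        "cursor={cursor} end={end} step={step} w={w:?}"
                    );
                }
            }
        }
    }
    println!("closed form matches the loop");
}
```

This only covers intervals with a constant step; month-sized intervals would still need the loop (or special handling), as noted above.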
What changes were proposed in this pull request?
- Removed functions/API changes
- New functions
- Refactor
- Bug fixes
- Testing
Why are the changes needed?
Improvements in the UI timeline; also useful for GraphQL users.
Multiple queries were being blocked because queries were being run on the main thread.
Are there any further changes required?
The spawn-blocking approach is probably more of a patch than a final solution, but we need to get some benchmarks going first to know what works best.
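The idea behind the spawn-blocking patch can be sketched with `std::thread` (the real code presumably uses the async runtime's blocking-task facility; all names here are illustrative): move the heavy query off the thread that services other requests, so that thread stays responsive while the query runs.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Stand-in for an expensive GraphQL query (illustrative, not the real API).
fn run_heavy_query(id: u32) -> String {
    thread::sleep(Duration::from_millis(50));
    format!("result for query {id}")
}

fn main() {
    // Offload the blocking query to a worker thread, mirroring what a
    // spawn_blocking call does on an async runtime: the main thread keeps
    // serving other requests instead of stalling behind the query.
    let (tx, rx) = mpsc::channel();
    let worker = thread::spawn(move || {
        tx.send(run_heavy_query(1)).expect("receiver alive");
    });

    let mut other_requests_served = 0;
    let result = loop {
        match rx.try_recv() {
            Ok(result) => break result,
            Err(mpsc::TryRecvError::Empty) => {
                // The main thread is free to handle other work meanwhile.
                other_requests_served += 1;
                thread::sleep(Duration::from_millis(5));
            }
            Err(mpsc::TryRecvError::Disconnected) => panic!("worker died"),
        }
    };
    worker.join().expect("worker finished");
    assert_eq!(result, "result for query 1");
    assert!(other_requests_served > 0);
    println!("served {other_requests_served} other requests while the query ran");
}
```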