Add LeastCost and RandomWalk MovementModes #34


Merged · 106 commits · Apr 4, 2025

Conversation

rafaqz
Member

@rafaqz rafaqz commented Mar 11, 2025

This PR adds new algorithms for least cost and random walk movements, which should be equivalent to RSP for theta values of ~Inf and ~0 respectively (if those could actually be run numerically).

I'm also using the opportunity (new algorithms without tests) to rethink from scratch the type structure of all ConScape algorithms, and how we organise single-target compute for fast, low-memory vector solves.

First, it adds a MovementMode abstract type with LeastCost, RandomShortestPath and RandomWalk as concrete implementations.
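A minimal sketch of what that hierarchy could look like (the type names are from the PR description; the theta field on RandomShortestPath is an illustrative assumption, not necessarily the PR's actual layout):

```julia
# Abstract supertype for all movement algorithms
abstract type MovementMode end

# Concrete movement modes; only RandomShortestPath carries a parameter
struct LeastCost <: MovementMode end
struct RandomWalk <: MovementMode end
struct RandomShortestPath{T} <: MovementMode
    theta::T  # inverse-temperature parameter of the RSP family
end
```

Dispatching on these singleton-like types costs nothing at runtime, since the mode is known at compile time.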

It then splits betweenness measures into two type hierarchies: one for edge/node betweenness as types of GraphMeasure, and a BetweennessMeasures hierarchy for Q/K/QK/Unweighted.

These objects control dispatch in compute and compute_target methods.

KullbackLeiblerDivergence is moved to connectivity_measures as it's more like them... but we add a lower abstract type SourceTargetMeasures to capture them all under the same umbrella.
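A rough sketch of how those two hierarchies might fit together (EdgeBetweenness/NodeBetweenness and the *Weighted struct names are illustrative assumptions; only the abstract type names come from the description above):

```julia
abstract type GraphMeasure end
# Lower abstract type grouping the source/target-style measures
abstract type SourceTargetMeasures <: GraphMeasure end

# Edge/node betweenness as GraphMeasure types, parameterised on a weighting
struct EdgeBetweenness{W} <: GraphMeasure
    weighting::W
end
struct NodeBetweenness{W} <: GraphMeasure
    weighting::W
end

# Separate hierarchy for the Q/K/QK/Unweighted variants
abstract type BetweennessMeasures end
struct QWeighted  <: BetweennessMeasures end
struct KWeighted  <: BetweennessMeasures end
struct QKWeighted <: BetweennessMeasures end
struct Unweighted <: BetweennessMeasures end
```

Keeping the weighting as a separate hierarchy avoids a combinatorial explosion of concrete betweenness types.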

@vboussange it would be great to hear what you think of the structure (a lot of the algorithm code may be a bit broken in practice, I haven't run any of this).

Edit: also replaced GridRSP with the more generic GridPrecalculations abstract type and concrete LeastCostPrecalculations, RandomWalkPrecalculations and RandomShortestPathPrecalculations, to generalize across all of the movement types.
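A sketch of that replacement, with a hypothetical `compute` method to show how the precalculation type could select a solver (the dispatch signature is an assumption; only the type names are from the PR):

```julia
# Abstract supertype replacing GridRSP, generalising across movement modes
abstract type GridPrecalculations end

struct LeastCostPrecalculations  <: GridPrecalculations end
struct RandomWalkPrecalculations <: GridPrecalculations end
struct RandomShortestPathPrecalculations <: GridPrecalculations end

# Hypothetical dispatch: the concrete precalculation type picks the algorithm,
# with a fallback on the abstract supertype
compute(::GridPrecalculations)      = "generic fallback"
compute(::LeastCostPrecalculations) = "cost-distance solve"
```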

@rafaqz
Member Author

rafaqz commented Mar 14, 2025

Thanks for the review! Yes the hierarchy could be better.

But I'm afraid it needs multiple inheritance, as e.g. the betweennesses are mostly similar but have different outputs.

Currently, multiple inheritance is implemented with traits like returntrait, but these can be better organised and renamed.

@rafaqz
Member Author

rafaqz commented Mar 14, 2025

And yes, it's all per-target now, so LinearSolvers will just work. Another upside is that the memory footprint is much smaller, with the fundamental matrix reduced to a single column. So running windowed, we can thread at the per-window level, staying well inside the ~3 GB/core ratio of our compute clusters.
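The per-target idea in miniature: instead of materialising the full fundamental matrix Z = (I - W)^-1, solve one sparse linear system per target column. A toy sketch (W here is a stand-in sub-stochastic transition matrix, not ConScape's actual construction):

```julia
using LinearAlgebra, SparseArrays

# Toy sub-stochastic transition matrix: random sparsity, rows scaled
# so row sums stay below 1 and (I - W) is invertible
n = 100
W = sprand(n, n, 0.05)
W = 0.9 .* W ./ max.(sum(W, dims=2), 1e-12)
A = I - W

# Full fundamental matrix needs O(n^2) memory:
#   Z = inv(Matrix(A))
# Per-target: one length-n column via a single sparse solve
target = 7
e = zeros(n); e[target] = 1.0
z_col = A \ e    # column `target` of the fundamental matrix
```

Memory per solve scales with the number of nodes rather than its square, which is what makes per-window threading viable.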

One downside is that EigMax apparently needs the whole graph, and has to run at the outer level rather than in the per-target loop. So it still may not run at scale due to memory limitations.

@vboussange
Collaborator

That's awesome. Do you plan to implement GPU support?

@rafaqz
Member Author

rafaqz commented Mar 15, 2025

I think we can get CUDA sparse solves via LinearSolve.jl? But probably lots of other things will break.

After seeing the high ratio of CPUs to GPUs in large clusters, it hasn't been something I've focussed on.

Also with the per-target approach our arrays are not that big, so possibly it won't be worth the effort of launching GPU kernels or copying data back and forth. But I would like to try at some stage!

@rafaqz rafaqz changed the base branch from alg_efficiency to dev April 4, 2025 14:54
@rafaqz rafaqz merged commit 833e1ab into dev Apr 4, 2025
@rafaqz rafaqz deleted the new_movements branch April 4, 2025 14:54