Add support for hybrid model #743
Conversation
(btw, this scratches the multiple-connectome itch as well, since multiple projections can be configured with different connectomes, and will be summed into the same cvar)
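A minimal sketch of that idea (the names here are hypothetical, not from the PR): two projections, each carrying its own connectome, accumulate into the same coupling variable.

```python
import numpy as np

# Hypothetical sketch: two projections with different connectomes
# (weight matrices) both sum onto a single coupling variable (cvar).
rng = np.random.default_rng(0)
n = 4
W1 = rng.random((n, n))      # connectome of projection 1
W2 = rng.random((n, n))      # connectome of projection 2
x = rng.random(n)            # source state variable at each node

cvar = W1 @ x + W2 @ x       # both projections summed into one cvar
# summing projections is equivalent to summing the connectomes
assert np.allclose(cvar, (W1 + W2) @ x)
```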
Hey there, cool stuff, sorry for the latency. I really like the direction of generalizing the simulator in terms of heterogeneity of the system, and I agree with the arrays and traits as basic building blocks. If I understand this correctly, the system is currently defined as a single network (connectivity) with global IDs over nodes, and a set of non-overlapping masks assigning nodes to the models which govern the local dynamics. The state of the system is then a tuple of arrays of states for the individual subnetworks. I think that global indexing is a necessary choice on the level of the projections, but I don't see a reason for the individual subnetworks to make use of the mask. I'd tend to separate the intra- and inter-subnetwork connectivity ("domain" might be an alternative term to consider), as I can imagine reuse of defined subnetworks across applications (e.g. always the same model for a certain subcortical structure with a given internal connectivity) while swapping out the rest (e.g. changing the model and connectivity for the cortex). What the networkset should then do to get the total coupling is:
I'd happily help work through the surprises along the way.
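One way to read the separation suggested above, sketched with made-up names: each subnetwork keeps its own internal connectivity, the inter-subnetwork projections live separately, and the total coupling per subnetwork is the sum of both contributions.

```python
import numpy as np

# Hypothetical sketch: intra-subnetwork connectivity (W_aa, W_bb) kept
# separate from inter-subnetwork projections (W_ab, W_ba); the networkset
# assembles the total coupling per subnetwork by summing the two.
rng = np.random.default_rng(0)
n_a, n_b = 3, 2
W_aa = rng.random((n_a, n_a))    # internal connectivity of subnetwork A
W_bb = rng.random((n_b, n_b))    # internal connectivity of subnetwork B
W_ab = rng.random((n_a, n_b))    # projection from B into A
W_ba = rng.random((n_b, n_a))    # projection from A into B
x_a = rng.random(n_a)            # state of subnetwork A
x_b = rng.random(n_b)            # state of subnetwork B

# total coupling = recurrent input + input projected from the other domain
c_a = W_aa @ x_a + W_ab @ x_b
c_b = W_bb @ x_b + W_ba @ x_a
assert c_a.shape == (n_a,) and c_b.shape == (n_b,)
```

Reuse then amounts to keeping (W_aa, model A) fixed while swapping out the other domain and the projections.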
This is another take on hybrid models, where multiple models are used at once; it is more array-oriented than #739 and closer to TVB style with traits etc. It introduces the following elements to the TVB datatypes (naming suggestions welcome)
From there, it turns out that flexible state shapes create surprises in many places where it's assumed that the state variables are homogeneous across nodes. For instance, the integrator noise is set per state variable, but different models need these set to different values.
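To make the surprise concrete, here's a hypothetical sketch (names are mine, not the PR's): with heterogeneous models, the noise amplitude can no longer be one flat per-state-variable vector, so it is stored per subnetwork, matching each model's state shape.

```python
import numpy as np

# Hypothetical sketch: per-subnetwork noise amplitudes for two models
# with different numbers of state variables.
rng = np.random.default_rng(0)
n_a, n_b = 3, 2                            # nodes per subnetwork
sigma = (np.array([0.01, 0.1]),            # model A: 2 state variables
         np.array([0.0, 0.05, 0.05]))      # model B: 3 state variables

# sample noise with each subnetwork's own (svar, nodes) shape
noise = tuple(s[:, None] * rng.standard_normal((len(s), n))
              for s, n in zip(sigma, (n_a, n_b)))
assert noise[0].shape == (2, n_a) and noise[1].shape == (3, n_b)
```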
Other major nuances are related to monitors, but I think some straightforward compromises are available: "debug" monitors like raw or temporal average will be applied per model to produce a per-subnetwork trace, while more "neuroimaging"-like monitors will choose a particular state variable and apply the forward model (gain matrix, BOLD equations) to it.
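A toy sketch of the two monitor styles under those compromises (all names hypothetical): the debug monitor just copies per-subnetwork states, while the neuroimaging-style monitor gathers one state variable across all nodes and applies a gain matrix.

```python
import numpy as np

# Hypothetical sketch: per-model "debug" trace vs. a gain-matrix forward
# model applied to one chosen state variable across all nodes.
rng = np.random.default_rng(0)
state = (rng.random((2, 3)), rng.random((3, 2)))   # (svar, nodes) per model

raw = tuple(s.copy() for s in state)               # per-subnetwork raw trace

gain = rng.random((8, 5))                          # 8 sensors x 5 total nodes
v = np.concatenate([state[0][0], state[1][0]])     # chosen svar, all nodes
eeg = gain @ v                                     # forward-modelled signal
assert eeg.shape == (8,)
```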
Stimulation is another open question, but I wonder whether, departing from the monolithic style, a stimulus couldn't just be a "subnetwork" with a projection to the nodes being stimulated? Ideas welcome.
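The stimulus-as-subnetwork idea might look something like this (purely illustrative): the stimulus is a one-node "model" whose output is routed to the stimulated nodes by an ordinary projection matrix.

```python
import numpy as np

# Hypothetical sketch: a stimulus as a 1-node subnetwork, projected onto
# the stimulated nodes like any other inter-subnetwork coupling.
n_nodes, t = 5, 0.3
stim_state = np.array([np.sin(2 * np.pi * t)])   # 1-node stimulus "model"
P = np.zeros((n_nodes, 1))
P[[1, 3], 0] = 1.0                               # stimulate nodes 1 and 3

drive = P @ stim_state                           # added to those nodes' input
assert drive[0] == 0.0 and drive[1] != 0.0
```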
Performance is terrible w/ ~5-10k iter/s on the default connectome w/ two models, but this is expected w/ the naive implementation and will improve dramatically after massaging for jax or a C++ impl.
@i-Zaak @ReyJ94 feedback welcome!
implementation status