optimise for memory for very large all by all NBLAST #40

@jefferis

Description

  • Use a pattern of small (e.g. 100 x 100) blocks that might take tens of seconds to a few minutes each to compute
  • this should work better than computing a whole row or column, which might span 20-50k neurons.
  • need to implement an x by y nblast function, rather than an all by all NBLAST, for each block (would the current NBLAST be OK?)
  • inputs could be a neuronlistfh, read in by each process. I suspect that read time will be trivial compared with search time so long as blocks take tens of seconds to compute. This might work well for memory.
  • ideally we would parallelise across those blocks, with progress reporting
  • if computing mean scores, we might want to do the forward and reverse scores at the same time, since they use the same sets of neurons
  • we might wish to fill a sparse matrix with the results, applying a score threshold
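The blocking scheme above could be sketched roughly as follows. This is a language-agnostic illustration in Python, not the actual nat.nblast implementation: `score_fn`, `make_blocks`, and `blocked_mean_nblast` are hypothetical names, and `score_fn(q, t)` stands in for a real x-by-y NBLAST call. It walks only the upper-triangular block pairs (mean scores are symmetric), computes forward and reverse scores together for each pair, and keeps results in a sparse dict keyed by `(i, j)`, dropping entries below a threshold.

```python
from itertools import combinations_with_replacement

def make_blocks(n, block_size):
    """Partition range(n) into contiguous index blocks of at most block_size."""
    return [range(start, min(start + block_size, n))
            for start in range(0, n, block_size)]

def blocked_mean_nblast(neurons, score_fn, block_size=100, threshold=None):
    """All-by-all mean scores computed block by block (sketch).

    score_fn(query, target) is a placeholder for an x-by-y NBLAST call.
    Returns a sparse {(i, j): mean_score} dict; entries below `threshold`
    are omitted. In practice each block pair would be one parallel task,
    reading its two neuron subsets from a neuronlistfh on disk.
    """
    blocks = make_blocks(len(neurons), block_size)
    scores = {}
    # Only upper-triangular block pairs: the mean score matrix is symmetric.
    for rb, cb in combinations_with_replacement(blocks, 2):
        for i in rb:
            for j in cb:
                if j < i:
                    continue  # within a diagonal block, skip the lower triangle
                # Forward and reverse scores use the same two neurons,
                # so compute both while the data is in memory.
                fwd = score_fn(neurons[i], neurons[j])
                rev = score_fn(neurons[j], neurons[i])
                mean = 0.5 * (fwd + rev)
                if threshold is None or mean >= threshold:
                    scores[(i, j)] = mean
                    scores[(j, i)] = mean
    return scores
```

Each `(rb, cb)` block pair is independent, so the outer loop could be replaced by a parallel map with a progress callback; the dict plays the role of a sparse matrix accumulator.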
