This repo provides a pipeline to generate high-quality tetrahedral meshes of brain tissue at the cellular scale, suitable for numerical simulations.
- high quality meshes of dense reconstructions of the neuropil
- both extracellular and intracellular space included
- automated pipeline from segmentation to mesh
- basic image processing steps included, e.g. to account for missing ECS in chemically fixed tissue
EMImesh uses OpenCL to run image processing steps on your GPU (if available) and therefore requires an OpenCL device driver. See [here](https://documen.tician.de/pyopencl/misc.html) for details on installing OpenCL. If you are on Ubuntu, `sudo apt-get install pocl` is an easy way of installing a CPU- and GPU-capable OpenCL implementation.
Further, `xvfb` is required to create visualizations of the meshes on headless machines; install it with `sudo apt-get install xvfb`. All other dependencies are handled and installed via Snakemake.
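To quickly check whether an OpenCL device is visible to pyopencl, a minimal diagnostic like the following can be used (this snippet is only an illustration and is not part of the pipeline):

```python
import pyopencl as cl

# List all OpenCL platforms and their devices; an empty platform list means
# no usable OpenCL implementation (e.g. pocl) is installed.
for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name)
```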
First, install Snakemake (using e.g. mamba/conda):
```bash
mamba create -c conda-forge -c bioconda -n snakemake snakemake snakemake-storage-plugin-http snakemake-executor-plugin-cluster-generic
```
and activate it:
```bash
mamba activate snakemake
```
Then run Snakemake on your configuration file, e.g.
```bash
snakemake --configfile configfiles/cortical_mm3.yml --use-conda --cores 8
```
The pipeline performs the following steps:
- choose your dataset, position and size, e.g. using neuroglancer (example: cortical MM^3 dataset at position 225182-107314-22000)
- download the segmented image data (see the cloud-volume sketch after this list)
- preprocess the image for meshing
- choose the N largest cells
- expand, apply morphological smoothing and shrinkage/erosion to each cell
- extract the surfaces of each cell
- generate a volumetric mesh of the extracted surface meshes and the extracellular space between the cells with fTetWild
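For illustration, the download step corresponds roughly to a cloud-volume cutout like the one sketched below; the cloudpath, mip level, and cutout size are placeholders, not the exact values used by the pipeline.

```python
import numpy as np
from cloudvolume import CloudVolume

# Hypothetical segmentation source -- replace with your dataset's cloudpath.
vol = CloudVolume(
    "precomputed://gs://example-bucket/segmentation",  # placeholder
    mip=0,
    use_https=True,
)

# Cut out a cube of segmented image data around a chosen position.
x, y, z = 225182, 107314, 22000
size = 256
cutout = vol[x:x + size, y:y + size, z:z + size]
labels = np.asarray(cutout).squeeze()
print(labels.shape, np.unique(labels).size, "cell ids")
```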
The workflow is based on Snakemake. To generate meshes, install Snakemake (e.g. `conda install -c bioconda snakemake`), modify the `config.yml` file in this repo and run `snakemake --cores all --use-conda`. That's it! Snakemake will install all required dependencies (specified in `workflow/envs/environment.yml`) and orchestrate the jobs. It also supports schedulers on HPC systems such as Slurm.
The output consists of the following directories:
- `raw`: the downloaded segmentation as is, in `.vtk` format, suitable for e.g. ParaView
- `processed`: the processed image in `.vtk` format
- `surfaces`: the surfaces of the extracted cells in `.ply` format, again suitable for visualization with ParaView or for use in other meshing software
- `meshes`: the generated volumetric meshes in `.xdmf` format, containing labels for the extracellular space (label 1) and increasing integer values (2, ..., N) for all cells. There is currently no mapping to the cell ids of the segmentation. The file `facets.xdmf` contains facet markers, where label l corresponds to the boundary between the ECS and cell l. The outer boundaries are marked as `l + offset`, where `offset` is the next higher power of ten of the number of cells (`offset = int(10 ** np.ceil(np.log10(N_cells)))`).
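To make the facet labeling concrete, here is a small sketch of how the offset is computed (the cell count is an arbitrary example value):

```python
import numpy as np

# Example: with 11 labeled cells, the next higher power of ten is 100.
N_cells = 11
offset = int(10 ** np.ceil(np.log10(N_cells)))  # -> 100

# Facet label 5 then marks the ECS-cell interface of cell 5, while
# label 5 + offset = 105 marks that cell's part of the outer boundary.
print(offset, 5 + offset)
```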
The meshes are ready for use with FEniCS:

```python
from fenics import *
import numpy as np

# Read the mesh and the subdomain labels (1 = ECS, 2, ..., N = cells).
mesh = Mesh()
infile = XDMFFile("mesh.xdmf")
infile.read(mesh)
gdim = mesh.geometric_dimension()
labels = MeshFunction("size_t", mesh, gdim)
infile.read(labels, "label")
infile.close()

# get all local labels
np.unique(labels.array())
# array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11], dtype=uint64)

# Read the facet markers (ECS-cell interfaces and outer boundaries).
infile = XDMFFile("facets.xdmf")
infile.read(mesh)
gdim = mesh.geometric_dimension()
boundary_marker = MeshFunction("size_t", mesh, gdim - 1)
infile.read(boundary_marker, "boundaries")
infile.close()

# get all local facet labels
np.unique(boundary_marker.array())
# array([  0,   1,   2,   3,   4,   5,   6,   7,   9,  10,  11, 101, 102,
#        103, 104, 105, 106, 107, 108, 109, 110, 111], dtype=uint64)
```
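The markers can then be used to set up subdomain and interface integration measures. A minimal sketch, assuming the mesh and markers loaded above:

```python
# Integration measures over the labeled subdomains and facets.
dx = Measure("dx", domain=mesh, subdomain_data=labels)
dS = Measure("dS", domain=mesh, subdomain_data=boundary_marker)
ds = Measure("ds", domain=mesh, subdomain_data=boundary_marker)

# e.g. the volume of the extracellular space (label 1) ...
ecs_volume = assemble(Constant(1.0) * dx(1))
# ... and the area of the interface between the ECS and cell 2.
interface_area = assemble(avg(Constant(1.0)) * dS(2))
print(ecs_volume, interface_area)
```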
Known limitations:
- currently only supports data accessible via cloud-volume
- assumes isotropic data
- does not handle intersecting cells
- agnostic to cell types - all cells are handled equally