
feat: Auto multichannel setup for image layers #770


Open

wants to merge 32 commits into master

Conversation

seankmartin
Contributor

This introduces a new behaviour that leverages the new tool palettes to provide a more automated setup for multi-channel image datasets (and, optionally, for single-channel image datasets too).

Entry points

The first entry point to this is when making a new layer:
(screenshot)

In this case, the user can pick whether to use the default image layer setup or the multi-channel setup, even if there is only one channel. The option is offered because the setup does several things: it sets a different shader, creates a tool palette, sets default contrast limits, and so on.

The other entry point is when making an auto layer. In this case the multi-channel setup is only applied if the data has more than one channel.

Setup behaviour

The following happens:

  1. Each channel is split into a separate layer. Only the first four of these are shown; the rest are archived. If OMERO metadata is present, the channels it flags as active can instead be used to decide which layers to render.
  2. Layers are named LAYERNAME cCHANNELDIM1_CHANNELDIM2... unless the OMERO label metadata is present, in which case that label is used instead.
  3. The color and contrast of these layers are set automatically unless OMERO metadata provides them. The OMERO metadata is first checked for plausibility; if it does not look well set up, the automatic settings are used instead, otherwise the metadata informs them.
  4. A tool palette with all the shader controls is shown on the left-hand side, unless a shader control palette is already loaded.
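The naming rule in step 2 can be sketched as follows. This is a minimal illustration, not the PR's actual code; `ChannelInfo` and `channelLayerName` are hypothetical names:

```typescript
// Hypothetical sketch of the per-channel layer naming described above:
// use the OMERO label when available, otherwise "LAYERNAME cI_J...".
interface ChannelInfo {
  coords: number[];     // index along each channel dimension
  omeroLabel?: string;  // optional label from OMERO metadata
}

function channelLayerName(layerName: string, channel: ChannelInfo): string {
  if (channel.omeroLabel) {
    return channel.omeroLabel;
  }
  return `${layerName} c${channel.coords.join("_")}`;
}
```

For example, a layer named `em` with a single channel dimension at index 0 would yield `em c0`, while an OMERO label such as `Cam1-T1` would be used verbatim.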

Example

Example auto-mode setup (dimensions manually shuffled from t z y x to x y z t after setup). That shuffle could be done as part of the setup if it would be helpful.

(screenshot)

@seankmartin seankmartin changed the title feat: auto multichannel setup for image layers feat: Auto multichannel setup for image layers Apr 23, 2025
@fcollman
Contributor

fcollman commented Apr 25, 2025

I tried this with the first zarr from the institute and got this error:
s3://allen-genetic-tools/epifluorescence/1383646325/ome_zarr_conversion/1383646325.zarr|zarr2:

Error parsing "name" property: Expected string, but received: undefined.

I'm not sure if this is because it's searching for something that needs to exist in the spec and this file doesn't have it, or it's optional and Neuroglancer should handle it, or whether it's a full-on bug.

What I can say is that Neuroglancer handled this zarr before this PR and now does not.

@seankmartin
Contributor Author

seankmartin commented Apr 25, 2025

Thanks for catching this one! I misunderstood the OMERO spec and thought that name was required; fixing that now.

@seankmartin
Contributor Author

seankmartin commented Apr 25, 2025

Not sure if I've correctly understood the window min/max and start/end. Is this the intended setup for the blue channel?

The metadata has the end outside the range of the max. I guess we need a verification step that clamps the start/end within the min/max.

Or perhaps something is going wrong when reading the metadata.
(screenshot)
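The verification step suggested above could look something like this. A minimal sketch, assuming the window fields follow the OMERO metadata shape shown later in this thread; `clampWindow` is a hypothetical helper:

```typescript
// Hypothetical verification step: clamp the OMERO window start/end
// into the [min, max] range so the contrast limits stay valid.
interface OmeroWindow {
  min: number;
  max: number;
  start: number;
  end: number;
}

function clampWindow(w: OmeroWindow): OmeroWindow {
  const clamp = (v: number) => Math.min(Math.max(v, w.min), w.max);
  return { ...w, start: clamp(w.start), end: clamp(w.end) };
}
```

A window whose start/end already lie inside min/max passes through unchanged; out-of-range values are pulled back to the nearest bound.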

@fcollman
Contributor

fcollman commented May 2, 2025

Some feedback: in multichannel setup, the opacity of layers should be 1.0 by default, with additive blending mode by default.
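Expressed as a config sketch, the suggested defaults would be the following. The names here are illustrative, not Neuroglancer's actual option keys:

```typescript
// Hypothetical defaults reflecting the feedback above; field names are
// illustrative, not Neuroglancer's actual layer option keys.
interface MultichannelDefaults {
  opacity: number;
  blendMode: "default" | "additive";
}

const multichannelDefaults: MultichannelDefaults = {
  opacity: 1.0,
  blendMode: "additive",
};
```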

@seankmartin seankmartin marked this pull request as ready for review May 5, 2025 16:58
@krokicki

krokicki commented Jun 4, 2025

This is fantastic, @seankmartin! (and thanks @fcollman for making me aware of this effort)

I tried it out on some of our data converted with Fiji. It's very intuitive to use and should address #541.

There is another data set that we converted with bioformats2raw, where I ran into a couple of issues. You can load it from here:

https://janelia-flylight-imagery-dev.s3.amazonaws.com/Fly-eFISH/NP01_1_1_SS00790_AstA546_CCHa1_647_1x_LOL.chunked.zarr/0/|zarr2:

This image has OMERO metadata, including channel colors. However, the colors are all set to black when the multichannel layer is created.

The second problem is that the image itself does not render correctly, due to the t dimension being placed first. If I reverse the order of the axes (from tzyx to xyzt), then it looks correct. As a point of reference, here is my attempt to render this data with the OMERO colors and correct axis ordering.

@fcollman
Contributor

fcollman commented Jun 4, 2025

Some comments: the channel metadata in the .zattrs files seems to be this:

```json
[
  {
    "color": "00FF00",
    "coefficient": 1,
    "active": true,
    "label": "Cam1-T1",
    "window": { "min": 40.0, "max": 51986.0, "start": 40.0, "end": 51986.0 },
    "family": "linear",
    "inverted": false
  },
  {
    "color": "FF00FF",
    "coefficient": 1,
    "active": true,
    "label": "Cam2-T1",
    "window": { "min": 52.0, "max": 16528.0, "start": 52.0, "end": 16528.0 },
    "family": "linear",
    "inverted": false
  },
  {
    "color": "FF0000",
    "coefficient": 1,
    "active": true,
    "label": "Cam2-T2",
    "window": { "min": 62.0, "max": 32500.0, "start": 62.0, "end": 32500.0 },
    "family": "linear",
    "inverted": false
  },
  {
    "color": "00FFFF",
    "coefficient": 1,
    "active": false,
    "label": "Cam1-T3",
    "window": { "min": 78.0, "max": 9636.0, "start": 78.0, "end": 9636.0 },
    "family": "linear",
    "inverted": false
  }
]
```

The min/max ranges are being set according to that link. In your attempt, the min/max ranges are all set to 0–4000, which isn't in the metadata.

However, Neuroglancer is not using the hexadecimal colors correctly to set the color picker.

With respect to the channel ordering, I think this is an issue: zarr sources should put time last by default, and channels should be split.

@seankmartin
Contributor Author

seankmartin commented Jun 4, 2025

Thanks @krokicki and @fcollman, glad to hear it, and I really appreciate the detailed feedback and examples; that's a big help!

I think I've fixed the first issue with the color: I misread the spec and thought the color was expected to be in #RRGGBB format rather than RRGGBB. Funnily enough, the original example from Forrest had the colors with the #, so I've updated the code to check the input and prepend the # if needed.
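That check-and-prepend step amounts to something like the following. A minimal sketch; `normalizeOmeroColor` is a hypothetical name, not the PR's actual function:

```typescript
// Hypothetical normalization: the OMERO spec stores colors as "RRGGBB",
// but "#RRGGBB" also appears in practice, so accept both forms.
function normalizeOmeroColor(color: string): string {
  return color.startsWith("#") ? color : `#${color}`;
}
```

Both `"00FF00"` and `"#00FF00"` would then resolve to `"#00FF00"` before being handed to the color picker.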

As for the channel ordering, I'm taking a look at that now.

@seankmartin
Contributor Author

seankmartin commented Jun 4, 2025

I made it so that when using the multi-channel setup, the global coordinate-space dimensions are reversed: tzyx becomes xyzt, zyx becomes xyz, and yx becomes xy. Happy to iterate on that further, for example constraining when it happens based on the type of the input sources, but for now it happens for all source types that use the multi-channel setup.
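The reversal described above can be sketched as a plain reversal of the dimension name list, gated on the names being x/y/z/t as the later note discusses. This is an illustration, not the PR's actual code:

```typescript
// Hypothetical sketch of the dimension reversal above: flip the global
// dimension order when every name is one of x/y/z/t, so that
// ["t","z","y","x"] becomes ["x","y","z","t"].
const SPATIAL_TEMPORAL = new Set(["x", "y", "z", "t"]);

function reverseGlobalDims(dims: string[]): string[] {
  // Leave unrecognised dimension names untouched.
  if (!dims.every((d) => SPATIAL_TEMPORAL.has(d))) {
    return dims;
  }
  return [...dims].reverse();
}
```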

So now, with no manual changes, using layer type auto with the source

https://janelia-flylight-imagery-dev.s3.amazonaws.com/Fly-eFISH/NP01_1_1_SS00790_AstA546_CCHa1_647_1x_LOL.chunked.zarr/0/|zarr2:

The output is:
(screenshot)

The image is still blank because the metadata indicates very wide start and end values, as Forrest mentioned. But if I fine-tune those to the min/max from the invlerp inside Neuroglancer and turn on the last channel (it is off because it is listed as active=false in the metadata), then you get:
(screenshot)

Possibly the zoom level could be set a little differently, but perhaps that is better tackled separately; I'm not sure.

Note

Right now this reversal happens based on the dimension names. If those are not likely to be some subset of x, y, z, t for the non-channel dims, then it is probably better to instead check whether any data sources are OME-zarr and reverse the global dims only in that case. Very open to input on that.

@krokicki

krokicki commented Jun 4, 2025

Works brilliantly. Thank you for such a quick fix!

We also have some data that will have dozens of channels (many rounds of FISH labeling). I don't have any of that data yet, but I made a synthetic data set with 6 channels, and that works nicely. I think the default of only showing the first 4 is great.

I can't wait until this is merged, and in the meantime I'll probably set this up on an internal instance so we can start using it.
