Empower Users to More Pragmatically Import Datasets & Collections From Tables #20288
Draft: jmchilton wants to merge 16 commits into galaxyproject:dev from jmchilton:fetch_workbooks
Labels: area/dataset-collections, area/upload, highlight (included in user-facing release notes), kind/enhancement, kind/feature
The User Story
Data Fetch without the Rules for Datasets
As a concrete example, assume you have a workbook/spreadsheet for individual datasets, with URLs and metadata, that looks something like this:
(I generated the workbook manually from source data originally pulled from the ENA a million years ago for rule builder documentation, vaguely documented here: https://github.com/jmchilton/galaxy-examples/tree/master/ena_PRJDA60709.)
Previously, this data could be exported to a tabular format in various ways and sent to the rule builder, where the user would map the columns to metadata. With the user interface elements and APIs introduced in this PR, that whole process can be fast-forwarded nearly to the very end. Here is a little walkthrough.
First, click the wizard-based rule builder activity introduced in https://github.com/galaxyproject/galaxy/pull/19377.
The component now has an upload icon.
You can just drop the spreadsheet file right onto this component. The columns of the spreadsheet are inspected, and patterns for common column header names are used to infer what metadata each column represents. The set of metadata supplied is then used to infer whether datasets, a single collection, or multiple collections are being imported. All of this is fed back to the client: the wizard is filled out automatically, the data is formatted for the rule builder and placed in the component, and the columns are mapped.
At this point the data import is completely ready to go without any additional work by the user. I've implemented column detection for all the metadata elements the rule builder supports. For basic use cases, I think this greatly eases data import and dramatically reduces the extra knowledge needed to perform it.
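To make that concrete, here is a minimal sketch of the kind of result the inference produces for a simple dataset workbook. The header spellings and mapping type names below are illustrative assumptions, not the exact implementation (the real vocabulary is defined by the schema extracted in #20282):

```python
# Hypothetical dataset workbook headers and the rule builder column
# mappings the inference might produce for them. Mapping type names
# here are assumptions for illustration.
headers = ["URL", "Name", "Genome", "Condition"]

inferred_mappings = [
    {"type": "url", "columns": [0]},    # where to fetch each dataset from
    {"type": "name", "columns": [1]},   # dataset name in the history
    {"type": "dbkey", "columns": [2]},  # genome build to assign
    {"type": "info", "columns": [3]},   # leftover descriptive metadata
]
```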
Data Fetch without the Rules for Collections
A similar example works for collections.
(The documentation and workbook for this example are likewise at https://github.com/jmchilton/galaxy-examples/tree/master/ena_PRJDB3920.)
Dropping this spreadsheet on that upload icon results in the following rule builder configuration:
Here, the only extra piece of metadata the user needs to supply is a collection name; then the data import is ready to begin.
Adding a column for the collection name results in many collections being created. Various column name patterns can be used to describe nested collections, lists of paired or unpaired datasets, nested lists of pairs, etc. (a sketch of the idea follows below).
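As a rough illustration, header combinations might map to collection structures like this. The exact recognized spellings are assumptions on my part here, not the implemented patterns:

```python
# Hypothetical header combinations and the collection structure each
# could imply; the exact recognized spellings are assumptions.
PATTERN_EXAMPLES = {
    ("List Identifier", "URI"): "list",
    ("List Identifier", "URI 1", "URI 2"): "list:paired",
    ("Outer List Identifier", "Inner List Identifier", "URI"): "list:list",
    # A "Collection Name" column splits rows into one collection per name:
    ("Collection Name", "List Identifier", "URI"): "many lists",
}
```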
Breaking out of the Walled Garden
Galaxy can create a documented, usable workbook for the given collection type for the user, and I'll show the user interface for that further down in this PR description. But I think it is important to note that the workbook does not need to be derived from this template. The Galaxy Training Network can create workbooks for various use cases and document them however they would like, external applications can produce workbooks with whatever relevant metadata they have access to, and core facilities can produce workbooks that match the samples they process and their own internal workflows and metadata collection.
Starting from Scratch - Easy as 1-2-3
The rule builder wizard starts by asking whether the user is creating datasets or collections; this hasn't changed with this PR. But the second page for each option has a new choice: "External Workbooks". This page looks like the following for datasets and collections.
(for datasets)
(for collections)
Selecting this option adds a new page for datasets and two new pages for collections.
For collections, the type of collection to create is configured first, to determine what columns to add. There is also a checkbox to add a "Collection Name" column that can be used to create multiple collections per workbook. The configure workbook page for collections looks like this, largely modeled after the work done for collection builders in #19377.
The upload page is largely the same for datasets and collections, though, and is shown here:
From this page, the user can download the workbook, fill it out, and upload it. The user can click the upload icon or drop the sheet right into box 3. The process from there is the same as dropping the workbook on the icon in the upper left corner: the rule builder is fast-forwarded to the last step with the inferred columns mapped, data filled in, and paired data split appropriately.
The workbooks generated this way include documentation and column descriptions as part of the workbook.
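As a rough sketch of how a documented template like that could be produced with openpyxl, consider the snippet below. The column set, help strings, and helper function are hypothetical stand-ins, not Galaxy's actual generation code:

```python
# Minimal sketch of producing a documented workbook template with openpyxl.
# Columns and help strings are assumptions for illustration.
from openpyxl import Workbook
from openpyxl.comments import Comment

COLUMNS = [
    ("URI", "URL the dataset should be fetched from."),
    ("List Identifier", "Element identifier within the created collection."),
    ("Genome", "Genome build (dbkey) to assign to the dataset."),
]

def build_template(path: str) -> None:
    wb = Workbook()
    ws = wb.active
    ws.title = "Data"
    for col_index, (header, help_text) in enumerate(COLUMNS, start=1):
        cell = ws.cell(row=1, column=col_index, value=header)
        # Attach the column description to the header cell so the
        # documentation travels with the workbook itself.
        cell.comment = Comment(help_text, "Galaxy")
    wb.save(path)

build_template("fetch_workbook_template.xlsx")
```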
The Learning Path and Pragmatism
I'm sad I don't have a second chance at first impressions, but I think this approach should be highlighted in Galaxy training before the rule builder. It seems like a much more approachable way to introduce mapping columns to metadata and importing datasets and collections. The rule builder could then be described as a way to do these operations more reproducibly inside of Galaxy and within workflows. I probably don't have time to write a training document before the GCC, but I would really love to get there. I don't think this approach is better or worse; I just think it is an easier thing for doing easy things, and it would make introducing the harder thing for doing harder things feel more manageable and understandable.
Fetch Workbooks vs Sample Sheet Workbooks
I didn't set out to implement any of this. I was working on sample sheets and developed a lot of these concepts for that important but more specific use case. Sample sheets allow workflow authors to describe custom metadata columns and do data entry in a spreadsheet-style interface in Galaxy. I think it is important that you can do those things without Excel or external tooling in that context (so there is a data entry component implemented using AG Grid), but I'm also well aware that Excel/Google Sheets/etc. is probably going to be the easiest, most comfortable thing for most biologists. So I did a lot of work in that branch to enable that style of usage, including things like encoding custom column validation and adding help to the workbook. The thing I hadn't answered until recently was "how do I represent extra Galaxy metadata during uploads?" In answering that, I realized the answer would be very useful in the context of unrestricted dataset and collection import, and not at all specific to these special workflow inputs. The data fetch API could do all those things, and the rule builder was our interface into the data fetch API, so this became a natural entry point. My thinking is that the column mapping we do in this PR will be used for all the non-custom column definitions in sample sheet workbooks.
I think this gives rise to what are, in my mind, two related but distinct concepts: "fetch workbooks" (the generic workbooks implemented here, acting as a spreadsheet interface into the data fetch API) and "sample sheet workbooks", which will be generatable for specific workflow inputs and will extend "fetch workbooks" with custom columns set up by the workflow author.
I'm going to try not to be precious about the naming here. I assume "fetch workbooks" is not terminology that should be user-facing, but I am going to use it, and I have used it in the backend code to distinguish the data-fetch-specific pieces from the generic pieces that will be shared with what I'm calling "sample sheet workbooks" in my head.
Various Technical Details
fetch/workbooks.py vs workbook_utils.py
Things in workbook_utils.py are meant to be shared with the sample sheet work pretty directly, and most of it was refactored from work I already did there. I'm still working through the details of how much flexibility to allow in sample sheet workbooks. The presence of workflow-author-named columns makes me think they might be less flexible than this approach, but maybe it is possible to figure something out.
Inferring Column Mappings
To infer column mappings, I normalize the title by stripping various characters and normalizing case, and then check prefixes and suffixes. The types map to the rule builder mapping types, which are defined in a schema extracted from the rule builder in #20282. All of this has pretty clear unit tests that I think describe the behavior well.
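A minimal, unit-test-style sketch of that strategy follows. The normalization details and the affix table below are illustrative assumptions rather than the actual implementation:

```python
# Sketch of the inference strategy: normalize the header title, then
# match known prefixes/suffixes to rule builder mapping types.
# The affix table and normalization rules are illustrative assumptions.
import re
from typing import Optional

KNOWN_AFFIXES = {
    "url": "url",
    "uri": "url",
    "name": "name",
    "genome": "dbkey",
    "dbkey": "dbkey",
    "file type": "file_type",
    "list identifier": "list_identifiers",
}

def normalize_header(title: str) -> str:
    # Strip separator characters and collapse case: "File_Type:" -> "file type"
    cleaned = re.sub(r"[_\-:.]+", " ", title.strip().lower())
    return re.sub(r"\s+", " ", cleaned).strip()

def infer_mapping_type(title: str) -> Optional[str]:
    normalized = normalize_header(title)
    for affix, mapping_type in KNOWN_AFFIXES.items():
        if normalized.startswith(affix) or normalized.endswith(affix):
            return mapping_type
    return None

assert infer_mapping_type("URL") == "url"
assert infer_mapping_type("File_Type") == "file_type"
assert infer_mapping_type("Genome (dbkey)") == "dbkey"
assert infer_mapping_type("Notes") is None
```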
Inferring Collection Type
Likewise, I think the unit tests are pretty clear and describe how this mapping occurs.
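Again as an illustrative sketch rather than the actual rules (those live in the unit tests), the collection type can be derived from which mapped column types are present:

```python
# Sketch: derive the collection type from which mapping types the
# workbook's columns were inferred to carry. Names are assumptions.
from typing import List, Optional

def infer_collection_type(mapped_types: List[str]) -> Optional[str]:
    list_levels = mapped_types.count("list_identifiers")
    if list_levels == 0:
        return None  # plain datasets, no collection at all
    collection_type = ":".join(["list"] * list_levels)
    if "paired_identifier" in mapped_types:
        collection_type += ":paired"  # innermost level holds pairs
    return collection_type

assert infer_collection_type(["url", "name"]) is None
assert infer_collection_type(["list_identifiers", "url"]) == "list"
assert infer_collection_type(["list_identifiers", "paired_identifier", "url"]) == "list:paired"
assert infer_collection_type(["list_identifiers", "list_identifiers", "url"]) == "list:list"
```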