I suggest we don't download the files from Synapse, at least in the first version, as we would spend a lot of time staging data.
The HTAN Tower workspace is set up to access s3 URIs from the HTAN s3 buckets directly (I think it also works with the Google buckets).
When preparing the samplesheet we can build the s3 URIs from the dataFileBucket and dataFileKey columns in Synapse fileviews.
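Something like this rough sketch (syn00000000 is a placeholder fileview ID; dataFileBucket and dataFileKey are the fileview columns mentioned above):

```python
# Sketch: build a samplesheet of s3 URIs from a Synapse fileview,
# so the pipeline can stage directly from s3 instead of downloading
# through Synapse.
import synapseclient

syn = synapseclient.Synapse()
syn.login()  # assumes cached credentials or SYNAPSE_AUTH_TOKEN

# syn00000000 is a placeholder; swap in the real fileview ID
query = syn.tableQuery(
    "SELECT id, name, dataFileBucket, dataFileKey FROM syn00000000"
)
df = query.asDataFrame()

# Compose s3 URIs that the Tower workspace can access directly
df["uri"] = "s3://" + df["dataFileBucket"] + "/" + df["dataFileKey"]

df[["id", "name", "uri"]].to_csv("samplesheet.csv", index=False)
```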
We can always add Synapse downloads back in due course. I have a pattern for this here that lets you mix synIDs and s3 URIs:
https://github.com/ncihtan/nf-imagecleaner/blob/main/subworkflows/samplesheet_split.nf
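The linked subworkflow itself is Nextflow, but the branching idea is roughly this (a loose Python illustration, not the actual implementation; the bucket path is made up):

```python
# Route each samplesheet entry by whether it is a Synapse ID
# (needs a download via synapseclient) or an s3 URI (staged directly).
import re

def route_entry(entry: str) -> str:
    if re.fullmatch(r"syn\d+", entry):
        return "synapse"  # fetch via synapseclient before processing
    if entry.startswith("s3://"):
        return "direct"   # let the workflow engine stage it from s3
    raise ValueError(f"Unrecognised input: {entry}")

assert route_entry("syn12345678") == "synapse"
assert route_entry("s3://example-htan-bucket/path/to/image.ome.tiff") == "direct"
```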