Description
Is your feature request related to a problem? Please describe.
Short-term problem: Passing S3 credentials to the ROS3 driver for the h5ls and h5dump command-line tools is cumbersome and not fully documented.
Long-term problem: Our software should eventually rely on hdf5 using the ROS3 driver when called via netcdf-4. We think this is more likely to happen if the ROS3 driver picks up S3 credentials from the environment variables or configuration files standardized by the AWS CLI.
Describe the solution you'd like
The following currently fails (using hdf5 built from develop) with `https://ob-cumulus-prod-public.s3.us-west-2.amazonaws.com/PACE_OCI.20240411T164652.L1B.nc: unable to open file`:

```shell
export AWS_REGION=us-west-2
export AWS_ACCESS_KEY_ID=<key>
export AWS_SECRET_ACCESS_KEY=<secret>
export AWS_SESSION_TOKEN=<token>
~/.local/bin/h5ls --vfd=ros3 https://ob-cumulus-prod-public.s3.us-west-2.amazonaws.com/PACE_OCI.20240411T164652.L1B.nc
```
Anyone can create an Earthdata account at https://urs.earthdata.nasa.gov/, and get the temporary credentials for this bucket at https://obdaac-tea.earthdatacloud.nasa.gov/s3credentials.
For the above h5ls call to work, I find I have to include the credentials directly:

```shell
~/.local/bin/h5ls --vfd=ros3 --s3-cred="($AWS_REGION,$AWS_ACCESS_KEY_ID,$AWS_SECRET_ACCESS_KEY,$AWS_SESSION_TOKEN)" https://ob-cumulus-prod-public.s3.us-west-2.amazonaws.com/PACE_OCI.20240411T164652.L1B.nc
```
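Until the driver reads credentials itself, the 4-tuple can be assembled once from the standard AWS environment variables by a small wrapper. This is an illustrative sketch, not part of hdf5; the `build_s3_cred` helper name is my own:

```shell
#!/bin/sh
# Sketch: build the "(region,id,key,token)" 4-tuple expected by --s3-cred
# from the standard AWS environment variables. The variables are assumed
# to already be exported, as in the example above.
build_s3_cred() {
    printf '(%s,%s,%s,%s)' \
        "$AWS_REGION" \
        "$AWS_ACCESS_KEY_ID" \
        "$AWS_SECRET_ACCESS_KEY" \
        "$AWS_SESSION_TOKEN"
}

# Usage (same h5ls invocation as above):
# ~/.local/bin/h5ls --vfd=ros3 --s3-cred="$(build_s3_cred)" <object URL>
```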
Note that the required format of four comma-separated strings is not mentioned in the `h5ls -h` and `h5dump -h` documentation. The configuration files described at the link above should be similarly supported.
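To illustrate what "picking up" those configuration files would mean, here is a simplified sketch of the lookup against an AWS-CLI-style shared credentials file (INI format, e.g. `~/.aws/credentials`). Real parsing (comments, quoting, `key=value` without spaces) is more involved; this only shows the basic profile/key resolution the ROS3 driver could perform, and the `aws_cred_get` helper is hypothetical:

```shell
#!/bin/sh
# Sketch: fetch one key from one profile of an AWS-CLI-style shared
# credentials file. Assumes "key = value" lines with spaces around "=".
aws_cred_get() {
    file=$1 profile=$2 key=$3
    awk -v p="[$profile]" -v k="$key" '
        $0 == p         { in_p = 1; next }   # entered requested profile
        /^\[/           { in_p = 0 }         # any other section ends it
        in_p && $1 == k { print $3; exit }   # "key = value" -> value
    ' "$file"
}

# Usage:
# aws_cred_get ~/.aws/credentials default aws_access_key_id
```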