
add resilience to remote index being down during list files #1418

@alaniwi

Description


When "list files" is clicked, it seems that the value of index_node from the dataset metadata is checked, and this is used to send a query to that index, with the following format:

https://....../esg-search/search?type=File&dataset_id=......&format=application%2Fsolr%2Bjson&offset=0&limit=10&distrib=false
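For reference, here is a minimal sketch of what that request looks like when issued programmatically, assuming the Python requests library; index_host and dataset_id are placeholders taken from the dataset metadata, and list_files is a hypothetical helper rather than an existing CoG function:

```python
import requests

def list_files(index_host, dataset_id, offset=0, limit=10):
    """Sketch of the per-dataset file query that 'list files' issues."""
    params = {
        "type": "File",
        "dataset_id": dataset_id,
        "format": "application/solr+json",
        "offset": offset,
        "limit": limit,
        "distrib": "false",   # restrict the search to the shard(s) local to index_host
    }
    resp = requests.get(f"https://{index_host}/esg-search/search",
                        params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()
```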

If a dataset record originates from a remote Solr shard, this can fail when the remote index is down, even though a local replica of that shard may be available. That is to say, the local replica contains both the datasets and files cores, but CoG does not attempt to use the locally held file information.

How about changing it so that, in the event of an unsuccessful response from the remote index (e.g. a 500 or a timeout), it falls back to trying the same search on the local index node? This fallback search would need to omit the distrib=false parameter, making it more expensive because other, unrelated shards are also queried, which is why I am suggesting it only as a fallback.
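A rough sketch of the proposed fallback, again assuming the Python requests library; LOCAL_INDEX and list_files_with_fallback are illustrative placeholders, not existing CoG code:

```python
import requests

LOCAL_INDEX = "esgf-node.example.org"  # placeholder for the site's own index node

def list_files_with_fallback(index_node, dataset_id, offset=0, limit=10):
    """Try the dataset's own index first; on failure, retry against the
    local index without distrib=false so replicated shards are searched."""
    base_params = {
        "type": "File",
        "dataset_id": dataset_id,
        "format": "application/solr+json",
        "offset": offset,
        "limit": limit,
    }
    try:
        resp = requests.get(f"https://{index_node}/esg-search/search",
                            params={**base_params, "distrib": "false"},
                            timeout=30)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        # The remote index is down or returned an error (e.g. a 500 or a timeout):
        # fall back to a distributed search on the local index, which will also hit
        # the local replica of the remote shard. This is more expensive, because
        # unrelated shards are queried too, hence it is used only as a fallback.
        resp = requests.get(f"https://{LOCAL_INDEX}/esg-search/search",
                            params=base_params, timeout=30)
        resp.raise_for_status()
        return resp.json()
```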
