
Enhance rbd ls Command to List Images Without Requiring an Image Name #117


Open

wants to merge 1 commit into main

Conversation

Contributor

@OdedViner commented Feb 10, 2025

This PR adds support for running the rbd ls command without having to specify an image name.
When a user executes odf rbd ls with no additional arguments, the command now directly invokes the vendored ListImages function (from upstream kubectl-rook-ceph) to display a detailed table of the available images, including pool name, image name, and namespace.
If an additional argument (such as a pool name) is provided (e.g., odf rbd ls <pool-name>), the command continues to run in the operator pod as before, preserving backward compatibility while improving the user experience for simple image listing.


```
$ ./bin/odf rbd ls
Info: running 'rbd' command with args: [ls]
poolName                          imageName                                     namespace
--------                          ---------                                     ---------
ocs-storagecluster-cephblockpool  csi-vol-068af78a-7b07-4a6c-85ff-7ec99ceefc91  ---
ocs-storagecluster-cephblockpool  csi-vol-27d3721c-b59f-46f5-95d9-914456eb20c5  ---
ocs-storagecluster-cephblockpool  csi-vol-9ed88240-d269-497d-9c2c-f67357763196  ---
ocs-storagecluster-cephblockpool  csi-vol-dd2684a1-af48-4ca0-9fcc-3a3361780fe0  ---
ocs-storagecluster-cephblockpool  csi-vol-fad8233b-94c4-4af5-bb84-60d7b1d06271  ---
```

```
$ ./bin/odf rbd ls ocs-storagecluster-cephblockpool
Info: running 'rbd' command with args: [ls ocs-storagecluster-cephblockpool]
csi-vol-068af78a-7b07-4a6c-85ff-7ec99ceefc91
csi-vol-27d3721c-b59f-46f5-95d9-914456eb20c5
csi-vol-9ed88240-d269-497d-9c2c-f67357763196
csi-vol-dd2684a1-af48-4ca0-9fcc-3a3361780fe0
csi-vol-fad8233b-94c4-4af5-bb84-60d7b1d06271
```
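For readers following along, the dispatch described above boils down to something like the sketch below. This is not the exact patch: the helper names (rbd.ListImages, exec.RunCommandInOperatorPod, k8sutil.WaitForPodToRun, the root package) are taken from the snippets quoted later in this thread, while the package name and import paths are assumptions about the repository layout.

```go
package command

import (
	"github.com/spf13/cobra"

	// Assumed import paths, modeled on the kubectl-rook-ceph layout; the
	// actual odf-cli paths may differ.
	root "github.com/rook/kubectl-rook-ceph/cmd/commands"
	"github.com/rook/kubectl-rook-ceph/pkg/exec"
	"github.com/rook/kubectl-rook-ceph/pkg/k8sutil"
	"github.com/rook/kubectl-rook-ceph/pkg/logging"
	"github.com/rook/kubectl-rook-ceph/pkg/rbd"
)

// RbdCmd sketches the dispatch described in this PR: plain "rbd ls" prints
// the detailed image table via the vendored helper, while any other rbd
// invocation keeps running inside the operator pod as before.
var RbdCmd = &cobra.Command{
	Use:  "rbd",
	Args: cobra.MinimumNArgs(1),
	Run: func(cmd *cobra.Command, args []string) {
		logging.Info("running 'rbd' command with args: %v", args)

		// New path: "rbd ls" with no extra arguments lists all images.
		if len(args) == 1 && args[0] == "ls" {
			rbd.ListImages(cmd.Context(), root.ClientSets, root.OperatorNamespace, root.StorageClusterNamespace)
			return
		}

		// Existing path: verify the operator pod is running, then execute
		// the rbd command inside it.
		_, err := k8sutil.WaitForPodToRun(cmd.Context(), root.ClientSets.Kube, root.OperatorNamespace, "app=rook-ceph-operator")
		if err != nil {
			logging.Fatal(err)
		}
		_, err = exec.RunCommandInOperatorPod(cmd.Context(), root.ClientSets, cmd.Use, args, root.OperatorNamespace, root.StorageClusterNamespace, false)
		if err != nil {
			logging.Fatal(err)
		}
	},
}
```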

Comment on lines 25 to 33
```go
// If the user is trying to run "rbd ls" (without possible extra arguments)
if args[0] == "ls" && len(args) == 1 {
	if len(args) > 1 {
		logging.Warning("Ignoring extra arguments for 'rbd ls'; running only 'rbd ls'.")
	}
	// Call the vendor ListImages function to do the listing
	rbd.ListImages(cmd.Context(), root.ClientSets, root.OperatorNamespace, root.StorageClusterNamespace)
	return
}
```
Contributor

Instead of this, can we do something similar to https://github.com/rook/kubectl-rook-ceph/pull/333/files#diff-9b7d33b239e9bb3a80d5cda418273877bb1398a5a0f928fc478ef98b37901c5dR44-R55 ?
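(The linked upstream change is not quoted in this thread, but the pattern it points at is the usual cobra one: register ls as its own subcommand under rbd instead of special-casing it inside the parent's Run. A minimal sketch of that shape, reusing the helper names quoted elsewhere in this PR and the imports assumed in the earlier sketch; whether the upstream file looks exactly like this is an assumption.)

```go
// Sketch of the subcommand-style alternative the reviewer is pointing at.
// Helper names are reused from the snippets quoted in this thread.
var listCmd = &cobra.Command{
	Use:   "ls",
	Short: "list RBD images with pool and namespace details",
	Args:  cobra.NoArgs,
	Run: func(cmd *cobra.Command, args []string) {
		rbd.ListImages(cmd.Context(), root.ClientSets, root.OperatorNamespace, root.StorageClusterNamespace)
	},
}

func init() {
	// RbdCmd is the parent "rbd" command sketched earlier on this page.
	RbdCmd.AddCommand(listCmd)
}
```

One side effect worth noting: once ls is a registered subcommand, cobra routes rbd ls <pool-name> to it as well, so the pool-name form would have to be handled or rejected there rather than falling through to the operator-pod path.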

Contributor Author

@OdedViner commented Feb 12, 2025

@subhamkrai I changed the code based on the upstream kubectl-rook-ceph project, but I didn't add an option to list images for a specific pool; it supports only rbd ls. Do you want me to add rbd ls <pool name>?


openshift-ci bot commented Feb 12, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: OdedViner
Once this PR has been reviewed and has the lgtm label, please ask for approval from subhamkrai. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Comment on lines 17 to 28
```go
Run: func(cmd *cobra.Command, args []string) {
	logging.Info("running 'rbd' command with args: %v", args)
	// verify operator pod is running
	_, err := k8sutil.WaitForPodToRun(cmd.Context(), root.ClientSets.Kube, root.OperatorNamespace, "app=rook-ceph-operator")
	if err != nil {
		logging.Fatal(err)
	}

	_, err = exec.RunCommandInOperatorPod(cmd.Context(), root.ClientSets, cmd.Use, args, root.OperatorNamespace, root.StorageClusterNamespace, false)
	if err != nil {
		logging.Fatal(err)
	}
},
```
Contributor

We still need these for running the ceph rbd commands.

Contributor Author

@OdedViner commented Feb 18, 2025

@subhamkrai I called a function to verify that the rook-ceph-operator pod is running.
My question is whether we want to support only rbd ls or both rbd ls and rbd ls <pool-name>.
What do you think?
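(For illustration, if both forms end up being supported, the ls handler might take a shape like the sketch below. rbd.ListImagesInPool is not a real helper in the vendored code quoted above; it is a hypothetical placeholder for whatever pool-filtered listing would be added. Imports are those assumed in the first sketch.)

```go
// Hypothetical sketch for supporting both "rbd ls" and "rbd ls <pool-name>"
// from the "ls" handler. rbd.ListImagesInPool does NOT exist in the vendored
// code quoted in this PR; it stands in for a pool-filtered variant.
func runList(cmd *cobra.Command, args []string) {
	if len(args) == 0 {
		// "rbd ls": detailed table of every image (what this PR adds).
		rbd.ListImages(cmd.Context(), root.ClientSets, root.OperatorNamespace, root.StorageClusterNamespace)
		return
	}
	// "rbd ls <pool-name>": restrict the listing to a single pool.
	rbd.ListImagesInPool(cmd.Context(), root.ClientSets, root.OperatorNamespace, root.StorageClusterNamespace, args[0])
}
```

Wiring this up as the ls subcommand's Run with Args: cobra.MaximumNArgs(1) would keep both invocations valid.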

Contributor

Let's support whatever upstream kubectl-rook-ceph supports.

Contributor Author

@subhamkrai
Upstream kubectl-rook-ceph supports only rbd ls and does not support rbd ls <pool-name>.

Contributor

Are you sure, @OdedViner? I think it will be supported, since rbd ls <poolname> is a native rbd command. Can you verify that?

Contributor

@nirs commented Apr 10, 2025

```
$ ./bin/odf rbd ls
Info: running 'rbd' command with args: [ls]
poolName                          imageName                                     namespace
--------                          ---------                                     ---------
ocs-storagecluster-cephblockpool  csi-vol-068af78a-7b07-4a6c-85ff-7ec99ceefc91  ---
ocs-storagecluster-cephblockpool  csi-vol-27d3721c-b59f-46f5-95d9-914456eb20c5  ---
ocs-storagecluster-cephblockpool  csi-vol-9ed88240-d269-497d-9c2c-f67357763196  ---
ocs-storagecluster-cephblockpool  csi-vol-dd2684a1-af48-4ca0-9fcc-3a3361780fe0  ---
ocs-storagecluster-cephblockpool  csi-vol-fad8233b-94c4-4af5-bb84-60d7b1d06271  ---
```

This looks useful, but on a real system we can have a large number of images and namespaces, and listing everything may be too much.

```
$ ./bin/odf rbd ls ocs-storagecluster-cephblockpool
Info: running 'rbd' command with args: [ls ocs-storagecluster-cephblockpool]
csi-vol-068af78a-7b07-4a6c-85ff-7ec99ceefc91
csi-vol-27d3721c-b59f-46f5-95d9-914456eb20c5
csi-vol-9ed88240-d269-497d-9c2c-f67357763196
csi-vol-dd2684a1-af48-4ca0-9fcc-3a3361780fe0
csi-vol-fad8233b-94c4-4af5-bb84-60d7b1d06271
```

Changing the semantics of an existing command makes it harder to use for people who already know it.

It seems that the right place to make rbd ls easier is in ceph, not in kubectl-rook-ceph or odf. This will help all ceph users instead of only odf users.
