Introducing Storage Access Groups for better management of host and storage connections #10381
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #10381 +/- ##
============================================
+ Coverage 16.41% 16.44% +0.03%
- Complexity 13629 13718 +89
============================================
Files 5702 5710 +8
Lines 503405 505338 +1933
Branches 60976 61255 +279
============================================
+ Hits 82626 83115 +489
- Misses 411594 412954 +1360
- Partials 9185 9269 +84
Flags with carried forward coverage won't be shown.
@blueorangutan package
@harikrishna-patnala a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✖️ el8 ✖️ el9 ✔️ debian ✖️ suse15. SL-JID 12435
@blueorangutan package
@harikrishna-patnala a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 12456
@blueorangutan test
@rohityadavcloud a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests
@blueorangutan package
@harikrishna-patnala a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 12459
@blueorangutan test
@harikrishna-patnala a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests
[SF] Trillian test result (tid-12407)
@blueorangutan package
@rohityadavcloud a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 12884
@blueorangutan test
@rohityadavcloud a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
[SF] Trillian test result (tid-12828)
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
Force-pushed from 8b934f0 to 668bec0
@blueorangutan package
@harikrishna-patnala a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 13358
LGTM. Tested the following APIs in a CloudStack environment consisting of multiple zones and clusters.
- Executed the API list storageaccessgroups >> lists all the storage access groups
(shapeblue2) 🐱 > list storageaccessgroups
{
"count": 6,
"storageaccessgroup": [
{
"name": "s@host"
},
{
"name": "s@zone"
},
{
"name": "s@storage"
},
{
"name": "s@zone2"
},
{
"name": "s@cluster"
},
{
"name": "s@pod"
}
]
}
- Executed the API list storageaccessgroups with name as a parameter >>
lists the storage access group with that name and the associated resources (zone, pod, cluster, host, storage pool)
Example
(shapeblue2) 🐱 > list storageaccessgroups name=s@zone
{
"count": 1,
"storageaccessgroup": [
{
"clusters": {
"responses": []
},
"hosts": {
"responses": []
},
"name": "s@zone",
"pods": {
"responses": []
},
"storagepools": {
"responses": []
},
"zones": {
"responses": [
{
"id": "999dffda-35d6-447f-acd7-650dcd30f2ad",
"name": "ref-trl-7931-k-Mol8-kiran-chavala"
}
]
}
}
]
}
(shapeblue2) 🐱 > list storageaccessgroups name=s@storage
{
"count": 1,
"storageaccessgroup": [
{
"clusters": {
"responses": []
},
"hosts": {
"responses": []
},
"name": "s@storage",
"pods": {
"responses": []
},
"storagepools": {
"responses": [
{
"id": "3cf30d07-0b2f-4723-a608-31c270d7c71b",
"name": "zwide"
}
]
},
"zones": {
"responses": []
}
}
]
}
- Executed the API list storageaccessgroups with keyword as a parameter
(shapeblue2) 🐱 > list storageaccessgroups keyword=s@zone
{
"count": 2,
"storageaccessgroup": [
{
"name": "s@zone"
},
{
"name": "s@zone2"
}
]
}
- Executed the following list API calls with the parameter "storageaccessgroup"
(shapeblue2) 🐱 > list zones storageaccessgroup=s@zone filter=zonestorageaccessgroups,podstorageaccessgroups,clusterstorageaccessgroups,storageaccessgroups
{
"count": 1,
"zone": [
{
"storageaccessgroups": "s@zone,s@zone2"
}
]
}
(shapeblue2) 🐱 > list pods storageaccessgroup=s@pod filter=zonestorageaccessgroups,podstorageaccessgroups,clusterstorageaccessgroups,storageaccessgroups
{
"count": 1,
"pod": [
{
"storageaccessgroups": "s@pod",
"zonestorageaccessgroups": "s@zone,s@zone2"
}
]
}
(shapeblue2) 🐱 > list clusters storageaccessgroup=s@cluster filter=zonestorageaccessgroups,podstorageaccessgroups,clusterstorageaccessgroups,storageaccessgroups
{
"cluster": [
{
"podstorageaccessgroups": "s@pod",
"storageaccessgroups": "s@cluster",
"zonestorageaccessgroups": "s@zone,s@zone2"
}
],
"count": 1
}
(shapeblue2) 🐱 > list hosts storageaccessgroup=s@host filter=zonestorageaccessgroups,podstorageaccessgroups,clusterstorageaccessgroups,storageaccessgroups
{
"count": 1,
"host": [
{
"clusterstorageaccessgroups": "s@cluster",
"podstorageaccessgroups": "s@pod",
"storageaccessgroups": "s@host",
"zonestorageaccessgroups": "s@zone,s@zone2"
}
]
}
(shapeblue2) 🐱 > list storagepools storageaccessgroup=s@storage filter=zonestorageaccessgroups,podstorageaccessgroups,clusterstorageaccessgroups,storageaccessgroups
{
"count": 1,
"storagepool": [
{
"storageaccessgroups": "s@storage"
}
]
}
Also executed the following test cases:
| Test Case | Result |
|---|---|
| Addition of storage access group tags on NFS-based primary storage pool | Pass |
| Verify that storage access group tags can be introduced on Host | Pass |
| Verify that when a cluster is tagged with a storage access group, all hosts in the cluster inherit the storage access group tag | Pass |
| Verify that storage access group tags can be introduced on Zone | Pass |
| Verify that when a pod is tagged with a storage access group, all clusters in the pod inherit the storage access group tag | Pass |
| Verify that when a zone is tagged with a storage access group, all pods in the zone inherit the storage access group tag | Pass |
| Verify that storage access group tags can be introduced on Pod | Pass |
| Verify that a storage access group tag can be introduced on Cluster | Pass |
| CloudStack should explicitly tag hosts when a storage access group tag is removed from Zone | Pass |
| CloudStack should explicitly tag hosts when a storage access group tag is removed from Pod | Pass |
| CloudStack should explicitly tag hosts when a storage access group tag is removed from Cluster | Pass |
| CloudStack should reject storage access group tag removal at the host level if there are instances running with volumes attached to a storage pool containing this tag | Pass |
| Removal of a storage access group tag should be unsuccessful if all hosts have instances connected to a storage pool with the same storage access group tag | Pass |
| Verify that hosts with storage access group tags can connect to a storage pool with no storage access group tags | Pass |
| Verify that hosts with no storage access group tags can connect to a storage pool with no storage access group tags | Pass |
| Verify that a host with storage access group tags 'sp1' and 'sp2' can connect to pools with the same storage access group tag 'sp1' | Pass |
| Verify that a host with storage access group tags 'sp1' and 'sp2' can connect to a pool with storage access groups 'sp1' and 'sp3' | Pass |
| Admin can add/remove a storage access group tag at the cluster level | Pass |
| Admin can add a storage access group tag at the zone level | Pass |
| Admin can add/remove a storage access group tag at the pod level | Pass |
| Admin can add/remove a storage access group tag at the host level independently | Pass |
| Events should be generated for addition and removal of storage access groups | Pass |
@blueorangutan test |
@kiranchavala a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests |
code lgtm
Description
Documentation PR apache/cloudstack-documentation#503
In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.
Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.
A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, whether or not they have a storage access group of their own.
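Once groups are in place, this matching can be inspected with the list APIs exercised in the test report above. A minimal cloudmonkey sketch, where the group name sag1 is a placeholder:
(shapeblue2) 🐱 > list storageaccessgroups name=sag1
# returns the group and the zones/pods/clusters/hosts/storage pools it is associated with
(shapeblue2) 🐱 > list hosts storageaccessgroup=sag1 filter=name,storageaccessgroups
# only the hosts carrying sag1, i.e. the hosts a pool tagged sag1 will connect to
(shapeblue2) 🐱 > list storagepools storageaccessgroup=sag1 filter=name,storageaccessgroups
# the pools restricted to that same group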
Example:
Consider a CloudStack environment with 10 clusters, each with 5 hosts, totaling 50 hosts. When a zone-wide primary storage is added, it will by default connect to all 50 hosts. If the operator wants the storage to connect only to the hosts in Cluster 1 and Cluster 2, they can assign the same storage access group to the storage pool and to those two clusters, as sketched below.
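A rough cloudmonkey sketch of that setup; the storageaccessgroups parameter on the update APIs is an assumption to be verified against the documentation PR, and all names and IDs are placeholders:
(shapeblue2) 🐱 > update storagepool id=<zone-wide-pool-id> storageaccessgroups=sag1  # assumed parameter
(shapeblue2) 🐱 > update cluster id=<cluster1-id> storageaccessgroups=sag1           # assumed parameter
(shapeblue2) 🐱 > update cluster id=<cluster2-id> storageaccessgroups=sag1           # assumed parameter
# All hosts in Cluster 1 and Cluster 2 inherit sag1, so only those 10 hosts connect to the pool:
(shapeblue2) 🐱 > list hosts storageaccessgroup=sag1 filter=name,clusterstorageaccessgroups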