plugins/storage/volume/storpool/README.md (+4 -4)
@@ -39,7 +39,7 @@ independent parts:
 * ./src/com/... directory tree: agent related classes and commands send from management to agent
 * ./src/org/... directory tree: management related classes

-The plugin is intended to be selfcontained and non-intrusive, thus ideally deploying it would consist of only
+The plugin is intended to be self-contained and non-intrusive, thus ideally deploying it would consist of only
 dropping the jar file into the appropriate places. This is the reason why all StorPool related communication
 (ex. data copying, volume resize) is done with StorPool specific commands even when there is a CloudStack command
 that does pretty much the same.
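Since the plugin ships as a single jar, a deployment might look like the sketch below. This is a minimal sketch, not taken from the README: the jar name and library paths are assumptions based on typical packaged CloudStack installs, so adjust them to the actual build artifact and layout.

```sh
# Hypothetical jar name and install paths; verify against your CloudStack packaging.
JAR=cloud-plugin-storage-volume-storpool-*.jar
cp $JAR /usr/share/cloudstack-management/lib/   # management server classpath
cp $JAR /usr/share/cloudstack-agent/lib/        # KVM agent classpath
systemctl restart cloudstack-management cloudstack-agent
```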
@@ -183,7 +183,7 @@ This storage tag may be used later, when defining service or disk offerings.
 <td>takeSnapshot + copyAsync (S => S)</td>
 </tr>
 <tr>
-<td>Create volume from snapshoot</td>
+<td>Create volume from snapshot</td>
 <td>create volume from snapshot</td>
 <td>management + agent(?)</td>
 <td>copyAsync (S => V)</td>
@@ -279,7 +279,7 @@ In this case only snapshots won't be downloaded to secondary storage.

 #### If bypass option is enabled

-The snapshot exists only on PRIMARY (StorPool) storage. From this snapshot it will be created a template on SECONADRY.
+The snapshot exists only on PRIMARY (StorPool) storage. From this snapshot it will be created a template on SECONDARY.

 #### If bypass option is disabled

@@ -290,7 +290,7 @@ This is independent of StorPool as snapshots exist on secondary.
 ### Creating ROOT volume from templates

 When creating the first volume based on the given template, if snapshot of the template does not exists on StorPool it will be first downloaded (cached) to PRIMARY storage.
-This is mapped to a StorPool snapshot so, creating succecutive volumes from the same template does not incur additional
+This is mapped to a StorPool snapshot so, creating successive volumes from the same template does not incur additional
 copying of data to PRIMARY storage.

 This cached snapshot is garbage collected when the original template is deleted from CloudStack. This cleanup is done
tools/marvin/marvin/misc/build/CI.md (+4 -4)
@@ -72,15 +72,15 @@ systems - these are virtual/physical infrastructure mapped to cobbler profiles b

 When a new image needs to be added we create a 'distro' in cobbler and associate that with a profile's kickstart. Any new systems to be hooked-up to be serviced by the profile can then be added easily by cmd line.

-b. Puppet master - Cobbler reimages machines on-demand but it is upto puppet recipes to do configuration management within them. The configuration management is required for kvm hypervisors (kvm agent for eg:) and for the cloudstack management server which needs mysql, cloudstack, etc. The puppetmasterd daemon on the driver-vm is responsible for 'kicking' nodes to initiate configuration management on themselves when they come alive.
+b. Puppet master - Cobbler reimages machines on-demand, but it is upto puppet recipes to do configuration management within them. The configuration management is required for kvm hypervisors (kvm agent for eg:) and for the cloudstack management server which needs mysql, cloudstack, etc. The puppetmasterd daemon on the driver-vm is responsible for 'kicking' nodes to initiate configuration management on themselves when they come alive.

 So the driver-vm is also the repository of all the puppet recipes for various modules that need to be configured for the test infrastructure to work. The modules are placed in /etc/puppet and bear the same structure as our GitHub repo. When we need to affect a configuration change on any of our systems we only change the GitHub repo and the systems in place are affected upon next run.

 c. dnsmasq - DNS is controlled by cobbler but its configuration of hosts is set within dnsmasq.d/hosts. This is a simple 1-1 mapping of hostnames with IPs. For the most part this should be the single place where one needs to alter for replicating the test setup. Everywhere else only DNS names are/should-be used. open ports 53, 67 on server

 d. dhcp - DHCP is also done by dnsmasq. All configuration is in /etc/dnsmasq.conf. static mac-ip-name mappings are given for hypervisors while the virtual instances get dynamic ips

-e. ipmitool - ipmi for power management is setup on all the test servers and the ipmitool provides a convienient cli for booting the machines on the network into PXEing.
+e. ipmitool - ipmi for power management is setup on all the test servers and the ipmitool provides a convenient cli for booting the machines on the network into PXEing.

 f. jenkins-slave - jenkins slave.jar is placed on the driver-vm as a service in /etc/init.d to react to jenkins schedules and to post reports to. The slave runs in headless mode as the driver-vm does not run X.

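Taken together, items a through e in the patched CI.md describe one provisioning flow: register a host with cobbler, give it a static DHCP lease and DNS name in dnsmasq, then power-cycle it into PXE with ipmitool. A minimal sketch of that flow follows; every hostname, MAC, IP and credential is a hypothetical placeholder, and the cobbler option spellings should be checked against the cobbler version actually deployed.

```sh
# Register a new test host against an existing cobbler profile (all names are hypothetical).
cobbler system add --name=kvm-host-01 --profile=rhel-kvm \
    --mac=52:54:00:aa:bb:cc --ip-address=10.1.1.11 --hostname=kvm-host-01
cobbler sync

# Give the host a fixed DHCP lease and DNS name via dnsmasq.
echo "10.1.1.11 kvm-host-01" >> /etc/dnsmasq.d/hosts
echo "dhcp-host=52:54:00:aa:bb:cc,kvm-host-01,10.1.1.11" >> /etc/dnsmasq.conf
service dnsmasq restart

# Power-cycle the host into a one-time PXE boot so cobbler reimages it.
ipmitool -H kvm-host-01-ipmi -U admin -P secret chassis bootdev pxe
ipmitool -H kvm-host-01-ipmi -U admin -P secret chassis power cycle
```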
@@ -99,7 +99,7 @@ d. multi-pod tests
 marvin integration
 ==================

-once cloudstack has been installed and the hypervisors prepared we are ready to use marvin to stitch together zones, pods, clusters and compute and storage to put together a 'cloud'. once configured - we perform a cursory health check to see if we have all systemVMs running in all zones and that built-in templates are downloaded in all zones. Subsequently we are able to launch tests on this environment
+once cloudstack has been installed and the hypervisors prepared we are ready to use marvin to stitch together zones, pods, clusters and compute and storage to put together a 'cloud'. once configured - we perform a cursory health check to see if we have all systemVMs running in all zones and that built-in templates are downloaded in all zones. Subsequently, we are able to launch tests on this environment

 Only the latest tests from git are run on the setup. This allows us to test in a pseudo-continuous fashion with a nightly build deployed on the environment. Each test run takes a few hours to finish.

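The zone/pod/cluster stitching described in that hunk is driven by a marvin config file. The sketch below shows how such a deployment is typically kicked off; the script path, config file name and nose plugin flags are assumptions from marvin's usual layout in this era of the codebase, not something the diff itself specifies.

```sh
# Build the 'cloud' described in a marvin JSON config (config name is hypothetical).
python tools/marvin/marvin/deployDataCenter.py -i kvm-advanced-zone.cfg

# Then run the checked-out integration tests against it through marvin's nose plugin.
nosetests --with-marvin --marvin-config=kvm-advanced-zone.cfg test/integration/smoke/
```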
@@ -121,7 +121,7 @@ When jenkins triggers the job following sequence of actions occur on the test in

 3. we fetch the last successful marvin build from builds.a.o and install it within this virtualenv. installing a new marvin on each run helps us test with the latest APIs available.

-4. we fetch the latest version of the driver script from github:cloud-autodeploy. fetching the latest allows us to make adjustments to the infra without having to copy scripts in to the test infrastrcuture.
+4. we fetch the latest version of the driver script from github:cloud-autodeploy. fetching the latest allows us to make adjustments to the infra without having to copy scripts in to the test infrastructure.

 5. based on the hypervisor chosen we choose a profile for cobbler to reimage the hosts in the infrastructure. if xen is chosen we bring up the profile of the latest xen kickstart available in cobbler. currently - this is at xen 6.0.2. if kvm is chosen we can pick between ubuntu and rhel based host OS kickstarts.
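Step 3 amounts to installing the freshly built marvin tarball into the per-run virtualenv it mentions. A rough equivalent is sketched below; the artifact URL and tarball name are placeholders, since the real job resolves the last successful build from builds.a.o.

```sh
# Fresh virtualenv per run, so each job tests against a clean marvin install.
virtualenv /tmp/marvin-env
. /tmp/marvin-env/bin/activate

# Placeholder URL and file name standing in for the last successful builds.a.o artifact.
wget http://builds.a.o/path/to/Marvin-latest.tar.gz
pip install Marvin-latest.tar.gz
```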
0 commit comments