Commit 5dc8cab

Fixed missing image titles
Image titles and/or alt text are needed for accessibility. All images were missing titles. Fixed that.
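The change pattern, sketched on one representative image macro from the guide (attribute syntax per standard AsciiDoc; the comments are editorial, not part of the commit):

```asciidoc
// Before: the file name doubled as the first positional (alt text) attribute,
// and no title was set
image::portus-registry.png[portus-registry.png,scaledwidth=95%]

// After: a descriptive title attribute, rendered as the figure caption
image::portus-registry.png[title="Portus Registry",scaledwidth=95%]
```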
1 parent d2ec78b commit 5dc8cab

File tree

1 file changed: +23 −23 lines changed
adoc/CaaSP40_DI3X_Install_Guide.adoc

Lines changed: 23 additions & 23 deletions
@@ -683,7 +683,7 @@ docker-compose up -d
 
 Finally, you can log in to Portus and configure the registry.
 
-image::portus-registry.png[portus-registry.png,scaledwidth=95%]
+image::portus-registry.png[title="Portus Registry",scaledwidth=95%]
 
 
 ==== Installing and configuring a secure private registry using SUSE Linux Enterprise Server and the SLE-Container-Module
@@ -760,27 +760,27 @@ and already mirroring these products:
 
 ** link:https://scc.suse.com[SCC]
 +
-image::scc-sle.png[scc-sle,scaledwidth=99%]
+image::scc-sle.png[title="SUSE Customer Center: SUSE Linux Enterprise Server Overview",scaledwidth=99%]
 
 ++++
 <?pdfpagebreak?>
 ++++
 
 ** link:https://documentation.suse.com/sles/12-SP4/html/SLES-all/book-smt.html[SMT]
 +
-image::scc-ses.png[scc-ses,scaledwidth=99%]
+image::scc-ses.png[title="SUSE Customer Center: SUSE Enterprise Storage Overview",scaledwidth=99%]
 
 
 * You should already have set up a DNS zone. In our example, where all Data Hub
 components are in the same DNS zone and the same subnet, it should look like:
 +
-image::dns.png[dns,scaledwidth=95%]
+image::dns.png[title="DNS Zone",scaledwidth=95%]
 
 
 * To be as efficient as possible when using interactive shell-scripted infrastructure deployment,
 we advise to use an advanced terminal client or multiplexer which will allow you to address multiple shells at once:
 +
-image::multi-s-virtinstall.png[multi-s-virtinstall,scaledwidth=95%]
+image::multi-s-virtinstall.png[title="Multiple Shells View",scaledwidth=95%]
 
 
 Now you can create the virtual machines.
@@ -804,11 +804,11 @@ Now you can create the virtual machines.
 ----
 
 +
-image::multi-s-smt.png[multi-s-smt.png,scaledwidth=95%]
+image::multi-s-smt.png[title="Multiple SMT View",scaledwidth=95%]
 
 * Select the SUSE Enterprise Storage 5 extension:
 +
-image::multi-s-addon.png[multi-s-addon.png,scaledwidth=95%]
+image::multi-s-addon.png[title="Selecting SES Extensions on all Nodes",scaledwidth=95%]
 
 * On the hypervisor, you should also be able to route or bridge your upcoming
 SUSE Enterprise Storage 5.5 network segment. In our example, for simplicity, we are
@@ -818,28 +818,28 @@ using the same bridge and network address as our CaaSP cluster:
 * In our example below, each node is powered by 16 GiB of RAM, 4 vCPUs,
 40 GiB for the root disk, and 4 × 20 GiB OSDB disks:
 +
-image::multi-s-default.png[multi-s-default.png,scaledwidth=95%]
+image::multi-s-default.png[title="Selecting System Role on all Nodes",scaledwidth=95%]
 
 * NTP must be configured on each node:
 +
-image::multi-s-ntp.png[multi-s-ntp.png,scaledwidth=95%]
+image::multi-s-ntp.png[title="Configuring NTP on all Nodes",scaledwidth=95%]
 
 * Deselect "AppArmor" and the unnecessary "X" and "GNOME" patterns, but select
 the "SUSE Enterprise Storage" pattern:
 +
-image::multi-s-patterns.png[multi-s-patterns.png,scaledwidth=95%]
+image::multi-s-patterns.png[title="Selecting Patterns on all Nodes",scaledwidth=95%]
 
 * De-activate the firewall on the nodes.
 
 * Start the installation on all nodes:
 +
-image::multi-s-install.png[multi-s-install.png,scaledwidth=95%]
+image::multi-s-install.png[title="Starting Installation on all Nodes",scaledwidth=95%]
 
 * When the nodes have rebooted, log in and finish the network/host name and NTP
 configurations, so that `hostname -f` returns the FQDN of the nodes, and
 `ntpq -p` returns a stratum less than 16:
 +
-image::multi-s-hostname-ntp.png[multi-s-hostname-ntp.png,scaledwidth=95%]
+image::multi-s-hostname-ntp.png[title="Finishing NTP configuration",scaledwidth=95%]
 
 * Using `ssh-keygen` then `ssh-copy-id`, spread your SUSE Enterprise Storage Admin
 node `ssh` public key to all other nodes.
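The stratum check this hunk describes can be scripted: in `ntpq -p` output the selected peer is marked `*` and the stratum is the `st` column. A sketch, with a made-up sample peer table standing in for a live `ntpq -p` call (server name and timings are illustrative):

```shell
# Sample `ntpq -p` output; on a real node use: ntpq -p | awk '$1 ~ /^\*/ {print $3}'
sample='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp1.example.net .GPS.            1 u   33   64  377    0.512    0.021   0.010'

# Pick the stratum (3rd column) of the selected peer (leading '*')
stratum=$(printf '%s\n' "$sample" | awk '$1 ~ /^\*/ {print $3}')

if [ "$stratum" -lt 16 ]; then
    echo "synchronized (stratum $stratum)"
else
    echo "not synchronized" >&2
fi
```

A stratum of 16 is NTP's convention for "unsynchronized", which is why the guide asks for anything below it.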
@@ -854,11 +854,11 @@ install `salt-master` and `deepsea`.
 
 * Then, restart `salt-minion` on all nodes, and restart `salt-master` on the Admin Node:
 +
-image::multi-s-salt-install-restart.png[multi-s-salt-install-restart.png,scaledwidth=95%]
+image::multi-s-salt-install-restart.png[title="Installing Salt on all Nodes",scaledwidth=95%]
 
 * Accept the related pending Salt keys:
 +
-image::salt-key.png[salt-key.png,scaledwidth=30%]
+image::salt-key.png[title="Accepting Salt Keys",scaledwidth=30%]
 
 * Verify that `/srv/pillar/ceph/master_minion.sls` points to your Admin Node.
 In our example, it contains the FQDN of our `salt-master`:
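For reference, that pillar file is a single key/value pair; assuming the Admin Node FQDN that appears elsewhere in this guide (`ses55-admin.suse-sap.net`), it would look roughly like:

```yaml
# /srv/pillar/ceph/master_minion.sls — sketch; the key name follows DeepSea's convention
master_minion: ses55-admin.suse-sap.net
```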
@@ -871,15 +871,15 @@ In our example, it contains the FQDN of our `salt-master`:
 # salt-run state.orch ceph.stage.0
 ----
 +
-image::ceph-stage-0.png[ceph-stage-0.png,scaledwidth=90%]
+image::ceph-stage-0.png[title="Preparing the Cluster",scaledwidth=90%]
 
 * Collect information about the nodes:
 +
 ----
 # salt-run state.orch ceph.stage.1
 ----
 +
-image::ceph-stage-1.png[ceph-stage-1.png,scaledwidth=90%]
+image::ceph-stage-1.png[title="Collecting Information about Nodes",scaledwidth=90%]
 
 * Adapt the file `/srv/pillar/ceph/proposals/policy.cfg` to your needs. In our
 example, where the only deployed service is OpenAttic, it contains the following:
@@ -917,15 +917,15 @@ role-openattic/cluster/ses55-admin.suse-sap.net.sls
 # salt-run state.orch ceph.stage.2
 ----
 +
-image::ceph-stage-2.png[ceph-stage-2.png,scaledwidth=90%]
+image::ceph-stage-2.png[title="Preparing Configuration Files",scaledwidth=90%]
 
 * You can now safely deploy your configuration:
 +
 ----
 # salt-run state.orch ceph.stage.3
 ----
 +
-image::ceph-stage-3.png[ceph-stage-3.png,scaledwidth=90%]
+image::ceph-stage-3.png[title="Deploying Configuration Files",scaledwidth=90%]
 
 * When Stage 3 has completed successfully, check the cluster's health to
 ensure that everything is running properly:
@@ -934,7 +934,7 @@ ensure that everything is running properly:
 # ceph -s
 ----
 +
-image::ceph-health.png[ceph-health.png,scaledwidth=90%]
+image::ceph-health.png[title="Checking Cluster Health",scaledwidth=90%]
 
 * To get the benefits of the OpenAttic WebUI, you must now initiate
 `ceph.stage.4`, which will install the OpenAttic service:
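The health check this hunk refers to can be gated in a script by looking for `HEALTH_OK` in the status output. A sketch, with a trimmed sample standing in for a live `ceph -s` call:

```shell
# `status` stands in for the output of `ceph -s`; only the health line matters here
status='  cluster:
    health: HEALTH_OK'

# Proceed to the next stage only when the cluster reports HEALTH_OK
if printf '%s\n' "$status" | grep -q 'HEALTH_OK'; then
    echo "cluster healthy"
else
    echo "cluster needs attention" >&2
fi
```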
@@ -943,22 +943,22 @@ image::ceph-health.png[ceph-health.png,scaledwidth=90%]
 # salt-run state.orch ceph.stage.4
 ----
 +
-image::ceph-stage-4.png[ceph-stage-4.png,scaledwidth=90%]
+image::ceph-stage-4.png[title="Installing OpenAttic service",scaledwidth=90%]
 
 
 * You can now manage your cluster through the WebUI:
 +
-image::openattic-dash.png[openattic-dash.png,scaledwidth=90%]
+image::openattic-dash.png[title="SUSE Enterprise Storage WebUI",scaledwidth=90%]
 
 
 * To provide a Data Hub RBD device, you first need to create a related pool:
 
-image::openattic-pool.png[openattic-pool.png,scaledwidth=90%]
+image::openattic-pool.png[title="Creating Ceph Pool",scaledwidth=90%]
 
 
 * Then provide access to this pool through an RBD device:
 +
-image::openattic-rbd.png[openattic-rbd.png,scaledwidth=90%]
+image::openattic-rbd.png[title="Accessing Ceph Pool",scaledwidth=90%]
 
 
 You can now go to <<prerequisites_caasp_cluster>> and follow the prerequisites
