Networking nishant #90

Open
wants to merge 6 commits into main

3 changes: 3 additions & 0 deletions .idea/.gitignore
12 changes: 12 additions & 0 deletions .idea/DevOpsBootcampUPES.iml
6 changes: 6 additions & 0 deletions .idea/inspectionProfiles/profiles_settings.xml
4 changes: 4 additions & 0 deletions .idea/misc.xml
8 changes: 8 additions & 0 deletions .idea/modules.xml
6 changes: 6 additions & 0 deletions .idea/vcs.xml

31 changes: 31 additions & 0 deletions k8smanifests/2048.yaml
@@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 2048-game
spec:
  replicas: 2
  selector:
    matchLabels:
      app: 2048-game
  template:
    metadata:
      labels:
        app: 2048-game
    spec:
      containers:
      - name: 2048-game
        image: alexwhen/docker-2048
        ports:
        - containerPort: 5858
---
apiVersion: v1
kind: Service
metadata:
  name: 2048-service
spec:
  selector:
    app: 2048-game
  ports:
  - protocol: TCP
    port: 5858
    targetPort: 5858
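
A quick way to sanity-check this manifest after applying it (a sketch; if the upstream docker-2048 image listens on port 80 rather than 5858, adjust containerPort and targetPort to match):

kubectl apply -f k8smanifests/2048.yaml
kubectl get deployment 2048-game
kubectl get service 2048-service
kubectl port-forward service/2048-service 5858:5858   # then browse to http://localhost:5858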
55 changes: 55 additions & 0 deletions k8smanifests/grafana.yaml
@@ -0,0 +1,55 @@
# Distribute credentials securely using Secrets
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana
        ports:
        - containerPort: 3000
        env:
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_SECURITY_ADMIN_USER
          valueFrom:
            secretKeyRef:
              name: grafana-creds
              key: username
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana-creds
              key: password
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  selector:
    app: grafana
  ports:
  - port: 3000
    targetPort: 3000
---
apiVersion: v1
kind: Secret
metadata:
  name: grafana-creds
data:
  username: # base64-encoded username, generated with: echo -n "name" | base64
  password: # base64-encoded password, generated with: echo -n "password" | base64
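
The Secret above ships with empty data values; one way to fill them in (a sketch using the placeholder values "admin" and "s3cr3t", which you should replace) is to generate the base64 strings yourself, or let kubectl do the encoding for you:

echo -n "admin" | base64    # -> YWRtaW4=, goes under data.username
echo -n "s3cr3t" | base64   # -> czNjcjN0, goes under data.password

# or skip manual base64 entirely and create the Secret directly:
kubectl create secret generic grafana-creds \
  --from-literal=username=admin \
  --from-literal=password=s3cr3t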
71 changes: 71 additions & 0 deletions k8smanifests/grafana_statefullset.yaml
@@ -0,0 +1,71 @@
# Before you apply the manifest below, delete your existing Grafana Deployment:
#   kubectl delete deployment grafana

# Before we start, we need to enable the EBS CSI plugin in EKS, allowing the cluster
# to create EBS volumes for individual pods (this is done only once per cluster):
#   In your EKS cluster's main page, choose the Add-ons tab.
#   Choose "Add new".
#   Select "Amazon EBS CSI Driver" for Name.
#   Attach the AmazonEBSCSIDriverPolicy permission to your cluster's node IAM role.
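# A rough AWS CLI equivalent of the console steps above (a sketch; <cluster-name> and
# <node-role-name> are placeholders for your own cluster and node IAM role):
#   aws iam attach-role-policy --role-name <node-role-name> \
#     --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
#   aws eks create-addon --cluster-name <cluster-name> --addon-name aws-ebs-csi-driver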

# The manifest below creates an EBS volume in AWS that is dedicated to storing
# Grafana data for a single pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: grafana
spec:
  replicas: 1
  serviceName: grafana-svc
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      securityContext:
        runAsUser: 472
        runAsGroup: 8020
        fsGroup: 8020
      containers:
      - name: grafana
        image: grafana/grafana
        ports:
        - name: grafana
          containerPort: 3000
        env:
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_SECURITY_ADMIN_USER
          valueFrom:
            secretKeyRef:
              name: grafana-creds
              key: username
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana-creds
              key: password
        volumeMounts:
        - name: grafana-datasources-vol
          mountPath: "/etc/grafana/provisioning/datasources"
        - name: grafana-storage
          mountPath: "/var/lib/grafana"
      volumes:
      - name: grafana-datasources-vol
        configMap:
          name: grafana-datasources
  volumeClaimTemplates:
  - metadata:
      name: grafana-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: gp2
      resources:
        requests:
          storage: 5Gi
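
The StatefulSet above references a Service named grafana-svc and a ConfigMap named grafana-datasources, neither of which is included in this PR. A minimal sketch of both follows; the datasource name and Prometheus URL are assumptions, so adjust them to whatever you actually run in the cluster.

apiVersion: v1
kind: Service
metadata:
  name: grafana-svc
spec:
  clusterIP: None   # headless Service, used by the StatefulSet for stable per-pod DNS
  selector:
    app: grafana
  ports:
  - port: 3000
    targetPort: 3000
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
    - name: Prometheus                           # assumed datasource name
      type: prometheus
      url: http://prometheus-server.default.svc  # assumed address; change to your Prometheus Service
      access: proxy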


17 changes: 17 additions & 0 deletions k8smanifests/ingress.yaml
@@ -0,0 +1,17 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: game-ingress
spec:
  rules:
  - host: nishant-2048.upes-int-devops.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: game-service
            port:
              number: 80
  ingressClassName: nginx
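
Note that the backend above points at a Service named game-service on port 80, while the Service defined in 2048.yaml is named 2048-service and exposes port 5858; one of the two needs to change before traffic can reach the game. A quick way to check the routing once an nginx ingress controller is installed (the controller address below is a placeholder):

kubectl apply -f k8smanifests/ingress.yaml
kubectl get ingress game-ingress         # ADDRESS should show the controller's load balancer
kubectl describe ingress game-ingress    # the backend column flags a missing Service
curl -H "Host: nishant-2048.upes-int-devops.com" http://<ingress-controller-address>/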
70 changes: 70 additions & 0 deletions k8smanifests/live-readprobe.yaml
@@ -0,0 +1,70 @@
# Liveness probe using an exec command
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

---
# Liveness probe using an HTTP GET request
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3

---
# Readiness probe using an exec command
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness
  name: readiness-exec   # distinct name so it can run alongside the liveness-exec pod above
spec:
  containers:
  - name: readiness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
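
To watch the probes do their work (the exec-based pods remove /tmp/healthy after about 30 seconds, so their probes are expected to start failing shortly after that):

kubectl apply -f k8smanifests/live-readprobe.yaml
kubectl describe pod liveness-exec    # Events show "Liveness probe failed" and the container restarting
kubectl get pod liveness-exec -w      # RESTARTS increments each time the kubelet restarts the container
kubectl get pod readiness-exec -w     # READY flips from 1/1 to 0/1 once the readiness probe fails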
35 changes: 35 additions & 0 deletions k8smanifests/mem-cpudemo.yaml
@@ -0,0 +1,35 @@
# cpu-demo
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    args:
    - -cpus
    - "2"
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"

---
# memory demo
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"
33 changes: 33 additions & 0 deletions k8smanifests/nginx-deployment.yaml
@@ -0,0 +1,33 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells the Deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mynginx
spec:
  selector:
    app: nginx
  ports:
  - port: 8080
    targetPort: 80

# Apply the file with: kubectl apply -f youryamlname.yaml
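
For a quick local test of the Service without a LoadBalancer or Ingress (port-forward keeps running in the foreground, so run the curl from a second terminal):

kubectl apply -f k8smanifests/nginx-deployment.yaml
kubectl get pods -l app=nginx
kubectl port-forward service/mynginx 8080:8080
curl http://localhost:8080    # should return the nginx welcome page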
14 changes: 7 additions & 7 deletions projects/bash_networking_security/SOLUTION
@@ -1,16 +1,16 @@
 Local DNS Server IP
 -------------------
-<ip-here>
+127.0.0.53
 
 Default gateway IP
 -------------------
-<ip-here>
+10.0.0.1
 
 DHCP IP allocation sys-logs
 -------------------
-<logs-here>
+Jun 19 09:42:53 ip-10-0-0-216 dhclient[377]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 3 (xid=0x13d73e2a)
+Jun 19 09:42:53 ip-10-0-0-216 dhclient[377]: DHCPOFFER of 10.0.0.216 from 10.0.0.1
+Jun 19 09:42:53 ip-10-0-0-216 dhclient[377]: DHCPREQUEST for 10.0.0.216 on eth0 to 255.255.255.255 port 67 (xid=0x2a3ed713)
+Jun 19 09:42:53 ip-10-0-0-216 dhclient[377]: DHCPACK of 10.0.0.216 from 10.0.0.1 (xid=0x13d73e2a)
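
For reference, this is the kind of command that produces each answer on an Ubuntu EC2 host (a sketch; the exact commands depend on the distribution and its logging setup):

resolvectl status            # or: cat /etc/resolv.conf  -> the systemd-resolved stub 127.0.0.53
ip route show default        # -> default via 10.0.0.1 dev eth0 ...
grep -i dhcp /var/log/syslog # dhclient lines showing the DISCOVER/OFFER/REQUEST/ACK exchange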

