Description
It would be nice to expose a running service to the internet using a Route
(in the case of OpenShift) or Ingress
(in the case of other Kubernetes distributions).
Here's how to create a Route in OpenShift, given that $NAME_OF_DEPLOYMENT is a deployment in the current namespace:

```shell
oc expose deployment $NAME_OF_DEPLOYMENT
```
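As a quick check afterwards, the generated hostname can be read back with oc; this assumes the Route ends up named after the deployment, which is the usual default:

```shell
# List Routes in the current project
oc get routes

# Print the hostname generated for the Route (assumes it is named after the deployment)
oc get route $NAME_OF_DEPLOYMENT -o jsonpath='{.spec.host}'
```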
Here's an example of an Ingress yaml in Kubernetes:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildwest-backend-ingress
spec:
  defaultBackend:
    service:
      name: wildwest-backend-app
      port:
        number: 8080
```
The values we will have to change depending on the deployment are:
- metadata.name to the name of the deployment, but with the suffix -ingress to help distinguish it
- spec.defaultBackend.service.name to the name of the deployment
- spec.defaultBackend.service.port.number to the port that the deployment exposes its service on
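For example, for a hypothetical deployment named my-backend whose service listens on port 3000 (both made up for illustration), the substituted values would look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-backend-ingress        # deployment name + "-ingress" suffix
spec:
  defaultBackend:
    service:
      name: my-backend            # name of the deployment
      port:
        number: 3000              # port the deployment exposes its service on
```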
In order to create the Ingress, you will need to run `oc apply -f ingress-definition.yaml`.
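After applying, it may be worth confirming that the Ingress exists and has picked up an address from the ingress controller (the exact output depends on the controller; the resource name matches the example above):

```shell
# Confirm the Ingress was created and inspect its backend and assigned address
oc get ingress wildwest-backend-ingress
oc describe ingress wildwest-backend-ingress
```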
A few final thoughts:
- It might also be possible to include the route or ingress in the devfile by embedding the YAML using the additional Kubernetes resources feature. For instance, this is how odo creates ServiceBindings:
```yaml
# ...
components:
- kubernetes:
    inlined: |
      apiVersion: binding.operators.coreos.com/v1alpha1
      kind: ServiceBinding
      metadata:
        creationTimestamp: null
        name: my-binding
      spec:
        application:
          group: apps
          kind: Deployment
          name: using-go-app
          version: v1
        bindAsFiles: true
        detectBindingResources: true
        services:
        - group: postgres-operator.crunchydata.com
          id: my-binding
          kind: PostgresCluster
          name: example-postgres
          namespace: default
          resource: postgresclusters
          version: v1beta1
      status:
        secret: ""
  name: my-binding
# ...
```
The benefit of this approach is that, if I understand correctly, odo will manage the lifecycle of this Kubernetes object by creating it when `odo dev` is started and deleting it when `odo dev` is stopped (see the sketch after this list for how the Ingress above might be inlined in the same way).
- A final aspect of this change is that we should alert the user that this will expose the service to the public if the cluster is capable of exposing services to the public, and require an additional confirmation in order to do this. We shouldn't try to detect if the cluster is capable of exposing public endpoints, since that is really difficult. Instead we should always show this warning.
- Not all Kubernetes clusters are capable of creating an Ingress; they require an ingress controller. On minikube, the ingress controller that is automatically set up is the nginx one. Maybe we should hide the UI option if the cluster is not OpenShift and doesn't have an ingress controller set up? (A possible detection heuristic is sketched below.)
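Building on the first point above, here is a rough sketch of how the Ingress from earlier might be inlined as an additional Kubernetes resource in the devfile. The component name is made up, and whether odo applies and cleans it up during `odo dev` exactly as described would need to be confirmed:

```yaml
# ...
components:
- name: wildwest-backend-ingress   # hypothetical component name
  kubernetes:
    inlined: |
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: wildwest-backend-ingress
      spec:
        defaultBackend:
          service:
            name: wildwest-backend-app
            port:
              number: 8080
# ...
```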
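On the last point, one possible (though only heuristic) way to check for an ingress controller on a non-OpenShift cluster is to look for an IngressClass resource; on minikube with the ingress addon enabled this shows the nginx class. This is just a sketch of the idea, not a reliable capability check:

```shell
# An empty result suggests no ingress controller is installed on the cluster
kubectl get ingressclass
```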