
Full-stack NGINX Plus, Prometheus and Fluentd with Spring Boot

This page explores the combination of NGINX Plus, Prometheus and Fluentd to support Spring Boot apps with...

  1. Log file tracking and tracing
  2. Health checking and alert management, with alerts forwarded to an incident-management tool such as HPSM.
  3. Log aggregation and pipelining with Fluentd, forwarding to Splunk through the Splunk buffered output plugin, which can send data to a Splunk HTTP Event Collector or to Splunk Enterprise via TCP.
  4. OIDC SSO with NGINX Plus, integrated with an OIDC identity provider such as Keycloak.

Log tracking and tracing with Spring Cloud Sleuth

We’ve all had the unfortunate experience of trying to diagnose a problem with a scheduled task, a multi-threaded operation, or a complex web request. Often, even when there is logging, it is hard to tell which actions need to be correlated to reconstruct a single request. This can make diagnosing a complex action very difficult or even impossible, often resulting in workarounds like passing a unique id to each method in the request to identify the logs.

Enter Sleuth: it instruments your logs with the following information, making it possible to query and aggregate all log entries belonging to one business transaction once they arrive in Splunk through Fluentd's output plugin. A minimal configuration sketch follows the list below.

  • Application name – This is the name we set in the properties file and can be used to aggregate logs from multiple instances of the same application.
  • TraceId – This is an id that is assigned to a single request, job, or action. For example, each unique user-initiated web request will have its own traceId.
  • SpanId – Tracks a unit of work. Think of a request that consists of multiple steps. Each step could have its own spanId and be tracked individually. By default, an application flow starts with the same TraceId and SpanId.
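
A minimal sketch of the Spring Boot side, assuming spring-cloud-starter-sleuth is on the classpath; the application name and sampler setting below are illustrative, and the sampler property name varies with the Sleuth version (older releases used spring.sleuth.sampler.percentage):

    # application.yml -- illustrative values for Sleuth-instrumented logging
    spring:
      application:
        name: demo-service        # shows up as the application name in every log line
      sleuth:
        sampler:
          probability: 1.0        # export every trace; lower this in production

With this in place a log line looks roughly like INFO [demo-service,5c9af1...,8e7bd2...,true] ..., where the bracketed fields are the application name, traceId, spanId and export flag that Splunk queries can filter on.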

 

Prometheus is the most commonly deployed monitoring and alerting system in Kubernetes environments. Release 1.3.0 adds support for a lightweight Prometheus exporter which publishes metrics from the NGINX Plus API for consumption by Prometheus.
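
As a hedged sketch, a Prometheus scrape job for that exporter could look like the excerpt below; the job name, service address and port are assumptions that depend on how the exporter is deployed in your cluster:

    # prometheus.yml excerpt -- scrape the NGINX Plus Prometheus exporter
    # (the target address and port are placeholders for this playground)
    scrape_configs:
      - job_name: nginx-plus
        scrape_interval: 15s
        static_configs:
          - targets: ['nginx-plus-exporter.nginx.svc.cluster.local:9113']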

Study this document on how to configure Prometheus Alertmanager to route alerts to an email receiver.
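
As a starting point, a minimal Alertmanager configuration with an email receiver might look like this; the SMTP host, credentials and addresses are placeholders:

    # alertmanager.yml -- route every alert to a single email receiver (placeholder values)
    global:
      smtp_smarthost: 'smtp.example.com:587'
      smtp_from: 'alertmanager@example.com'
      smtp_auth_username: 'alertmanager@example.com'
      smtp_auth_password: 'changeme'
    route:
      receiver: team-email
      group_by: ['alertname', 'job']
      repeat_interval: 4h
    receivers:
      - name: team-email
        email_configs:
          - to: 'oncall@example.com'
            send_resolved: true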

Fluentd is an open source tool that focuses exclusively on log collection, or log aggregation. It gathers log data from various data sources and makes them available to multiple endpoints. Fluentd aims to create a unified logging layer. It is source and destination agnostic and is able to integrate with tools and components of any kind.

The helm charts are here and the log forwarder is here.

This is a post I studied on fluentd.

To configure Fluentd to forward logs to Splunk using the Fluentd secure forward output plugin, read this post.
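
A hedged sketch of what that forwarding configuration could look like when embedded in a ConfigMap consumed by the Fluentd chart; the secure_forward plugin has to be present in the Fluentd image, and the shared key, hostnames and port below are placeholders:

    # fluentd output ConfigMap excerpt -- forward all records with the
    # secure_forward output plugin (placeholder host and shared key)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: fluentd-output
      namespace: fluentd
    data:
      output.conf: |
        <match **>
          @type secure_forward
          self_hostname fluentd-client
          shared_key supersecret
          secure yes
          <server>
            host splunk-forwarder.example.com
            port 24284
          </server>
        </match>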

Check all the fluentd plugins here

 

Set up my playground... start with installing Helm and then boot up all tools on that cluster.

The next sections show how I customized the Helm charts of all the add-ons.

Installing Prometheus with HELM

  • helm fetch --untar stable/prometheus
  • cp values.yaml /springboot-helm-kubernetes/prometheus/myvalues.yaml
  • vim /springboot-helm-kubernetes/prometheus/myvalues.yaml and bring in the Alertmanager changes (see the values sketch after this list)
  • helm install --name prometheus --values myvalues.yaml --namespace prometheus . --dry-run
  • helm install --name prometheus --values myvalues.yaml --namespace prometheus .
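
For reference, the Alertmanager changes end up in myvalues.yaml; in the stable/prometheus chart of that era the receiver configuration sits under an alertmanagerFiles key, but treat the exact key names as an assumption to verify against your chart version:

    # myvalues.yaml excerpt for stable/prometheus -- key names may differ per chart version
    alertmanager:
      enabled: true
    alertmanagerFiles:
      alertmanager.yml:
        # paste the global/route/receivers block from the email receiver sketch above
        route:
          receiver: team-email
        receivers:
          - name: team-email
            email_configs:
              - to: 'oncall@example.com'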

Installing Fluentd with HELM

  • helm fetch --untar stable/fluentd
  • cp values.yaml /springboot-helm-kubernetes/fluentd/myvalues.yaml
  • vim  /springboot-helm-kubernetes/fluentd/myvalues.yaml
  • helm install --name fluentd --values myvalues.yaml --namespace fluentd . --dry-run
  • helm install --name fluentd --values myvalues.yaml --namespace fluentd .

Installing Jenkins with HELM

  • helm fetch --untar stable/jenkins
  • cp values.yaml /springboot-helm-kubernetes/jenkins/myvalues.yaml
  • vim  /springboot-helm-kubernetes/jenkins/myvalues.yaml
  • helm install --name jenkins --values myvalues.yaml --namespace jenkins . --dry-run
  • helm install --name jenkins --values myvalues.yaml --namespace jenkins .
  • Get your 'admin' user password by running
  • printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
  • Get the Jenkins URL to visit by running these commands in the same shell (NOTE: it may take a few minutes for the LoadBalancer IP to be available; watch the status of the service by running 'kubectl get svc --namespace jenkins -w jenkins'):
  • export SERVICE_IP=$(kubectl get svc --namespace jenkins jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
  • echo http://$SERVICE_IP:8080/login
  • Login with the password from previous steps and the username: admin
  • Configure Access to Multiple Clusters

  • Install Kubernetes or have access to a cluster

  • Set Up a Jenkins CI/CD Pipeline with Kubernetes

Installing Nginx Ingress controller with HELM

  • helm fetch --untar stable/nginx-ingress
  • cp values.yaml /springboot-helm-kubernetes/nginx-ingress/myvalues.yaml
  • vim /springboot-helm-kubernetes/nginx-ingress/myvalues.yaml
  • helm install --name nginx --values myvalues.yaml --namespace nginx . --dry-run
  • helm install --name nginx --values myvalues.yaml --namespace nginx .
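
Once the controller is running, traffic is routed to the Spring Boot demo app with an Ingress resource along these lines; the host name, service name and port are placeholders for this playground:

    # demo-ingress.yaml -- route requests through the NGINX ingress controller
    # to the demo service (host, service name and port are placeholders)
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: demo
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
        - host: demo.example.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: demo
                  servicePort: 8080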

Automatically install all add-ons mentioned above

  • cd /springboot-helm-kubernetes/cluster
  • chmod 777 *
  • ./install-helm.sh
  • kubectl get pods -w -n kube-system (wait until tiller becomes available before going on)
  • ./install-k8s-addons.sh

Just a few HELM commands - Searching & Install

  • helm search mysql
  • helm inspect stable/mariadb
  • helm install stable/mariadb --name mydb
  • helm status mydb | grep Persist
  • kubectl run mydb-mariadb-client --rm --tty -i --image bitnami/mariadb --command -- mysql -h mydb-mariadb
  • helm inspect values stable/mariadb
  • helm install -f config.yaml stable/mariadb
  • helm install -f config.yaml --set mariadbRootPassword=whatever stable/mariadb

Upgrading and rolling back releases

  • helm get values mydb
  • helm upgrade mydb --set mariadbDatabase=anotherdatabase
  • helm get values mydb
  • helm history mydb
  • helm rollback mydb 3
  • helm history mydb
  • helm list
  • helm delete mydb
  • helm list --all
  • helm delete --purge mydb
  • helm install mychart --set subchart2.enabled=false
  • helm install --values=custom-values.yaml mychart
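
For the last two commands, custom-values.yaml is just an ordinary values override file; a trivial example matching the --set flag above (the image override is purely illustrative) could be:

    # custom-values.yaml -- equivalent of --set subchart2.enabled=false,
    # plus an illustrative image tag override
    subchart2:
      enabled: false
    image:
      tag: "1.0.1"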

Centralized logging for Containers with Springboot apps

With Docker we must think about shipping logs to a location outside the ephemeral container filesystem.

Centralized logging in Kubernetes PKS

Application and systems logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.

SpringBoot Monitoring with Micrometer, Prometheus and Grafana

Micrometer provides a simple facade over the instrumentation clients for the most popular monitoring systems, allowing you to instrument your JVM-based application code without vendor lock-in. Think SLF4J, but for metrics. By abstracting away and supporting multiple monitoring systems under common semantics, the tool makes switching between different monitoring platforms quite easy. It supports the following monitoring systems: Atlas, Datadog, Graphite, Ganglia, Influx, JMX and Prometheus. Prometheus is an in-memory dimensional time series database with a simple built-in UI, a custom query language, and math operations. Prometheus is designed to operate on a pull model, scraping metrics from application instances periodically based on service discovery.
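
For a Spring Boot 2 application the wiring is mostly configuration: add the spring-boot-starter-actuator and micrometer-registry-prometheus dependencies and expose the endpoint, roughly as sketched below (property names assume Spring Boot 2):

    # application.yml excerpt -- expose /actuator/prometheus for scraping
    # (assumes actuator and micrometer-registry-prometheus are on the classpath)
    management:
      endpoints:
        web:
          exposure:
            include: health,info,prometheus
      metrics:
        tags:
          application: demo-service   # common tag added to every published metric

Prometheus then scrapes /actuator/prometheus on each instance; with the stable/prometheus chart's default scrape configuration this can be driven by pod annotations such as prometheus.io/scrape: "true" and prometheus.io/path: "/actuator/prometheus".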

Kubernetes doesn’t specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: Stackdriver Logging for use with Google Cloud Platform, and Elasticsearch. You can find more information and instructions in the dedicated documents. Both use fluentd with custom configuration as an agent on the node.
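
As an illustration of that node-agent pattern (not the exact manifests those add-ons ship), a Fluentd DaemonSet mounts the node's log directories so one agent per node can tail every container's stdout; the image tag and namespace are placeholders:

    # fluentd-daemonset.yaml excerpt -- one Fluentd agent per node, mounting the
    # host paths where the container runtime writes stdout/stderr logs
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd
      namespace: logging
    spec:
      selector:
        matchLabels:
          app: fluentd
      template:
        metadata:
          labels:
            app: fluentd
        spec:
          containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-forward-1  # placeholder tag
              volumeMounts:
                - name: varlog
                  mountPath: /var/log
                - name: containers
                  mountPath: /var/lib/docker/containers
                  readOnly: true
          volumes:
            - name: varlog
              hostPath:
                path: /var/log
            - name: containers
              hostPath:
                path: /var/lib/docker/containers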

Managing charts

  • helm create mychart
  • helm package mychart
  • helm create mychart --starter mystarter (located at $HELM_HOME/starters)
  • helm dep up mychart (pulls all dependent charts listed in requirements.yaml into the charts subdirectory; see the sketch after this list)
  • helm template mychart
  • helm lint mychart
  • helm install mychart --name=productpage
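
The dependency pull reads the chart's requirements.yaml (the Helm 2 convention); a minimal sketch with a hypothetical mariadb dependency:

    # mychart/requirements.yaml -- Helm 2 dependency list read by helm dep up
    # (the mariadb entry is only an example)
    dependencies:
      - name: mariadb
        version: 6.x.x
        repository: https://kubernetes-charts.storage.googleapis.com
        condition: mariadb.enabled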

Helm documentation

Debugging the demo if things go wrong

Read about kubernetes pod debugging

  • https://www.katacoda.com/courses/kubernetes/helm-package-manager

  • git clone https://github.com/agilesolutions/springboot-helm-kubernetes.git

  • cd springboot-helm-kubernetes/charts
  • helm inspect demo

  • kubectl run demo -ti  --image=agilesolutions/demo:latest --command -- /bin/sh

  • kubectl delete deploy/demo
  • kubectl run demo --image=agilesolutions/demo:latest --replicas=1

  • kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
  • If there are multiple containers in the pod, use: -c <container name>
  • kubectl attach dd-777989546f-m48xq -c <container name> -i -t
  • kubectl run demo --rm -i -t --image=agilesolutions/demo:latest --command -- /bin/sh
  • kubectl expose deployment demo --external-ip="172.17.0.20" --port=8000 --target-port=80
  • kubectl get pods

  • kubectl logs -f xxx

  • kubectl logs --previous xxx

  • kubectl exec xxx -- ls /tmp

  • kubectl exec xxx -- cat /tmp/spring.log

  • kubectl exec -ti xxx -- /bin/sh

Install Nginx

read this on nginx controller

https://www.linode.com/docs/applications/containers/kubernetes/how-to-deploy-nginx-on-a-kubernetes-cluster/

  • vi /etc/hostname
  • kubectl create deployment nginx --image=nginx
  • kubectl describe deployment nginx
  • kubectl create service nodeport nginx --tcp=80:80
  • kubectl get svc
  • Create a Service object that exposes the deployment:
    kubectl expose deployment demo --type=NodePort --name=demo-service

    kubectl describe services demo-service


    Example output (the field values below come from the Kubernetes documentation sample):

    Name: example-service
    Namespace: default
    Labels: run=load-balancer-example
    Annotations: <none>
    Selector: run=load-balancer-example
    Type: NodePort
    IP: 10.32.0.16
    Port: <unset> 8080/TCP
    TargetPort: 8080/TCP
    NodePort: <unset> 31496/TCP
    Endpoints: 10.200.1.4:8080,10.200.2.5:8080
    Session Affinity: None
    Events: <none>


    curl http://<public-node-ip>:<node-port>

    https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
    https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/


    https://stackoverflow.com/questions/38965003/expose-existing-and-deployed-kubernetes-service-via-loadbalancer
