
The ultimate integration

After 37 years in IT, having experienced many technologies and methodologies, I feel that we have slowly come to a point where development and operational IT processes have reached full end-to-end integration. Kubernetes is the de facto standard for managing containers, distributed applications, and virtual infrastructure. While the ecosystem is huge, it needs tooling to provide us with development, provisioning, and management facilities. Jenkins X, Helm, and Istio are getting us close to what we need for a full end-to-end DevOps integration. It seems that we are heading toward a next-generation standard DevOps solution resulting from the partnership between CloudBees, Atos, and Google, aiming at providing customers with a complete DevOps solution running on Google Cloud Platform. Atos expects that, for the foreseeable future, most of their clients will be trying to meld some degree of DevOps with existing ITIL-based approaches to IT service management (ITSM). The alliance with Google will go a long way toward accelerating that transition because it will allow Atos to deliver access to a CI/CD platform based on a consumption model.

What is Cloud Native computing?

Almost everything you read here is closely related to Cloud Native software development and computing. Let's first get the definition right.

"Cloud-native technologies are used to develop applications built as services (MSA), packaged in containers, deployed and managed on elastic infrastructure (like Kubernetes) through agile DevOps processes and continuous delivery workflows".

Read about the 10 KEY ATTRIBUTES OF CLOUD-NATIVE APPLICATIONS

DevOps, about Culture and Talent (Developer oriented Innovation Culture)

The problem is that many organizations aren't doing a good job executing software delivery. Digital innovators should treat technology as a business asset; it is an important component of their business strategy, and modern software correlates with business growth. A good culture embraces developers, making them part of the business process, allowing them to bring their ideas forward and letting different ideas bubble up. The people working on the front lines of your software delivery process know what is wrong; they recognize the issues that are happening. Encourage developers to have some power in the decision-making process. About talent: hire good talent, but remember that you can't hire good talent if you don't have a good culture. (Chris Condo, senior analyst at Forrester, at CloudBees Days San Francisco 2019)

In order to fully exploit the microservices promise, the technology must be complemented by an appropriate architectural pattern that maximizes the potential benefits of the approach. Command Query Responsibility Segregation (CQRS) is one of those patterns, and probably the most relevant.


  1. What is Command Query Responsibility Segregation (CQRS)?
  2. IBM build app CQRS
  3. CQRS github example
  4. IBM Springboot 
  5. Github example IBMMQ Spring Rabbit
  6. Distributed Sagas for Microservices

CQRS with Axon

CQRS by itself is a very simple pattern. It only describes that the component of an application that processes commands should be separated from the component that processes queries. Although this separation is very simple in itself, it provides a number of very powerful features when combined with other patterns. Axon provides the building blocks that make it easier to implement the different patterns that can be used in combination with CQRS.
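As a minimal sketch of the command side, assuming Axon Framework 4 with its Spring integration, and hypothetical CreateOrderCommand/OrderCreatedEvent classes (the command would carry the aggregate id annotated with @TargetAggregateIdentifier), an event-sourced aggregate could look like this:

```java
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.spring.stereotype.Aggregate;

// Command side: an event-sourced aggregate that validates commands and
// applies events. The query side (not shown) listens to the same events
// and maintains its own read model.
@Aggregate
public class OrderAggregate {

    @AggregateIdentifier
    private String orderId;

    protected OrderAggregate() {
        // Required by Axon to reconstruct the aggregate from its events
    }

    @CommandHandler
    public OrderAggregate(CreateOrderCommand command) {
        // Business validation happens here; state changes are expressed as events
        AggregateLifecycle.apply(new OrderCreatedEvent(command.getOrderId()));
    }

    @EventSourcingHandler
    public void on(OrderCreatedEvent event) {
        // State changes only happen in event sourcing handlers
        this.orderId = event.getOrderId();
    }
}
```

The separation is visible in the code: the command handler decides, the event sourcing handler mutates, and query components are free to build whatever read model suits them.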

The diagram below shows an example of an extended layout of a CQRS-based event driven architecture. The UI component, displayed on the left, interacts with the rest of the application in two ways: it sends commands to the application (shown in the top section), and it queries the application for information (shown in the bottom section).

Saga Pattern Implementation with Axon and Spring Boot

The Saga pattern is a direct result of the database-per-service pattern. In the database-per-service pattern, each service has its own database; in other words, each service is responsible only for its own data. This leads to a tricky situation. Some business transactions require data from multiple services, and such transactions may also need to update or process data across services. Therefore, a mechanism to handle data consistency across multiple services is required. A sketch of such a saga with Axon follows; the links below go deeper.
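A minimal sketch of an Axon saga, assuming Axon Framework 4 with Spring Boot and hypothetical event/command classes (OrderCreatedEvent, ReserveStockCommand, and so on) that all carry an orderId property used for association:

```java
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.modelling.saga.EndSaga;
import org.axonframework.modelling.saga.SagaEventHandler;
import org.axonframework.modelling.saga.StartSaga;
import org.axonframework.spring.stereotype.Saga;
import org.springframework.beans.factory.annotation.Autowired;

// Coordinates a business transaction that spans multiple services,
// issuing compensating commands when a step fails.
@Saga
public class OrderSaga {

    @Autowired
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "orderId")
    public void on(OrderCreatedEvent event) {
        // First local transaction succeeded; trigger the next service
        commandGateway.send(new ReserveStockCommand(event.getOrderId()));
    }

    @SagaEventHandler(associationProperty = "orderId")
    public void on(StockReservationFailedEvent event) {
        // Compensating action: undo the first step
        commandGateway.send(new CancelOrderCommand(event.getOrderId()));
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "orderId")
    public void on(OrderCompletedEvent event) {
        // All steps are done; the saga instance ends here
    }
}
```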

  1. part 1
  2. Spring Boot Microservices – Fastest Production Ready Microservices
  3. About Saga 
  4. Saga pattern
  5. Axon Apache
  6. Saga pattern with Springboot and Active MQ


XP development, back to the practice of software development

The general characteristics where XP is appropriate were described by Don Wells on www.extremeprogramming.org:

  • Dynamically changing software requirements
  • Risks caused by fixed time projects using new technology
  • Small, co-located extended development team
  • The technology you are using allows for automated unit and functional tests

Excellent read

About planning

OpenTracing API

With microservices now often deployed in separate containers, it became obvious that we need a way to trace transactions through the various microservice layers, from the client all the way down to queues, storage, calls to external services, etc. This created renewed interest in transaction tracing, which, although not new, has re-emerged as the third pillar of observability.
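A minimal sketch of what this looks like in code with the OpenTracing Java API (method and tag names are placeholders; a concrete tracer such as Jaeger would need to be registered with the GlobalTracer at startup):

```java
import io.opentracing.Scope;
import io.opentracing.Span;
import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

public class OrderService {

    public void processOrder(String orderId) {
        Tracer tracer = GlobalTracer.get();
        // Start a span; if an upstream span is active it becomes the parent,
        // so the transaction can be followed across service boundaries.
        Span span = tracer.buildSpan("process-order").start();
        try (Scope scope = tracer.activateSpan(span)) {
            span.setTag("order.id", orderId);
            // ... call downstream services, queues, storage ...
        } finally {
            span.finish();
        }
    }
}
```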

Spring 5 WebClient

Prior to Spring 5, there was RestTemplate for client-side HTTP access. RestTemplate, which is part of the Spring MVC project, enables communication with HTTP servers and enforces RESTful principles.

Spring Framework 5 introduces WebClient, a component in the new Web Reactive framework that helps build reactive and non-blocking web applications. Simply put, WebClient is an interface representing the main entry point for performing web requests.

It has been created as part of the Spring Web Reactive module and will replace the classic RestTemplate in these scenarios. The new client is a reactive, non-blocking solution that works over the HTTP/1.1 protocol.

Spring has officially stated that they will deprecate RestTemplate (as stated in the RestTemplate API) in the future, so use WebClient if you want to be as future-proof as possible.

NOTE: As of 5.0, the non-blocking, reactive org.springframework.web.reactive.client.WebClient offers a modern alternative to the RestTemplate with efficient support for both sync and async, as well as streaming scenarios. The RestTemplate will be deprecated in a future version and will not have major new features added going forward. See the WebClient section of the Spring Framework reference documentation for more details and example code.
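A minimal sketch of the WebClient API (the base URL, path, and class name are placeholders):

```java
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class UserClient {

    // A WebClient instance is immutable and thread-safe, so it can be shared
    private final WebClient webClient = WebClient.create("http://localhost:8080");

    public Mono<String> getUser(String id) {
        // Non-blocking: returns immediately with a Mono that emits
        // the response body once it arrives
        return webClient.get()
                .uri("/users/{id}", id)
                .retrieve()
                .bodyToMono(String.class);
    }
}
```

Nothing happens until the Mono is subscribed to, e.g. `new UserClient().getUser("42").subscribe(System.out::println);` — that is the essential difference from the blocking RestTemplate.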

WebClient and Oauth OIDC

Spring Security 5 provides OAuth2 support for Spring Webflux's non-blocking WebClient class.
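A minimal sketch, assuming Spring Security 5.1+ on WebFlux and a client registration named "keycloak" configured under spring.security.oauth2.client.registration in application.yml (the registration name is an assumption):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.registration.ReactiveClientRegistrationRepository;
import org.springframework.security.oauth2.client.web.reactive.function.client.ServerOAuth2AuthorizedClientExchangeFilterFunction;
import org.springframework.security.oauth2.client.web.server.ServerOAuth2AuthorizedClientRepository;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
public class WebClientConfig {

    @Bean
    WebClient oauthWebClient(ReactiveClientRegistrationRepository registrations,
                             ServerOAuth2AuthorizedClientRepository clients) {
        ServerOAuth2AuthorizedClientExchangeFilterFunction oauth =
                new ServerOAuth2AuthorizedClientExchangeFilterFunction(registrations, clients);
        // Attach the access token of this registration to every outgoing request
        oauth.setDefaultClientRegistrationId("keycloak");
        return WebClient.builder().filter(oauth).build();
    }
}
```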

Keycloak and Istio

Istio is a platform that provides a common way to manage your service mesh. You may wonder what a service mesh is; well, it's an infrastructure layer dedicated to connecting, securing, and making your different services reliable.

Istio, in the end, will replace all of our circuit breakers, intelligent load balancing, and metrics libraries, but also the way two services communicate with each other securely. And this is of course the interesting part for Keycloak.

Building Docker Images without Docker

Kaniko is a project launched by Google that allows building Dockerfiles without Docker or the Docker daemon.

Kaniko can be used inside Kubernetes to build a Docker image and push it to a registry, supporting Docker registry, Google Container Registry and AWS ECR, as well as any other registry supported by Docker credential helpers.

This solution is still not completely safe, as the containers run as root, but it is way better than mounting the Docker socket and launching containers on the host. For one thing, there are no leaked resources or containers running outside the scheduler.

Build Docker images without Docker - using Kaniko, Jenkins and Kubernetes

Jenkins is a hugely popular build tool that has been around for ages and is used by many people. With the huge shift to Kubernetes as a platform, you naturally want to run Jenkins on Kubernetes. While running Jenkins itself on Kubernetes is not a challenge, it does become one when you want to build a container image using a Jenkins that itself runs in a container in the Kubernetes cluster.

Setting up and running Docker-in-Docker (DinD) is not very pleasant, not to mention the hacking you need to do to achieve it.

An alternative is Kaniko, which provides a clean approach to building and pushing container images to your repository.

  1. Instructions from Cloudbees you definitely should read on Kaniko
  2. Setting up jenkins
  3. What you need to know when using Kaniko from Kubernetes Jenkins Agents
  4. Continuous Development with Java and Kubernetes
  5. Demystifying RBAC in Kubernetes

Speed up Kubernetes development with Cloud Code

Get a fully integrated Kubernetes development, deployment, and debugging environment within your IDE. Create and manage clusters directly from within the IDE. Under the covers Cloud Code for IDEs uses popular tools such as Skaffold, Jib and Kubectl to help you get continuous feedback on your code in real time. Debug the code within your IDEs using Cloud Code for Visual Studio Code and Cloud Code for IntelliJ by leveraging built-in IDE debugging features.

Prevent the LoP - simply skaffold dev

Open Container Initiative (OCI)

This specification defines an OCI Image, consisting of a manifest, an image index (optional), a set of filesystem layers, and a configuration. The goal of this specification is to enable the creation of interoperable tools for building, transporting, and preparing a container image to run.

"Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution. 

Why should I use distroless images?

Restricting what's in your runtime container to precisely what's necessary for your app is a best practice employed by Google and other tech giants that have used containers in production for many years. It improves the signal-to-noise ratio of scanners (e.g. CVE scanners) and reduces the burden of establishing provenance to just what you need.

Container Structure Tests: Unit Tests for Docker images

Usage of containers in software applications is on the rise, and with their increasing usage in production comes a need for robust testing and validation. Containers provide great testing environments, but actually validating the structure of the containers themselves can be tricky. The Docker toolchain provides us with easy ways to interact with the container images themselves, but no real way of verifying their contents. What if we want to ensure a set of commands runs successfully inside of our container, or check that certain files are in the correct place with the correct contents, before shipping?

  1. Container Structure Tests
  2. Integrating Container Structure Tests on Jenkins 2.0 pipelines
  3. What is on Google Container Tools GITHUB

Skaffold

Skaffold is a command line tool that facilitates continuous development for Kubernetes applications. You can iterate on your application source code locally, then deploy to local or remote Kubernetes clusters. Skaffold handles the workflow for building, pushing, and deploying your application. It also provides building blocks and describes customizations for a CI/CD pipeline.

Like Draft, it can also be used as a building block in a CI/CD pipeline to leverage the same workflow and tooling when you are moving an application into production. Read Draft vs. Skaffold: Developing on Kubernetes

Read Faster Feedback for Delivery Pipelines with Skaffold or Continuous Development with Java and Kubernetes

Test the structure of your images before deployment: container structure tests are defined per image in the Skaffold config. Every time an artifact is rebuilt, Skaffold runs the associated structure tests on that image. If the tests fail, Skaffold will not continue on to the deploy stage. If frequent tests are prohibitive, long-running tests should be moved to a dedicated Skaffold profile.

JIB — build Java Docker images better (Java Image Builder JIB)

Containers are bringing Java developers closer than ever to a "write once, run anywhere" workflow, but containerizing a Java application is no simple task: You have to write a Dockerfile, run a Docker daemon as root, wait for builds to complete, and finally push the image to a remote registry. Not all Java developers are container experts; what happened to just building a JAR?

  • Fast - Deploy your changes fast. Jib separates your application into multiple layers, splitting dependencies from classes. Now you don’t have to wait for Docker to rebuild your entire Java application - just deploy the layers that changed
  • Reproducible - Rebuilding your container image with the same contents always generates the same image. Never trigger an unnecessary update again.
  • Daemonless - Reduce your CLI dependencies. Build your Docker image from within Maven or Gradle and push to any registry of your choice. No more writing Dockerfiles and calling docker build/push.
  1. Read Introducing Jib — build Java Docker images better, and from the source, and finally read Baeldung on Jib
  2. Jib: Getting Expert Docker Results Without Any Knowledge of Docker
  3. "Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution.
  4. How to use JibMaven builders on skaffold.
  5. Google sample Jib project on github
  6. Read the skaffold dev home page
  7. FAQ on Jib

GO - aka Golang 

After having experienced 10+ years of C programming in the '80s, this feels like coming home again. We go functional, statically typed, and natively running again; how nice.

Golang is a programming language you might have heard about a lot during the last couple of years. Even though it was created back in 2009, it has only started to gain popularity in recent years. According to Go's philosophy (which is a separate topic in itself), you should try hard not to over-engineer your solutions. And this also applies to dynamically-typed programming. Stick to static types as much as possible, and use interfaces when you know exactly what sort of types you're dealing with. Interfaces are very powerful and ubiquitous in Go.

Some features....

  • Static code analysis: static code analysis isn't actually something new to modern programming, but Go sort of takes it to the extreme.
  • Built-in testing and profiling: Go comes with a built-in testing tool designed for simplicity and efficiency. It provides you with the simplest API possible and makes minimal assumptions.
  • Race condition detection: concurrent programming is taken very seriously in Go and, luckily, we have quite a powerful tool to hunt those race conditions down. It is fully integrated into Go's toolchain.

Some reads: 

  1. Here are some amazing advantages of Go that you don’t hear much about
  2. The love of every old school C programmer, pointers
  3. Quick intro for the java developer
  4. From the source
  5. About the GO Gopher
  6. The Evolution of Go: A History of Success
  7. Parsing Spring Cloud Config content example
  8. Reading configs from Spring Cloud Config example
  9. An interface is a great and only way to achieve Polymorphism in Go


About Jenkins X

Jenkins X and Istio have one thing in common: they tightly integrate with Kubernetes by extending the platform with specific Custom Resources. Jenkins X is a project which rethinks how developers should interact with CI/CD in the cloud, with a focus on making development teams productive through automation, tooling, and DevOps best practices. It is an open source project that integrates the best-of-breed tools in the Kubernetes ecosystem to provide a CI/CD solution for cloud native applications on Kubernetes. Jenkins X also demonstrates GitOps, where environments like staging and production are Git repos; automatic promotion of apps from one environment to another happens via pull requests and Jenkins pipelines. Jenkins X is heavily influenced by the State of DevOps reports and, more recently, the Accelerate book by Nicole Forsgren, Jez Humble, and Gene Kim. Read more... and this introduction, definitely worth reading.

Kubernetes plugin for Jenkins

Implement a scalable Jenkins infrastructure on top of Kubernetes, in which all the nodes for running builds spin up automatically during build execution and are removed right after completion.

  1. Kubernetes plugin for Jenkins
  2. Kubernetes plugin pipeline examples
  3. How to Setup Scalable Jenkins on Top of a Kubernetes Cluster
  4. Scaling Docker enabled Jenkins with Kubernetes
  5. About DevOps 2.4 Toolkit: Continuous Deployment on k8s
  6. DevOps 2.4 Toolkit: Deploying Jenkins To A Kubernetes Cluster Using Helm
  7. Check jenkins scripted and declarative pipeline examples
  8. Check Jenkins pipeline global variable references
  9. Official Jenkins Docker Image on Github
  10. Jenkins CI/CD with Kubernetes and Helm
  11. Creating A CDP Pipeline With Jenkins- Hands-On Time
  12. How to build my own docker images in CloudBees Core
  13. Effectively Using the Kubernetes Plugin with Jenkins
  14. Create a Cloud Configuration on the Jenkins Master
  15. Configuration management: a Spring Boot use-case with Kubernetes
  16. Tips on writing Pipelines

Jenkins Distributed Builds

It is pretty common when starting with Jenkins to have a single server which runs the master and all builds; however, the Jenkins architecture is fundamentally "master + agent". The master is designed to do coordination and provide the GUI and API endpoints, while the agents are designed to perform the work. The reason is that workloads are often best "farmed out" to distributed servers. This may be for scale, to provide different tools, or to build on different target platforms. Another common reason for remote agents is to enact deployments into secured environments (without the master having direct access). Agents are set up to offload build projects from the master.


Enabled JNLP protocols

By default, the JNLP3-connect protocol is disabled due to known stability and scalability issues. You can enable this protocol at your own risk using the JNLP_PROTOCOL_OPTS=-Dorg.jenkinsci.remoting.engine.JnlpProtocol3.disabled=false property (the protocol should be enabled on the master side as well).

In Jenkins versions starting from 2.27 there is a JNLP4-connect protocol. If you use Jenkins 2.32.x LTS, it is recommended to enable this protocol on your instance.

  1. About JENKINS_JNLP_URL
  2. and how to setup envs variables on your pod template
  3. What when JNLP Agents/Slaves are not able to connect?

Automated TLS with cert-manager and letsencrypt for Kubernetes

Did you ever dream of the day when there would be free TLS certs that were automatically created and renewed whenever a new service shows up? Well, that day has arrived. If you've jumped on the cool train and are running Kubernetes in production, then cert-manager is a must-have. cert-manager is a service that automatically creates and manages TLS certs in Kubernetes, and it is as cool as it sounds.

  1. the cert-manager service, which ensures TLS certs are valid and up to date, and renews them when needed
  2. the clusterIssuer resource which defines what Certificate Authority to use
  3. the certificate resource which defines the certificate that should be created

NGINX Plus Ingress Controller

NGINX Open Source is already the default Ingress resource for Kubernetes, but NGINX Plus provides additional enterprise‑grade capabilities, including JWT validation, session persistence, and a large set of metrics. In this blog we show how to use NGINX Plus to perform OpenID Connect (OIDC) authentication for applications and resources behind the Ingress in a Kubernetes environment, in a setup that simplifies scaled rollouts.

  1. Using the NGINX Plus Ingress Controller for Kubernetes with OpenID Connect Authentication from Azure AD
  2. NGINX and NGINX Plus Ingress Controllers for Kubernetes Load Balancing

Configuring Ingress Resources and NSX-T Load Balancers for PKS

This topic describes example configurations for ingress routing (Layer 7) and load balancing (Layer 4) for Kubernetes clusters deployed by Enterprise Pivotal Container Service (Enterprise PKS) on vSphere with NSX-T integration.

Kubernetes Monitoring with Prometheus

Prometheus is the "must have" monitoring and alerting tool for Kubernetes and Docker. Moving from bare-metal servers to the cloud, I had time to investigate proactive monitoring with k8s. The k8s project has already embraced this amazing tool by exposing Prometheus metrics in almost all of its components.

Monitoring your k8s cluster will help your team with:

  • Proactive monitoring
  • Cluster visibility and capacity planning
  • Alerts and notifications: the built-in Alertmanager sends out notifications via a number of methods based on rules that you specify. This not only eliminates the need to source an external system and API, but also reduces interruptions for your development team.
  • Metrics dashboards

Things to read...

Spring Boot logs in Elastic Search with Fluentd (Istio)

If you deploy a lot of microservices with Spring Boot (or any other technology), you will have a hard time collecting and making sense of all the logs of your different applications. A lot of people refer to the triptych Elastic Search + Logstash + Kibana as the ELK stack. In this stack, Logstash is the log collector; its role is to redirect our logs to Elastic Search. The Istio setup instead requires you to send your custom logs to a Fluentd daemon (log collector). Fluentd is an open source log collector that supports many data outputs and has a pluggable architecture.
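A minimal sketch of emitting structured logs straight to a Fluentd daemon, using the fluent-logger-java library; the tag prefix, host, and port are assumptions matching a typical Fluentd DaemonSet setup:

```java
import org.fluentd.logger.FluentLogger;

import java.util.HashMap;
import java.util.Map;

public class AuditLogger {

    // Connects to the local Fluentd daemon; 24224 is Fluentd's default forward port
    private static final FluentLogger LOG =
            FluentLogger.getLogger("springboot", "localhost", 24224);

    public void orderPlaced(String orderId) {
        Map<String, Object> data = new HashMap<>();
        data.put("event", "order-placed");
        data.put("orderId", orderId);
        // Emits a structured record tagged "springboot.audit" that a Fluentd
        // match rule can route to Elastic Search
        LOG.log("audit", data);
    }
}
```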

  1. ISTIO Logging with Fluentd
  2. Container and Service Mesh Logs
  3. Spring boot logs in Elastic Search with fluentd
  4. Github Springboot fluentd example

About Helm

Jenkins X uses Helm both to install Jenkins X itself and to install the applications you create in each of the environments (like Staging and Production). Kubernetes can become very complex with all the objects you need to handle ― such as ConfigMaps, Services, Pods, and Persistent Volumes ― in addition to the number of releases you need to manage. These can be managed with Kubernetes Helm, which offers a simple way to package everything into one simple application and advertises what you can configure.

Read drastically Improve your Kubernetes Deployments with Helm

Kubernetes monitoring with Prometheus

As of Spring Boot 2.0, Micrometer is the default metrics export engine. Micrometer is an application metrics facade that supports numerous monitoring systems: Atlas, Datadog, and Prometheus, to name a few. As we will be using Prometheus in this tutorial, we will focus on Prometheus only.
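A minimal sketch of registering a custom metric through the Micrometer facade (metric, tag, and class names are placeholders); with the micrometer-registry-prometheus dependency on the classpath, Spring Boot 2 exposes it on /actuator/prometheus:

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

@Service
public class CheckoutService {

    private final Counter ordersPlaced;

    // Spring Boot 2 auto-configures and injects a MeterRegistry
    public CheckoutService(MeterRegistry registry) {
        this.ordersPlaced = Counter.builder("orders.placed")
                .description("Number of orders placed")
                .tag("channel", "web")
                .register(registry);
    }

    public void placeOrder() {
        // ... business logic ...
        ordersPlaced.increment();
    }
}
```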

  1. Monitoring Using Spring Boot 2.0, Prometheus part 1 nice example of Function.apply
  2. Monitoring Using Spring Boot 2.0, Prometheus part 2
  3. Kubernetes monitoring with Prometheus in 15 minutes

About Init Containers

You may be familiar with a concept of Init scripts — programs that configure runtime, environment, dependencies, and other prerequisites for the applications to run. Kubernetes implements similar functionality with Init containers that run before application containers are started. In order for the main app to start, all commands and requirements specified in the Init container should be successfully met. Otherwise, a pod will be restarted, terminated or stay in the pending state until the Init container completes.

A common use case is to pre-populate config files specifically designed for a type of environment, like test or production. Similarly to app containers, Init containers use Linux namespaces. Because these namespaces are different from the namespaces of app containers, Init containers end up with their own unique filesystem views. You can leverage these filesystem views to give Init containers access to secrets that app containers cannot access. Typically, an Init container pulls application configuration from a secured environment and provides that config on a volume that the Init container and the application share. This is typically accomplished by defining an emptyDir volume at the Pod level. Containers in the Pod can all read and write the same files in the emptyDir volume; in this case, it holds files that the configuration-manager (Init container) provisions and that the app container reads to load its configuration.
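A minimal sketch of that shared-emptyDir layout, expressed with the fabric8 kubernetes-client builder API in Java (images, names, and the copy command are placeholders; the equivalent YAML manifest is what you would normally write):

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;

public class InitContainerExample {

    public static Pod configuredPod() {
        return new PodBuilder()
            .withNewMetadata().withName("demo-app").endMetadata()
            .withNewSpec()
                // Runs to completion before the app container starts
                .addNewInitContainer()
                    .withName("configuration-manager")
                    .withImage("busybox")
                    .withCommand("sh", "-c", "cp /seed/* /config/")
                    .addNewVolumeMount().withName("config").withMountPath("/config").endVolumeMount()
                .endInitContainer()
                // The app container reads what the init container provisioned
                .addNewContainer()
                    .withName("app")
                    .withImage("my-app:1.0")
                    .addNewVolumeMount().withName("config").withMountPath("/config").endVolumeMount()
                .endContainer()
                // The shared emptyDir volume, defined at the Pod level
                .addNewVolume().withName("config").withNewEmptyDir().endEmptyDir().endVolume()
            .endSpec()
            .build();
    }
}
```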

  1. Understanding init containers
  2. Create a Pod that has an Init Container
  3. Using InitContainers to pre-populate Volume data in Kubernetes
  4. Kubernetes init containers by example
  5. Introduction to Init Containers in Kubernetes

About Istio

When building microservice-based applications, a myriad of complexities arises: we need service discovery, load balancing, authentication, and role-based access control (RBAC). Istio provides capabilities for traffic monitoring, access control, discovery, security, observability through monitoring and logging, and other useful management capabilities for your deployed services. It delivers all that and does not require any changes to the code of any of those services. A great read and complete walkthrough and another one on dzone. An important part of Istio is providing observability and getting monitoring data without any in-process instrumentation; read more about using OpenTracing with Istio/Envoy.

Operators

CoreOS introduced a class of software in the Kubernetes community called an Operator. An Operator builds upon the basic Kubernetes resource and controller concepts but includes application domain knowledge to take care of common tasks. 

Like Kubernetes's built-in resources, an Operator doesn't manage just a single instance of the application, but multiple instances across the cluster. As an example, the built-in ReplicaSet resource lets users set a desired number of Pods to run, and controllers inside Kubernetes ensure the desired state set in the ReplicaSet resource remains true by creating or removing running Pods. There are many fundamental controllers and resources in Kubernetes that work in this manner, including Services, Deployments, and DaemonSets. There are two concrete examples:

  • The etcd Operator creates, configures, and manages etcd clusters. etcd is a reliable, distributed key-value store introduced by CoreOS for sustaining the most critical data in a distributed system, and is the primary configuration datastore of Kubernetes itself. 
  • The Prometheus Operator creates, configures, and manages Prometheus monitoring instances. Prometheus is a powerful monitoring, metrics, and alerting tool, and a Cloud Native Computing Foundation (CNCF) project supported by the CoreOS team.

Read about the operator framework. This is my first take on learning and practicing writing an operator. The goal is to introduce a new operator to manage application configuration in ConfigMaps more flexibly. For this, read Could a Kubernetes Operator become the guardian of your ConfigMaps and Kubernetes Operator Development Guidelines

Spring and more...

REST has quickly become the de facto standard for building web services on the web because REST services are easy to build and easy to consume.
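As a minimal Spring Boot illustration of how easy "easy to build" really is (class and route names are just placeholders):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // GET /greetings/{name} returns the response body directly,
    // serialized by Spring MVC
    @GetMapping("/greetings/{name}")
    public String greet(@PathVariable String name) {
        return "Hello, " + name;
    }
}
```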

About Cloud Native Java Apps with Quarkus

Java is more than 20 years old. The JVM solved a huge problem: it allowed us to write code once and run it on multiple platforms and operating systems. With containers we can now package our apps, libs, and OS resources into a single container that can run anywhere, so the JVM's portability is less relevant. The overhead of running the app in a JVM inside a container buys us little, so ahead-of-time (AOT) compilation makes perfect sense if you are going to package your apps in containers.

Financial institutions, government, retail, and many other industries have millions of lines of code written in Java which they cannot afford to rewrite. GraalVM, and specifically Substrate VM, are now opening the door to a bright and long future for the Java language. Quarkus integrates the Java libraries that companies built their enterprise platforms on, including Eclipse MicroProfile, JPA/Hibernate, JAX-RS/RESTEasy, Eclipse Vert.x, Netty, and more.

About Hibernate and Panache

What is Panache? It is about being more productive with Hibernate ORM. Hibernate ORM is the de facto JPA implementation and offers you the full breadth of an Object Relational Mapper. It makes complex mappings possible, but it does not make simple and common mappings trivial. Hibernate ORM with Panache focuses on making your entities trivial and fun to write and use with Quarkus.
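A minimal sketch of a Panache entity in the active-record style (entity and query names are placeholders), assuming the quarkus-hibernate-orm-panache extension:

```java
import io.quarkus.hibernate.orm.panache.PanacheEntity;

import javax.persistence.Entity;
import java.util.List;

// Extending PanacheEntity provides the id field plus persistence
// operations; public fields replace getter/setter boilerplate.
@Entity
public class Person extends PanacheEntity {

    public String name;

    public static Person findByName(String name) {
        // Panache shorthand query, equivalent to "from Person where name = ?1"
        return find("name", name).firstResult();
    }

    public static List<Person> listAllOrdered() {
        return list("order by name");
    }
}
```

In a transactional method, `new Person()` followed by `person.persist()` is all it takes to store an entity; that is the "trivial and fun" part.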

What does Panache offer? Read more, and find the Quarkus example sources and the Hibernate ORM Panache guide.

About MicroProfile and Jakarta EE (Open Source Cloud Native Java)

What is MicroProfile? MicroProfile is a vendor-neutral programming model which is designed in the open for developing Java microservices. It provides the core capabilities you need to build fault-tolerant, scalable microservices.

Because MicroProfile is developed in the open and is a collaboration between several partners, it means we can be innovative and fast! MicroProfile is part of the Eclipse Foundation.
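As a minimal sketch of those fault-tolerance capabilities, using the MicroProfile Fault Tolerance annotations (class, method, and the remote call are hypothetical):

```java
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class StockService {

    // If the downstream call is slow or fails, the runtime times it out,
    // retries it, and finally falls back to a safe default.
    @Timeout(500)
    @Retry(maxRetries = 2)
    @Fallback(fallbackMethod = "stockFallback")
    public int stockLevel(String productId) {
        return callInventoryService(productId);
    }

    // Must match the signature of the guarded method
    int stockFallback(String productId) {
        return 0; // pessimistic default when the service is unavailable
    }

    private int callInventoryService(String productId) {
        // Placeholder for a remote call that may fail
        throw new RuntimeException("inventory service unreachable");
    }
}
```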


What is Jakarta EE? For many years, Java EE has been a major platform for mission-critical enterprise applications. In order to accelerate business application development for a cloud-native world, leading software vendors collaborated to move Java EE technologies to the Eclipse Foundation where they will evolve under the Jakarta EE brand.
