
The ultimate integration

After 37 years in IT, having experienced many technologies and methodologies, I feel we have slowly reached the point where development and operational IT processes achieve full end-to-end integration. Kubernetes is the de facto standard for managing containers, distributed applications, and virtual infrastructure. While the ecosystem is huge, it still needs tooling that provides development, provisioning, and management facilities. Jenkins X, Helm, and Istio are getting us close to a full end-to-end DevOps integration. It seems we are heading toward a next-generation standard DevOps solution resulting from the partnership between CloudBees, Atos, and Google, which aims to provide customers with a complete DevOps solution running on Google Cloud Platform. Atos expects that, for the foreseeable future, most of their clients will be trying to meld some degree of DevOps with existing ITIL-based approaches to IT service management (ITSM). The alliance with Google will go a long way toward accelerating that transition because it allows Atos to deliver access to a CI/CD platform based on a consumption model.

What is Cloud Native computing

Almost everything you read here is closely related to Cloud Native software development and computing. Let's first get the definition right.

"Cloud-native technologies are used to develop applications built as services (MSA), packaged in containers, deployed and managed on elastic infrastructure (like Kubernetes) through agile DevOps processes and continuous delivery workflows".

Read about the 10 KEY ATTRIBUTES OF CLOUD-NATIVE APPLICATIONS
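
To make that definition a bit more concrete, here is a minimal sketch of how such a containerised service is typically described to Kubernetes; the service name, image, and labels are made up for illustration:

```yaml
# A hypothetical stateless service packaged as a container image and handed to
# Kubernetes, which keeps the desired number of replicas running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                 # "elastic": scale by changing this or via an autoscaler
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
      - name: orders-service
        image: registry.example.com/orders-service:1.0.0
        ports:
        - containerPort: 8080
```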

DevOps, about Culture and Talent (Developer-oriented Innovation Culture)

The problem is that many organizations aren't doing a good job of executing software delivery. Digital innovators should treat technology as a business asset; it is an important component of their business strategy, and modern software correlates with business growth. A good culture embraces developers and makes them part of the business process, allowing them to bring their ideas forward and letting different ideas bubble up. The people working on the front lines of your software delivery process know what is wrong; they recognize the issues as they happen. Encourage developers to have some power in the decision-making process. As for talent: hire good talent, but you can't hire good talent if you don't have a good culture. (Chris Condo, senior analyst at Forrester, at CloudBees Days San Francisco 2019)

DevOps with Concourse on Pivotal Container Service (PKS)

Pivotal Container Service (PKS) is a platform built by Pivotal to ease the burden of deploying and operating Kubernetes clusters. PKS builds on top of Cloud Foundry's container runtime (formerly Kubo), which utilizes BOSH to handle both Day 1 and Day 2 operations for Kubernetes. In short: PKS brings production-grade Kubernetes clusters to the enterprise.

Concourse is an open source continuous integration and continuous delivery (CI/CD) system designed for teams that practice test-driven development and continuous delivery. Teams automate delivery of their software as pipelines which execute testing, packaging, and deployment as often as every commit. Concourse pipelines are configured via YAML, which can be version controlled as part of the project code. Pipelines can scale to projects of any complexity and are displayed visually to show the status of build runs. Read more about Concourse.
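
As a sketch of what such a pipeline looks like (the repository, job, and task names below are made up), a minimal Concourse pipeline that runs the tests on every commit could be declared roughly like this:

```yaml
# pipeline.yml -- a minimal, illustrative Concourse pipeline
resources:
- name: source-code
  type: git
  source:
    uri: https://github.com/example/my-app.git
    branch: master

jobs:
- name: unit-test
  plan:
  - get: source-code
    trigger: true              # run the job on every new commit
  - task: run-tests
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: golang, tag: "1.12"}
      inputs:
      - name: source-code
      run:
        path: sh
        args: ["-c", "cd source-code && go test ./..."]
```

The pipeline is uploaded with fly set-pipeline (for example fly -t my-target set-pipeline -p my-app -c pipeline.yml) and then shows up visually in the Concourse web UI.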

Clair (open source vulnerability analysis for your containers) provides Docker image vulnerability scanning based on CVE data sources, preventing your organization from pulling Docker images into production before they have been scanned (content trust). Watch the talk on Building Developer Pipelines with PKS, Harbor, Clair, and Concourse.

Developer locality: because spinning up new clusters is a breeze, there's no reason not to have clusters for development teams. And because BOSH (and hence also PKS) can deploy on internal infrastructure like vSphere, developers can access clusters on the local internal network without having to reach out to the nearest AWS/GCP/Azure datacentre, lowering latency and increasing responsiveness.

 

About Jenkins X

Jenkins X and Istio have one thing in common: both integrate tightly with Kubernetes by extending the platform with specific Custom Resources. Jenkins X is a project which rethinks how developers should interact with CI/CD in the cloud, with a focus on making development teams productive through automation, tooling, and DevOps best practices. It is an open source project that integrates best-of-breed tools from the Kubernetes ecosystem to provide a CI/CD solution for cloud native applications on Kubernetes. Jenkins X also demonstrates GitOps, where environments like staging and production are Git repositories, and automatic promotion of apps from one environment to another happens via pull requests and Jenkins pipelines. Jenkins X is heavily influenced by the State of DevOps reports and, more recently, the Accelerate book by Nicole Forsgren, Jez Humble, and Gene Kim. Read more... and this introduction is definitely worth reading.
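
To illustrate the GitOps idea (the application name, version, chart repository URL, and file path below are assumptions on my part): promoting an application to Staging essentially means Jenkins X opens a pull request against the staging environment's Git repository, bumping the application's chart version in the environment's Helm dependency file; merging that PR triggers the pipeline that deploys it.

```yaml
# env/requirements.yaml in the (hypothetical) staging environment repository.
# A promotion pull request adds or bumps an entry like this one.
dependencies:
- name: my-app
  version: 0.0.25                                # bumped by the promotion PR
  repository: http://jenkins-x-chartmuseum:8080  # in-cluster chart repository
```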

About DevOps 2.4 Toolkit: Continuous Deployment on k8s

 

About Helm

Jenkins X uses Helm both to install Jenkins X itself and to install the applications you create in each of the environments (like Staging and Production). Kubernetes can become very complex with all the objects you need to handle (ConfigMaps, Services, Pods, Persistent Volumes, and so on), in addition to the number of releases you need to manage. These can be managed with Kubernetes Helm, which offers a simple way to package everything into one application and advertises what you can configure.
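
As a rough sketch of what a chart looks like (the chart name, image, and values are made up), the Kubernetes objects become templates and values.yaml advertises what you can configure:

```yaml
# my-app/Chart.yaml -- chart metadata
apiVersion: v1
name: my-app
version: 0.1.0

# my-app/values.yaml -- the settings a user can override at install time
replicaCount: 2
image:
  repository: registry.example.com/my-app
  tag: "1.0.0"

# my-app/templates/deployment.yaml (excerpt) -- Helm fills in the placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: my-app
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Installing or upgrading the whole application then boils down to a single helm install or helm upgrade of the chart, which is what Jenkins X does under the hood for the Staging and Production environments.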

About Istio

When building microservice-based applications, a myriad of complexities arises: we need service discovery, load balancing, authentication, and role-based access control (RBAC). Istio provides capabilities for traffic monitoring, access control, discovery, security, observability through monitoring and logging, and other useful management capabilities for your deployed services. It delivers all that without requiring any changes to the code of those services. A great read and complete walkthrough, and another one on dzone. An important part of Istio is providing observability and getting monitoring data without any in-process instrumentation; read more about using OpenTracing with Istio/Envoy.
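
For example, shifting traffic between two versions of a service is just a matter of applying an Istio VirtualService, with no change to the application code; the service and subset names below follow the well-known Bookinfo sample:

```yaml
# Route 90% of traffic to v1 of the reviews service and 10% to v2.
# The v1/v2 subsets are defined in a corresponding DestinationRule.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```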

Operators

CoreOS introduced a class of software in the Kubernetes community called an Operator. An Operator builds upon the basic Kubernetes resource and controller concepts but includes application domain knowledge to take care of common tasks. 

Like Kubernetes' built-in resources, an Operator doesn't manage just a single instance of the application, but multiple instances across the cluster. As an example, the built-in ReplicaSet resource lets users set a desired number of Pods to run, and controllers inside Kubernetes ensure the desired state declared in the ReplicaSet resource remains true by creating or removing running Pods. Many fundamental controllers and resources in Kubernetes work in this manner, including Services, Deployments, and DaemonSets. Here are two concrete examples:

  • The etcd Operator creates, configures, and manages etcd clusters. etcd is a reliable, distributed key-value store introduced by CoreOS for sustaining the most critical data in a distributed system, and is the primary configuration datastore of Kubernetes itself (see the sketch after this list).
  • The Prometheus Operator creates, configures, and manages Prometheus monitoring instances. Prometheus is a powerful monitoring, metrics, and alerting tool, and a Cloud Native Computing Foundation (CNCF) project supported by the CoreOS team.
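
To make the etcd Operator example concrete, this is roughly the custom resource it watches; the API group and field names are reproduced from memory of the CoreOS examples, so treat them as illustrative:

```yaml
# A desired-state description of an etcd cluster; the Operator's controller
# creates and manages the underlying Pods, Services, and upgrades.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3            # desired number of etcd members
  version: "3.2.13"  # desired etcd version; changing it triggers a rolling upgrade
```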

Read about the Operator Framework. This is my first take on learning and practising operators. The goal is to introduce a new operator to manage application configuration in ConfigMaps more flexibly. For this, read Could a Kubernetes Operator become the guardian of your ConfigMaps and Kubernetes Operator Development Guidelines.

About Kubeless

When we look at the serverless movement, it is very much about shortening the time it takes to get from source to deployment and production. The question then is: if Kubernetes is a great platform to deploy and operate other distributed systems on, and serverless is yet another PaaS-like approach, shouldn't we be able to build a serverless platform on top of Kubernetes?

The answer is yes. Kubernetes is the perfect system to build a serverless solution on top of. Enter kubeless, what I call a Kubernetes-native serverless solution.
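
With kubeless, a function becomes just another Kubernetes object: a Function custom resource that carries the code and names its runtime and handler. The field names below are based on my recollection of the kubeless v1beta1 CRD, so treat this as a sketch rather than a copy-paste example:

```yaml
apiVersion: kubeless.io/v1beta1
kind: Function
metadata:
  name: hello
spec:
  runtime: python2.7        # language runtime kubeless should provision
  handler: hello.handler    # <module>.<function> to invoke
  function: |
    def handler(event, context):
        return "hello from kubeless"
```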

About Cloud Native Java Apps with Quarkus

Java is more than 20 years old. The JVM solved a huge problem and allowed us to write code once and run it on multiple platforms and operating systems. With containers we can now package our apps, libraries, and OS resources into a single image that can run anywhere, so the JVM's portability has become less relevant. Much of the overhead of running an app in a JVM inside a container is wasted, so ahead-of-time (AOT) compilation makes perfect sense if you are going to package your apps in containers.

Financial institutions, government, retail, and many other industries have millions of lines of code written in Java that they cannot afford to rewrite. GraalVM, and specifically SubstrateVM, are now opening the door to a bright and long future for the Java language. Quarkus integrates the Java libraries that companies have built into their enterprise platforms, including Eclipse MicroProfile, JPA/Hibernate, JAX-RS/RESTEasy, Eclipse Vert.x, Netty, and more.
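
As a small illustration (package, class, and path are made up, in the style of the Quarkus getting-started material), a REST endpoint in Quarkus is plain JAX-RS, and the same source can run on the JVM or be compiled ahead of time with GraalVM/SubstrateVM into a native executable that starts almost instantly inside a container:

```java
// A hypothetical Quarkus REST resource; compiles to a JVM app or, with the
// native profile, to a GraalVM native executable.
package org.acme;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello from Quarkus";
    }
}
```

In a generated Quarkus project the native build is typically triggered with the native Maven profile, for example ./mvnw package -Pnative.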

About Hibernate and Panache

Being more productive with Hibernate ORM: what is Panache? Hibernate ORM is the de facto JPA implementation and offers you the full breadth of an Object Relational Mapper. It makes complex mappings possible, but it can make even simple and common mappings feel complex. Hibernate ORM with Panache focuses on making your entities trivial and fun to write and use with Quarkus.

What does Panache offer? Read more, and find the Quarkus example sources and the Hibernate ORM with Panache guide.
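
Here is a minimal sketch of such an entity (the Person class and its fields are illustrative, in the spirit of the Hibernate ORM with Panache guide):

```java
// A sketch of an active-record style Panache entity (illustrative names).
package org.acme;

import java.time.LocalDate;
import java.util.List;

import javax.persistence.Entity;

import io.quarkus.hibernate.orm.panache.PanacheEntity;

@Entity
public class Person extends PanacheEntity {

    // Public fields instead of getter/setter boilerplate; Panache handles the rest.
    public String name;
    public LocalDate birth;

    // Queries live on the entity itself.
    public static List<Person> findByName(String name) {
        return list("name", name);
    }
}
```

Persisting is then simply person.persist(), and queries read like Person.findByName("Stef") or Person.listAll(), with no boilerplate DAO layer in between.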

About MicroProfile and Jakarta EE (Open Source Cloud Native Java)

What is MicroProfile? MicroProfile is a vendor-neutral programming model which is designed in the open for developing Java microservices. It provides the core capabilities you need to build fault-tolerant, scalable microservices.
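
To give an idea of those capabilities (the service and configuration property below are invented for illustration), fault tolerance and configuration in MicroProfile are driven by annotations such as @Retry, @Timeout, @Fallback, and @ConfigProperty:

```java
// A sketch combining MicroProfile Fault Tolerance and Config (invented names).
package org.acme;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

@ApplicationScoped
public class QuoteService {

    @Inject
    @ConfigProperty(name = "quote.default", defaultValue = "n/a")
    String defaultQuote;

    @Retry(maxRetries = 3)                      // retry transient failures
    @Timeout(500)                               // give up after 500 ms
    @Fallback(fallbackMethod = "fallbackQuote") // degrade gracefully
    public String latestQuote() {
        // In a real service this would call a remote endpoint that may fail.
        throw new IllegalStateException("remote pricing service unavailable");
    }

    String fallbackQuote() {
        return defaultQuote;
    }
}
```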

Because MicroProfile is developed in the open as a collaboration between several partners, we can be innovative and fast! MicroProfile is part of the Eclipse Foundation.

The project released three updates to MicroProfile in 2017 and is working on more for 2018. Open Liberty implements these updates as fast as they are agreed. The most recent release of Open Liberty, 18.0.0.1, contains a full implementation of MicroProfile 1.3.

What is Jakarta EE? For many years, Java EE has been a major platform for mission-critical enterprise applications. In order to accelerate business application development for a cloud-native world, leading software vendors collaborated to move Java EE technologies to the Eclipse Foundation where they will evolve under the Jakarta EE brand.

About SmallRye

SmallRye improves the developer experience for cloud-native development by implementing Eclipse MicroProfile, offering important functionality for cloud environments.
