
The ultimate integration

After 37 years in IT, having experienced many technologies and methodologies, I feel we have slowly reached a point where development and operational IT processes achieve full end-to-end integration. Kubernetes is the de facto standard for managing containers, distributed applications, and virtual infrastructure. While the ecosystem is huge, it needs tooling to provide us with development, provisioning and management facilities. Jenkins X, Helm and Istio get us close to a full end-to-end DevOps integration. It seems we are heading towards a next-generation standard DevOps solution resulting from the partnership between CloudBees, Atos and Google, which aims to provide customers with a complete DevOps solution running on Google Cloud Platform. Atos expects that, for the foreseeable future, most of their clients will be trying to meld some degree of DevOps with existing ITIL-based approaches to IT service management (ITSM). The alliance with Google will go a long way towards accelerating that transition because it allows Atos to deliver access to a CI/CD platform based on a consumption model.

What is Cloud Native computing

Almost everything you read here is closely related to Cloud Native software development and computing. Let's first get the definition right.

"Cloud-native technologies are used to develop applications built as services (MSA), packaged in containers, deployed and managed on elastic infrastructure (like Kubernetes) through agile DevOps processes and continuous delivery workflows".

Read about the 10 KEY ATTRIBUTES OF CLOUD-NATIVE APPLICATIONS

DevOps, about Culture and Talent (Developer-oriented Innovation Culture)

The problem is that many organizations aren't doing a good job of executing software delivery. Digital innovators should treat technology as a business asset; it is an important component of their business strategy, and modern software correlates with business growth. A good culture embraces developers and makes them part of the business process, allowing them to bring their ideas forward and letting different ideas bubble up. The people working on the front lines of your software delivery process know what is wrong; they recognize the issues that are happening. Encourage developers to have some power in the decision-making process. About talent: hire good talent, but you can't hire good talent if you don't have a good culture. (Chris Condo, senior analyst at Forrester, at CloudBees Days San Francisco 2019)

Speed up Kubernetes development with Cloud Code

Get a fully integrated Kubernetes development, deployment, and debugging environment within your IDE. Create and manage clusters directly from within the IDE. Under the covers, Cloud Code for IDEs uses popular tools such as Skaffold, Jib and kubectl to give you continuous feedback on your code in real time. Debug the code within your IDE using Cloud Code for Visual Studio Code and Cloud Code for IntelliJ, leveraging the built-in IDE debugging features.

Prevent the LoP - simply skaffold dev

Open Container Initiative (OCI)

This specification defines an OCI Image, consisting of a manifest, an image index (optional), a set of filesystem layers, and a configuration. The goal of this specification is to enable the creation of interoperable tools for building, transporting, and preparing a container image to run.

"Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution. 

Why should I use distroless images?

Restricting what's in your runtime container to precisely what's necessary for your app is a best practice employed by Google and other tech giants that have used containers in production for many years. It improves the signal-to-noise ratio of scanners (e.g. CVE scanners) and reduces the burden of establishing provenance to just what you need.

Container Structure Tests: Unit Tests for Docker images

Usage of containers in software applications is on the rise, and with their increasing usage in production comes a need for robust testing and validation. Containers provide great testing environments, but actually validating the structure of the containers themselves can be tricky. The Docker toolchain provides us with easy ways to interact with the container images themselves, but no real way of verifying their contents. What if we want to ensure a set of commands runs successfully inside of our container, or check that certain files are in the correct place with the correct contents, before shipping?
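
What does such a test look like? A minimal, hypothetical container-structure-test configuration could be the following (file name, jar path and expected base image are assumptions for illustration):

  # structure-test.yaml (hypothetical file name)
  schemaVersion: '2.0.0'

  commandTests:
    - name: "java is available"
      command: "java"
      args: ["-version"]
      exitCode: 0

  fileExistenceTests:
    - name: "application jar is in place"
      path: "/app/app.jar"            # assumption: where your build puts the artifact
      shouldExist: true

  fileContentTests:
    - name: "base image is what we expect"
      path: "/etc/os-release"
      expectedContents: [".*Debian.*"]

You would run it with something like: container-structure-test test --image my-app:latest --config structure-test.yaml.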

  1. Container Structure Tests
  2. Integrating Container Structure Tests on Jenkins 2.0 pipelines
  3. What is on Google Container Tools GitHub

Skaffold

Skaffold is a command line tool that facilitates continuous development for Kubernetes applications. You can iterate on your application source code locally, then deploy to local or remote Kubernetes clusters. Skaffold handles the workflow for building, pushing and deploying your application. It also provides the building blocks and describes customizations for a CI/CD pipeline.

Like Draft, it can also be used as a building block in a CI/CD pipeline to leverage the same workflow and tooling when you are moving an application into production. Read Draft vs. Skaffold: Developing on Kubernetes

Read Faster Feedback for Delivery Pipelines with Skaffold or Continuous Development with Java and Kubernetes

Test the structure of your images before deployment: container structure tests are defined per image in the Skaffold config. Every time an artifact is rebuilt, Skaffold runs the associated structure tests on that image. If the tests fail, Skaffold will not continue on to the deploy stage. If frequent tests are prohibitive, long-running tests should be moved to a dedicated Skaffold profile.
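
A minimal sketch of how this looks in a skaffold.yaml (image name, test file and apiVersion are assumptions and depend on your Skaffold version):

  apiVersion: skaffold/v2beta12        # assumption: adjust to your Skaffold version
  kind: Config
  build:
    artifacts:
      - image: my-app                  # hypothetical image name
  test:
    - image: my-app
      structureTests:
        - ./structure-test.yaml        # the container-structure-test file shown earlier
  deploy:
    kubectl:
      manifests:
        - k8s/*.yaml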

JIB — build Java Docker images better (Java Image Builder JIB)

Containers are bringing Java developers closer than ever to a "write once, run anywhere" workflow, but containerizing a Java application is no simple task: You have to write a Dockerfile, run a Docker daemon as root, wait for builds to complete, and finally push the image to a remote registry. Not all Java developers are container experts; what happened to just building a JAR?

  • Fast - Deploy your changes fast. Jib separates your application into multiple layers, splitting dependencies from classes. Now you don’t have to wait for Docker to rebuild your entire Java application - just deploy the layers that changed
  • Reproducible - Rebuilding your container image with the same contents always generates the same image. Never trigger an unnecessary update again.
  • Daemonless - Reduce your CLI dependencies. Build your Docker image from within Maven or Gradle and push to any registry of your choice. No more writing Dockerfiles and calling docker build/push.
  1. Read Introducing Jib — build Java Docker images better, read it from the source, and finally read Baeldung on Jib
  2. Jib: Getting Expert Docker Results Without Any Knowledge of Docker
  3. "Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution.
  4. How to use Jib Maven builders on Skaffold (a minimal sketch follows this list).
  5. Google sample Jib project on github
  6. Read the skaffold dev home page
  7. FAQ on Jib
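
Hooking Jib into Skaffold is mostly a matter of declaring the builder on the artifact. A hedged sketch (the key was jibMaven in older Skaffold schemas and is jib in newer ones; the image name is an assumption):

  build:
    artifacts:
      - image: gcr.io/my-project/my-app   # hypothetical image name
        jib: {}                           # let the jib-maven-plugin / jib-gradle-plugin build the image, no Dockerfile needed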

Go, aka Golang

After having experienced 10+ years of C programming in the '80s, Go feels like coming home again. We are back to writing functions, statically typed and compiled to native code; how nice.

Golang is a programming language you may have heard about a lot during the last couple of years. Even though it was created back in 2009, it has started to gain popularity only in recent years. According to Go's philosophy (which is a separate topic in itself), you should try hard not to over-engineer your solutions, and this also applies to dynamically-typed programming. Stick to static types as much as possible, and use interfaces when you know exactly what sort of types you're dealing with. Interfaces are very powerful and ubiquitous in Go.

Some features....

  • Static code analysis: static code analysis isn't new to modern programming, but Go takes it about as far as it can go.
  • Built-in testing and profiling: Go comes with a built-in testing tool designed for simplicity and efficiency. It provides the simplest API possible and makes minimal assumptions.
  • Race condition detection: concurrent programming is taken very seriously in Go and, luckily, we have quite a powerful tool to hunt race conditions down. It is fully integrated into Go's toolchain.

Some reads: 

  1. Here are some amazing advantages of Go that you don’t hear much about
  2. The love of every old school C programmer, pointers
  3. Quick intro for the java developer
  4. From the source
  5. About the GO Gopher
  6. The Evolution of Go: A History of Success
  7. Parsing Spring Cloud Config content example
  8. Reading configs from Spring Cloud Config example
  9. Interfaces are the primary (and only) way to achieve polymorphism in Go

DevOps with Concourse on Pivotal Container Service (PKS)

Pivotal Container Service (PKS) is a platform built by Pivotal to ease the burden of deploying and operating Kubernetes clusters. PKS builds on top of Cloud Foundry's container runtime (formerly Kubo), which utilizes BOSH to handle both Day 1 and Day 2 operations for Kubernetes. In short: PKS brings production-grade Kubernetes clusters to the enterprise.

Concourse is an open source continuous integration and continuous delivery (CI/CD) system designed for teams that practice test-driven development and continuous delivery. Teams automate delivery of their software as pipelines which execute testing, packaging, and deployment as often as every commit. Concourse pipelines are configured via YAML, which can be version-controlled as part of the project code. Pipelines can scale to projects of any complexity and are displayed visually to show the status of build runs. Read more about Concourse.
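
To give an idea of what such a YAML pipeline looks like, here is a minimal hypothetical Concourse pipeline with one git resource and one test job (repository URL, task image and job names are assumptions):

  resources:
    - name: source-code
      type: git
      source:
        uri: https://github.com/example/my-app.git   # hypothetical repository
        branch: master

  jobs:
    - name: unit-test
      plan:
        - get: source-code
          trigger: true                  # run on every commit
        - task: run-tests
          config:
            platform: linux
            image_resource:
              type: docker-image         # newer Concourse versions also support registry-image
              source: {repository: maven, tag: "3-jdk-8"}
            inputs:
              - name: source-code
            run:
              path: mvn
              args: ["-f", "source-code/pom.xml", "test"]

You would register it with fly set-pipeline, pointing at this file.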

Clair (open source vulnerability analysis for your containers) scans Docker images based on CVE data sources, preventing your organization from pulling Docker images into production before they have been scanned (content trust). Watch the talk on Building Developer Pipelines with PKS, Harbor, Clair, and Concourse.

Developer locality: because spinning up new clusters is a breeze, there's no reason not to have clusters for development teams. And because BOSH (hence also PKS) can deploy on internal infrastructure like vSphere, developers can access clusters on the local internal network without having to reach out to the nearest AWS/GCP/Azure datacentre, lowering latency and increasing responsiveness.

About Jenkins X

Jenkins X and Istio have one thing in common: they tightly integrate with Kubernetes by extending the platform with specific Custom Resources. Jenkins X is a project which rethinks how developers should interact with CI/CD in the cloud, with a focus on making development teams productive through automation, tooling and DevOps best practices. It is an open source project that integrates the best-of-breed tools in the Kubernetes ecosystem to provide a CI/CD solution for cloud native applications on Kubernetes. Jenkins X also demonstrates GitOps, where environments like staging and production are Git repos; automatic promotion of apps from one environment to another happens via pull requests and Jenkins pipelines. Jenkins X is heavily influenced by the State of DevOps reports and, more recently, the Accelerate book by Nicole Forsgren, Jez Humble and Gene Kim. Read more... and this introduction is definitely worth reading.

Kubernetes plugin for Jenkins

Implement a scalable Jenkins infrastructure on top of Kubernetes, in which the nodes that run builds spin up automatically during build execution and are removed right after completion.
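
The Kubernetes plugin lets you define those ephemeral build agents as plain pod specs. A minimal sketch of a pod template you might reference from a pipeline (images, labels and resource values are assumptions; the plugin typically adds the jnlp agent container itself):

  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      jenkins: build-agent
  spec:
    containers:
      - name: maven                    # extra build container the pipeline steps exec into
        image: maven:3-jdk-8
        command: ["sleep"]
        args: ["infinity"]             # keep the container alive for the duration of the build
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"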

  1. Kubernetes plugin for Jenkins
  2. Kubernetes plugin pipeline examples
  3. How to Setup Scalable Jenkins on Top of a Kubernetes Cluster
  4. Scaling Docker enabled Jenkins with Kubernetes
  5. About DevOps 2.4 Toolkit: Continuous Deployment on k8s
  6. DevOps 2.4 Toolkit: Deploying Jenkins To A Kubernetes Cluster Using Helm
  7. Check Jenkins scripted and declarative pipeline examples
  8. Check Jenkins pipeline global variable references
  9. Official Jenkins Docker Image on Github
  10. Jenkins CI/CD with Kubernetes and Helm
  11. Creating A CDP Pipeline With Jenkins- Hands-On Time
  12. How to build my own docker images in CloudBees Core
  13. Effectively Using the Kubernetes Plugin with Jenkins
  14. Create a Cloud Configuration on the Jenkins Master
  15. Configuration management: a Spring Boot use-case with Kubernetes
  16. Tips on writing Pipelines

Automated TLS with cert-manager and letsencrypt for Kubernetes

Did you ever dream of the day when there would be free TLS certs that were automatically created and renewed whenever a new service shows up? Well, that day has arrived. If you've jumped on the cool train and are running Kubernetes in production, then cert-manager is a must-have. cert-manager is a service that automatically creates and manages TLS certs in Kubernetes, and it is as cool as it sounds. It is built around a few pieces:

  1. the cert-manager service, which ensures TLS certs are valid and up to date, and renews them when needed
  2. the ClusterIssuer resource, which defines what Certificate Authority to use
  3. the Certificate resource, which defines the certificate that should be created (both resources are sketched below)
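
A hedged sketch of those two resources for Let's Encrypt (the apiVersion differs between cert-manager releases; the e-mail address, domain and secret names are assumptions):

  apiVersion: cert-manager.io/v1          # older releases used certmanager.k8s.io/v1alpha1
  kind: ClusterIssuer
  metadata:
    name: letsencrypt-prod
  spec:
    acme:
      server: https://acme-v02.api.letsencrypt.org/directory
      email: ops@example.com              # hypothetical contact address
      privateKeySecretRef:
        name: letsencrypt-prod-account-key
      solvers:
        - http01:
            ingress:
              class: nginx
  ---
  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: example-com-tls
  spec:
    secretName: example-com-tls           # TLS secret the Ingress will reference
    issuerRef:
      name: letsencrypt-prod
      kind: ClusterIssuer
    dnsNames:
      - example.com                       # hypothetical domain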

Kubernetes Ingress vs LoadBalancer vs NodePort (Ingress with Nginx)

These options all do the same thing: they let you expose a service to external network requests, sending a request from outside the Kubernetes cluster to a service inside the cluster.

NodePort is a configuration setting you declare in a service's YAML. Set the service spec's type to NodePort, and Kubernetes will allocate a specific port on each Node to that service; any request to your cluster on that port gets forwarded to the service. You can set a service to be of type LoadBalancer the same way you'd set NodePort: specify the type property in the service's YAML. There needs to be some external load-balancer functionality in the cluster, typically implemented by a cloud provider. So NodePort and LoadBalancer let you expose a service simply by specifying that value in the service's type.

Ingress, on the other hand, is a completely independent resource from your service. You declare, create and destroy it separately from your services, which makes it decoupled and isolated from the services you want to expose and helps you consolidate routing rules in one place. The most flexible option, although often the most confusing for new users, is the Ingress Controller. An Ingress Controller can sit in front of many services within our cluster, routing traffic to them and, depending on the implementation, adding functionality like SSL termination, path rewrites, or name-based virtual hosts. There is a growing ecosystem of ingress controllers, some leveraging well-known load balancers and proxies, and some new cloud-native implementations. There are ingress controllers for most of the familiar tools in this space, like HAProxy and Nginx, alongside new Kubernetes-native implementations like Ambassador and Contour, both of which leverage the Envoy proxy.
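
A minimal sketch of the Service YAML mentioned above (names and ports are assumptions); switching between NodePort and LoadBalancer is literally one field:

  apiVersion: v1
  kind: Service
  metadata:
    name: my-app
  spec:
    type: NodePort            # change to LoadBalancer to ask the cloud provider for an external LB
    selector:
      app: my-app
    ports:
      - port: 80              # port the Service exposes inside the cluster
        targetPort: 8080      # port your container listens on
        nodePort: 30080       # optional; if omitted Kubernetes picks one from 30000-32767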

  1. Do not use Nginx Ingress without a defined scope!
  2. Troubleshooting the NGINX Controller
  3. Kubernetes Ingress with Nginx
  4. Ingress Controllers for Kubernetes
  5. Deploying Envoy as an API Gateway for Microservices
  6. Kubernetes core documentation on Ingress

name-based virtual hosting with Ingress: With that you can define routes to different services inside of your Kubernetes cluster, depending on the incoming requests' hostnames. This allows you to run multiple services on the same IP address.

path-based Ingress: With a path-based Ingress you can route specific paths to specific services. On top of that you also get the possibility to load balance the inbound connections to different services, depending on their paths, e.g. to implement API versioning.
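
A hedged sketch combining name-based virtual hosting, path-based routing and TLS termination (the networking.k8s.io/v1 apiVersion assumes a reasonably recent cluster; hostnames, services and the secret name are assumptions):

  apiVersion: networking.k8s.io/v1      # older clusters use extensions/v1beta1
  kind: Ingress
  metadata:
    name: example-ingress
    annotations:
      kubernetes.io/ingress.class: nginx
  spec:
    tls:
      - hosts:
          - api.example.com
        secretName: example-com-tls     # e.g. the secret created by cert-manager above
    rules:
      - host: api.example.com           # name-based virtual hosting
        http:
          paths:
            - path: /v1                 # path-based routing, e.g. API versioning
              pathType: Prefix
              backend:
                service:
                  name: api-v1
                  port:
                    number: 80
            - path: /v2
              pathType: Prefix
              backend:
                service:
                  name: api-v2
                  port:
                    number: 80
      - host: shop.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: shop
                  port:
                    number: 80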

  1. NGINX TLS secret, PEM-encoded X.509, RSA (2048) secret
  2. Terminate TLS through the nginx Ingress controller.
  3. Using Ingress with SSL/TLS termination and HTTP/2
  4. Managing TLS certificates with cert-manager, is a Kubernetes addon

If you need to use HTTP/2 features in your application, you have to pass the HTTPS connection straight through to your backend; even if the proxy used HTTPS internally, you still wouldn't have end-to-end encryption. For these cases the Ingress controller has an option to enable SSL/TLS pass-through. First you have to install the Ingress controller with a specific parameter to make SSL/TLS pass-through available as a feature (if an Ingress controller is already installed, you have to remove it first): read Passing through HTTPS and HTTP/2 with Ingress NGINX.
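
With ingress-nginx, pass-through is then requested per Ingress via an annotation. A sketch, assuming the controller was started with the --enable-ssl-passthrough flag (host and backend are hypothetical):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: grpc-passthrough
    annotations:
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"   # hand the TLS connection straight to the backend
  spec:
    rules:
      - host: grpc.example.com          # pass-through routing is based on the SNI hostname
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: grpc-backend    # the backend terminates TLS itself (end-to-end encryption)
                  port:
                    number: 443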

NGINX Plus Ingress Controller

NGINX Open Source is already the default Ingress controller for Kubernetes, but NGINX Plus provides additional enterprise-grade capabilities, including JWT validation, session persistence, and a large set of metrics. In this blog we show how to use NGINX Plus to perform OpenID Connect (OIDC) authentication for applications and resources behind the Ingress in a Kubernetes environment, in a setup that simplifies scaled rollouts.

  1. Using the NGINX Plus Ingress Controller for Kubernetes with OpenID Connect Authentication from Azure AD
  2. NGINX and NGINX Plus Ingress Controllers for Kubernetes Load Balancing

Configuring Ingress Resources and NSX-T Load Balancers for PKS

This topic describes example configurations for ingress routing (Layer 7) and load balancing (Layer 4) for Kubernetes clusters deployed by Enterprise Pivotal Container Service (Enterprise PKS) on vSphere with NSX-T integration.

Kubernetes Dashboard

Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources. You can use Dashboard to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc). For example, you can scale a Deployment, initiate a rolling update, restart a pod or deploy new applications using a deploy wizard.

Kubernetes Monitoring with Prometheus

Prometheus is the "must have" monitoring and alerting tool for Kubernetes and Docker. Moving from bare-metal servers to the cloud, I had time to investigate proactive monitoring with k8s. The k8s project has already embraced this amazing tool by exposing Prometheus metrics in almost all of its components.

Monitoring your k8s cluster will help your team with:

  • Proactive monitoring
  • Cluster visibility and capacity planning
  • Trigger alerts and notifications: the built-in Alertmanager sends out notifications via a number of methods based on rules that you specify (see the sketch after this list). This not only eliminates the need to source an external system and API, it also reduces interruptions for your development team.
  • Metrics dashboards
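
As an illustration of such a rule, here is a hedged sketch of a Prometheus alerting rule (the metric comes from kube-state-metrics; names and thresholds are assumptions):

  groups:
    - name: k8s-capacity
      rules:
        - alert: NodeMemoryPressure
          expr: kube_node_status_condition{condition="MemoryPressure",status="true"} == 1
          for: 5m                      # only fire if the condition persists for five minutes
          labels:
            severity: warning
          annotations:
            summary: "Node {{ $labels.node }} is under memory pressure"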


IBM Kabanero

Kabanero brings together foundational open source technologies into a modern microservices-based framework for building applications on Kubernetes. Kabanero is focused on simplifying the task of architecting, developing, deploying, and managing cloud-native apps, using tailored application stacks and tightly integrated tooling that works in harmony with open source. Developing apps for container platforms requires harmony between developers, architects, and operations. Today’s developers need to be efficient at much more than writing code. Architects and operations get overloaded with choices, standards, and compliance. Kabanero speeds development of applications built for Kubernetes while meeting the technology standards and policies your company defines. Design, develop, deploy, and manage with speed and control!

  1. Welcome to Kabanero
  2. Watch YouTube on Kabanero
  3. IBM Kabanero Melds Multiple Open Source Kubernetes Projects
  4. IBM open-sources Kabanero tools for collaborating on Kubernetes apps
  5. manage the progression of apps from development and testing all the way to production deployment with Razee
  6. Eclipse Codewind, Container development unleashed

Razee

Razee is a new open source continuous delivery tool that helps manage applications at scale. It gives developers valuable insight into their Kubernetes cluster deployments and helps simplify how clusters can be managed and scaled across hybrid cloud environments. Razee addresses one of the most common challenges of managing multiple Kubernetes workloads across numerous clusters: the complex process of generating inventory and scripts that describe actions on a cluster-by-cluster and application-by-application basis. Razee allows you to manage deployments to a large number of clusters thanks to its pull-based deployment model that provides self-updating clusters. Razee comes as an addition to the growing list of new open source projects IBM leads or contributes to, like Istio and Knative.

  1. Introducing Razee, a new open source continuous delivery tool that helps manage applications at scale
  2. What Is Razee, and Why IBM Open Sourced It
  3. razee.io

Spring Boot logs in Elasticsearch with Fluentd (Istio)

If you deploy a lot of microservices with Spring Boot (or any other technology), you will have a hard time collecting and making sense of all the logs of your different applications. A lot of people refer to the triptych Elasticsearch + Logstash + Kibana as the ELK stack. In this stack, Logstash is the log collector; its role is to redirect our logs to Elasticsearch. The Istio setup requires you to send your custom logs to a Fluentd daemon (log collector). Fluentd is an open source log collector that supports many data outputs and has a pluggable architecture.
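
A hedged sketch of what the Fluentd side can look like when run in the cluster: a ConfigMap carrying a minimal fluent.conf that accepts forwarded logs and writes them to Elasticsearch (service names, namespace and ports are assumptions, and the elasticsearch output requires the fluent-plugin-elasticsearch plugin):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: fluentd-config
    namespace: logging
  data:
    fluent.conf: |
      <source>
        @type forward            # Spring Boot apps forward logs here (e.g. via a fluentd logback appender)
        port 24224
      </source>
      <match **>
        @type elasticsearch
        host elasticsearch       # hypothetical Elasticsearch service name
        port 9200
        logstash_format true     # index logs the way Kibana expects
      </match>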

  1. ISTIO Logging with Fluentd
  2. Container and Service Mesh Logs
  3. Spring boot logs in Elastic Search with fluentd
  4. Github Springboot fluentd example

About Helm

Jenkins X uses Helm to install both Jenkins X itself and the applications you create in each of the environments (like Staging and Production). Kubernetes can become very complex with all the objects you need to handle ― such as ConfigMaps, services, pods, Persistent Volumes ― in addition to the number of releases you need to manage. These can be managed with Kubernetes Helm, which offers a simple way to package everything into one simple application and advertises what you can configure.
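
The "advertised" configuration is typically the chart's values.yaml. A minimal, hypothetical example (all keys depend on how the chart's templates are written):

  # values.yaml of a hypothetical chart
  replicaCount: 2
  image:
    repository: gcr.io/my-project/my-app
    tag: "1.0.0"
  service:
    type: ClusterIP
    port: 80
  ingress:
    enabled: true
    host: my-app.example.com

Consumers then override these at install time with --set flags or a custom values file.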

Read Drastically Improve your Kubernetes Deployments with Helm

Spring Boot monitoring with Micrometer and Prometheus

As of Spring Boot 2.0, Micrometer is the default metrics export engine. Micrometer is an application metrics facade that supports numerous monitoring systems: Atlas, Datadog, Prometheus, etc., to name a few. As we will be using Prometheus in this tutorial, we will focus on Prometheus only.
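
With the micrometer-registry-prometheus dependency on the classpath, exposing the scrape endpoint is mostly configuration. A minimal application.yml sketch (the application tag is an assumption):

  management:
    endpoints:
      web:
        exposure:
          include: health,info,prometheus   # exposes /actuator/prometheus for Prometheus to scrape
    metrics:
      tags:
        application: my-app                 # hypothetical tag added to every exported metric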

  1. Monitoring Using Spring Boot 2.0, Prometheus part 1 nice example of Function.apply
  2. Monitoring Using Spring Boot 2.0, Prometheus part 2
  3. Kubernetes monitoring with Prometheus in 15 minutes

About Init Containers

You may be familiar with the concept of init scripts: programs that configure the runtime, environment, dependencies, and other prerequisites for an application to run. Kubernetes implements similar functionality with Init containers that run before application containers are started. In order for the main app to start, all commands and requirements specified in the Init container must be successfully met; otherwise, the pod will be restarted, terminated or stay in the pending state until the Init container completes.

A common use case is to pre-populate config files specifically designed for a type of environment like test or production. Like app containers, Init containers use Linux namespaces. Because these namespaces are different from the namespaces of app containers, Init containers end up with their own unique filesystem views. You can leverage these filesystem views to give Init containers access to secrets that app containers cannot access. Typically, an Init container pulls application configuration from a secured environment and provides that config on a Volume that the Init container and the application share. This is usually accomplished by defining an emptyDir volume at the Pod level. Containers in the Pod can all read and write the same files in the emptyDir volume; in this case it holds the files that the configuration-manager (Init container) provisions, and the app container reads that data to load its configuration.
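
A minimal sketch of that pattern (images, paths and the config file are hypothetical):

  apiVersion: v1
  kind: Pod
  metadata:
    name: app-with-config-init
  spec:
    volumes:
      - name: config
        emptyDir: {}                       # shared scratch volume, lives as long as the Pod
    initContainers:
      - name: configuration-manager
        image: example/config-fetcher:1.0  # hypothetical image that pulls config from a secured source
        command: ["sh", "-c", "cp /secure/application.yaml /config/"]
        volumeMounts:
          - name: config
            mountPath: /config
    containers:
      - name: app
        image: example/my-app:1.0          # hypothetical application image
        volumeMounts:
          - name: config
            mountPath: /etc/app            # the app loads its configuration from here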

  1. Understanding init containers
  2. Create a Pod that has an Init Container
  3. Using InitContainers to pre-populate Volume data in Kubernetes
  4. Kubernetes init containers by example
  5. Introduction to Init Containers in Kubernetes

About Istio

When building a microservice-based application, a myriad of complexities arises: we need service discovery, load balancing, authentication and role-based access control (RBAC). Istio provides capabilities for traffic monitoring, access control, discovery, security, observability through monitoring and logging, and other useful management capabilities for your deployed services. It delivers all that without requiring any changes to the code of any of those services. A great read and complete walkthrough, and another one on dzone. An important part of Istio is providing observability and getting monitoring data without any in-process instrumentation; read more about using OpenTracing with Istio/Envoy.
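
The "no code changes" part is visible in how Istio is switched on: labelling a namespace is enough for the Envoy sidecar to be injected into every pod scheduled there. A sketch, assuming automatic sidecar injection is installed (the namespace name is hypothetical):

  apiVersion: v1
  kind: Namespace
  metadata:
    name: my-app                 # hypothetical namespace
    labels:
      istio-injection: enabled   # Istio's mutating webhook injects the Envoy sidecar into new pods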

Operators

CoreOS introduced a class of software in the Kubernetes community called an Operator. An Operator builds upon the basic Kubernetes resource and controller concepts but includes application domain knowledge to take care of common tasks. 

Like Kubernetes's built-in resources, an Operator doesn't manage just a single instance of the application, but multiple instances across the cluster. As an example, the built-in ReplicaSet resource lets users set a desired number of Pods to run, and controllers inside Kubernetes ensure the desired state set in the ReplicaSet resource remains true by creating or removing running Pods. There are many fundamental controllers and resources in Kubernetes that work in this manner, including Services, Deployments, and DaemonSets. Two concrete examples:

  • The etcd Operator creates, configures, and manages etcd clusters. etcd is a reliable, distributed key-value store introduced by CoreOS for sustaining the most critical data in a distributed system, and is the primary configuration datastore of Kubernetes itself (a minimal EtcdCluster resource is sketched after this list).
  • The Prometheus Operator creates, configures, and manages Prometheus monitoring instances. Prometheus is a powerful monitoring, metrics, and alerting tool, and a Cloud Native Computing Foundation (CNCF) project supported by the CoreOS team.
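
The user-facing side of such an Operator is just another resource. A hedged sketch of an EtcdCluster custom resource as accepted by the etcd Operator (apiVersion and fields follow the etcd Operator examples; size and version values are assumptions):

  apiVersion: etcd.database.coreos.com/v1beta2
  kind: EtcdCluster
  metadata:
    name: example-etcd-cluster
  spec:
    size: 3                 # the Operator keeps three etcd members running, replacing failed ones
    version: "3.2.13"       # desired etcd version; the Operator handles upgrades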

Read about the Operator Framework. This is my first take on learning and practising my first operator; the goal is to introduce a new operator to manage application configuration in ConfigMaps more flexibly. For this, read Could a Kubernetes Operator become the guardian of your ConfigMaps and Kubernetes Operator Development Guidelines.

About Kubeless

When we look at the serverless movement, it is very much about shortening the time it takes to get from source to deployment and production. The question then is: if Kubernetes is a great platform to deploy and operate other distributed systems on, and serverless is yet another PaaS-like approach, shouldn't we be able to develop a serverless platform on top of Kubernetes?

The answer is yes. Kubernetes is the perfect system to build a serverless solution on top of. Enter kubeless, what I call a Kubernetes-native serverless solution.

Kubeapps

Kubeapps is a Kubernetes dashboard that supercharges your Kubernetes cluster with simple browse and click deployment of applications. Kubeapps provides a complete application delivery environment that empowers users to launch, review and share applications. 

Kubeapps allows you to:

  • Browse and deploy Helm charts from chart repositories
  • Inspect, upgrade and delete Helm-based applications installed in the cluster
  • Add custom and private chart repositories (supports ChartMuseum and JFrog Artifactory)
  • Browse and provision external services from the Service Catalog and available Service Brokers
  • Connect Helm-based applications to external services with Service Catalog Bindings
  • Secure authentication and authorization based on Kubernetes Role-Based Access Control

Readings...

  1. Install Kubeapps on your Kubernetes cluster
  2. Read from the sources
  3. Watch Deploying Containerized Applications with Kubeapps
  4. Watch Kubeapps in action 

Spring and more...

REST has quickly become the de facto standard for building web services on the web because REST services are easy to build and easy to consume.

About Cloud Native Java Apps with Quarkus

Java is more than 20 years old. The JVM solved a huge problem and allowed us to write code once and run it on multiple platforms and operating systems. With containers we can now package our apps, libraries and OS resources into a single container image that can run anywhere, so the JVM's portability is less relevant. The overhead of running the app in a JVM inside a container buys us little, so AOT compilation makes perfect sense if you are going to package your apps in containers.

Financial institutions, government, retail and many other industries have millions of lines of code written in Java which they cannot afford to rewrite. GraalVM, and specifically SubstrateVM, are now opening the door to a bright and long future for the Java language. Quarkus integrates the Java libraries that companies have built their enterprise platforms on, including Eclipse MicroProfile, JPA/Hibernate, JAX-RS/RESTEasy, Eclipse Vert.x, Netty, and more.

About Hibernate and Panache

Being more productive with Hibernate ORM: what is Panache? Hibernate ORM is the de facto JPA implementation and offers you the full breadth of an Object Relational Mapper. It makes complex mappings possible, but even simple and common mappings can be verbose. Hibernate ORM with Panache focuses on making your entities trivial and fun to write and use with Quarkus.

What does Panache offer? Read more, and find the Quarkus example sources and the Hibernate ORM Panache guide.

About MicroProfile and Jakarta EE (Open Source Cloud Native Java)

What is MicroProfile? MicroProfile is a vendor-neutral programming model, designed in the open, for developing Java microservices. It provides the core capabilities you need to build fault-tolerant, scalable microservices.

Because MicroProfile is developed in the open and is a collaboration between several partners, it means we can be innovative and fast! MicroProfile is part of the Eclipse Foundation.

The project released three updates to MicroProfile in 2017 and is working on more for 2018. Open Liberty implements these updates as fast as they are agreed. The most recent release of Open Liberty, 18.0.0.1, contains a full implementation of MicroProfile 1.3.

What is Jakarta EE? For many years, Java EE has been a major platform for mission-critical enterprise applications. In order to accelerate business application development for a cloud-native world, leading software vendors collaborated to move Java EE technologies to the Eclipse Foundation where they will evolve under the Jakarta EE brand.

About SmallRye

SmallRye improves the developer experience for cloud-native development by implementing Eclipse MicroProfile, offering important functionality for cloud environments.


