
View words on Java Records

Kafka Streams and JUnit testing

DataOps with Lenses

Kafka in the MSA world and SAGA transactions (Pattern: Event sourcing)

Problem: How to reliably/atomically update the database and send messages/events? (One common answer, the transactional outbox, is sketched right after the list below.)

  • 2PC is not an option
  • If the database transaction commits, messages must be sent. Conversely, if the database rolls back, the messages must not be sent
  • Messages must be sent to the message broker in the order they were sent by the service. This ordering must be preserved across multiple service instances that update the same aggregate
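
In the transactional outbox pattern the service writes the outgoing event into an outbox table in the same local transaction as the business data, and a separate relay publishes it to Kafka. A minimal sketch, assuming Spring's @Transactional and hypothetical Order/OutboxEvent types and repositories:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class OrderService {

    private final OrderRepository orders;    // hypothetical Spring Data repositories
    private final OutboxRepository outbox;

    OrderService(OrderRepository orders, OutboxRepository outbox) {
        this.orders = orders;
        this.outbox = outbox;
    }

    @Transactional
    public void placeOrder(Order order) {
        // The business data and the outgoing event commit (or roll back) together.
        orders.save(order);
        outbox.save(new OutboxEvent(order.getId(), "ORDER_PLACED", order.toString()));
    }
}

// A separate relay (a poller or Debezium-style CDC) reads the outbox table in
// insertion order and publishes the rows to Kafka, keyed by aggregate id so
// per-aggregate ordering is preserved across service instances.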

Studies and reads

Event sourcing: store events with Kafka Streams on RocksDB

Here comes RocksDB (RocksDB is Facebook's open-source embeddable persistent key-value store built on a log-structured database engine, written entirely in C++ - http://rocksdb.org/).

For stateful operations, Kafka Streams uses local state stores that are made fault-tolerant by associated changelog topics stored in Kafka. For these state stores, Kafka Streams uses RocksDB as its default storage to maintain local state on a computing node. RocksDB is a highly adaptable, embeddable, and persistent key-value store that was originally built by the Engineering team at Facebook. Many companies use RocksDB in their infrastructure to get high performance to serve data. Kafka Streams configures RocksDB to deliver a write-optimized state store.
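
A rough sketch of what such a stateful operation looks like, with invented topic and store names; the running count lives in a local RocksDB store that Kafka Streams backs with a changelog topic:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Materialized;

class EventCountTopology {

    // Counts events per key; the count is kept in the local "event-counts"
    // RocksDB store, made fault-tolerant by its changelog topic in Kafka.
    Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
               .count(Materialized.as("event-counts"));
        return builder.build();
    }
}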

Kafka and Transactions and Commits

KTables and KStreams

Kafka JUnit 5 embedded

Kafka JUnit 5 Testcontainers

What is Reactive?

The term “reactive” refers to programming models that are built around reacting to change, such as network components reacting to I/O events or UI controllers reacting to mouse events. R2DBC (Reactive Relational Database Connectivity) was created out of the need for a non-blocking application stack that can handle concurrency with a small number of threads and scale with fewer hardware resources.

You can think of data processed by a reactive application as moving through an assembly line. Reactor is both the conveyor belt and the workstations. The raw material pours from a source (the original Publisher) and ends up as a finished product ready to be pushed to the consumer (or Subscriber). The raw material can go through various transformations and other intermediary steps or be part of a larger assembly line that aggregates intermediate pieces together. 

The core of reactive programming is a data stream that we can observe and react to, even apply back pressure as well. This leads to non-blocking execution and hence to better scalability with fewer threads of execution.

Backpressure is when a downstream component can tell an upstream component to send it less data in order to prevent it from being overwhelmed. The consumer gets control over the speed at which data is emitted.
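
With Project Reactor, for example, a subscriber can bound how much it requests; limitRate below is just one way to express that, and the numbers are arbitrary:

import reactor.core.publisher.Flux;

class BackpressureDemo {
    public static void main(String[] args) {
        // The subscriber requests data in bounded batches (here 100 at a time)
        // instead of letting the publisher push everything at once.
        Flux.range(1, 1_000_000)
            .limitRate(100)
            .map(i -> i * 2)
            .subscribe(System.out::println);
    }
}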

R2DBC (Reactive Relational Database Connectivity) makes it easier to build Spring-powered applications that use relational data access technologies in a reactive application stack.
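
A minimal sketch of what that looks like with Spring Data R2DBC; the entity and query are invented for illustration:

import org.springframework.data.annotation.Id;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;

import reactor.core.publisher.Flux;

// Hypothetical entity mapped to a "customer" table.
class Customer {
    @Id Long id;
    String lastName;
}

// Queries return reactive types (Flux/Mono) instead of blocking result sets.
interface CustomerRepository extends ReactiveCrudRepository<Customer, Long> {
    Flux<Customer> findByLastName(String lastName);
}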

Reactive and blocking world

Remember that non-blocking, asynchronous servers work using a single main thread (or a very small number of them). Blocking that thread blocks the entire web server. Don’t ever do this. The high performance that non-blocking servers like Netty can potentially achieve is largely a product of their not having to perform a lot of context switching between threads. When you want to use a blocking API in a non-blocking context, the only way to handle it is to push it off onto a worker thread. There’s a trade-off here because once you start pushing work off from the main thread onto other threads you start to reduce the efficiency of the non-blocking model. The more you do this, the more you start to look like a traditional web server.
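
With Reactor, one usual way to wrap such a blocking call is to push it onto a dedicated scheduler (boundedElastic here), keeping the event-loop threads free; the blocking call itself is a placeholder:

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

class BlockingBridge {

    // Wraps a blocking call so it runs on a worker thread pool rather than
    // on the non-blocking event-loop threads.
    Mono<String> fetchLegacyValue() {
        return Mono.fromCallable(this::blockingCall)
                   .subscribeOn(Schedulers.boundedElastic());
    }

    private String blockingCall() {
        return "value";   // stands in for JDBC, file I/O, a blocking HTTP client, etc.
    }
}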

Building Reactive Spring Boot apps with Kafka

Kafka and Avro

Avro is an open source data serialization system that helps with data exchange between systems, programming languages, and processing frameworks. Avro helps define a binary format for your data, as well as map it to the programming language of your choice.
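
For instance, a record can be built against a schema at runtime with Avro's GenericRecord API; the schema below is invented for illustration:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

class AvroExample {

    // An invented schema: a User with a name and an age.
    private static final Schema SCHEMA = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
      + "{\"name\":\"name\",\"type\":\"string\"},"
      + "{\"name\":\"age\",\"type\":\"int\"}]}");

    static GenericRecord user(String name, int age) {
        GenericRecord record = new GenericData.Record(SCHEMA);
        record.put("name", name);
        record.put("age", age);
        return record;   // ready to be serialized in Avro's binary format
    }
}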

Kafka Streams

Kafka @ https://www.confluent.io/

Kafka is a mechanism for programs to exchange information, but its home ground is event-based communication, where events are business facts that have value to more than one service and are worth keeping around. This is emphasized by the core mantra of event-driven services: Centralize an immutable stream of business facts.
Kafka is based on the abstraction of a distributed commit log. By splitting a log into partitions, Kafka is able to scale-out systems. As such, Kafka models events as key/value pairs. Internally, keys and values are just sequences of bytes, but externally in your programming language of choice, they are often structured objects represented in your language's type system.
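
In the Java producer API that key/value model is explicit; a minimal sketch with an invented topic, key and value:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key ("order-42") determines the partition, and therefore the ordering.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"status\":\"PLACED\"}"));
        }
    }
}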


Kafka Streams with Spring Boot

Reactive Streams

Reactive is a programming model built around the concept of reacting to changes, like network components reacting to I/O events.

Reactive streams use a push model. This means that items are pushed on the stream at the pace of the publisher, regardless of whether the subscriber can follow or not (no worries, backpressure is a key feature of reactive streams). Reactive streams are lazy and won’t start as long as there is no subscriber present. That means that a subscriber is always necessary with reactive streams. I’ve mentioned before that publishers are asynchronous in nature, but are they always asynchronous? The answer to that is, no, not always. Whether a reactive stream is synchronous or asynchronous depends on the type of publisher you use.
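
The laziness is easy to see with Project Reactor (one Reactive Streams implementation): nothing happens until subscribe() is called.

import reactor.core.publisher.Flux;

class LazyStreamDemo {
    public static void main(String[] args) {
        // Nothing is emitted or mapped yet - the pipeline is only assembled.
        Flux<Integer> pipeline = Flux.just(1, 2, 3).map(i -> i * 10);

        // Only now does data start flowing, pushed by the publisher.
        pipeline.subscribe(System.out::println);
    }
}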

The key expected benefit of reactive and non-blocking is the ability to scale with a small, fixed number of threads and less memory.

Java Faker and Kafka

It is a library that can be used to generate a wide array of realistic-looking data, from mobile numbers, addresses and names to popular culture references. This is really helpful when we want to use placeholder data but don't have actual data. In microservices-based development we need data to validate against, and generating dummy test data is quite a challenging task.
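
A small example with the java-faker library; the choice of fields is arbitrary:

import com.github.javafaker.Faker;

class TestDataFactory {
    public static void main(String[] args) {
        Faker faker = new Faker();
        // Realistic-looking placeholder data for tests or demo Kafka topics.
        System.out.println(faker.name().fullName());
        System.out.println(faker.address().streetAddress());
        System.out.println(faker.phoneNumber().cellPhone());
    }
}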

 

RestAssured

Rest Assured enables you to test REST APIs using Java libraries and integrates well with Maven. Rest Assured has a Gherkin-style syntax, as shown in the code below. If you are a fan of BDD (Behavior Driven Development), I believe you will love this kind of syntax. REST Assured follows a BDD style, and .given().when().then() gives each request a standardized schema.
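
A small given/when/then example; the URL, path and expectations are invented:

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

class UserApiTest {

    @Test
    void getUserReturnsExpectedName() {
        given()
            .baseUri("http://localhost:8080")    // invented endpoint
        .when()
            .get("/users/42")
        .then()
            .statusCode(200)
            .body("name", equalTo("Jane Doe"));  // invented expectation
    }
}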

RestTemplate testing with Spring Boot @RestClientTest Slice

In Spring Boot 1.4, the team has made a solid effort to simplify and speed up the creation and testing of REST clients. Compared to WireMock for testing our RestTemplate in isolation, this solution requires less setup as everything is part of Spring Boot.
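
A minimal sketch of the slice, assuming a hypothetical GreetingClient built on RestTemplateBuilder:

import static org.assertj.core.api.Assertions.assertThat;
import static org.springframework.test.web.client.match.MockRestRequestMatchers.requestTo;
import static org.springframework.test.web.client.response.MockRestResponseCreators.withSuccess;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.client.RestClientTest;
import org.springframework.http.MediaType;
import org.springframework.test.web.client.MockRestServiceServer;

@RestClientTest(GreetingClient.class)            // hypothetical RestTemplate-based client
class GreetingClientTest {

    @Autowired GreetingClient client;
    @Autowired MockRestServiceServer server;     // auto-configured by the slice

    @Test
    void fetchesGreeting() {
        server.expect(requestTo("/greeting/42"))
              .andRespond(withSuccess("{\"message\":\"hello\"}", MediaType.APPLICATION_JSON));

        assertThat(client.greetingFor("42")).isEqualTo("hello");   // hypothetical method
    }
}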

Testing RestController with the @WebMvcTest sliced context

Where else? https://reflectoring.io/ - tutorials on Spring Boot and Java, thoughts about the software craft, and relevant book reviews. Because it's just as important to understand the Why as it is to understand the How. Have fun!

The Spring Boot @SpyBean to the rescue

A spy wraps the real bean but allows you to verify method invocations and mock individual methods without affecting any other method of the real bean.
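
Roughly like this; the service and method names are invented:

import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;

@SpringBootTest
class PricingServiceTest {

    @SpyBean
    DiscountService discountService;   // real bean, wrapped in a Mockito spy (invented name)

    @Autowired
    PricingService pricingService;     // invented service that collaborates with DiscountService

    @Test
    void appliesTheStubbedDiscount() {
        // Stub a single method; every other DiscountService method still runs the real code.
        doReturn(0.1).when(discountService).discountFor("VIP");

        pricingService.priceFor("VIP");

        verify(discountService).discountFor("VIP");   // interactions on the spy can be verified
    }
}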

JUnit5, Spring and Mockito

JUnit 5 parameterized tests

JUnit 5 lifecycle: managing large resources

 By default, both JUnit 4 and 5 create a new instance of the test class before running each test method. This provides a clean separation of state between tests. JUnit 5 allows us to modify the lifecycle of the test class using the @TestInstance annotation. 
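
With PER_CLASS, one instance is shared across all test methods, so an expensive resource can be set up once in a non-static @BeforeAll; FakeServer below is just a placeholder:

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class ExpensiveResourceTest {

    private FakeServer server;    // stands in for any large, slow-to-start resource

    @BeforeAll                    // non-static is allowed with PER_CLASS
    void startServer() {
        server = new FakeServer();
    }

    @AfterAll
    void stopServer() {
        server = null;
    }

    @Test
    void usesTheSharedInstance() {
        // all test methods see the same 'server' field
    }

    static class FakeServer { }   // placeholder type for the example
}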

Spring Cloud Stream

Spring Cloud Stream is a framework for building highly scalable event-driven microservices connected with shared messaging systems.
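
With the functional programming model a binding can be as small as a Function bean; the actual destinations (e.g. Kafka topics) come from configuration and are assumed here:

import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class StreamConfig {

    // Spring Cloud Stream binds this function to an input and an output
    // destination based on the function name and the binding configuration.
    @Bean
    public Function<String, String> uppercase() {
        return String::toUpperCase;
    }
}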

Monads and Java

Monads come from the functional programming world and are used in many places in many different ways. The most concrete explanation, I'd say, is that a monad accepts a type of “something” (this could be an int, a String or any other type) and returns a new type containing your “something” type.
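
Java's Optional behaves this way: it wraps a "something", and flatMap chains computations that themselves return Optionals without ever leaving the container.

import java.util.Optional;

class MonadExample {
    public static void main(String[] args) {
        // Optional wraps a value; map/flatMap transform it inside the container.
        Optional<Integer> result = Optional.of("42")
                .map(Integer::parseInt)                    // Optional<String> -> Optional<Integer>
                .flatMap(n -> n > 0 ? Optional.of(n * 2)   // chain a computation that itself
                                    : Optional.empty());   // returns an Optional

        System.out.println(result.orElse(-1));             // prints 84
    }
}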

About PECS - Producer Extends, Consumer Super - and Lambdas

Wildcards and PECS (term first coined by Joshua Bloch in his book Effective Java): A wildcard is a type argument that uses a question mark, ?, which may or may not have an upper or lower bound. Type arguments without bounds are useful, but have limitations. If you declare a list with an unbounded wildcard, as in List<?>, you can read from it but not write to it.
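
The classic illustration is a copy method: the source only produces elements (extends), the destination only consumes them (super).

import java.util.List;

class Pecs {

    // Producer extends (we only read from src), consumer super (we only write to dest).
    static <T> void copy(List<? super T> dest, List<? extends T> src) {
        for (T element : src) {
            dest.add(element);
        }
    }
}

Calling copy(new ArrayList<Number>(), List.of(1, 2, 3)) then compiles, because an Integer source can feed a Number destination.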

Functional Patterns in Java

The release of lambdas represented one of the biggest leaps in the history of the Java language, mainly because it opened a wide new range of possibilities. Functional programming is a programming paradigm with its roots in lambda calculus that uses a declarative approach, instead of the more commonly known imperative paradigm.

2 legged vs 3 legged OAuth

In short, 2-legged and 3-legged OAuth refer to the number of parties involved in the OAuth dance. 3-legged OAuth is used when you as a user want to allow an application to access a service on your behalf. 2-legged OAuth is used when an application needs to access a service using a service account. Two-legged OAuth, or "signed fetch", takes advantage of OAuth's signatures to authenticate server-to-server requests. It doesn't need to involve the user nor any access tokens.
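
In OAuth 2.0 terms, the two-legged case is usually the client-credentials grant, where the token request is a single server-to-server POST. A minimal sketch with a placeholder endpoint and credentials:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class ClientCredentialsExample {
    public static void main(String[] args) throws Exception {
        // Placeholder token endpoint and credentials for an OAuth 2.0
        // client-credentials (two-legged) flow - no end user involved.
        String form = "grant_type=client_credentials"
                    + "&client_id=my-service"
                    + "&client_secret=secret";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://auth.example.com/oauth/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());   // JSON containing the access token
    }
}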

Keeping fit on Java - learning points

UML freshup

 CompletableFuture and Async programming

Multi-threading is similar to multitasking, but it enables the execution of multiple threads simultaneously within one process, rather than multiple processes. The CompletableFuture, introduced in Java 8, provides an easy way to write asynchronous, non-blocking and multi-threaded code. Spring has the ability to implement the Command pattern extremely well.
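
For example (the looked-up values are invented):

import java.util.concurrent.CompletableFuture;

class AsyncExample {
    public static void main(String[] args) {
        // supplyAsync runs on a worker thread; thenApply/thenAccept chain
        // further steps without blocking the caller.
        CompletableFuture<Integer> price = CompletableFuture
                .supplyAsync(() -> 42)            // invented "remote" lookup
                .thenApply(base -> base * 2);

        price.thenAccept(p -> System.out.println("price = " + p));

        price.join();   // only for the demo, so the JVM waits for the result
    }
}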

SOLID Principles

The SOLID Principles are five principles of Object-Oriented class design. The SOLID principles were first introduced by the famous computer scientist Robert C. Martin (a.k.a Uncle Bob) in his paper in 2000. But the SOLID acronym was introduced later by Michael Feathers.

  • The Single Responsibility Principle
  • The Open-Closed Principle
  • The Liskov Substitution Principle
  • The Interface Segregation Principle
  • The Dependency Inversion Principle
  1. The SOLID Principles of Object-Oriented
  2. SOLID Design Principles Explained: Dependency Inversion Principle with Code Examples – Stackify
  3. Open Close and Strategy pattern at DZone

The principle is very simple. All the strategy classes must implement a specific strategy interface. The class that uses the strategies, called the context class, is not bound to those specific strategy classes, but it is tied to the strategy interface. 

Gang of Four Design Patterns

As a Java developer using the Spring Framework to develop enterprise class applications, you will encounter the GoF Design Patterns on a daily basis. The GoF Design Patterns are broken into three categories: Creational Patterns for the creation of objects; Structural Patterns to provide relationship between objects; and finally, Behavioral Patterns to help define how objects interact.

Reference Data Pattern

By reference types I mean reference data, or lookup values, or – if you want to be flash – taxonomies. Typically, the values defined here are used in drop-down lists in your application's user interface. They may also appear as headings on a report.

As your data model evolves over time and new reference types are required, you don't need to keep making changes to your database for each new reference type. You just need to define new configuration data.
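
One common shape for this (names invented) is a single reference table keyed by a type discriminator, so that a new lookup type is just new rows:

import javax.persistence.Entity;
import javax.persistence.Id;

// One row per lookup value; 'type' says which drop-down/taxonomy the row belongs
// to (e.g. "CASE_STATUS", "COUNTRY"). Adding a new reference type means adding
// rows, not tables or columns.
@Entity
class ReferenceValue {

    @Id
    private Long id;

    private String type;          // e.g. "CASE_STATUS"
    private String code;          // e.g. "OPEN"
    private String description;   // e.g. "Open"
}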

Strategy Pattern leveraging Java 8 Lamda's

Typically, programmers tend to bundle all the algorithm logic in the host class, resulting in a monolithic class with multiple switch-case or conditional statements. Such pitfalls in enterprise applications cause rippling effects across the application, making it fragile. You can typically avoid this by introducing the Strategy pattern, as demonstrated below.

 

What this does:

  1. Statically define a family of algorithms at one location (interface Strategy)
  2. Follows the SOLID Open-Closed Principle (open (CaseStrategy) for extension, closed (the Case entity) for modification). New algorithms are added to the Strategy interface, letting us extend functionality without touching existing code for our Case entity.

@Data
@Builder
public class Case implements CaseStrategy {

    private String phase;
    private String status;

    @Builder.Default                              // keeps the default when built via the builder
    private Instant dueDate = Instant.now();
}

 

public interface Strategy<T> {

    // Applies the given algorithm to this instance and returns it for chaining.
    @SuppressWarnings("unchecked")
    default T thenApply(Consumer<T> logic) {
        logic.accept((T) this);
        return (T) this;
    }
}

 

public interface CaseStrategy extends Strategy<Case> {

    // Each factory method is one interchangeable algorithm of the strategy family.
    static Consumer<Case> transfer(State state) {
        return c -> { c.setPhase(state.getPhase()); c.setStatus(state.getStatus()); };
    }

    static Consumer<Case> priReview() {
        return c -> c.setPhase(CasePhase.PRI_REVIEW.name());
    }

    static Consumer<Case> rejected() {
        return c -> c.setStatus(CaseStatus.REJECTED.name());
    }

    static Consumer<Case> verify(Boolean check) {
        return c -> {
            if (check && c.getDueDate().isBefore(Instant.now())) {
                c.setStatus(CaseStatus.DECLINED.name());
            }
        };
    }
}

// Jupiter JUnit 5 TDD: red-green-refactor

import java.util.stream.Stream;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;

import static org.junit.jupiter.api.Assertions.assertEquals;

class CaseTest {

    @ParameterizedTest
    @MethodSource("stateProvider")
    void givenInitialState_whenTransferring_thenVerifyCaseStatusAndPhase(State state) {
        // GIVEN
        Case testCase = Case.builder().build();

        // WHEN
        testCase.thenApply(CaseStrategy.transfer(state));

        // THEN ... AND
        Assertions.assertAll("Check case phase and status",
            () -> assertEquals(state.getStatus(), testCase.getStatus()),
            () -> assertEquals(state.getPhase(), testCase.getPhase()));
    }

    // State is assumed to carry String phase/status values (built with Lombok @Builder)
    static Stream<State> stateProvider() {
        return Stream.of(
            State.builder().phase(CasePhase.INFOREQUEST.name()).status(CaseStatus.OPEN.name()).build(),
            State.builder().phase(CasePhase.CLOSINGSUMMARY.name()).status(CaseStatus.PENDING.name()).build());
    }
}

source

Liquibase changelogs in SQL format

Java interview questions

How to write Clean Java Code

Besides studying hard, to become a good software developer in Java or any other language, you must master the concepts and code conventions that make code clean and easy to maintain.
