View words on Java Records
- Java 14 Record Keyword | Baeldung
- How to use Java Records - Xebia
- How to Validate and Normalize Data For Java Record Classes? | by Uğur Taş | Codimis Medium
- java - Nested Spring configuration (ConfigurationProperties) in records - Stack Overflow
- Java Records: Data carrier classes (jfeatures.com)
Kafka Streams and JUnit testing
DataOps with Lenses
Kafka in MSA world and SAGA transactions (Pattern: Event sourcing)
Problem: How to reliably/atomically update the database and send messages/events?
- 2PC is not an option
- If the database transaction commits, the messages must be sent; conversely, if the database rolls back, the messages must not be sent
- Messages must be sent to the message broker in the order they were sent by the service. This ordering must be preserved across multiple service instances that update the same aggregate
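The usual answer to this problem is the transactional outbox pattern. The sketch below is only an in-memory illustration with made-up types (OutboxDemo, OutboxEvent): the business update and the outbox record are written together in the same "transaction", and a relay later forwards outbox entries to the broker strictly in sequence order, which is what preserves per-aggregate ordering.

```java
import java.util.ArrayList;
import java.util.List;

// In-memory sketch of the transactional outbox pattern (hypothetical types).
// In a real system the outbox row is inserted in the same DB transaction as
// the business update; a separate relay process forwards rows to Kafka.
public class OutboxDemo {
    record OutboxEvent(long sequence, String payload) {}

    static final List<String> aggregate = new ArrayList<>();   // the "database"
    static final List<OutboxEvent> outbox = new ArrayList<>(); // the outbox table
    static long nextSequence = 1;

    // Change and event are recorded together: both happen or neither does
    public static void updateAggregate(String change) {
        aggregate.add(change);
        outbox.add(new OutboxEvent(nextSequence++, change));
    }

    // The relay publishes strictly in sequence order
    public static List<String> relay() {
        return outbox.stream().map(OutboxEvent::payload).toList();
    }

    public static void main(String[] args) {
        updateAggregate("case-created");
        updateAggregate("case-approved");
        System.out.println(relay()); // [case-created, case-approved]
    }
}
```

The key property this models: if `updateAggregate` throws before the outbox add, neither the state change nor the event survives; if it completes, both do.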
Studies and reads
- Event sourcing, CQRS, stream processing and Apache Kafka: What’s the connection? | Confluent
- Event sourcing using Kafka
- Distributed Data for Microservices — Event Sourcing vs. Change Data Capture
- Finally the Pattern: Event sourcing
EventSource store events with Kafka Streams on RocksDB
Here comes RocksDB (RocksDB is Facebook's open-source embeddable persistent key-value store with a log-structured database engine, written entirely in C++ - http://rocksdb.org/).
For stateful operations, Kafka Streams uses local state stores that are made fault-tolerant by associated changelog topics stored in Kafka. For these state stores, Kafka Streams uses RocksDB as its default storage to maintain local state on a computing node. RocksDB is a highly adaptable, embeddable, and persistent key-value store that was originally built by the Engineering team at Facebook. Many companies use RocksDB in their infrastructure to get high performance to serve data. Kafka Streams configures RocksDB to deliver a write-optimized state store.
Kafka and Transactions and Commits
- Difference between session.timeout.ms and max.poll.interval.ms for Kafka >= 0.10.1
- Confluence about consumer groups
- About Consumer groups and multiple services and instances
- Kafka Consumer Auto Offset Reset
- How Kafka's Consumer Auto Commit Configuration Can Lead to Potential Duplication or Data Loss
- Kafka - When to commit?
- How Kafka commits messages - Masterspringboot
- Isolation level in Apache Kafka consumers on waitingforcode.com - articles about Apache Kafka
- Intro to Apache Kafka with Spring | Baeldung
KTables and KStreams
- What is a KTable in Kafka Streams? (confluent.io)
- In KafkaStreams, when to choose between a KTable or a KStream? (danlebrero.com)
Kafka Junit5 embedded
- Testing an Apache Kafka Integration within a Spring Boot Application and JUnit 5 | mimacom
- No need for Schema Registry in your Spring-Kafka tests
Kafka Junit 5 TestContainers
- https://www.confluent.io/blog/advanced-testing-techniques-for-spring-kafka/
- https://medium.zenika.com/dockerize-your-integration-tests-8d26a7425baa
- https://gquintana.github.io/2019/07/03/Kafka-integration-tests.html
- at stackoverflow
- Good example github
- https://www.testcontainers.org/modules/kafka/
- https://www.baeldung.com/spring-dynamicpropertysource
- https://www.baeldung.com/spring-boot-kafka-testing
- https://kreuzwerker.de/en/post/testing-a-kafka-consumer-with-avro-schema-messages-in-your-spring-boot
- https://howtodoinjava.com/kafka/spring-boot-jsonserializer-example/
- https://rmoff.net/2018/08/02/kafka-listeners-explained/
- https://medium.com/@marcelo.hossomi/running-kafka-in-docker-machine-64d1501d6f0b
- https://pawelpluta.com/introduce-any-testcontainer-into-your-spring-application/
- how to start KafkaAdmin after starting KafkaContainers
- Kafka Streams, JUnit5 and TestContainers (Baeldung)
What is Reactive?
The term “reactive” refers to programming models that are built around reacting to change, like network components reacting to I/O events or UI controllers reacting to mouse events. R2DBC (Reactive Relational Database Connectivity) was created out of the need for a non-blocking application stack that handles concurrency with a small number of threads and scales with fewer hardware resources.
You can think of data processed by a reactive application as moving through an assembly line. Reactor is both the conveyor belt and the workstations. The raw material pours from a source (the original Publisher) and ends up as a finished product ready to be pushed to the consumer (or Subscriber). The raw material can go through various transformations and other intermediary steps or be part of a larger assembly line that aggregates intermediate pieces together.
The core of reactive programming is a data stream that we can observe and react to, even apply back pressure as well. This leads to non-blocking execution and hence to better scalability with fewer threads of execution.
Backpressure is when a downstream component tells an upstream one to send it less data in order to prevent it from being overwhelmed. The consumer gets control over the speed at which data is emitted.
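The request-n protocol behind backpressure can be seen in plain JDK code via the java.util.concurrent.Flow API (Java 9+). This is only a sketch: the subscriber requests exactly one item at a time, so the publisher can never push faster than the consumer asks.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {

    // A subscriber that requests items one at a time: the publisher can never
    // push faster than this consumer asks for data.
    static class OneAtATimeSubscriber implements Flow.Subscriber<Integer> {
        final List<Integer> received = new ArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);
        private Flow.Subscription subscription;

        @Override public void onSubscribe(Flow.Subscription s) {
            this.subscription = s;
            s.request(1);               // ask for exactly one item
        }
        @Override public void onNext(Integer item) {
            received.add(item);
            subscription.request(1);    // only now ask for the next one
        }
        @Override public void onError(Throwable t) { done.countDown(); }
        @Override public void onComplete() { done.countDown(); }
    }

    public static List<Integer> run() throws InterruptedException {
        OneAtATimeSubscriber sub = new OneAtATimeSubscriber();
        try (SubmissionPublisher<Integer> pub = new SubmissionPublisher<>()) {
            pub.subscribe(sub);
            for (int i = 1; i <= 5; i++) pub.submit(i); // submit honors demand
        }                                               // close() signals completion
        sub.done.await();
        return sub.received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // [1, 2, 3, 4, 5]
    }
}
```

Reactor's Flux/Mono implement the same Reactive Streams contract; the `request(1)` call here is what operators like `limitRate` manage for you.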
- Spring’s WebFlux / Reactor Parallelism and Backpressure
- Reactive Systems in Java
- https://www.vinsguru.com/reactor-hot-publisher-vs-cold-publisher/
- Reactive programming - A simple introduction
- Mono vs Flux in project Reactor
- Reactor Hot Publisher vs Cold Publisher
- Reactor map vs flatMap
- Understanding Reactive’s .flatMap() Operator
- Testing Reactive Streams Using StepVerifier and TestPublisher
- Kafka No Longer Requires ZooKeeper
- Testing Reactive Microservice in Spring Boot — Unit Testing
- Testing Reactive Microservice in Spring Boot — Understanding Reactive & Choosing Test Stack
- How to Make Legacy Code Reactive
- What Does Mono.defer() Do?
- Which operator do I need?
- Curated List of Learning Material
- Project Reactor Reference Documentation
- @WebFluxTest with WebTestClient
- Testing your Router Functions with WebTestClient
- Webflux: The functional approach
- Going reactive with Spring Data
- R2DBC stands for Reactive Relational Database Connectivity
- R2DBC latest reference documentation
- Accessing data with R2DBC
R2DBC (Reactive Relational Database Connectivity) makes it easier to build Spring-powered applications that use relational data access technologies in a reactive application stack.
Reactive and blocking world
Remember that non-blocking, and asynchronous servers work using a single main thread (or a very small number of them). Blocking that thread blocks the entire web server. Don’t ever do this. The high performance that non-blocking servers like Netty can potentially achieve is largely a product of their not having to perform a lot of context switching between threads. When you want to use a blocking API in a non-blocking context, the only way to handle it is to push it off onto a worker thread. There’s a trade-off here because once you start pushing work off from the main thread onto other threads you start to reduce the efficiency of the non-blocking model. The more you do this, the more you start to look like a traditional web server.
- How to Make Legacy Code Reactive
- How Do I Wrap a Synchronous, Blocking Call?
- How to Prevent Reactive Java Applications from Stalling
- https://www.split.io/blog/reactive-java-spring-webflux/
- how-to-wrap-a-flux-with-a-blocking-operation-in-the-subscribe
- Flight of the Flux 3 - Hopping Threads and Schedulers
Building Reactive Springboot apps with Kafka
- Good intro Kafka
- Spring Boot and Kafka – Practical Example (thepracticaldeveloper.com)
- Apache Kafka and Reactive Spring Boot | Object Partners
- A reactive stack with Spring Boot, Kafka and Angular (wordpress.com)
- Reactor Kafka Reference Guide (projectreactor.io)
- reactor-kafka/SampleConsumer.java at main · reactor/reactor-kafka · GitHub
- Reactor Kafka Reference Guide (projectreactor.io)
- Head-First Reactive Workshop
- Head-First Reactive Workshop sources
- blog series reactive programming
- Great read reactor by example
Kafka and Avro
Avro is an open source data serialization system that helps with data exchange between systems, programming languages, and processing frameworks. Avro helps define a binary format for your data, as well as map it to the programming language of your choice.
- https://dzone.com/articles/gentle-and-practical-introduction-to-apache-avro-part-1
- https://codingharbour.com/apache-kafka/guide-to-apache-avro-and-kafka/
- https://dzone.com/articles/kafka-avro-serialization-and-the-schema-registry
- https://www.confluent.io/blog/avro-kafka-data/
- https://www.baeldung.com/spring-cloud-stream-kafka-avro-confluent
- https://www.confluent.io/blog/schema-registry-avro-in-spring-boot-application-tutorial/
- https://www.confluent.io/blog/apache-kafka-spring-boot-application/
Kafka Streams
- https://www.baeldung.com/java-kafka-streams-vs-kafka-consumer
- https://blog.rockthejvm.com/kafka-streams/
- https://developer.confluent.io/learn-kafka/kafka-streams/get-started/
- https://www.tutorialworks.com/kafka-vs-streams-vs-connect/
- https://docs.confluent.io/platform/current/streams/concepts.html
- https://www.confluent.io/blog/ksql-streaming-sql-for-apache-kafka/
Kafka @ https://www.confluent.io/
Kafka is a mechanism for programs to exchange information, but its home ground is event-based communication, where events are business facts that have value to more than one service and are worth keeping around. This is emphasized by the core mantra of event-driven services: Centralize an immutable stream of business facts.
Kafka is based on the abstraction of a distributed commit log. By splitting a log into partitions, Kafka is able to scale out. As such, Kafka models events as key/value pairs. Internally, keys and values are just sequences of bytes, but externally, in your programming language of choice, they are often structured objects represented in your language's type system.
Confluent Cloud promises the ultimate cloud-native Apache Kafka® experience: no more cluster sizing, scaling, over-provisioning, ZooKeeper management or hardware.
- https://developer.confluent.io/get-started/spring-boot/#introduction
- https://developer.confluent.io/learn-kafka/spring/confluent-cloud/
- https://spring.io/projects/spring-kafka
- https://developer.confluent.io/learn-kafka/spring/hands-on-spring-boot-for-confluent/
- Kafka vs. Other Systems (REST, Enterprise Service Bus, Database) (confluent.io)
- Confluent Platform | Confluent Documentation
- Understanding Kafka Topic Partitions (best introduction)
- Kafkademy
Kafka Stream With Spring Boot
- Kafka Stream With Spring Boot | Vinsguru
- Introduction to KafkaStreams in Java | Baeldung
- Kafka Streams with Spring Boot & Confluent Cloud
- What is a KTable in Kafka Streams? (confluent.io)
- Using Kafka Streams - Mapping and Filtering Data (confluent.io)
Reactive Streams
Reactive is a programming model built around the concept of reacting to changes, like network components reacting to I/O
Reactive streams use a push model: items are pushed onto the stream at the pace of the publisher, regardless of whether the subscriber can keep up (no worries, backpressure is a key feature of reactive streams). Reactive streams are also lazy and won't start as long as there is no subscriber present; that means a subscriber is always necessary. I've mentioned before that publishers are asynchronous in nature, but are they always asynchronous? The answer is no, not always: whether a reactive stream is synchronous or asynchronous depends on the type of publisher you use.
The key expected benefit of reactive and non-blocking is the ability to scale with a small, fixed number of threads and less memory.
- https://dimitr.im/difference-between-mono-and-flux
- http://www.reactive-streams.org/
- https://dzone.com/articles/what-are-reactive-streams-in-java
- https://www.baeldung.com/reactor-core
- Reactive programming must for Spring Cloud Functions
- https://jstobigdata.com/java/backpressure-in-project-reactor/
- Reactor Flux Create vs Generate
- https://www.baeldung.com/reactor-core
- https://www.infoq.com/articles/reactor-by-example/
Java Faker and Kafka
Java Faker is a library that can generate a wide array of realistic-looking data, from mobile numbers, addresses and names to popular-culture references. It is really helpful when we want placeholder data but don't have the actual data. In microservices-based development we need data to validate against, and generating dummy test data is quite a challenging task.
- https://reflectoring.io/spring-boot-kafka/
- Java Faker library to generate fake data. | by Maheshwar Ligade | techwasti | Medium
- Send Messages to Confluent Cloud with Spring Boot
- A Guide to JavaFaker | Baeldung
RestAssured
Rest Assured enables you to test REST APIs using Java libraries and integrates well with Maven. It has a Gherkin-style syntax; if you are a fan of BDD (Behavior Driven Development), I believe you will love it. REST Assured follows the BDD style, and .given().when().then() gives each request a standardized schema.
- Best Guide on MockMvc testing SpringBoot REST API endpoints
- WebMvc and Spring Boot Test Slices
- https://www.guru99.com/rest-assured.html
- https://www.toolsqa.com/rest-assured-tutorial/
RestTemplate testing with Spring Boot @RestClientTest Slice
In Spring Boot 1.4, the team made a solid effort to simplify and speed up the creation and testing of REST clients. Compared to WireMock for testing our RestTemplate in isolation, this solution requires less setup, as everything is part of Spring Boot.
- Auto-configured REST Clients
- https://www.baeldung.com/restclienttest-in-spring-boot
- https://rieckpil.de/testing-your-spring-resttemplate-with-restclienttest/
- What the Heck Is the SpringExtension Used For?
- Injecting qualified RestTemplate instances
- Testing with Spring Boot's @TestConfiguration Annotation
Testing RestController with the @WebMvcTest sliced context
Where else but https://reflectoring.io/: tutorials on Spring Boot and Java, thoughts about the software craft, and relevant book reviews. Because it's just as important to understand the Why as it is to understand the How. Have fun!
The Spring Boot @SpyBean to the rescue
A spy wraps the real bean but allows you to verify method invocations and mock individual methods without affecting any other method of the real bean.
- https://www.logicbig.com/tutorials/spring-framework/spring-boot/testing-with-spy-bean.html
- https://shekhargulati.com/2017/07/20/using-spring-boot-spybean/
- @Mock vs. @MockBean When Testing Spring Boot Applications
- Improve build times with Context Caching from Spring Test
JUnit5, Spring and Mockito
- JUnit 5 tutorial part 1, testing with Mockito, Hamcrest
- JUnit 5 Annotations with Examples
- https://www.baeldung.com/junit-5
- https://junit.org/junit5/docs/current/user-guide/
- https://www.baeldung.com/mockito-junit-5-extension
- Junit5, MockMvc builder standalone testing
JUnit 5 parameterized tests
- https://www.baeldung.com/parameterized-tests-junit-5
- A More Practical Guide to JUnit 5 Parameterized Tests
- JUnit 5 Tutorial: Writing Parameterized Tests
JUnit5 LifeCycle: managing large resources
By default, both JUnit 4 and 5 create a new instance of the test class before running each test method. This provides a clean separation of state between tests. JUnit 5 allows us to modify the lifecycle of the test class using the @TestInstance annotation.
Spring Cloud Stream
Spring Cloud Stream is a framework for building highly scalable event-driven microservices connected with shared messaging systems.
- https://spring.io/projects/spring-cloud-stream
- https://www.baeldung.com/spring-cloud-stream
- https://tanzu.vmware.com/developer/guides/event-streaming/scs-what-is/
Monads and Java
Monads come from the functional programming world and are used in many places in many different ways. The most concrete explanation, I'd say, is that a monad accepts a type of "something" (this could be an int, a String or any other type) and returns a new type containing your "something" type.
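Java's Optional behaves like a monad in exactly this sense: it wraps a value of some type, and flatMap chains computations that may each produce "no value", without any null checks. A small illustrative sketch (all method names here are invented for the demo):

```java
import java.util.Optional;

public class MonadDemo {

    // Each step returns a new Optional "containing" the result, or empty
    static Optional<Integer> parse(String s) {
        try { return Optional.of(Integer.parseInt(s)); }
        catch (NumberFormatException e) { return Optional.empty(); }
    }

    // Only even numbers can be halved in this toy domain
    static Optional<Integer> half(int n) {
        return n % 2 == 0 ? Optional.of(n / 2) : Optional.empty();
    }

    // flatMap is the monadic "bind": it short-circuits on empty
    public static String demo(String input) {
        return parse(input)
                .flatMap(MonadDemo::half)
                .map(Object::toString)
                .orElse("none");
    }

    public static void main(String[] args) {
        System.out.println(demo("42"));  // 21
        System.out.println(demo("7"));   // none (odd, half fails)
        System.out.println(demo("abc")); // none (parse fails)
    }
}
```

If any step in the chain yields empty, the rest of the pipeline is skipped, which is the whole point of the abstraction.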
About PECS - Producer Extends Consumer Super and Lambda's
Wildcards and PECS (a term first coined by Joshua Bloch in his book Effective Java): a wildcard is a type argument that uses a question mark, ?, which may or may not have an upper or lower bound. Type arguments without bounds are useful, but have limitations: if you declare a List with an unbounded wildcard, you can read from it but not write to it.
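A minimal sketch of the PECS rule using only the JDK: the list we read from is a Producer, so it takes `? extends`; the list we write into is a Consumer, so it takes `? super`.

```java
import java.util.ArrayList;
import java.util.List;

public class PecsDemo {

    // Producer Extends: we only read Numbers out of the list,
    // so any List<Integer>, List<Double>, ... is acceptable.
    public static double sum(List<? extends Number> producer) {
        double total = 0;
        for (Number n : producer) total += n.doubleValue();
        // producer.add(1); // would not compile: can't write into "? extends"
        return total;
    }

    // Consumer Super: we only write Integers into the list,
    // so any List<Integer>, List<Number>, List<Object> is acceptable.
    public static void fillWithSquares(List<? super Integer> consumer, int upTo) {
        for (int i = 1; i <= upTo; i++) consumer.add(i * i);
        // Integer x = consumer.get(0); // would not compile: reads only give Object
    }

    public static void main(String[] args) {
        List<Integer> ints = List.of(1, 2, 3);
        System.out.println(sum(ints));   // 6.0

        List<Number> numbers = new ArrayList<>();
        fillWithSquares(numbers, 3);
        System.out.println(numbers);     // [1, 4, 9]
    }
}
```

The commented-out lines are the compiler enforcing PECS: a producer can't be written to, and a consumer can't be read from with a useful type.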
- https://howtodoinjava.com/java/generics/java-generics-what-is-pecs-producer-extends-consumer-super/
- https://stackoverflow.com/questions/4343202/difference-between-super-t-and-extends-t-in-java
- https://nofluffjuststuff.com/magazine/2016/09/time_to_really_learn_generics_a_java_8_perspective
- https://howtodoinjava.com/java8/generic-functional-interfaces/
Functional Patterns in Java
The release of lambdas represented one of the largest quality leaps in the history of the Java language, mainly because it opened a wide new range of possibilities. Functional programming is a programming paradigm rooted in lambda calculus that uses a declarative approach, instead of the more commonly known imperative paradigm.
- https://medium.com/swlh/a-new-java-functional-style-f522dad40d32
- https://codeburst.io/mastering-the-new-functional-java-2eb2f7472079
- https://betterprogramming.pub/functional-patterns-in-java-b2b781f84124
- https://theboreddev.com/functional-patterns-in-java/
- https://dzone.com/articles/functional-programming-patterns-with-java-8
2 legged vs 3 legged OAuth
In short, 2-legged and 3-legged OAuth refer to the number of parties involved in the OAuth dance. 3-legged OAuth is used when you, as a user, want to allow an application to access a service on your behalf. 2-legged OAuth is used when an application needs to access a service using a service account. Two-legged OAuth, or "signed fetch", takes advantage of OAuth's signatures to authenticate server-to-server requests; it doesn't need to involve the user or any access tokens.
- 2 legged vs 3 legged OAuth - lekkimworld.com
- What are the differences between two-legged and three-legged OAuth? - Quora
Keeping fit on Java - learning points
- https://www.baeldung.com/java-thread-safety (must read)
- HowtodoinJava
- Spring guides
- DZone java zone
- winterbe
- Journaldev
- Tutorialpoint
- Callicoder.com (CompletableFuture)
- https://dzone.com/articles/20-examples-of-using-javas-completablefuture
- Quick Guide to Spring Bean Scopes | Baeldung
- @bean
- @Autowired, @Resource and @Inject | Baeldung
- Why Sneaky Throw? Bean Scopes
- More Fun with Wildcards
- Generic methods and Wildcard character
UML freshup
- UML Class Diagram Tutorial (visual-paradigm.com)
- Java to UML Diagrams
- UML Association vs Aggregation vs Composition with EXAMPLE (guru99.com)
- https://www.upgrad.com/blog/what-is-composition-in-java-with-examples/
- https://dzone.com/articles/an-introduction-to-the-java-collections-framework
- https://www.baeldung.com/spring-transactional-propagation-isolation
CompletableFuture and Async programming
Multi-threading is similar to multitasking, but it enables executing multiple threads simultaneously rather than multiple processes. CompletableFuture, introduced in Java 8, provides an easy way to write asynchronous, non-blocking, multi-threaded code. Spring has the ability to implement the Command pattern extremely well.
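A minimal CompletableFuture sketch (the lookup methods are invented for the demo): two independent async "fetches" run concurrently, and thenCombine joins their results without blocking in between; join() blocks only at the very edge.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {

    // Two independent lookups; supplyAsync runs each on ForkJoinPool.commonPool()
    static CompletableFuture<String> fetchUser() {
        return CompletableFuture.supplyAsync(() -> "alice");
    }

    static CompletableFuture<Integer> fetchScore() {
        return CompletableFuture.supplyAsync(() -> 42);
    }

    public static String combine() {
        return fetchUser()
                .thenCombine(fetchScore(), (user, score) -> user + ":" + score)
                .join(); // block only at the edge, for the demo
    }

    public static void main(String[] args) {
        System.out.println(combine()); // alice:42
    }
}
```

In a Spring service you would typically pass your own Executor to supplyAsync instead of relying on the common pool (see the ForkJoinPool note below).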
- Java CompletableFuture Tutorial with Examples
- Creating Asynchronous Methods in Springboot
- Guide To CompletableFuture Baeldung
- Spring Boot: Creating Asynchronous Methods Using @Async Annotation
- HowToDoInJava (Spring @Async rest controller example)
- Be aware of ForkJoinPool.commonPool
SOLID Principles
The SOLID Principles are five principles of object-oriented class design. They were first introduced by the famous computer scientist Robert C. Martin (a.k.a. Uncle Bob) in a paper in 2000, but the SOLID acronym was coined later by Michael Feathers.
- The Single Responsibility Principle
- The Open-Closed Principle
- The Liskov Substitution Principle
- The Interface Segregation Principle
- The Dependency Inversion Principle
- The SOLID Principles of Object-Oriented
- SOLID Design Principles Explained: Dependency Inversion Principle with Code Examples – Stackify
- Open Close and Strategy pattern at DZone
The principle is very simple. All the strategy classes must implement a specific strategy interface. The class that uses the strategies, called the context class, is not bound to those specific strategy classes, but it is tied to the strategy interface.
Gang of Four Design Patterns
As a Java developer using the Spring Framework to develop enterprise class applications, you will encounter the GoF Design Patterns on a daily basis. The GoF Design Patterns are broken into three categories: Creational Patterns for the creation of objects; Structural Patterns to provide relationship between objects; and finally, Behavioral Patterns to help define how objects interact.
- Baeldung SpringBoot and Design Patterns
- SpringFrameWork Guru about GOF
- Which GoF Design pattern will be changed or influenced by the introduction of lambdas in Java8?
- State Design Pattern in Java
- State of the Lambda, why it will replace parts of GOF behavioral patterns
- Good example of Strategy pattern on How to do in java (EmployeePredicates)
Reference Data Pattern
By reference types I mean reference data, or lookup values, or – if you want to be flash – taxonomies. Typically, the values defined here are used in drop-down lists in your application's user interface. They may also appear as headings on a report.
As your data model evolves over time and new reference types are required, you don't need to keep making changes to your database for each new reference type. You just need to define new configuration data.
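As a toy illustration of the idea (all names invented), reference data can live in configuration rather than in schema: adding a new reference type means adding a map entry, not altering the database.

```java
import java.util.List;
import java.util.Map;

public class ReferenceDataDemo {

    // Reference data modeled as plain configuration. A new lookup type
    // (say, "CASE_PRIORITY") is a new entry here, not a new table or column.
    static final Map<String, List<String>> REFERENCE_DATA = Map.of(
        "CASE_STATUS", List.of("OPEN", "PENDING", "REJECTED", "DECLINED"),
        "CASE_PHASE",  List.of("INFOREQUEST", "PRI_REVIEW", "CLOSINGSUMMARY"));

    // Feeds drop-down lists in the UI, report headings, etc.
    public static List<String> lookup(String type) {
        return REFERENCE_DATA.getOrDefault(type, List.of());
    }

    public static void main(String[] args) {
        System.out.println(lookup("CASE_STATUS")); // [OPEN, PENDING, REJECTED, DECLINED]
    }
}
```

In a real application the map would be loaded from a generic reference-data table or a config file, but the shape of the lookup stays the same.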
Strategy Pattern leveraging Java 8 Lambdas
- Strategy design pattern at Baeldung
- Fluent Builder pattern at DZone
- Strategy pattern at Howtodoinjava
Typically, programmers tend to bundle all the algorithm logic into the host class, resulting in a monolithic class with multiple switch-case or conditional statements. In enterprise applications such pitfalls cause rippling effects across the application, making it fragile. You can typically avoid this by introducing the Strategy pattern, as demonstrated below.
What this does:
- Statically define a family of algorithms at one location (interface Strategy)
- Follows the SOLID Open-Closed Principle: open (CaseStrategy) for extension, closed (the Case entity) for modification. Adding new algorithms to the strategy interface lets us extend functionality without touching existing code in our Case entity.
@Data
@Builder
public class Case implements CaseStrategy {
    private String phase;
    private String status;
    @Builder.Default // without this, Lombok's builder would leave dueDate null
    private Instant dueDate = Instant.now();
}

public interface Strategy<T> {
    // Applies the given logic to this instance and returns it, enabling fluent chaining
    @SuppressWarnings("unchecked")
    default T thenApply(Consumer<T> logic) {
        logic.accept((T) this);
        return (T) this;
    }
}

public interface CaseStrategy extends Strategy<Case> {
    static Consumer<Case> transfer(State state) {
        return c -> {
            c.setPhase(state.getPhase());
            c.setStatus(state.getStatus());
        };
    }
    static Consumer<Case> priReview() {
        return c -> c.setPhase(CasePhase.PRI_REVIEW.name());
    }
    static Consumer<Case> rejected() {
        return c -> c.setStatus(CaseStatus.REJECTED.name());
    }
    static Consumer<Case> verify(Boolean check) {
        return c -> {
            if (check && c.getDueDate().isBefore(Instant.now())) {
                c.setStatus(CaseStatus.DECLINED.name());
            }
        };
    }
}
// Jupiter JUnit5 TDD red green refactor
class CaseTest {

    @ParameterizedTest
    @MethodSource("stateProvider")
    void givenInitialState_whenTransferring_thenVerifyCaseStatusAndPhase(State state) {
        // GIVEN
        Case testCase = Case.builder().build();
        // WHEN
        testCase.thenApply(transfer(state));
        // THEN
        Assertions.assertAll("Check case phase and status"
            , () -> assertEquals(state.getStatus(), testCase.getStatus())
            // AND
            , () -> assertEquals(state.getPhase(), testCase.getPhase()));
    }

    static Stream<State> stateProvider() {
        return Stream.of(
            State.builder().phase(CasePhase.INFOREQUEST.name()).status(CaseStatus.OPEN.name()).build(),
            State.builder().phase(CasePhase.CLOSINGSUMMARY.name()).status(CaseStatus.PENDING.name()).build());
    }
}
Liquibase changelogs in SQL format
- https://docs.liquibase.com/concepts/basic/sql-format.html
- https://www.liquibase.org/blog/plain-sql
- https://www.techgeeknext.com/spring-boot/spring-boot-liquibase-sql-example
- https://roytuts.com/evolving-database-using-spring-boot-liquibase/
- https://objectpartners.com/2018/05/09/liquibase-and-spring-boot/
- https://reflectoring.io/database-migration-spring-boot-liquibase/
- https://docs.liquibase.com/change-types/home.html
- https://auth0.com/blog/integrating-spring-data-jpa-postgresql-liquibase/
- https://medium.com/@auth0/integrating-spring-data-jpa-postgresql-and-liquibase-79b1bc65082e
Java interview questions
How to write Clean Java Code
Besides studying hard, to become a good software developer in Java or any other language you must master the concepts and code conventions that make code clean and easy to maintain.