Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
Top 7 Mistakes When Testing JavaFX Applications
Converting ActiveMQ to Jakarta (Part III: Final)
Amazon EKS makes running containerized applications easier, but it doesn’t give you automatic visibility into JVM internals like memory usage or garbage collection. For Java applications, observability requires two levels of integration: cluster-level monitoring for pods, nodes, and deployments; and JVM-level APM instrumentation for heap, GC, threads, latency, etc. New Relic provides both via Helm for infrastructure metrics, and a lightweight Java agent for full JVM observability. In containerized environments like Kubernetes, surface-level metrics (CPU, memory) aren’t enough. For Java apps, especially those built on Spring Boot, the real performance story lies inside the JVM. Without insight into heap usage, GC behavior, and thread activity, you're flying blind. New Relic bridges this gap by combining infrastructure-level monitoring (via Prometheus and kube-state-metrics) with application-level insights from the JVM agent. This dual visibility helps teams reduce mean time to resolution (MTTR), avoid OOMKilled crashes, and tune performance with confidence. This tutorial covers: installing New Relic on EKS via Helm; instrumenting your Java microservice with New Relic’s Java agent; JVM tuning for container environments; monitoring GC activity and memory usage; creating dashboards and alerts in New Relic; and an optional values.yaml file, YAML bundle, and GitHub repo. Figure 1: Architecture of JVM monitoring on Amazon EKS using New Relic. The Java microservice runs inside an EKS pod with the New Relic JVM agent attached. It sends GC, heap, and thread telemetry to New Relic APM. At the same time, Prometheus collects Kubernetes-level metrics, which are forwarded to New Relic for unified observability. Prerequisites: an Amazon EKS cluster with kubectl and helm configured; a Java-based app (e.g., Spring Boot) deployed in EKS; a New Relic account (free tier is enough); and a basic understanding of JVM flags and Kubernetes manifests. Install New Relic’s Kubernetes Integration (Helm) This installs the infrastructure monitoring components for cluster, pod, and container-level metrics. Step 1: Add the New Relic Helm repository Shell helm repo add newrelic https://helm-charts.newrelic.com helm repo update Step 2: Install the monitoring bundle Shell helm install newrelic-bundle newrelic/nri-bundle \ --set global.licenseKey=<NEW_RELIC_LICENSE_KEY> \ --set global.cluster=<EKS_CLUSTER_NAME> \ --namespace newrelic --create-namespace \ --set newrelic-infrastructure.enabled=true \ --set kube-state-metrics.enabled=true \ --set prometheus.enabled=true Replace <NEW_RELIC_LICENSE_KEY> and <EKS_CLUSTER_NAME> with your actual values. Instrument Your Java Microservice With the New Relic Agent Installing the Helm chart sets up cluster-wide observability, but to monitor JVM internals like heap usage, thread activity, or GC pauses, you need to attach the New Relic Java agent.
This gives you: JVM heap, GC, and thread metrics; response times, error rates, and transaction traces; GC pauses and deadlocks. Dockerfile (add agent): Dockerfile ADD https://download.newrelic.com/newrelic/java-agent/newrelic-agent/current/newrelic-java.zip /opt/ RUN unzip /opt/newrelic-java.zip -d /opt/ JVM startup args: Shell -javaagent:/opt/newrelic/newrelic.jar Required environment variables: YAML - name: NEW_RELIC_APP_NAME value: your-app-name - name: NEW_RELIC_LICENSE_KEY valueFrom: secretKeyRef: name: newrelic-license key: license_key Create the secret: Shell kubectl create secret generic newrelic-license \ --from-literal=license_key=<YOUR_NEW_RELIC_LICENSE_KEY> Capture Kubernetes Metrics The New Relic Helm install includes: newrelic-infrastructure → node, pod, and container metrics; kube-state-metrics → Kubernetes objects; prometheus-agent → custom metrics support. Verify locally: Shell kubectl top pods kubectl top nodes In the New Relic UI, go to: Infrastructure → Kubernetes JVM Tuning for GC and Containers To avoid OOMKilled errors and track GC behavior, tune your JVM for Kubernetes: Recommended JVM Flags: Shell -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XshowSettings:vm -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/gc.log Make sure /tmp is writable or mount it via emptyDir. Pod resources: YAML resources: requests: memory: "512Mi" cpu: "250m" limits: memory: "1Gi" cpu: "500m" Align MaxRAMPercentage with limits.memory. Why JVM Monitoring Matters in Kubernetes Kubernetes enforces resource limits on memory and CPU, but by default, the JVM doesn’t respect those boundaries. Without proper tuning, the JVM might allocate more memory than allowed, triggering OOMKilled errors. Attaching the New Relic Java agent gives you visibility into GC pauses, heap usage trends, and thread health, all of which are critical in autoscaling microservice environments. With these insights, you can fine-tune JVM flags like `MaxRAMPercentage`, detect memory leaks early, and make data-driven scaling decisions. Dashboards and Alerts in New Relic Create an alert for GC pause time: Go to Alerts & AI → Create alert; select metric: JVM > GC > Longest GC pause; set threshold: e.g., pause > 1000 ms. Suggested Dashboards: JVM heap usage, GC pause trends, pod CPU and memory usage, error rate and latency. Use New Relic’s dashboard builder or import JSON from your repo. Forwarding GC Logs to Amazon S3 While New Relic APM provides GC summary metrics, storing full GC logs is helpful for deep memory analysis, tuning, or post-mortem debugging. Since container logs are ephemeral, the best practice is to forward these logs to durable storage like Amazon S3. Why S3? Persistent log storage beyond pod restarts; useful for memory tuning, forensic reviews, or audits; cost-effective compared to real-time log ingestion services. Option: Use Fluent Bit with S3 Output Plugin 1. Enable GC logging with: Shell -Xloggc:/tmp/gc.log 2. Mount /tmp with emptyDir in your pod 3. Deploy Fluent Bit as a sidecar or DaemonSet Make sure your pod or node has an IAM role with s3:PutObject permission to the target bucket. This setup ensures your GC logs are continuously shipped to S3 for safe, long-term retention, even after the pod is restarted or deleted.
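Before moving on to troubleshooting, it can be worth confirming inside the pod that the container-aware flags from the JVM tuning section above actually took effect. Here is a minimal, hedged sketch (a hypothetical HeapCheck class, not part of this tutorial's repo) that prints the heap ceiling and CPU count the JVM sees:

Java
public class HeapCheck {
    public static void main(String[] args) {
        // Effective heap ceiling after -XX:MaxRAMPercentage is applied to the container memory limit
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        // CPUs visible to the JVM inside the container (respects CPU limits with container support)
        int cpus = Runtime.getRuntime().availableProcessors();

        System.out.printf("Max heap: %d MiB%n", maxHeapBytes / (1024 * 1024));
        System.out.printf("Available processors: %d%n", cpus);
    }
}

With -XX:MaxRAMPercentage=75.0 and a 1Gi memory limit, the reported max heap should land around 768 MiB; a much larger value usually means the container-aware flags were not picked up.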
Troubleshooting Tips
Problem → Fix
APM data not showing → Verify license key, agent path, app traffic
JVM metrics missing → Check -javaagent setup and environment vars
GC logs not collected → Check -Xloggc path, permissions, volume mount
Kubernetes metrics missing → Ensure Prometheus is enabled in Helm values
Check logs with: Shell kubectl logs <pod-name> --container <container-name> Conclusion New Relic allows you to unify infrastructure and application observability in Kubernetes environments. With JVM insights, GC visibility, and proactive alerts, DevOps and SRE teams can detect and resolve performance issues faster. After setting up JVM and Kubernetes monitoring, consider enabling distributed tracing to get visibility across service boundaries. You can also integrate New Relic alerts with Slack, PagerDuty, or Opsgenie to receive real-time incident notifications. Finally, use custom dashboards to compare performance across dev, staging, and production environments, helping your team catch regressions early and optimize for reliability at scale.
Many times, while developing at work, I needed a template for a simple application from which to start adding specific code for the project at hand. In this article, I will create a simple Java application that connects to a database, exposes a few REST endpoints, and secures those endpoints with role-based access. The purpose is to have a minimal and fully working application that can then be customized for a particular task. For the database, we will use PostgreSQL, and for security, we will go with Keycloak, both deployed in containers. During development, I used Podman (an alternative to Docker—they are interchangeable for the most part) to test that the containers are created correctly, as a learning experience. The application itself is developed using the Spring Boot framework with Flyway for database versioning. All of these technologies are industry standards in the Java EE world with a high chance of being used in a project. The requirement around which to build our prototype is a library application that exposes REST endpoints allowing the creation of authors, books, and the relationships between them. This will allow us to implement a many-to-many relationship that can then be expanded for any purpose imaginable. The fully working application can be found at https://github.com/ghalldev/db_proto and the code snippets in this article are taken from that repository. Before creating the containers, be sure to define the following environment variables with your preferred values (they are omitted on purpose in the tutorial to avoid propagating default values used by multiple users): Shell DOCKER_POSTGRES_PASSWORD DOCKER_KEYCLOAK_ADMIN_PASSWORD DOCKER_GH_USER1_PASSWORD Configure PostgreSQL: Shell docker container create --name gh_postgres --env POSTGRES_PASSWORD=$DOCKER_POSTGRES_PASSWORD --env POSTGRES_USER=gh_pguser --env POSTGRES_INITDB_ARGS=--auth=scram-sha-256 --publish 5432:5432 postgres:17.5-alpine3.22 docker container start gh_postgres Configure Keycloak: first, create and start the container: Shell docker container create --name gh_keycloak --env DOCKER_GH_USER1_PASSWORD=$DOCKER_GH_USER1_PASSWORD --env KC_BOOTSTRAP_ADMIN_USERNAME=gh_admin --env KC_BOOTSTRAP_ADMIN_PASSWORD=$DOCKER_KEYCLOAK_ADMIN_PASSWORD --publish 8080:8080 --publish 8443:8443 --publish 9000:9000 keycloak/keycloak:26.3 start-dev docker container start gh_keycloak After the container is up and running, we can go ahead and create the realm, user, and roles (these commands must be run inside the running container): Shell cd $HOME/bin ./kcadm.sh config credentials --server http://localhost:8080 --realm master --user gh_admin --password $KC_BOOTSTRAP_ADMIN_PASSWORD ./kcadm.sh create realms -s realm=gh_realm -s enabled=true ./kcadm.sh create users -s username=gh_user1 -s email="[email protected]" -s firstName="gh_user1firstName" -s lastName="gh_user1lastName" -s emailVerified=true -s enabled=true -r gh_realm ./kcadm.sh set-password -r gh_realm --username gh_user1 --new-password $DOCKER_GH_USER1_PASSWORD ./kcadm.sh create roles -r gh_realm -s name=viewer -s 'description=Realm role to be used for read-only features' ./kcadm.sh add-roles --uusername gh_user1 --rolename viewer -r gh_realm ./kcadm.sh create roles -r gh_realm -s name=creator -s 'description=Realm role to be used for create/update features' ./kcadm.sh add-roles --uusername gh_user1 --rolename creator -r gh_realm ID_ACCOUNT_CONSOLE=$(./kcadm.sh get clients -r gh_realm --fields id,clientId | grep -B 1 '"clientId" : "account-console"' | grep -oP 
'[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}') ./kcadm.sh update clients/$ID_ACCOUNT_CONSOLE -r gh_realm -s 'fullScopeAllowed=true' -s 'directAccessGrantsEnabled=true' The user gh_user1 is created in the realm gh_realm with the roles viewer and creator. You may have noticed that instead of creating a new client we are using one of the default clients that come with Keycloak: account-console. This is for convenience reasons; in a real scenario, you would create a specific client, which would then be updated to have fullScopeAllowed (causes the realm roles to be added to the token - they are not added by default) and directAccessGrantsEnabled (allows a token to be generated using Keycloak's openid-connect/token endpoint, in our case with curl). The created roles can then be used inside the Java application to restrict access to certain functionality according to our agreed contract—the viewer can only access read-only operations, while the creator can create, update, and delete. Of course, all kinds of roles can be created in the same style for whatever purpose, as long as the agreed contract is well-defined and understood by everyone. The roles can be further added to groups, but that is not included in this tutorial. But before being able to actually use the roles, we have to tell the Java application how to extract them—this is needed since the way Keycloak adds the roles to the JWT is particular to it, so we have to write a piece of custom code to translate them into something Spring Security can use: Java @Bean public JwtAuthenticationConverter jwtAuthenticationConverter() { //follow the same pattern as org.springframework.security.oauth2.server.resource.authentication.JwtGrantedAuthoritiesConverter Converter<Jwt, Collection<GrantedAuthority>> keycloakRolesConverter = new Converter<>() { private static final String DEFAULT_AUTHORITY_PREFIX = "ROLE_"; //https://github.com/keycloak/keycloak/blob/main/services/src/main/java/org/keycloak/protocol/oidc/TokenManager.java#L901 private static final String KEYCLOAK_REALM_ACCESS_CLAIM_NAME = "realm_access"; private static final String KEYCLOAK_REALM_ACCESS_ROLES = "roles"; @Override public Collection<GrantedAuthority> convert(Jwt source) { Collection<GrantedAuthority> grantedAuthorities = new ArrayList<>(); Map<String, List<String>> realmAccess = source.getClaim(KEYCLOAK_REALM_ACCESS_CLAIM_NAME); if (realmAccess == null) { logger.warn("No " + KEYCLOAK_REALM_ACCESS_CLAIM_NAME + " present in the JWT"); return grantedAuthorities; } List<String> roles = realmAccess.get(KEYCLOAK_REALM_ACCESS_ROLES); if (roles == null) { logger.warn("No " + KEYCLOAK_REALM_ACCESS_ROLES + " present in the JWT"); return grantedAuthorities; } roles.forEach( role -> grantedAuthorities.add(new SimpleGrantedAuthority(DEFAULT_AUTHORITY_PREFIX + role))); return grantedAuthorities; } }; JwtAuthenticationConverter jwtAuthenticationConverter = new JwtAuthenticationConverter(); jwtAuthenticationConverter.setJwtGrantedAuthoritiesConverter(keycloakRolesConverter); return jwtAuthenticationConverter; } There are other important configurations done in the AppConfiguration class, like enabling method security and disabling CSRF.
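For reference, here is a hedged sketch of what such an AppConfiguration might look like with Spring Boot 3 and Spring Security 6; the actual class in the linked repository may differ in details, and it reuses the jwtAuthenticationConverter() bean shown above:

Java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableMethodSecurity // enables @PreAuthorize checks on controller methods
public class AppConfiguration {

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        return http
                // stateless REST API secured purely by bearer tokens, so CSRF protection is not needed
                .csrf(csrf -> csrf.disable())
                .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
                // validate JWTs issued by Keycloak and map realm roles via the converter above
                .oauth2ResourceServer(oauth2 -> oauth2
                        .jwt(jwt -> jwt.jwtAuthenticationConverter(jwtAuthenticationConverter())))
                .build();
    }

    // jwtAuthenticationConverter() bean as shown earlier
}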
Now we can use the annotation org.springframework.security.access.prepost.PreAuthorize in the REST controller to restrict access: Java @PostMapping("/author") @PreAuthorize("hasRole('creator')") public void addAuthor(@RequestParam String name, @RequestParam String address) { authorService.add(new AuthorDto(name, address)); } @GetMapping("/author") @PreAuthorize("hasRole('viewer')") public String getAuthors() { return authorService.allInfo(); } In this way, only users that are authenticated successfully and have the role listed in hasRole can call the endpoints; otherwise, they will get an HTTP 403 Forbidden error. After the containers have started and are configured, the Java application can start, but not before adding the database password—this can be done with an env variable (below is a Linux shell example): Shell export SPRING_DATASOURCE_PASSWORD=$DOCKER_POSTGRES_PASSWORD And now, if all is up and running correctly, we can use curl to test our application (all the commands below are Linux shell). Logging in with the previously created user gh_user1 and extracting the authentication token: Shell KEYCLOAK_ACCESS_TOKEN=$(curl -d 'client_id=account-console' -d 'username=gh_user1' -d "password=$DOCKER_GH_USER1_PASSWORD" -d 'grant_type=password' 'http://localhost:8080/realms/gh_realm/protocol/openid-connect/token' | grep -oP '"access_token":"\K[^"]*') Creating a new author (this will test that the creator role works): Shell curl -X POST --data-raw 'name="GH_name1"&address="GH_address1"' -H "Authorization: Bearer $KEYCLOAK_ACCESS_TOKEN" 'localhost:8090/library/author' Retrieving all the authors in the library (this will test that the viewer role works): Shell curl -X GET -H "Authorization: Bearer $KEYCLOAK_ACCESS_TOKEN" 'localhost:8090/library/author' And with this, you should have all that is needed to create your own Java application, expanding and configuring it as needed.
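For completeness, the many-to-many relationship between authors and books around which the prototype is built boils down to a standard JPA mapping. Here is a minimal, hedged sketch; the entity, table, and column names are illustrative, and the actual entities live in the linked repository:

Java
// Author.java
import jakarta.persistence.*;
import java.util.HashSet;
import java.util.Set;

@Entity
public class Author {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
    private String address;

    // Owning side of the many-to-many relationship; a join table links authors and books.
    @ManyToMany
    @JoinTable(name = "author_book",
            joinColumns = @JoinColumn(name = "author_id"),
            inverseJoinColumns = @JoinColumn(name = "book_id"))
    private Set<Book> books = new HashSet<>();

    // getters and setters omitted
}

// Book.java
@Entity
class Book {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String title;

    // Inverse side simply refers back to the owning field.
    @ManyToMany(mappedBy = "books")
    private Set<Author> authors = new HashSet<>();

    // getters and setters omitted
}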
I recently experimented with QtJambi, a Java wrapper for the well-known Qt C++ library used to build GUIs. Here are some initial thoughts, remarks, and observations: Building a QtJambi project can be somewhat challenging. It requires installing the Qt framework, configuring system paths to Qt’s native libraries, and setting proper JVM options. Although it is possible to bundle native libraries within the wrapper JARs, I haven’t tried this yet. The overall development approach is clean and straightforward. You create windows or dialogs, add layouts, place widgets (components or controls) into those layouts, configure widgets and then display the window or dialog to the user. This model should feel familiar to anyone with GUI experience. Diving deeper, QtJambi can become quite complex, comparable to typical Java Swing development. The API sometimes feels overly abstracted, with many layers that could potentially be simplified. There is an abundance of overloaded methods and constructors, which can make it difficult to decide which ones to use. For example, the QShortcut class has 34 different constructors. This likely comes from a direct and not fully optimized mapping from the C++ Qt API. Like Swing, QtJambi is not thread-safe. All GUI updates must occur on the QtJambi UI thread only. Ignoring this can cause crashes, not just improper UI refresh like in Swing. There is no code reuse between Java Swing and QtJambi. Even concepts that appear close and reusable are not shared. QtJambi is essentially a projection of C++ Qt’s architecture and design patterns into Java, so learning it from scratch is necessary even for experienced Swing developers. Using AI tools to learn QtJambi can be tricky. AI often mixes Java Swing concepts with QtJambi, resulting in code that won’t compile. It can also confuse Qt’s C++ idioms when translating them directly to Java, which doesn’t always fit. Despite being a native wrapper, QtJambi has some integration challenges, especially on macOS. For example, handling the application Quit event works differently, and only catching window-close events behaves properly out of the box. In contrast, native Java QuitHandler support is easier and more reliable there, but it doesn't work with QtJambi. Mixing Java AWT with QtJambi is problematic. This may lead to odd behaviors or crashes. The java.awt.Desktop class also does not function in this context. If you want a sometimes challenging Java GUI framework with crashes and quirks, QtJambi fits the bill! It brings a lot of power but also some complexity and instability compared to standard Java UI options. There is a GUI builder that works with Qt, and it is possible to use its designs in QtJambi, either by generating source code or by loading designs at runtime. The only issue: the cost ranges from $600 per year for small businesses to more than $5,000 per year for larger companies. Notable Applications Built With QtJambi Notable applications built with QtJambi are few. One example is the Interactive Brokers desktop trading platform (IBKR Desktop), which uses QtJambi for its user interface. Beyond this, well-known commercial or open-source projects created specifically with QtJambi are scarce and often not widely publicized. Most QtJambi usage tends to be in smaller-scale or internal tools rather than major flagship applications. This limited visibility can make it challenging to pitch QtJambi adoption to decision-makers. QtJambi Licensing QtJambi doesn’t have a separate commercial license; it inherits Qt’s licensing model.
Qt can be used under free LGPL/GPL licenses if you comply with their terms, or under paid commercial licenses that provide additional advantages and fewer restrictions. Make sure to check your ability to comply with LGPL/GPL or your need for commercial licensing before proceeding. Should You Consider QtJambi For Your Desktop Apps? There are three strong contenders for desktop applications: Java Swing, JavaFX, and QtJambi. There is also SWT, but I would prefer to avoid it. If you already have stable, well-functioning Java Swing applications and lack the resources or justification to rewrite them, staying with Swing is usually the best approach. The effort and risk of migrating large, mature codebases often outweigh the benefits unless there is a strong business case. For new desktop projects, both JavaFX and QtJambi are worth evaluating. JavaFX is typically safest when you want: A well-supported, familiar Java-based framework. Easier development, packaging, and deployment. Powerful animations, modern UIs, and broad tools and community support. Reliable, long-term support for business needs without high-performance or native integration demands. QtJambi is a strong choice if your application requires: Superior graphics performance and efficient rendering. A native look and feel across platforms. Responsive, complex interfaces, or advanced custom widget support. Be prepared for a steeper learning curve, more complex build processes, possible native library management, and licensing issues. Summary QtJambi is a performant and powerful, yet sometimes complex, Java wrapper for the Qt GUI framework, providing a native look and feel along with a wide range of advanced widgets. While it is well-suited for high-performance, native-like applications, it comes with a steep learning curve, more intricate setup requirements, and limited community support compared to JavaFX or Swing. Despite these challenges, QtJambi is worth considering for developers who need cross-platform consistency, efficient rendering, and access to Qt’s rich feature set.
This article will explain some basics of the HashiCorp Consul service and its configurations. It is a service networking solution that provides service registry and discovery capabilities, which integrate seamlessly with Spring Boot. You may have heard of Netflix Eureka; here, Consul works similarly but offers many additional features. Notably, it supports the modern reactive programming paradigm. I will walk you through it with the help of some sample applications. Used Libraries: Spring Boot, Spring Cloud Gateway, Spring Cloud Consul, Spring Boot Actuator. The architecture includes three main components: Consul, the Service application, and the Gateway. 1. Consul We have to download and install the Consul service in the system from the HashiCorp Consul official website. For development purposes, we have to start it using a command in PowerShell (in Windows). PowerShell consul agent -dev Consul Dashboard This is the place where we can see all the applications registered with Consul. The default port for accessing the Consul dashboard is 8500. Once it starts successfully, you will see something like below. The next step is to register the Gateway and Service applications to Consul. Once those are added, they will appear in this same dashboard. When multiple instances of the same service are running, Consul continuously monitors their health using the Actuator. If any of them report an unhealthy status, Consul will automatically deregister them from the registry. 2. Service Application It is a simple service application for exposing the APIs. We added an @EnableDiscoveryClient annotation in the main class to register the service in Consul for service discovery. If you run the application on multiple ports, you can see multiple instances in the Consul dashboard. The Actuator is used to expose the health status. Main Class Java @SpringBootApplication @EnableDiscoveryClient public class ServiceApp { public static void main(String[] args) { SpringApplication.run(ServiceApp.class, args); } } Maven Configuration XML <properties> <java.version>21</java.version> <spring.cloud.version>2023.0.4</spring.cloud.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-webflux</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-consul-all</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> </dependencies> Application Property File Properties files # Assigning a unique name for the service spring.application.name=service-app # Application will use random ports server.port=0 spring.webflux.base-path=/userService logback.log.file.path=./logs/service # ~~~ Consul Configuration ~~~ # It assigns a unique ID to each instance of the service when running multiple instances, # allowing them to be registered individually in Consul for service discovery. spring.cloud.consul.discovery.instance-id=${spring.application.name}-${server.port}-${random.int[1,99]} # To access centralized configuration data from Consul spring.cloud.consul.config.enabled=false # To register the service in Consul using its IP address instead of the hostname. 
spring.cloud.consul.discovery.prefer-ip-address=true # The service will register itself in Consul under this name, which the gateway will use for service discovery while routing requests. spring.cloud.consul.discovery.service-name=${spring.application.name} # IP to communicate with the Consul server spring.cloud.consul.host=localhost # Consul runs on port 8500 by default, unless it is explicitly overridden in the configuration. spring.cloud.consul.port=8500 # Remapping the Actuator URL in Consul since a base path has been added. spring.cloud.consul.discovery.health-check-path=${spring.webflux.base-path}/actuator/health # Time interval to check the health of the service. spring.cloud.consul.discovery.health-check-interval=5s # Time to wait for the health check response before considering it timed out spring.cloud.consul.discovery.health-check-timeout=5s # The maximum amount of time a service can remain in an unhealthy state before Consul marks it as critical and removes it from the service catalog. #spring.cloud.consul.discovery.health-check-critical-timeout=1m Sample API Java @GetMapping(value = "getStatus", produces = MediaType.APPLICATION_JSON_VALUE) public Mono<ResponseEntity<Object>> healthCheck() { logger.info("<--- Service to get status request : received --->"); logger.info("<--- Service to get status response : given --->"); return Mono.just(ResponseEntity.ok("Success from : " + portListener.getPort())); } 3. Gateway It is developed with the help of Spring Cloud Gateway and uses the same libraries as the Service application. Consul is used for registration and service discovery of the application. The Actuator is used to expose the health status. Main Class Java @SpringBootApplication @EnableDiscoveryClient public class GatewayApp { public static void main(String[] args) { SpringApplication.run(GatewayApp.class, args); } } Maven Configuration XML <properties> <java.version>21</java.version> <spring.cloud.version>2023.0.4</spring.cloud.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-webflux</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-consul-all</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-gateway</artifactId> </dependency> </dependencies> Application Property File Properties files # Assigning a unique name for the service spring.application.name=gateway-app server.port=3000 logback.log.file.path=./logs/gateway # ~~~ Consul Configuration ~~~ # It is used in Spring Cloud Gateway to handle automatic route discovery from a service registry. # When it is configured as false, we have to explicitly configure the routing of each API request. 
spring.cloud.gateway.discovery.locator.enabled=false spring.cloud.consul.discovery.instance-id=${spring.application.name}-${server.port}-${random.int[1,99]} spring.cloud.consul.config.enabled=false spring.cloud.consul.discovery.prefer-ip-address=true spring.cloud.consul.discovery.service-name=${spring.application.name} spring.cloud.consul.host=localhost spring.cloud.consul.port=8500 Since we have set spring.cloud.gateway.discovery.locator.enabled to false, we need to explicitly configure the routing for each API request as shown below. For the routing destination URL, instead of specifying the actual URL of the service application, we map it to the load-balanced (lb) URL provided by Consul using the service name. In a normal gateway: spring.cloud.gateway.routes[0].uri=http://192.168.1.10:5000 In a service-discovery-enabled gateway: spring.cloud.gateway.routes[0].uri=lb://service-app Properties files #~~~ Example for a url routing ~~~ spring.cloud.gateway.routes[0].id=0 # Instead of configuring the actual URL of the service application, we are mapping it to the lb URL of Consul with the service name. spring.cloud.gateway.routes[0].uri=lb://service-app # The rest of the configuration stays the same as the standard Spring Cloud Gateway configuration spring.cloud.gateway.routes[0].predicates[0]=Path=/userService/** spring.cloud.gateway.routes[0].filters[0]=RewritePath=/userService/(?<segment>.*), /userService/${segment} spring.cloud.gateway.routes[0].filters[1]=PreserveHostHeader Final Consul Dashboard Here, we can see one instance of gateway-app and two instances of service-app, as I am running two instances of the service app under different ports. Testing Let's test it by calling a sample API through the gateway to verify that it's working. Upon the first API call: Upon the second API call: We can see that each time, the API returns a response from a different instance. GitHub Please check here to get the full project. Thanks for reading!
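One small addendum to the service application above: the sample API returns portListener.getPort(), a helper that is not shown in the listings. Here is a hedged sketch of such a helper using Spring Boot's WebServerInitializedEvent (the class name is illustrative; the actual helper in the project may differ):

Java
import org.springframework.boot.web.context.WebServerInitializedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class PortListener {

    private int port;

    // Captures the actual port once the web server starts; useful because server.port=0
    // assigns a random port that is only known at runtime.
    @EventListener
    public void onWebServerInitialized(WebServerInitializedEvent event) {
        this.port = event.getWebServer().getPort();
    }

    public int getPort() {
        return port;
    }
}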
Introduction Concurrent programming remains a crucial part of building scalable, responsive Java applications. Over the years, Java has steadily enhanced its multithreaded programming capabilities. This article reviews the evolution of concurrency from Java 8 through Java 21, highlighting important improvements and the impactful addition of virtual threads introduced in Java 21. Starting with Java 8, the concurrency API saw significant enhancements such as Atomic Variables, Concurrent Maps, and the integration of lambda expressions to enable more expressive parallel programming. Key improvements introduced in Java 8 include: Threads and Executors, Synchronization and Locks, and Atomic Variables and ConcurrentMap. Java 21, released in late 2023, brought a major evolution with virtual threads, fundamentally changing how Java applications can handle large numbers of concurrent tasks. Virtual threads enable higher scalability for server applications, while maintaining the familiar thread-per-request programming model. Probably the most important feature in Java 21 is Virtual Threads. In Java 21, the basic concurrency model of Java remains unchanged, and the Stream API is still the preferred way to process large data sets in parallel. With the introduction of Virtual Threads, the Concurrent API now delivers better performance. In today’s world of microservices and scalable server applications, the number of threads must grow to meet demand. The main goal of Virtual Threads is to enable high scalability for server applications, while still using the simple thread-per-request model. Virtual Threads Before Java 21, the JDK’s thread implementation used thin wrappers around operating system (OS) threads. However, OS threads are expensive: If each request consumes an OS thread for its entire duration, the number of threads quickly becomes a scalability bottleneck. Even when thread pools are used, throughput is still limited because the actual number of threads is capped. The aim of Virtual Threads is to break the 1:1 relationship between Java threads and OS threads. A virtual thread applies a concept similar to virtual memory. Just like virtual memory maps a large address space to a smaller physical memory, Virtual Threads allow the runtime to create the illusion of having many threads by mapping them to a small number of OS threads. Platform threads (traditional threads) are thin wrappers around OS threads. Virtual Threads, on the other hand, are not tied to any specific OS thread. A virtual thread can execute any code that a platform thread can run. This is a major advantage—existing Java code can often run on virtual threads with little or no modification. Virtual threads are hosted by platform threads ("carriers"), which are still scheduled by the OS. For example, you can create an executor with virtual threads like this: Java ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor(); Example With Comparison Virtual threads only consume OS threads while actively performing CPU-bound tasks. A virtual thread can be mounted or unmounted on different carrier threads throughout its lifecycle. Typically, a virtual thread will unmount itself when it encounters a blocking operation (such as I/O or a database call). Once that blocking task is complete, the virtual thread resumes execution by being mounted on any available carrier thread. This mounting and unmounting process occurs frequently and transparently—without blocking OS threads. 
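Before the executor-based comparison below, here is a minimal, hedged sketch showing that virtual threads can also be created directly through the Thread.ofVirtual() builder in Java 21; printing the current thread makes the carrier mounting visible:

Java
import java.time.Duration;

public class VirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual()
                .name("virtual-worker")
                .start(() -> {
                    try {
                        // Blocking here unmounts the virtual thread from its carrier,
                        // freeing the underlying OS thread for other virtual threads.
                        Thread.sleep(Duration.ofSeconds(1));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    // Prints something like VirtualThread[#21,virtual-worker]/runnable@ForkJoinPool-1-worker-1
                    System.out.println(Thread.currentThread());
                });
        vt.join();
    }
}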
Example — Source Code Example01CachedThreadPool.java In this example, an executor is created using a Cached Thread Pool: Java var executor = Executors.newCachedThreadPool() Java package threads; import java.time.Duration; import java.util.concurrent.Executors; import java.util.stream.IntStream; /** * * @author Milan Karajovic <[email protected]> * */ public class Example01CachedThreadPool { public void executeTasks(final int NUMBER_OF_TASKS) { final int BLOCKING_CALL = 1; System.out.println("Number of tasks which executed using 'newCachedThreadPool()' " + NUMBER_OF_TASKS + " tasks each."); long startTime = System.currentTimeMillis(); try (var executor = Executors.newCachedThreadPool()) { IntStream.range(0, NUMBER_OF_TASKS).forEach(i -> { executor.submit(() -> { // simulate a blocking call (e.g. I/O or db operation) Thread.sleep(Duration.ofSeconds(BLOCKING_CALL)); return i; }); }); } catch (Exception e) { throw new RuntimeException(e); } long endTime = System.currentTimeMillis(); System.out.println("For executing " + NUMBER_OF_TASKS + " tasks duration is: " + (endTime - startTime) + " ms"); } } Java package threads; import org.junit.jupiter.api.MethodOrderer; import org.junit.jupiter.api.Order; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.TestMethodOrder; /** * * @author Milan Karajovic <[email protected]> * */ @TestMethodOrder(MethodOrderer.OrderAnnotation.class) public class Example01CachedThreadPoolTest { @Test @Order(1) public void test_1000_tasks() { Example01CachedThreadPool example01CachedThreadPool = new Example01CachedThreadPool(); example01CachedThreadPool.executeTasks(1000); } @Test @Order(2) public void test_10_000_tasks() { Example01CachedThreadPool example01CachedThreadPool = new Example01CachedThreadPool(); example01CachedThreadPool.executeTasks(10_000); } @Test @Order(3) public void test_100_000_tasks() { Example01CachedThreadPool example01CachedThreadPool = new Example01CachedThreadPool(); example01CachedThreadPool.executeTasks(100_000); } @Test @Order(4) public void test_1_000_000_tasks() { Example01CachedThreadPool example01CachedThreadPool = new Example01CachedThreadPool(); example01CachedThreadPool.executeTasks(1_000_000); } } Test results on my PC: Example02FixedThreadPool.java Executor is created using Fixed Thread Pool: Java var executor = Executors.newFixedThreadPool(500) Java package threads; import java.time.Duration; import java.util.concurrent.Executors; import java.util.stream.IntStream; /** * * @author Milan Karajovic <[email protected]> * */ public class Example02FixedThreadPool { public void executeTasks(final int NUMBER_OF_TASKS) { final int BLOCKING_CALL = 1; System.out.println("Number of tasks which executed using 'newFixedThreadPool(500)' " + NUMBER_OF_TASKS + " tasks each."); long startTime = System.currentTimeMillis(); try (var executor = Executors.newFixedThreadPool(500)) { IntStream.range(0, NUMBER_OF_TASKS).forEach(i -> { executor.submit(() -> { // simulate a blocking call (e.g. 
I/O or db operation) Thread.sleep(Duration.ofSeconds(BLOCKING_CALL)); return i; }); }); } catch (Exception e) { throw new RuntimeException(e); } long endTime = System.currentTimeMillis(); System.out.println("For executing " + NUMBER_OF_TASKS + " tasks duration is: " + (endTime - startTime) + " ms"); } } Java package threads; import org.junit.jupiter.api.MethodOrderer; import org.junit.jupiter.api.Order; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.TestMethodOrder; /** * * @author Milan Karajovic <[email protected]> * */ @TestMethodOrder(MethodOrderer.OrderAnnotation.class) public class Example02FixedThreadPoolTest { @Test @Order(1) public void test_1000_tasks() { Example02FixedThreadPool example02FixedThreadPool = new Example02FixedThreadPool(); example02FixedThreadPool.executeTasks(1000); } @Test @Order(2) public void test_10_000_tasks() { Example02FixedThreadPool example02FixedThreadPool = new Example02FixedThreadPool(); example02FixedThreadPool.executeTasks(10_000); } @Test @Order(3) public void test_100_000_tasks() { Example02FixedThreadPool example02FixedThreadPool = new Example02FixedThreadPool(); example02FixedThreadPool.executeTasks(100_000); } @Test @Order(4) public void test_1_000_000_tasks() { Example02FixedThreadPool example02FixedThreadPool = new Example02FixedThreadPool(); example02FixedThreadPool.executeTasks(1_000_000); } } Test results on my PC: Example03VirtualThread.java Executor is created using Virtual Thread Per Task Executor: Java var executor = Executors.newVirtualThreadPerTaskExecutor() Java package threads; import java.time.Duration; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.stream.IntStream; /** * * @author Milan Karajovic <[email protected]> * */ public class Example03VirtualThread { public void executeTasks(final int NUMBER_OF_TASKS) { final int BLOCKING_CALL = 1; System.out.println("Number of tasks which executed using 'newVirtualThreadPerTaskExecutor()' " + NUMBER_OF_TASKS + " tasks each."); long startTime = System.currentTimeMillis(); try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) { IntStream.range(0, NUMBER_OF_TASKS).forEach(i -> { executor.submit(() -> { // simulate a blocking call (e.g. 
I/O or db operation) Thread.sleep(Duration.ofSeconds(BLOCKING_CALL)); return i; }); }); } catch (Exception e) { throw new RuntimeException(e); } long endTime = System.currentTimeMillis(); System.out.println("For executing " + NUMBER_OF_TASKS + " tasks duration is: " + (endTime - startTime) + " ms"); } } Java package threads; import org.junit.jupiter.api.MethodOrderer; import org.junit.jupiter.api.Order; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.TestMethodOrder; /** * * @author Milan Karajovic <[email protected]> * */ @TestMethodOrder(MethodOrderer.OrderAnnotation.class) public class Example03VirtualThreadTest { @Test @Order(1) public void test_1000_tasks() { Example03VirtualThread example03VirtualThread = new Example03VirtualThread(); example03VirtualThread.executeTasks(1000); } @Test @Order(2) public void test_10_000_tasks() { Example03VirtualThread example03VirtualThread = new Example03VirtualThread(); example03VirtualThread.executeTasks(10_000); } @Test @Order(3) public void test_100_000_tasks() { Example03VirtualThread example03VirtualThread = new Example03VirtualThread(); example03VirtualThread.executeTasks(100_000); } @Test @Order(4) public void test_1_000_000_tasks() { Example03VirtualThread example03VirtualThread = new Example03VirtualThread(); example03VirtualThread.executeTasks(1_000_000); } @Test @Order(5) public void test_2_000_000_tasks() { Example03VirtualThread example03VirtualThread = new Example03VirtualThread(); example03VirtualThread.executeTasks(2_000_000); } } Test results on my PC: Conclusion You can clearly see the difference in execution time (in milliseconds) between the various executor implementations used to process all NUMBER_OF_TASKS. It's worth experimenting with different values for NUMBER_OF_TASKS to observe how performance varies. The advantage of virtual threads becomes especially noticeable with large task counts. When NUMBER_OF_TASKS is set to a high number—such as 1,000,000—the performance gap is significant. Virtual threads are much more efficient at handling a large volume of tasks, as demonstrated in the table below: I'm confident that after this clarification, if your application processes a large number of tasks using the concurrent API, you'll strongly consider moving to Java 21 and taking advantage of virtual threads. In many cases, this shift can significantly improve the performance and scalability of your application. Source code: GitHub Repository – Comparing Threads in Java 21
If you’ve ever tried to scale your organization’s data infrastructure beyond a few teams, you know how fast a carefully planned “data lake” can degenerate into an unruly “data swamp.” Pipelines are pushing files nonstop, tables sprout like mushrooms after a rainy day, and no one is quite sure who owns which dataset. Meanwhile, your real-time consumers are impatient for fresh data, your batch pipelines crumble on every schema change, and governance is an afterthought at best. At that point, someone in a meeting inevitably utters the magic word: data mesh. Decentralized data ownership, domain-oriented pipelines, and self-service access all sound perfect on paper. But in practice, it can feel like you’re trying to build an interstate highway system while traffic is already barreling down dirt roads at full speed. This is where Apache Iceberg and Apache Flink come to the rescue. Iceberg delivers database-like reliability on top of your data lake, while Flink offers real-time, event-driven processing at scale. Together, they form the backbone of a Data Mesh that actually works — complete with time travel, schema evolution, and ACID guarantees. Best of all, you don’t need to sign away your soul to a proprietary vendor ecosystem. The Data Mesh Pain Points Before diving into the solution, let’s be brutally honest about what happens when organizations adopt Data Mesh without robust infrastructure: Unclear ownership – Multiple teams write to the same tables, creating chaos. Schema drift – An upstream service silently adds or changes a column, and downstream consumers break without warning. Inconsistent data states – Real-time pipelines read half-written data while batch jobs rewrite partitions mid-flight. Governance nightmares – Regulators ask what data you served last quarter, and your only answer is a nervous shrug. The dream of self-service analytics quickly devolves into constant firefighting. Teams need real-time streams, historical replay, and reproducible datasets, yet traditional data lakes weren’t designed for these requirements. They track files, not logical datasets, and they lack strong consistency or concurrency control. Why Iceberg + Flink Changes the Game Apache Iceberg: Reliability Without Lock-In Time travel lets you query historical table states — no more guesswork about last month’s data. Schema evolution enables adding, renaming, or promoting columns without breaking readers. ACID transactions prevent race conditions and ensure readers never see partial writes. Open table format works with Spark, Flink, Trino, Presto, or even plain SQL — no vendor lock-in. Apache Flink: True Real-Time Processing Exactly-once semantics for event streams ensure clean, accurate writes. Unified streaming and batch in one engine eliminates separate pipeline maintenance. Stateful processing supports building materialized views and aggregations directly over streams. Together, they allow domain-oriented teams to produce real-time, governed data products that behave like versioned datasets rather than fragile event logs. Iceberg Fundamentals for a Real-Time Mesh Time Travel for Debugging and Auditing Iceberg snapshots track every table change. Need to see your sales table during Black Friday? Just run: SQL SELECT * FROM sales_orders FOR SYSTEM_VERSION AS OF 1234567890; This isn’t just a convenience for analysts — it’s essential for regulatory compliance and operational debugging. Schema Evolution Without Breaking Pipelines Iceberg assigns stable column IDs and supports type promotion. 
Adding fields to Flink sink tables won’t disrupt downstream jobs: SQL ALTER TABLE customer_data ADD COLUMN preferred_language STRING; Even renaming columns is safe, since logical identity is decoupled from physical layout. ACID Transactions to Prevent Data Races In a true Data Mesh, multiple teams may publish into adjacent partitions. Iceberg ensures isolation, so readers never see half-written data — even when concurrent Flink jobs perform upserts or CDC ingestion. Flink + Iceberg in Action Consider a real-time product inventory domain: Step 1: Define an Iceberg Table for Product Events SQL CREATE TABLE product_events ( product_id BIGINT, event_type STRING, quantity INT, warehouse STRING, event_time TIMESTAMP, ingestion_time TIMESTAMP ) USING ICEBERG PARTITIONED BY (days(event_time)); Step 2: Stream Updates With Flink Flink ingests from Kafka (or any source), transforms data, and writes directly into Iceberg: Java TableDescriptor icebergSink = TableDescriptor.forConnector("iceberg") .option("catalog-name", "my_catalog") .option("namespace", "inventory") .option("table-name", "product_events") .format("parquet") .build(); table.executeInsert(icebergSink); Every commit becomes an Iceberg snapshot — no more wondering if your table is consistent. Step 3: Build Derived Domain Tables Another Flink job aggregates events into a fresh inventory table: SQL CREATE TABLE current_inventory ( product_id BIGINT, total_quantity INT, last_update TIMESTAMP ) USING ICEBERG PARTITIONED BY (product_id); Data Mesh Superpowers With Iceberg + Flink Reproducibility – Run analytics against any historical table snapshot. Decentralized ownership – Each domain team owns its tables, yet they remain queryable mesh-wide. Unified real-time and batch – Flink handles both streaming ingestion and historical backfills. Interoperability – Iceberg tables are queryable via Spark, Trino, Presto, or standard SQL engines. Operational Best Practices Partition on real query dimensions (often temporal). Avoid tiny files and over-partitioning. Automate compaction and snapshot cleanup to maintain predictable performance. Validate schema changes in CI/CD pipelines to catch rogue columns early. Monitor metadata – Iceberg exposes metrics on partition pruning, file size, and snapshot lineage. Lessons Learned from Production Start small – Migrate one domain at a time to avoid a “big bang” failure. Automate governance – Use table metadata to track ownership without adding manual overhead. Use snapshot tags for milestones – Quarterly closes, product launches, or audit checkpoints become easy to reproduce. Document partitioning strategies – Your future self will thank you when query performance needs tuning. The Bottom Line Apache Iceberg and Apache Flink give you the building blocks for a real-time Data Mesh that actually scales and stays sane. With time travel, schema evolution, and ACID guarantees, you can replace brittle pipelines and ad hoc governance with a stable, future-proof platform. You no longer need to choose between speed and reliability or sacrifice flexibility for vendor lock-in. The result? Teams deliver data products faster. Analysts trust the numbers.
Introduction: Problem Definition and Suggested Solution Idea This is a technical article for Java developers that suggests a solution for a major pain point: analyzing very long stack traces in search of meaningful information buried in a pile of framework-related stack trace lines. The core idea of the solution is to provide a capability to intelligently filter out irrelevant parts of the stack trace without losing important and meaningful information. The benefits are two-fold: 1. Making the stack trace much easier to read and analyze, more clear and concise. 2. Making the stack trace much shorter, saving space. A stack trace is a lifesaver when debugging or trying to figure out what went wrong in your application. However, when working with logs on the server side, you can come across a huge stack trace that contains a long, useless tail of various framework and application server related packages. And somewhere in this pile, there are several lines of a relevant trace, and they may be in different segments separated by useless information. It becomes a nightmare to search for the relevant stuff. Here is a link, "Filtering the Stack Trace From Hell," that describes the same problem with real-life examples (not for the fainthearted :)). Despite the obvious value of this capability, the Java ecosystem offers very few, if any, libraries with built-in support for stack trace filtering out of the box. Developers often resort to writing custom code or regex filters to parse and shorten stack traces—an ad-hoc and fragile solution that’s hard to maintain and reuse. Some logging frameworks such as Log4J and Logback might provide basic filtering options based on log levels or format, but they don't typically allow for granular control over stack trace content. How the Solution Works and How to Use It The utility is provided as part of an open-source Java library called MgntUtils. It is available on Maven Central as well as on GitHub (including source code and Javadoc). Here is a direct link to the Javadoc. The solution implementation is provided in class TextUtils in method getStacktrace() with several overloaded signatures. Here is the direct Javadoc link to the getStacktrace() method with a detailed explanation of the functionality. So the solution is that the user can set a relevant package prefix (or multiple prefixes, starting with MgntUtils library version 1.7.0.0) for the packages that are relevant. The stack trace filtering will work based on the provided prefixes in the following way: 1. The error message is always printed. 2. The first lines of the stack trace are always printed as well, until the first line matching one of the prefixes is found. 3. Once the first line matching one of the prefixes is found, this and all the following lines that ARE matching one of the prefixes will be printed. 4. Once the first line that is NOT matching any of the prefixes is found, this first non-matching line is printed, but all the following non-matching lines are replaced with the single line ". . ." 5. If at some point another line matching one of the prefixes is found, this and all the following matching lines will be printed. The logic then just keeps looping between points 4 and 5. A stack trace could consist of several parts, such as the main section, the "Caused by" section, and the "Suppressed" section. Each part is filtered as a separate section according to the logic described above. 
Also, the same utility (starting from version 1.5.0.3) has a getStacktrace() method that takes a CharSequence interface instead of Throwable and thus allows filtering and shortening a stack trace stored as a string the same way as a stack trace extracted from a Throwable. So, essentially, stack traces could be filtered "on the fly" at run time or later on from any text source such as a log. (Just to clarify: the utility does not support parsing and modifying the entire log file. It supports filtering just a stack trace that is passed as a String. So, if anyone wants to filter exceptions in a log file, they would have to parse the log file, extract the stack trace(s) as separate strings, and then use this utility to filter each individual stack trace.) Here is a usage example. Note that the first parameter of the getStacktrace() method in this example is a Throwable. Let's say your company's code always resides in packages that start with "com.plain.*". So you set such a prefix and do this: Java logger.info(TextUtils.getStacktrace(e,true,"com.plain.")); This will filter out all the useless parts of the trace according to the logic described above, leaving you with a very concise stack trace. Also, the user can pre-set the prefix (or multiple prefixes) and then just use the convenience method: Java TextUtils.getStacktrace(e); It will do the same. To pre-set the prefix, just use the method: Java TextUtils.setRelevantPackage("com.plain."); Method setRelevantPackage() supports setting multiple prefixes, so you can use it like this: Java TextUtils.setRelevantPackage("com.plain.", "com.encrypted."); If you would like to pre-set this value by configuration, then starting with library version 1.1.0.1, you can set the environment variable "MGNT_RELEVANT_PACKAGE" or the system property "mgnt.relevant.package" to the value "com.plain.", and the property will be set to that value without invoking the method TextUtils.setRelevantPackage("com.plain."); explicitly in your code. Note that the system property value would take precedence over the environment variable if both are set. Just a reminder that you can add a system property on your command line using the -D flag: "-Dmgnt.relevant.package=com.plain." IMPORTANT: Note that for both the environment variable and the system property, if multiple prefixes need to be set, list them one after another separated by a semicolon (;). For example: "com.plain.;com.encrypted." There is also flexibility here: if you do have pre-set prefixes but for some particular case you wish to filter according to a different set of prefixes, you can use the method signature that takes prefixes as a parameter, and it will override the globally pre-set prefixes just for this invocation: Java logger.info(TextUtils.getStacktrace(e,true,"org.alternative.")); Here is an example of a filtered vs. unfiltered stack trace. You will get the following filtered stack trace: Plain Text com.plain.BookListNotFoundException: Internal access error at com.plain.BookService.listBooks() at com.plain.BookService$$FastClassByCGLIB$$e7645040.invoke() at net.sf.cglib.proxy.MethodProxy.invoke() ... at com.plain.LoggingAspect.logging() at sun.reflect.NativeMethodAccessorImpl.invoke0() ... 
at com.plain.BookService$$EnhancerByCGLIB$$7cb147e4.listBooks() at com.plain.web.BookController.listBooks() instead of the unfiltered version: Plain Text com.plain.BookListNotFoundException: Internal access error at com.plain.BookService.listBooks() at com.plain.BookService$$FastClassByCGLIB$$e7645040.invoke() at net.sf.cglib.proxy.MethodProxy.invoke() at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint() at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed() at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed() at com.plain.LoggingAspect.logging() at sun.reflect.NativeMethodAccessorImpl.invoke0() at sun.reflect.NativeMethodAccessorImpl.invoke() at sun.reflect.DelegatingMethodAccessorImpl.invoke() at java.lang.reflect.Method.invoke() at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs() at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod() at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke() at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed() at org.springframework.aop.interceptor.AbstractTraceInterceptor.invoke() at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed() at org.springframework.transaction.interceptor.TransactionInterceptor.invoke() at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed() at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke() at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed() at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept() at com.plain.BookService$$EnhancerByCGLIB$$7cb147e4.listBooks() at com.plain.web.BookController.listBooks() In Conclusion The MgntUtils library is written and maintained by me. If you require any support, have any questions, or would like a short demo, you can contact me through LinkedIn - send me a message or request a connection. I will do my best to respond.
A JDK Enhancement Proposal (JEP) is a formal process used to propose and document improvements to the Java Development Kit. It ensures that enhancements are thoughtfully planned, reviewed, and integrated to keep the JDK modern, consistent, and sustainable over time. Since its inception, many JEPs have introduced significant language and runtime features that shape the evolution of Java. One such important proposal, JEP 400, introduced in JDK 18 in 2022, standardizes UTF-8 as the default charset, addressing long-standing issues with platform-dependent encoding and improving Java’s cross-platform reliability. Traditionally, Java’s I/O API, introduced in JDK 1.1, includes classes like FileReader and FileWriter that read and write text files. These classes rely on a Charset to correctly interpret byte data. When a charset is explicitly passed to the constructor, like in: Java public FileReader(File file, Charset charset) throws IOException public FileWriter(String fileName, Charset charset) throws IOException the API uses that charset for file operations. However, these classes also provide constructors that don’t take a charset: Java public FileReader(String fileName) throws IOException public FileWriter(String filename) throws IOException In these cases, Java defaults to the platform’s character set. As per the JDK 17 documentation: "The default charset is determined during virtual-machine startup and typically depends upon the locale and charset of the underlying operating system." This behavior can lead to bugs when files are written and read using different character sets—especially across environments. To address this inconsistency, JEP 400 proposed using UTF-8 as the default charset when none is explicitly provided. This change makes Java applications more predictable and less error-prone, especially in cross-platform environments. As noted in the JDK 18 API: "The default charset is UTF-8, unless changed in an implementation-specific manner." Importantly, this update doesn’t remove the ability to specify a charset. Developers can still set it via constructors or the JVM flag -Dfile.encoding. Let's see the problem under discussion using an example: Java package com.jep400; import java.io.FileWriter; import java.io.IOException; import java.nio.charset.Charset; public class WritesFiles { public static void main(String[] args) { System.out.println("Current Encoding: " + Charset.defaultCharset().displayName()); writeFile(); } private static void writeFile() { try (FileWriter fw = new FileWriter("fw.txt")){ fw.write("résumé"); System.out.println("Completed file writing."); } catch (IOException e) { e.printStackTrace(); } } } In the method writeFile, we used a FileWriter constructor that does not take a character set as a parameter. As a result, the JDK falls back on the default character set, which is either specified via the -Dfile.encoding JVM argument or derived from the platform’s locale. The program writes a file containing some text. To simulate a character set mismatch, we run the program with a specific encoding: java -Dfile.encoding=ISO-8859-1 com.jep400.WritesFiles Here, we’re explicitly setting the character set to ISO-8859-1 to mimic running the program on a system where the default charset is ISO-8859-1 and no charset is passed programmatically. When executed, the program produces the following output: Java Output: Current Encoding: ISO-8859-1 Completed file writing. 
After the above program completes, it creates a file named fw.txt. Next, let’s look at a program that reads the fw.txt file created by the previous program, but with a different default encoding. Java package com.jep400; import java.io.FileReader; import java.io.IOException; import java.nio.charset.Charset; public class ReadsFiles { public static void main(String[] args) { System.out.println("Current Encoding: " + Charset.defaultCharset().displayName()); readFile(); } private static void readFile() { try (FileReader fr = new FileReader("fw.txt")) { int character; while ((character = fr.read()) != -1) { System.out.print((char) character); } } catch (IOException e) { e.printStackTrace(); } } } In the readFile method, we use a FileReader constructor that does not specify a character set. To simulate running the program on a platform with a different default character set, we pass a VM argument: java -Dfile.encoding=UTF-8 com.jep400.ReadsFiles The following output will be displayed when running this command: Java Current Encoding: UTF-8 r�sum� The output shows text that does not match what the first program wrote. This highlights the problem of relying on the platform’s default character set instead of explicitly specifying the character set when reading and writing files. The same incorrect output can appear in the following scenarios: when the programs run on different machines with different default character sets, or when upgrading to JDK 18 or later, which changes the default charset behavior. Now, let’s see how the output looks when running the same programs in a JDK 18+ environment. When running the first program, this output is observed: Java Current Encoding: UTF-8 Completed file writing. When the second program is run, the output appears as follows: Java Current Encoding: UTF-8 résumé We can see that the data is written and read using the standard UTF-8 character set, effectively resolving the character-set issues encountered earlier. Conclusion Since its introduction in JDK 18, JEP 400’s adoption of UTF-8 as the default charset has become a foundational improvement for Java applications. By standardizing on UTF-8, it eliminates many of the charset-related issues developers faced when running code across different platforms. While not a recent change, its continued impact means better consistency and fewer bugs in modern Java projects. Developers should still specify charsets explicitly when necessary, but relying on UTF-8 as the default improves cross-platform compatibility and helps future-proof applications as the Java ecosystem evolves.
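As the conclusion notes, the most robust option is still to state the charset explicitly. A minimal sketch of the same write/read pair using the explicit-charset constructors (available since JDK 11) shows that the behavior no longer depends on -Dfile.encoding or the platform default; the class name here is just for illustration.
Java
package com.jep400;

import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ExplicitCharsetExample {
    public static void main(String[] args) throws IOException {
        // Write with an explicit charset instead of the platform default
        try (FileWriter fw = new FileWriter("fw.txt", StandardCharsets.UTF_8)) {
            fw.write("résumé");
        }
        // Read back with the same explicit charset; prints "résumé" on any platform
        try (FileReader fr = new FileReader("fw.txt", StandardCharsets.UTF_8)) {
            int character;
            while ((character = fr.read()) != -1) {
                System.out.print((char) character);
            }
        }
    }
}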
Jenkins is a widely used automation server that plays a big role in modern software development. It helps teams streamline their continuous integration and continuous delivery (CI/CD) processes by automating tasks like building, testing, and deploying applications. One of the key strengths of Jenkins is its flexibility. It easily integrates with a wide range of tools and technologies, making it adaptable to different project needs. In the previous articles, we learnt about setting up Jenkins and a Jenkins agent using Docker Compose. In this tutorial blog, we will learn what Jenkins jobs are and how to set them up for a Maven project. What Are Jenkins Jobs? In Jenkins, a job is simply a task or process, like building, testing, or deploying an application. It can also help in running automated tests for the project with specific steps and conditions. With a Jenkins job, the tests can be automatically run whenever there’s a code change. Jenkins will clone the source code from version control like Git, compile the code, and run the tests based on the requirements. These jobs can also be scheduled for a later run, providing the flexibility to run the tests on demand. This helps make testing faster and more consistent. Jenkins jobs can also be triggered using webhooks whenever a commit is pushed to the remote repository, enabling seamless continuous integration and delivery. Different Types of Jenkins Jobs Jenkins supports multiple types of job items, each designed for different purposes. Depending on the complexity of the project and its requirements, we can choose the type of Jenkins job that best fits our needs. Let’s quickly discuss the different types of jobs available in Jenkins: Freestyle Project This is the standard and most widely used job type in Jenkins. It pulls code from one SCM, runs the build steps serially, and then does follow-up tasks like saving artifacts and sending email alerts. Pipeline Jenkins Pipeline is a set of tools that help build, test, and deploy code automatically by creating a continuous delivery workflow right inside Jenkins. Pipelines define the entire build and deployment process as code, using Pipeline domain-specific language (DSL) syntax, and provide the tools to model and manage both simple and complex workflows directly in Jenkins. The definition of a Jenkins Pipeline is written into a text file called a Jenkinsfile that can be committed to a project’s source control repository. Multi-Configuration Project A multi-configuration project is best for projects that require running multiple setups, such as testing on different environments or creating builds for specific platforms. It’s helpful in cases where builds share many similar steps, which would otherwise need to be repeated manually. The Configuration Matrix allows us to define which steps to reuse and automatically creates a multi-axis setup for different build combinations. Multibranch Pipeline The Multibranch Pipeline lets us set up different Jenkinsfiles for each branch of the project. Jenkins automatically discovers and runs the correct pipeline for each branch that has a Jenkinsfile in the code repository. This is very helpful, as Jenkins handles managing separate pipelines for each branch. Organization Folders With Organization Folders, Jenkins can watch over an entire organization on GitHub, Bitbucket, GitLab, or Gitea. Whenever it finds a repository with branches or pull requests that include a Jenkinsfile, it automatically sets up a Multibranch Pipeline.
Maven Project With the Maven Project, Jenkins can seamlessly build a Maven project. Jenkins makes use of the POM file to automatically handle the setup, greatly reducing the need for manual configuration. How to Set Up a Jenkins Job for a Maven Project It is essential to configure and set up the Maven Integration plugin in Jenkins before proceeding with configuring the Jenkins job for the Maven project. The option for a Maven project is not displayed unless the Maven Integration plugin is installed. Installing the Maven Integration Plugin in Jenkins The Maven Integration plugin can be installed using the following steps: Step 1 Log in to Jenkins and navigate to the Manage Jenkins > Manage Plugins page. Step 2 On the Manage Plugins page, select “Available plugins” from the left-hand menu and search for “Maven Integration plugin”. Select the plugin and click on the “Install” button on the top right of the page. Step 3 After successful installation, restart Jenkins. Navigate to the Manage Jenkins > Manage Plugins page and select “Installed plugins” from the left-hand menu. The Maven Integration plugin should be listed in the installed plugins list. After the Maven Integration plugin is installed successfully, we need to configure Maven in Jenkins. Configure Maven in Jenkins Maven can be configured in Jenkins by following these steps: Step 1 Navigate to the Manage Jenkins > Tools window. Step 2 Scroll down to the Maven installations section. Step 3 Click on the Add Maven button. Fill in the mandatory Name field with an appropriate name (I have set the name to “Maven_Latest”). Next, tick the “Install automatically” checkbox and select the appropriate version of Maven to install. We will be using the latest version (3.9.10) in the Version dropdown. Click on “Apply” and “Save” to save the configuration. With this, we have configured Maven in Jenkins and are now fully set to create a Jenkins job for the Maven project. Configuring a Jenkins Job for a Maven Project We will be setting up a Jenkins job for a test automation project that runs API automation tests. This Maven project, available on GitHub, contains API automation tests written using the REST Assured library in Java. A Jenkins job for a Maven project can be configured with the following steps: Step 1 Click on “New Item” on the homepage of Jenkins. Step 2 Select the Maven project from the list of project types and add a name for the job. Click on the OK button to continue. Next, Jenkins will take us to the job’s configuration page. Step 3 Select “Git” in the Source Code Management section. Enter the repository URL (https://github.com/mfaisalkhatri/rest-assured-examples.git); this should be the same URL we would use to clone the repository. Then enter the respective branch name for the job. We’ll add the branch name as “*/master”, as we need to pull the code from the master branch to run the tests. Make sure the correct branch name is added here. Step 4 In the Pre-Steps section, update the Root POM, Goals, and Options. Root POM: Add the path to the pom.xml file. Generally, it is in the root folder, so the value should be the default “pom.xml”. Goals and Options: In this text box, we need to provide the Maven command to run the project.
As the API tests are organized in different testng.xml files, which are available in the test-suite folder in the project, we need to provide the following command to execute them: Plain Text clean install -Dsuite-xml=test-suite/restfulbookersuitejenkins.xml Decoding the Maven Command clean install: The clean command tells Maven to clean the project. It will delete the target/ directory, erasing all the previously built and compiled artifacts. The install command will compile the code and run the tests. -Dsuite-xml=test-suite/restfulbookersuitejenkins.xml: “-D” sets the system property “suite-xml” and passes the value “test-suite/restfulbookersuitejenkins.xml” to it. In the pom.xml, the “suite-xml” property is defined within the Maven Surefire plugin. The default value for this “suite-xml” property is set to “test-suite/testng.xml” in the Properties section of the pom.xml. However, as we have multiple testng.xml files and need to run the tests from a specific “restfulbookersuitejenkins.xml,” we overwrite the property using -Dsuite-xml=test-suite/restfulbookersuitejenkins.xml. Given below are the contents of the restfulbookersuitejenkins.xml file: XML <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd"> <suite name="Restful Booker Test Suite" thread-count="3" parallel="tests"> <listeners> <listener class-name="in.reqres.utility.TestListener"/> </listeners> <test name="Restful Booker tests on Jenkins"> <parameter name="agent" value="jenkins"/> <classes> <class name="com.restfulbooker.RestfulBookerE2ETests"> <methods> <include name="createBookingTest"/> <include name="getBookingTest"/> <include name="testTokenGeneration"/> <include name="updateBookingTest"/> <include name="updatePartialBookingTest"/> <include name="deleteBookingTest"/> <include name="checkBookingIsDeleted"/> </methods> </class> </classes> </test> </suite> This file contains end-to-end tests for the RESTful Booker Demo APIs. Step 5 Click on the Apply and Save button. This will save the configuration for the job. The Jenkins job for the Maven project has been configured successfully. Running the Jenkins Job There are multiple ways to run the Jenkins job, as mentioned below: by manually clicking on the “Build Now” button, or by using webhooks to run the build as soon as the developer pushes code to the remote repository. Let’s run the Jenkins job manually by clicking on the “Build Now” button. Once the job is started, a progress bar is displayed on the left-hand side of the screen. On clicking the Job #, as shown in the screenshot above, Jenkins will take us to the job details page, where the job’s details and its progress are displayed. The live console logs of the job can be viewed by clicking on the Console Output link in the left-hand menu. Similarly, the job’s more granular details can be viewed by scrolling down on the Console Output page. The console output shows that a total of 7 test cases were run and all of them passed. Finally, it shows that the build was generated successfully. This detailed logging allows us to check the minute details of the job execution and its progress. If a test within the Maven project fails, it can be identified here, and an appropriate action can be taken. Verifying the Job Status The job’s status can be checked after the execution is complete. It provides a historical view of the job runs, which can help in analysing test failures and checking the stability of the project.
A graphical representation of the historical data is also provided, which shows that the build failed for the first three runs and passed in the 5th and 6th runs. Job Dashboard Jenkins provides a Job Dashboard, which shows the current status of the job. Summary Jenkins is a powerful tool for setting up CI/CD pipelines to automate the different stages of deployment. It offers multiple options for setting up jobs, one of which is the Maven Project. We can easily configure a Jenkins job for a Maven project by installing the Maven Integration plugin and setting up Maven in Jenkins. The job pulls the code from the SCM and executes the goals provided by the user in the job’s configuration. Jenkins provides detailed job execution status on the dashboard and more granular details in the job’s console output. The historical graph and job run details help stakeholders verify the stability of the build and take further action.
Over the history of software development, we can observe a steady increase in software complexity, characterized by more rules and conditions. For modern applications that rely heavily on databases, testing how the application interacts with its data becomes equally important. This is where data-driven testing plays a crucial role. Data-driven testing helps increase software quality by enabling tests with multiple data sets, which means the same test runs multiple times with different data inputs. Automating these tests also ensures scalability and repeatability across your test suite, reducing human error, boosting productivity, saving time, and guaranteeing that the same mistake doesn't happen twice. Modern applications often depend on databases to store and manipulate critical data; indeed, data is the soul of any modern application. Thus, it's essential to validate that these operations function correctly across a range of scenarios. Traditional unit tests often fall short because they don't account for the variability of data that real-world applications encounter. This is where data-driven testing shines: it lets you automate the same test with different inputs, covering a variety of cases, to check whether your application keeps up. This approach ensures that your application handles data consistently and reliably, helping you avoid bugs that may only appear with specific data types, formats, or combinations of data. Data-driven testing is a strategy where the same test is run multiple times with different sets of input data. Rather than writing separate test cases for each data variation, you use one test method and provide different data sets to test against. Beyond reducing redundancy in your test code, this also improves test coverage by ensuring the system behaves as expected across all types of data. Figure: Data-driven test flow. In this article, we will explore this capability with Java and JUnit 5 (Jupiter). Live Session: Implementing Data-Driven Testing With Jakarta NoSQL and Jakarta Data In this section, we will walk through a live example using Java SE, Jakarta NoSQL, and Jakarta Data to demonstrate data-driven testing in action. For our example, we will build a simple hotel management system that tracks room status and integrates with Oracle NoSQL as the database. Prerequisites Before diving into the code, ensure you have Oracle NoSQL running either in the cloud or locally using Docker. You can quickly start Oracle NoSQL by running the following command: Shell docker run -d --name oracle-instance -p 8080:8080 ghcr.io/oracle/nosql:latest-ce Once the database is up and running, we're ready to start building the project. You can also find the full project on GitHub: Data-Driven Test with Oracle NoSQL Step 1: Structure the Entity We begin by defining the Room entity, which represents a hotel room in our system.
This entity is mapped to the database using the @Entity annotation, and each field corresponds to a column in the database: Java @Entity public class Room { @Id private String id; @Column private int number; @Column private RoomType type; @Column private RoomStatus status; @Column private CleanStatus cleanStatus; @Column private boolean smokingAllowed; @Column private boolean underMaintenance; } Step 2: Room Repository Next, we create the RoomRepository interface, which uses Jakarta Data and NoSQL annotations to define queries for various room-related operations: Java @Repository public interface RoomRepository { @Query("WHERE type = 'VIP_SUITE' AND status = 'AVAILABLE' AND underMaintenance = false") List<Room> findVipRoomsReadyForGuests(); @Query("WHERE type <> 'VIP_SUITE' AND status = 'AVAILABLE' AND cleanStatus = 'CLEAN'") List<Room> findAvailableStandardRooms(); @Query("WHERE cleanStatus <> 'CLEAN' AND status <> 'OUT_OF_SERVICE'") List<Room> findRoomsNeedingCleaning(); @Query("WHERE smokingAllowed = true AND status = 'AVAILABLE'") List<Room> findAvailableSmokingRooms(); @Save void save(List<Room> rooms); @Save Room newRoom(Room room); void deleteBy(); @Query("WHERE type = :type") List<Room> findByType(@Param("type") String type); } In this repository, we define several queries to retrieve rooms based on different conditions, such as finding available rooms, rooms that need cleaning, or rooms that allow smoking. We also include methods for saving, deleting, and querying rooms by type. Step 3: Set Up the Test Container To test our repository, we want to ensure that we are using a test container instead of a production environment. For this, we set up a DatabaseContainer singleton that starts the Oracle NoSQL container for testing purposes: Java public enum DatabaseContainer { INSTANCE; private final GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("ghcr.io/oracle/nosql:latest-ce")) .withExposedPorts(8080); { container.start(); } public DatabaseManager get(String database) { DatabaseManagerFactory factory = managerFactory(); return factory.apply(database); } public DatabaseManagerFactory managerFactory() { var configuration = DatabaseConfiguration.getConfiguration(); Settings settings = Settings.builder() .put(OracleNoSQLConfigurations.HOST, host()) .build(); return configuration.apply(settings); } public String host() { return "http://" + container.getHost() + ":" + container.getFirstMappedPort(); } } This container ensures that we’re using the Oracle NoSQL database, which is running inside a Docker container, thereby mimicking a production-like environment while remaining fully isolated for testing purposes. Step 4: Injecting the DatabaseManager We need to inject the DatabaseManager into our CDI context. For this, we create a ManagerSupplier class that ensures the DatabaseManager is available to our application: Java @ApplicationScoped @Alternative @Priority(Interceptor.Priority.APPLICATION) public class ManagerSupplier implements Supplier<DatabaseManager> { @Produces @Database(DatabaseType.DOCUMENT) @Default public DatabaseManager get() { return DatabaseContainer.INSTANCE.get("hotel"); } } Step 5: Writing Data-Driven Tests With @ParameterizedTest in JUnit 5 In this step, we focus on how to write data-driven tests using JUnit 5's @ParameterizedTest annotation, and specifically dive into the parameterized sources used in the RoomServiceTest. We’ll explore the @EnumSource and @MethodSource annotations, both of which help run the same test method multiple times with different sets of input data.
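Before looking at the individual tests, note that the Room entity and the repository queries above rely on a few enums that the article does not show. Purely as an orientation aid, a plausible sketch based on the constants that appear in the queries and tests might look like the following; any constant not mentioned in the article is an assumption.
Java
// Inferred from the values used above; constants marked "assumed" do not appear in the article.
enum RoomType { STANDARD, SUITE, VIP_SUITE }

enum RoomStatus { AVAILABLE, OUT_OF_SERVICE, OCCUPIED /* assumed */ }

enum CleanStatus { CLEAN, DIRTY /* assumed */, UNDER_CLEANING /* assumed */ }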
Let’s look at the types used in the RoomServiceTest class in detail: Java @ParameterizedTest(name = "should find rooms by type {0}") @EnumSource(RoomType.class) void shouldFindRoomByType(RoomType type) { List<Room> rooms = this.repository.findByType(type.name()); SoftAssertions.assertSoftly(softly -> softly.assertThat(rooms).allMatch(room -> room.getType().equals(type))); } The @EnumSource(RoomType.class) annotation is used to automatically provide each enum constant from the RoomType enum to the test method. In this case, the RoomType enum contains values like VIP_SUITE, STANDARD, SUITE, etc. This annotation causes the test method to run once for each value in the RoomType enum. Each time the test runs, the type parameter is assigned one of the enum values, and the test checks that all rooms returned by the repository match the RoomType provided. This is especially useful when you want to run the same test logic for all possible values of an enum. It ensures that your code works consistently across all variants of the enum type, minimizing redundant test cases. Java @ParameterizedTest @MethodSource("room") void shouldSaveRoom(Room room) { Room updateRoom = this.repository.newRoom(room); SoftAssertions.assertSoftly(softly -> { softly.assertThat(updateRoom).isNotNull(); softly.assertThat(updateRoom.getId()).isNotNull(); softly.assertThat(updateRoom.getNumber()).isEqualTo(room.getNumber()); softly.assertThat(updateRoom.getType()).isEqualTo(room.getType()); softly.assertThat(updateRoom.getStatus()).isEqualTo(room.getStatus()); softly.assertThat(updateRoom.getCleanStatus()).isEqualTo(room.getCleanStatus()); softly.assertThat(updateRoom.isSmokingAllowed()).isEqualTo(room.isSmokingAllowed()); }); } The @MethodSource("room") annotation specifies that the test method should be run with data provided by the room() method. This method returns a stream of Arguments containing different Room objects. The room() method generates random room data using Faker and assigns random values to room attributes like roomNumber, type, status, etc. These randomly generated rooms are passed to the test method one at a time. The test checks that the room saved in the repository matches the original room’s attributes, ensuring that the save operation works as expected. @MethodSource is a great choice when you need to provide complex or custom test data. In this case, we use random data generation to simulate different room configurations, ensuring our code can handle a wide range of inputs without redundancy. Conclusion In this article, we've explored the importance of data-driven testing and how to implement it effectively using JUnit 5 (Jupiter). We demonstrated how to leverage parameterized tests to run the same test multiple times with different inputs, making our testing process more efficient, comprehensive, and scalable. By using annotations like @EnumSource, @MethodSource, and @ArgumentsSource, we can easily pass multiple sets of data to our test methods, ensuring that our application works as expected across a wide range of input conditions. We focused on @EnumSource iterating over enum constants and @MethodSource generating custom data for our tests. These tools, alongside JUnit 5’s rich variety of parameterized test sources, such as @ValueSource, @CsvSource, and @ArgumentsSource, give us the flexibility to design tests that cover a broader spectrum of data variations. 
By incorporating these techniques, we ensure that our repository methods (and other components) are robust, adaptable, and thoroughly tested with diverse real-world data. This approach significantly improves software quality, reduces test code duplication, and accelerates the testing process. Data-driven testing isn’t just about automating tests; it’s about making those tests more meaningful by accounting for the variety of real-world conditions your software might face. It’s a valuable strategy for building resilient applications, and with JUnit 5, the possibilities for enhancing test coverage are vast and customizable.
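For completeness, the room() method referenced by @MethodSource in the save test is not shown in the article. A minimal sketch of such a provider is given below; it assumes a no-args Room constructor with setters and the enums sketched earlier, and uses ThreadLocalRandom instead of the Faker library the article mentions, so treat it as an illustration rather than the project's actual code. If the provider lives outside the test class, reference it in @MethodSource by its fully qualified name.
Java
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.Stream;

import org.junit.jupiter.params.provider.Arguments;

final class RoomTestData {

    // Provider for @ParameterizedTest @MethodSource("room"): five randomly built rooms per run
    static Stream<Arguments> room() {
        return Stream.generate(RoomTestData::randomRoom)
                .limit(5)
                .map(Arguments::of);
    }

    private static Room randomRoom() {
        ThreadLocalRandom random = ThreadLocalRandom.current();
        Room room = new Room();                               // assumed no-args constructor
        room.setId(UUID.randomUUID().toString());             // assumed setters on the entity
        room.setNumber(random.nextInt(1, 500));
        room.setType(pick(random, RoomType.values()));
        room.setStatus(pick(random, RoomStatus.values()));
        room.setCleanStatus(pick(random, CleanStatus.values()));
        room.setSmokingAllowed(random.nextBoolean());
        return room;
    }

    private static <T> T pick(ThreadLocalRandom random, T[] values) {
        return values[random.nextInt(values.length)];
    }
}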