Java

Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.

Latest Premium Content
  • Trend Report: Low-Code Development
  • Refcard #216: Java Caching Essentials
  • Refcard #400: Java Application Containerization and Deployment

DZone's Featured Java Resources

Top 7 Mistakes When Testing JavaFX Applications

By Catherine Edelveis
JavaFX is a versatile tool for creating rich enterprise-grade GUI applications. Testing these applications is an integral part of the development lifecycle. However, Internet sources are very scarce when it comes to defining best practices and guidelines for testing JavaFX apps. Therefore, developers must rely on commercial offerings for JavaFX testing services or write their test suites following trial-and-error approaches. This article summarises the seven most common mistakes programmers make when testing JavaFX applications and ways to avoid them. Scope and Baseline Two projects were used for demonstrating JavaFX testing capabilities: RaffleFX and SolfeggioFX. The latter uses Spring Boot in addition to JavaFX. Note that these projects don’t contain JavaFX dependencies because they are developed based on open source Liberica JDK with integrated JavaFX support. JDK version: 21 TestFX was used as a testing framework. It is actively developed, open-source, and with a wide variety of features. RobotFX, a TestFX class, was used for interacting with the UI. Other libraries and tools used: JUnit5, AssertJ, JavaFX Monocle for headless testing in CI. Mistake 1: Updating UI Off the FX Thread JavaFX creates an application thread upon application start, and only this thread can render the UI elements. This is one of the most common pitfalls in JavaFX testing because the tests run on the JUnit thread, not on the FX application thread, and it is easy to forget to perform specific actions explicitly on the FX thread, such as writing to or reading from the UI. Take a look at this code snippet: Java List<String> names = List.of("Alice", "Mike", "Linda"); TextArea area = fxRobot.lookup("#text") .queryAs(TextArea.class); area.setText(String.join(System.lineSeparator(), names)); Here, we are trying to update the UI off the application thread. As a result, another thread is created and tries to perform actions on UI elements. This results in Thrown java.lang.IllegalStateException: Not on FX application thread;Random NPEs inside skins,Deadlocks,States that never update. What can we do? Write to the UI to mutate controls or fire handlers on the FX thread. If you use the FxRobot class, you can achieve that by wrapping mutations in robot.interact(() -> { ... }). Java List<String> names = List.of("Alice", "Mike", "Linda"); TextArea area = fxRobot.lookup("#text") .queryAs(TextArea.class); fxRobot.interact(() -> area.setText(String.join(System.lineSeparator(), names))); Read from the UI to get text, snapshot pixels, or query layout on the FX thread and return a value: Java private static Color samplePixel(Canvas canvas, Point2D p) throws Exception { return WaitForAsyncUtils.asyncFx(() -> { WritableImage img = canvas.snapshot(new SnapshotParameters(), null); PixelReader pr = img.getPixelReader(); int x = (int) Math.round(p.getX()); int y = (int) Math.round(p.getY()); x = Math.max(0, Math.min(x, (int) canvas.getWidth() - 1)); y = Math.max(0, Math.min(y, (int) canvas.getHeight() - 1)); return pr.getColor(x, y); }).get(); } On the other hand, the input, such as pressing, clicking, or releasing, should happen on the test thread. Do not wrap it in robot.interact(): Java robot.press(KeyCode.Q); Mistake 2: Bootstrapping Tests and FXML ClassLoader Incorrectly When you combine JavaFX/TestFX with a framework such as Spring Boot, it is easy to boot the application the wrong way. The thing is that TestFX owns the Stage, but Spring owns the beans. 
So, if you boot Spring without giving it the TestFX Stage, the beans will not be able to use it. On the other hand, if you call Application.start(...) directly, you can end up with two contexts. Another mistake is related to the same situation of using JavaFX with Spring. FXMLLoader uses a different classloader than Spring. Therefore, controllers Spring creates aren’t the same “type” as the ones FXML asks for. Incorrect bootstrapping results in: NoSuchBeanDefinitionException: ...Controller even though it’s a @Component.Random NPEs from the custom FxmlLoader because applicationContext is null.Stack traces mention exceptions related to ClassLoader or “can’t find bean for controller X”. What can we do? Make FXMLoader use the same class loader as Spring in the application code: Java public Parent load(String fxmlPath) throws IOException { FXMLLoader loader = new FXMLLoader(); loader.setLocation(getClass().getResource(fxmlPath)); loader.setClassLoader(getClass().getClassLoader()); return loader.load(); } Use @Start to wire up a real Stage, and Dependency Injection to inject fakes. Don’t call new FxApplication.start(stage) if this code boots Spring internally. Java @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE) @ExtendWith(ApplicationExtension.class) class PianoKeyboardColorTest { @Autowired private ConfigurableApplicationContext context; @Start public void start(Stage stage) throws Exception { FxmlLoader loader = new FxmlLoader(context); Parent rootNode = loader.load("/fxml/in-build-keyboard.fxml"); stage.setScene(new Scene(rootNode, 800, 600)); stage.show(); WaitForAsyncUtils.waitForFxEvents(); } Mistake 3: Confusing Handler Wiring with Real User Input When you try to trigger the UI behavior by calling the controller methods directly, you are testing the code wiring, but not the real event path the user takes, such as focusing, clicking, pressing, etc. As a result, your tests may pass but miss bugs. Alternatively, the test may hang and fail because the input never fired. Another side of this coin is triggering the UI event that can’t happen, such as going full screen in the headless mode somewhere in CI. In this case, the assertions will time out waiting for the event that will never happen. For example, we can trigger the button action with robot.clickOn() and button.fire(), but these methods are not equivalent. The robot.clickOn() simulates a real mouse click by moving the mouse, pressing, and releasing. The button.fire() triggers the button’s action programmatically and skips the mouse events entirely. What can we do? Don’t mix integration and interaction tests, i.e., avoid calling controller methods directly in the UI tests. Use robot.clickOn() or similar FxRobot’s methods to test user interaction and UI behaviour: pressed/hover visuals, etc. Note that this method runs on the test thread, so you don’t have to wrap it in interact(): Java Canvas canvas = robot.lookup("#keyboard").queryAs(Canvas.class); robot.interact(canvas::requestFocus); robot.press(KeyCode.Q); Use button.fire() or similar control methods to assert handler effects without relying on real pointer semantics. Note that these methods run on the FX thread, so they must be wrapped in interact(): Java Button btn = fxRobot.lookup("#startButton").queryButton(); fxRobot.interact(btn::fire); Assert by changes in the UI, such as the presence of a node in the new scene, label text change, button visibility mode, not by assuming the service call succeeded. 
Java WaitForAsyncUtils.waitFor(3, SECONDS, () -> robot.lookup("#startPane").tryQuery().isPresent()); In headless mode, if the platform can’t do something like going full screen, assert a proxy signal (pseudo-classes, button state). Mistake 4: Racing the FX Event Queue As JavaFX is a single-thread kit, all UI events happen on the FX Application Thread, and so, events like animations, layout, etc., get queued. If you assert in tests before the queue is drained, you are testing the UI that doesn’t exist yet: You fire an action and immediately assert. As a result, your check runs before the handler executes.You query the scene right after a scene switch when the new nodes aren’t attached yet.You read pixels or control state from the test thread while JavaFX is mid-layout. Therefore, tests pass or fail unpredictably depending on CPU, CI, and whatnot. What can we do? In the case of simple changes, use WaitForAsyncUtils.waitForFxEvents() for the event queue of the JavaFX Application Thread to be completed: Java @Start public void start(Stage stage) throws Exception { FxmlLoader loader = new FxmlLoader(context); Parent rootNode = loader.load("/fxml/in-build-keyboard.fxml"); stage.setScene(new Scene(rootNode, 800, 600)); stage.show(); WaitForAsyncUtils.waitForFxEvents(); } In the case you are waiting for observable outcomes, use WaitForAsyncUtils.waitFor() to wait for some conditions to be met: Java @Test void shouldChangeSceneWhenContinueButtonIsClicked(FxRobot fxRobot) throws TimeoutException { Parent oldRoot = stage.getScene().getRoot(); Button btn = fxRobot.lookup("#continueButton").queryButton(); fxRobot.interact(btn::fire); WaitForAsyncUtils.waitFor(3, TimeUnit.SECONDS, () -> stage.getScene().getRoot() != oldRoot); assertThat(stage.getScene().getRoot()).isNotSameAs(oldRoot); assertThat( fxRobot.lookup("#startButton") .queryAs(Button.class)).isNotNull(); } The same approach should be applied when dealing with animations. Wait for the state to change, not the duration the animation is supposed to run: Java @Test void shouldHideAndDisableButtonsWhenRaffling(FxRobot fxRobot) throws TimeoutException { Button start = fxRobot.lookup("#startButton").queryButton(); Button repeat = fxRobot.lookup("#repeatButton").queryButton(); fxRobot.interact(start::fire); WaitForAsyncUtils.waitFor(5, TimeUnit.SECONDS, () -> WaitForAsyncUtils.asyncFx(() -> repeat.isVisible() && !repeat.isDisabled() ).get() ); assertThat(repeat.isVisible()).isFalse(); assertThat(repeat.isDisabled()).isTrue(); } Mistake 5: Assuming Pixel-Perfect Equality Across Platforms The pixel colors in the JavaFX applications may differ slightly on various platforms due to various reasons: CI uses Monocle, whereas Prism SW and the laptop use a GPU pipeline, or one machine uses LCD subpixel text and another uses grayscale. If the tests assess exact RGB equality on all platforms, the tests may pass locally and fail in CI or on another local machine. What exactly happens? JavaFX apps can run with different DPI scaling on various displays / in various environments: see release notes, bugs, javadoc proving that. On HiDPI and retina displays, JavaFX renders at a scale >1, so logical coordinates don’t map 1:1 to physical pixels. As a result, antialiasing and rounding shift colors slightly, breaking pixel-perfect assertions.Headless Monocle uses software Prism, not the desktop GPU, leading to slightly different composites.The FontSmoothingType enum in JavaFX specifies the preferred mechanism for smoothing the edges of fonts: sub-pixel LCD or GRAY. 
Due to this fact, the pixels may vary depending on the mode used by the system. Even if the mode is set in the application, JavaFx may fall back on a different mode if the first one is not supported by the system. See the proof for macOS and Linux as an example. What can we do? Don’t assert the exact color. Compare baseline vs changed and allow for some tolerance in color and pixel density. For example, in SolfeggioFX, to test that the key color on the virtual piano has changed when the corresponding key was pressed, we can calculate pixel indices using Math.round() to tolerate the fractional positions in the case of HiDPI and Math.max()/min() to avoid sampling outside the image in case the Point2D value is near the edge: Java private static Color samplePixel(Canvas canvas, Point2D p) throws Exception { return WaitForAsyncUtils.asyncFx(() -> { WritableImage img = canvas.snapshot(new SnapshotParameters(), null); PixelReader pr = img.getPixelReader(); int x = (int) Math.round(p.getX()); int y = (int) Math.round(p.getY()); x = Math.max(0, Math.min(x, (int) canvas.getWidth() - 1)); y = Math.max(0, Math.min(y, (int) canvas.getHeight() - 1)); return pr.getColor(x, y); }).get(); } In addition, we can allow for a small absolute difference when comparing colors: Java private static boolean colorsClose(Color a, Color b) { double eps = 0.02; // tolerate small AA differences (~2%) return Math.abs(a.getRed() - b.getRed()) < eps && Math.abs(a.getGreen() - b.getGreen()) < eps && Math.abs(a.getBlue() - b.getBlue()) < eps; } @Test void shouldHighLightPressedKey(FxRobot robot) throws Exception { Point2D point = Objects.requireNonNull(centers.get('Q')); Color before = samplePixel(canvas, point); robot.press(KeyCode.Q); WaitForAsyncUtils.waitFor(1500, TimeUnit.MILLISECONDS, () -> !colorsClose(WaitForAsyncUtils.asyncFx(() -> samplePixel(canvas, point)).get(), before)); Color duringPress = samplePixel(canvas, point); assertThat(before.equals(duringPress)).isFalse(); } Sample pixels inside the shape, not near borders, to avoid having a different color if borders blend with the background. In SolfeggioFX, we stored per-key centers in the Canvas properties when drawing the virtual piano, and used this data in the tests to sample pixels near the key center: Java // production code canvas.getProperties().put("keyCenters", Map<Character, Point2D> centers); // tests Point2D point = Objects.requireNonNull(centers.get('Q')); Mistake 6: Misconfiguring Headless CI Running JavaFX tests in CI differs from the standard testing process. The tests must run in headless mode and be backed by Monocle, an implementation of the Glass windowing component of JavaFX for embedded systems. But simply adding the dependency on Monocle won’t help much, and tests that pass locally may fail in CI due to multiple factors: UI tests run in parallel.Required modules are locked down, but Monocle uses com.sun.glass.ui reflectively. As a result, you get exceptions like IllegalAccessError: module javafx.graphics does not export com.sun.glass.ui or InaccessibleObjectException: … does not "opens com.sun.glass.ui"Tests assert platform features that don’t exist in headless, for instance, Stage.setFullScreen(true). So, the tests hang and finally fail with the TimeoutException What can we do? Add the Monocle dependency and set all necessary flags to run the tests in headless mode. In addition, open the required modules with --add-opens. 
Add the Monocle dependency first: XML <dependency> <groupId>org.pdfsam</groupId> <artifactId>javafx-monocle</artifactId> <version>21</version> <scope>test</scope> </dependency> Then, specify all required flags in a separate plugin: set the headless mode, disable parallelism, etc. Note that the --add-opens are specific to the RaffleFX application used for demonstration, in your case, the modules may be different. This application is developed and compiled in CI using a Java runtime with bundled JavaFX modules, but if you add the dependencies on JavaFX modules manually, you may have to use the additional --add-exports flag that allows compile-time access to Glass internals: XML <profile> <id>headless-ci</id> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>3.5.3</version> <configuration> <forkCount>1</forkCount> <reuseForks>true</reuseForks> <argLine> --add-opens=javafx.graphics/com.sun.javafx.application=ALL-UNNAMED --add-opens=javafx.graphics/com.sun.glass.ui=ALL-UNNAMED --add-opens=javafx.graphics/com.sun.javafx.util=ALL-UNNAMED --add-opens=javafx.base/com.sun.javafx.logging=ALL-UNNAMED --add-opens=javafx.graphics/com.sun.glass.ui.monocle=ALL-UNNAMED -Dtestfx.robot=glass -Dtestfx.headless=true -Dglass.platform=Monocle -Dmonocle.platform=Headless -Dprism.order=sw -Dprism.text=t2k -Djava.awt.headless=true </argLine> </configuration> </plugin> </plugins> </build> </profile> Adjust the tests that wait for stage.isFullScreen() and assert a proxy signal or skip these tests in CI. In the workflow file, make sure to install all necessary native libraries for JavaFX and run the tests with the correct profile. The file below uses Liberica JDK 21 with JavaFX in the setup-java action, so no additional dependencies on FX are required: YAML name: Tests on: push: paths-ignore: - 'docs/**' - '**/*.md' branches: [ main ] jobs: test_linux_headless: name: UI tests (Ubuntu + Monocle) runs-on: ubuntu-latest steps: - name: Install Linux packages for JavaFX run: | sudo apt-get update sudo apt-get install -y \ libasound2-dev libavcodec-dev libavformat-dev libavutil-dev \ libgl-dev libgtk-3-dev libpango1.0-dev libxtst-dev - uses: actions/checkout@v4 - uses: actions/setup-java@v5 with: distribution: 'liberica' java-version: '21' java-package: 'jdk+fx' cache: maven - name: Run tests (headless with Monocle) run: ./mvnw -B -Pheadless-ci test - name: Upload surefire reports if: always() uses: actions/upload-artifact@v4 with: name: surefire-reports path: | **/target/surefire-reports/* **/target/failsafe-reports/* Mistake 7: Entangling Business Logic with UI (Non-Determinism) Last but not least, testing business logic with UI is not the best practice. Just as you separate controllers and service tests for web apps, the domain logic tests should not coexist with UI tests in one class. In the worst-case scenario, the tests become slow and yield inconsistent results. What can we do? The best solution would be to move business logic to ViewModels and test it with plain JUnit. This way, you don’t depend on animations and other UI events, and make sure that your tests are always deterministic. Conclusion JavaFX applications need testing just like any other program. On the one hand, you verify that the application functions exactly as expected. On the other hand, you make it more maintainable in the long term. Nevertheless, the unfamiliar process of JavaFX testing may result in numerous exceptions during test runs or ‘mysterious’ test failures. 
Luckily, developers can navigate these unknown waters safely by keeping an eye on the following waymarks:
  • FX thread vs test thread: Mutate and read the UI on the FX Application Thread; send input from the test thread.
  • Correct bootstrap: If you use frameworks such as Spring, start Spring/TestFX in the right order and make FXMLLoader use Spring's class loader.
  • FX event queue: Wait until the FX queue is drained before making assertions, and assert by state, not duration.
  • No pixel-perfect assertions: The environment and platform may affect the visuals slightly, so allow for tolerance when testing colors and sample pixels closer to the element center.
  • CI headless configuration: Configure headless testing with Monocle, open the required Glass internals, and avoid asserting platform features Monocle can't emulate.
Testing JavaFX may seem complicated, and this article covers the most common pitfalls. By following these pieces of advice, you will be able to build a reliable testing foundation for your JavaFX program.
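To make Mistake 7 more concrete, here is a minimal sketch of the ViewModel approach. The RaffleViewModel class, its property names, and its winner-picking logic are invented for illustration and are not taken from the demo projects; the point is only that such logic can be exercised with plain JUnit, with no Stage, robot, or FX event queue involved.
Java
import java.util.List;

import javafx.beans.property.BooleanProperty;
import javafx.beans.property.SimpleBooleanProperty;
import javafx.beans.property.SimpleStringProperty;
import javafx.beans.property.StringProperty;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;

// Hypothetical ViewModel: plain state and logic, no Stage, Scene, or FX thread required.
class RaffleViewModel {
    private final StringProperty winner = new SimpleStringProperty("");
    private final BooleanProperty raffleRunning = new SimpleBooleanProperty(false);

    StringProperty winnerProperty() { return winner; }
    BooleanProperty raffleRunningProperty() { return raffleRunning; }

    void startRaffle(List<String> participants) {
        if (participants.isEmpty()) {
            throw new IllegalArgumentException("No participants");
        }
        raffleRunning.set(true);
        // Deterministic pick to keep the example simple; real code would randomize.
        winner.set(participants.get(0));
        raffleRunning.set(false);
    }
}

// Plain JUnit 5 test: no TestFX, no robot, no waiting for the FX event queue.
class RaffleViewModelTest {
    @Test
    void picksAWinnerWithoutTouchingTheUi() {
        RaffleViewModel vm = new RaffleViewModel();
        vm.startRaffle(List.of("Alice", "Mike", "Linda"));
        assertEquals("Alice", vm.winnerProperty().get());
        assertFalse(vm.raffleRunningProperty().get());
    }
}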
Converting ActiveMQ to Jakarta (Part III: Final)

By Matt Pavlovich
Advanced Technical Approach Some Java frameworks have taken on the complexity of supporting both javax and jakarta package namespaces simultaneously. This approach makes sense for frameworks and platform services, such as Jetty and ActiveMQ, where the core development team needs to move the code base forward to support newer JDKs, while also providing a way for application developers to adopt Jakarta EE gradually. This simplifies the support for open-source frameworks, as there are fewer releases to manage, and in the event of a security bug, being able to release one mainline branch vs having to go back and backport across past versions. However, supporting both javax and jakarta namespaces simultaneously in a single application is complicated and time-consuming. Additionally, it opens additional scenarios that may lead to errors and security gaps for enterprise applications. This limits the ability to set up verification checks and source code scanning to block pre-Jakarta libraries from being used or accidentally pulled in through transitive dependencies. It creates a lot of ambiguity and reduces the effectiveness of DevOps teams in providing pre-approved SDKs to be used by enterprise developers. With the pitfalls outweighing the benefits, enterprise projects should not need to support both javax and jakarta namespaces simultaneously in most scenarios. Special Consideration for Exception Handling for Remote Operations The one caveat to this best practice for enterprise applications is that there may be a need to support mapping exceptions between javax and jakarta package namespaces to support clients making remote calls to a service or API. The server-side either needs to be able to detect javax clients and translate, or a thin client-side wrapper is needed to handle any jakarta exceptions received by remote services. Apache ActiveMQ handles exception namespace mapping appropriately for all client release streams (starting with v6.1.0, v5.19.0, v5.18.4, v5.17.7, and v5.16.8), so no additional handling is required by applications when using Jakarta Messaging. Jakarta EE Updates and Nothing Else A key factor to ActiveMQ’s success was that the scope of change was limited to only what was necessary for the upgrade to Jakarta EE. The change of underlying frameworks naturally brought new minimum JDK version requirements and other changes, as Jakarta EE specifications brought forward their own set of changes. No protocol changes, no data format, or configuration changes were made to ActiveMQ to support backwards compatibility with javax clients and to support roll-forward and rollback during upgrades. Developers should resist the urge to tackle other refactoring, data model, or business functionality changes when making the upgrade to Jakarta EE. These upgrades should be structured as technical debt-only releases to ensure the best outcomes. Jakarta Migration Planning Guide Team Impact: Organizing Changes for Code Review For an enterprise taking on a similar migration of a large and established code base, I highly recommend following this next piece of advice to lower the time and level of effort. Enforce an organizational policy that requires git commits related to package naming to be separated. 
There should be two types, clearly labeled in the commit comments:
  • Java package import namespace-only changes
  • Code changes
Namespace-only changes involve updating the file from "import javax." to "import jakarta." These text changes may live in Java code files, Spring XML, config properties, or other non-Java artifacts used by the application. Code changes are updates required due to fixes, technical debt, Jakarta EE specification changes, or framework API changes (such as Spring or Jetty). By separating these changes, you will greatly reduce the time required to review and approve them. Java package namespace-only changes will touch hundreds to thousands of files and thousands to tens of thousands of lines; for the most part, they can be approved quickly, without a deep code review. The actual impacting code changes should touch fewer files and fewer lines. The code reviews on these changes will require a closer look, and by reducing their scope, you will greatly reduce the time required for code reviews.
Practical Tips for Jakarta Migration
  • Drop end-of-life and deprecated modules from your code base.
  • Migrate or drop end-of-life and deprecated dependencies.
  • Upgrade code to use built-in Java features where commons-* dependencies are no longer needed.
  • Upgrade to current non-Jakarta-affecting dependencies you may have been putting off (Log4j v2, JUnit v5, etc.).
  • Where possible, release JDK 17 changes first (upgrade the JDK using LTS versions 8 -> 11 -> 17).
  • Release a tech-debt update of your product or application. This allows for supporting two modern release streams: non-Jakarta and Jakarta.
  • Update frameworks to Jakarta EE versions.
  • Break up commits to have import-only changes for faster reviews.
  • For complex in-house 'framework' type components, consider releasing support for both javax and jakarta at the same time.
  • Add support for a client-side Jakarta EE module alongside existing modules in the javax release stream.
In Summary
Apache ActiveMQ was successful in its migration to Jakarta EE by tackling necessary technical debt and resisting the urge to incorporate too many changes. The transition was successful, and users were able to quickly adopt the ActiveMQ 6.x releases in their Jakarta EE projects. Additionally, since the wire protocol, configuration, and data formats did not change, older javax applications (and non-Java applications) were able to work seamlessly through an upgrade. This is an exciting time for Java developers as the ecosystem is rapidly adopting awesome new features and great language improvements. I'm interested in your feedback as you tackle Jakarta EE and JDK upgrades for projects of all sizes.
Reference Material
  Change type | Estimated level of effort
  Namespace change from "import javax…" to "import jakarta…" | Low
  Upgrade to JDK 17 | Medium
  Update Maven tooling to align with JDK 17 | Medium
  Update and refactor code to use updated Jakarta specification APIs | Medium
  Update and refactor code to use current dependencies that implement updated specification APIs | High
  Pay down technical debt | High
  Update and refactor code to drop any dependencies that are not current with Jakarta, JDK 17, or transitive dependencies | High
  Team impacts: managing change across the enterprise | High
ActiveMQ's Jakarta Migration Metrics
The following statistics are provided as a reference for the level of effort required in migrating a medium-sized, mature Java project to Jakarta EE.
  • PRs: 1
  • Commits: 25 (the number of intermediate commits is over 100)
  • Files changed: 1,425
  • Lines added: 9,514
  • Lines removed: 8,091
  • Modules dropped: 2* (1 is the transition module, which got a relocation)
  • Dependencies re-homed: 2
  • Frameworks dropped: 2
  • Deprecated J2EE specifications dropped: 1
  • PR work tasks: 28
  • CI build jobs: 80
Apache ActiveMQ 6.0.0 Jakarta Messaging 3.1.0 Release Summary
  • Permanently dropped module: activemq-partition (drops a deprecated Apache ZooKeeper test dependency)
  • Jakarta APIs: Jakarta Messaging, Jakarta XML, Jakarta Servlet, Jakarta Transaction
  • Upgraded key dependencies: Jetty v11, Spring v6, Java JDK 17+, Maven modules
  • Dropped JEE API specs that do not have a Jakarta version: j2ee-management (interfaces re-implemented locally in Apache ActiveMQ)
  • Re-homed test dependencies: stompjms Java STOMP client, joram-jms-tests JMS test utilities
  • Temporarily dropped dependencies that did not have Jakarta support at the time: Jolokia, Apache Camel. Note: both have been added back as of ActiveMQ 6.1.x.
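Returning to the exception-handling consideration discussed earlier in this article: the idea of a thin client-side wrapper can be sketched as below. This is not ActiveMQ's actual mapping code; the helper name and strategy are assumptions, and the snippet presumes both the javax.jms and jakarta.jms API jars are on the classpath, which is exactly the dual-namespace situation described above.
Java
// Hypothetical helper: translates a jakarta.jms exception for a legacy javax-based caller.
// Not ActiveMQ's internal mapping code; just an illustration of the idea.
public final class JmsExceptionTranslator {

    private JmsExceptionTranslator() {
    }

    public static javax.jms.JMSException toJavax(jakarta.jms.JMSException source) {
        // Preserve the reason and vendor error code, and keep the original as the cause.
        javax.jms.JMSException target =
                new javax.jms.JMSException(source.getMessage(), source.getErrorCode());
        target.initCause(source);
        return target;
    }
}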
Think in Graphs, Not Just Chains: JGraphlet for TaskPipelines
By Shaaf Syed
How to Migrate from Java 8 to Java 17+ Using Amazon Q Developer
By Prabhakar Mishra
Secure Your Spring Boot Apps Using Keycloak and OIDC
By Gunter Rotsaert
Monitoring Java Microservices on EKS Using New Relic APM and Kubernetes Metrics

Amazon EKS makes running containerized applications easier, but it doesn’t give you automatic visibility into JVM internals like memory usage or garbage collection. For Java applications, observability requires two levels of integration: Cluster-level monitoring for pods, nodes, and deploymentsJVM-level APM instrumentation for heap, GC, threads, latency, etc. New Relic provides both via Helm for infrastructure metrics, and a lightweight Java agent for full JVM observability. In containerized environments like Kubernetes, surface-level metrics (CPU, memory) aren’t enough. For Java apps, especially those built on Spring Boot, the real performance story lies inside the JVM. Without insight into heap usage, GC behavior, and thread activity, you're flying blind. New Relic bridges this gap by combining infrastructure-level monitoring (via Prometheus and kube-state-metrics) with application-level insights from the JVM agent. This dual visibility helps teams reduce mean time to resolution (MTTR), avoid OOMKilled crashes, and tune performance with confidence. This tutorial covers: Installing New Relic on EKS via HelmInstrumenting your Java microservice with New Relic’s Java agentJVM tuning for container environmentsMonitoring GC activity and memory usageCreating dashboards and alerts in New RelicOptional values.yaml file, YAML bundle, and GitHub repo Figure 1: Architecture of JVM monitoring on Amazon EKS using New Relic. The Java microservice runs inside an EKS pod with the New Relic JVM agent attached. It sends GC, heap, and thread telemetry to New Relic APM. At the same time, Prometheus collects Kubernetes-level metrics, which are forwarded to New Relic for unified observability. Prerequisites Amazon EKS cluster with kubectl and helm configuredA Java-based app (e.g., Spring Boot) deployed in EKSNew Relic account (free tier is enough)Basic understanding of JVM flags and Kubernetes manifests Install New Relic’s Kubernetes Integration (Helm) This installs the infrastructure monitoring components for cluster, pod, and container-level metrics. Step 1: Add the New Relic Helm repository Shell helm repo add newrelic https://helm-charts.newrelic.com helm repo update Step 2: Install the monitoring bundle Shell helm install newrelic-bundle newrelic/nri-bundle \ --set global.licenseKey=<NEW_RELIC_LICENSE_KEY> \ --set global.cluster=<EKS_CLUSTER_NAME> \ --namespace newrelic --create-namespace \ --set newrelic-infrastructure.enabled=true \ --set kube-state-metrics.enabled=true \ --set prometheus.enabled=true Replace <NEW_RELIC_LICENSE_KEY> and <EKS_CLUSTER_NAME> with your actual values. Instrument Your Java Microservice With the New Relic Agent Installing the Helm chart sets up cluster-wide observability, but to monitor JVM internals like heap usage, thread activity, or GC pauses, you need to attach the New Relic Java agent. 
This gives you: JVM heap, GC, thread metricsResponse times, error rates, transaction tracesGC pauses and deadlocks Dockerfile (add agent): Dockerfile ADD https://download.newrelic.com/newrelic/java-agent/newrelic-agent/current/newrelic-java.zip /opt/ RUN unzip /opt/newrelic-java.zip -d /opt/ JVM startup args: Shell -javaagent:/opt/newrelic/newrelic.jar Required environment variables: YAML - name: NEW_RELIC_APP_NAME value: your-app-name - name: NEW_RELIC_LICENSE_KEY valueFrom: secretKeyRef: name: newrelic-license key: license_key Create the secret: Shell kubectl create secret generic newrelic-license \ --from-literal=license_key=<YOUR_NEW_RELIC_LICENSE_KEY> Capture Kubernetes Metrics New Relic Helm install includes: newrelic-infrastructure → Node, pod, container metricskube-state-metrics → Kubernetes objectsprometheus-agent → Custom metrics support Verify locally: Shell kubectl top pods kubectl top nodes In New Relic UI, go to: Infrastructure → Kubernetes JVM Tuning for GC and Containers To avoid OOMKilled errors and track GC behavior, tune your JVM for Kubernetes: Recommended JVM Flags: Shell -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XshowSettings:vm -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/gc.log Make sure /tmp is writable or mount it via emptyDir. Pod resources: YAML resources: requests: memory: "512Mi" cpu: "250m" limits: memory: "1Gi" cpu: "500m" Align MaxRAMPercentage with limits.memory. Why JVM Monitoring Matters in Kubernetes Kubernetes enforces resource limits on memory and CPU, but by default, the JVM doesn’t respect those boundaries. Without proper tuning, the JVM might allocate more memory than allowed, triggering OOMKilled errors. Attaching the New Relic Java agent gives you visibility into GC pauses, heap usage trends, and thread health all of which are critical in autoscaling microservice environments. With these insights, you can fine-tune JVM flags like `MaxRAMPercentage`, detect memory leaks early, and make data-driven scaling decisions. Dashboards and Alerts in New Relic Create an alert for GC pause time: Go to Alerts & AI → Create alertSelect metric: JVM > GC > Longest GC pauseSet threshold: e.g., pause > 1000 ms Suggested Dashboards: JVM heap usageGC pause trendsPod CPU and memory usageError rate and latency Use New Relic’s dashboard builder or import JSON from your repo. Forwarding GC Logs to Amazon S3 While New Relic APM provides GC summary metrics, storing full GC logs is helpful for deep memory analysis, tuning, or post-mortem debugging. Since container logs are ephemeral, the best practice is to forward these logs to durable storage like Amazon S3. Why S3? Persistent log storage beyond pod restartsUseful for memory tuning, forensic reviews, or auditsCost-effective compared to real-time log ingestion services Option: Use Fluent Bit with S3 Output Plugin 1. Enable GC logging with: Shell -Xloggc:/tmp/gc.log 2. Mount /tmp with emptyDir in your pod 3. Deploy Fluent Bit as a sidecar or DaemonSet Make sure your pod or node has an IAM role with s3:PutObject permission to the target bucket. This setup ensures your GC logs are continuously shipped to S3 for safe, long-term retention even after the pod is restarted or deleted. 
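Before relying on dashboards, it can help to confirm that the container-aware JVM flags actually took effect. The snippet below is a small generic sanity check, not part of New Relic or of this tutorial's manifests; it prints the heap ceiling and CPU count the JVM detected inside the pod so you can compare them against limits.memory and the MaxRAMPercentage setting.
Java
// Minimal startup check: print what the JVM actually sees inside the container.
// Compare the reported max heap with limits.memory * MaxRAMPercentage from the pod spec.
public class JvmContainerCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxHeapMb = rt.maxMemory() / (1024 * 1024);
        System.out.println("Detected max heap (MB): " + maxHeapMb);
        System.out.println("Detected CPUs: " + rt.availableProcessors());
    }
}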
Troubleshooting Tips
  Problem | Fix
  APM data not showing | Verify license key, agent path, app traffic
  JVM metrics missing | Check -javaagent setup and environment vars
  GC logs not collected | Check -Xloggc path, permissions, volume mount
  Kubernetes metrics missing | Ensure Prometheus is enabled in Helm values
Check logs with:
Shell
kubectl logs <pod-name> --container <container-name>
Conclusion
New Relic allows you to unify infrastructure and application observability in Kubernetes environments. With JVM insights, GC visibility, and proactive alerts, DevOps and SRE teams can detect and resolve performance issues faster. After setting up JVM and Kubernetes monitoring, consider enabling distributed tracing to get visibility across service boundaries. You can also integrate New Relic alerts with Slack, PagerDuty, or Opsgenie to receive real-time incident notifications. Finally, use custom dashboards to compare performance across dev, staging, and production environments, helping your team catch regressions early and optimize for reliability at scale.

By Praveen Chaitanya Jakku
Prototype for a Java Database Application With REST and Security

Many times, while developing at work, I needed a template for a simple application from which to start adding specific code for the project at hand. In this article, I will create a simple Java application that connects to a database, exposes a few rest endpoints and secures those endpoints with role based access. The purpose is to have a minimal and fully working application that can then be customized for a particular task. For the databases, we will use PostgreSQL and for security, we will go with Keycloak, both deployed in containers. During development, I used podman to test that the containers are created correctly (an alternative to docker—they are interchangeable for the most part) as a learning experience. The application itself is developed using the Spring Boot framework with Flyway for database versioning. All of these technologies are industry standards in the Java EE world with a high chance to be used in a project. The requirement around which to build our prototype is a library application that exposes REST endpoints allowing the creation of authors, books, and the relationships between them. This will allow us to implement a many-to-many relationship that can then be expanded for any purpose imaginable. The fully working application can be found at https://github.com/ghalldev/db_proto The code snippets in this article are taken from that repository. Before creating the containers be sure to define the following environment variables with your preferred values (they are ommitted on purpose in the tutorial to avoid propagating default values used by multiple users): Shell DOCKER_POSTGRES_PASSWORD DOCKER_KEYCLOAK_ADMIN_PASSWORD DOCKER_GH_USER1_PASSWORD Configure PostgreSQL: Shell docker container create --name gh_postgres --env POSTGRES_PASSWORD=$DOCKER_POSTGRES_PASSWORD --env POSTGRES_USER=gh_pguser --env POSTGRES_INITDB_ARGS=--auth=scram-sha-256 --publish 5432:5432 postgres:17.5-alpine3.22 docker container start gh_postgres Configure Keycloak: first is the container creation and start: Shell docker container create --name gh_keycloak --env DOCKER_GH_USER1_PASSWORD=$DOCKER_GH_USER1_PASSWORD --env KC_BOOTSTRAP_ADMIN_USERNAME=gh_admin --env KC_BOOTSTRAP_ADMIN_PASSWORD=$DOCKER_KEYCLOAK_ADMIN_PASSWORD --publish 8080:8080 --publish 8443:8443 --publish 9000:9000 keycloak/keycloak:26.3 start-dev docker container start gh_keycloak after the container is up and running, we can go ahead and create the realm, user and roles (these command must to be run inside the running container): Shell cd $HOME/bin ./kcadm.sh config credentials --server http://localhost:8080 --realm master --user gh_admin --password $KC_BOOTSTRAP_ADMIN_PASSWORD ./kcadm.sh create realms -s realm=gh_realm -s enabled=true ./kcadm.sh create users -s username=gh_user1 -s email="[email protected]" -s firstName="gh_user1firstName" -s lastName="gh_user1lastName" -s emailVerified=true -s enabled=true -r gh_realm ./kcadm.sh set-password -r gh_realm --username gh_user1 --new-password $DOCKER_GH_USER1_PASSWORD ./kcadm.sh create roles -r gh_realm -s name=viewer -s 'description=Realm role to be used for read-only features' ./kcadm.sh add-roles --uusername gh_user1 --rolename viewer -r gh_realm ./kcadm.sh create roles -r gh_realm -s name=creator -s 'description=Realm role to be used for create/update features' ./kcadm.sh add-roles --uusername gh_user1 --rolename creator -r gh_realm ID_ACCOUNT_CONSOLE=$(./kcadm.sh get clients -r gh_realm --fields id,clientId | grep -B 1 '"clientId" : "account-console"' | grep -oP 
'[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}') ./kcadm.sh update clients/$ID_ACCOUNT_CONSOLE -r gh_realm -s 'fullScopeAllowed=true' -s 'directAccessGrantsEnabled=true' The user gh_user1 is created in the realm gh_realm with the roles viewer and creator. You may have noticed that instead of creating a new client we are using one of the default clients that come with Keycloak: account-console. This is for convenience reasons, in a real scenario you would create a specific client which would then be updated to have fullScopeAllowed(causes the realm roles to be added to the token - not added by default) and directAccessGrantsEnabled(allows token to be generated by using the openid-connect/token endpoint, from Keycloak, in our case with curl). The created roles can then be used inside the Java application to restrict access to certain functionality according to our agreed contract—the viewer can only access read-only operations while creator can do create, update and delete. Of course, in the same style all kinds of roles can be created for whatever reasons, as long as the agreed contract is well-defined and understood by everyone. The roles can be further added to groups, but that is not included in this tutorial. But before being able to actually use the roles, we have to tell the Java application how to extract the roles—this is needed since the way Keycloak adds the roles to the JWT is particular to it so we have to write a piece of custom code to translate them in something Spring Security can use: Java @Bean public JwtAuthenticationConverter jwtAuthenticationConverter() { //follow the same pattern as org.springframework.security.oauth2.server.resource.authentication.JwtGrantedAuthoritiesConverter Converter<Jwt, Collection<GrantedAuthority>> keycloakRolesConverter = new Converter<>() { private static final String DEFAULT_AUTHORITY_PREFIX = "ROLE_"; //https://github.com/keycloak/keycloak/blob/main/services/src/main/java/org/keycloak/protocol/oidc/TokenManager.java#L901 private static final String KEYCLOAK_REALM_ACCESS_CLAIM_NAME = "realm_access"; private static final String KEYCLOAK_REALM_ACCESS_ROLES = "roles"; @Override public Collection<GrantedAuthority> convert(Jwt source) { Collection<GrantedAuthority> grantedAuthorities = new ArrayList<>(); Map<String, List<String>> realmAccess = source.getClaim(KEYCLOAK_REALM_ACCESS_CLAIM_NAME); if (realmAccess == null) { logger.warn("No " + KEYCLOAK_REALM_ACCESS_CLAIM_NAME + " present in the JWT"); return grantedAuthorities; } List<String> roles = realmAccess.get(KEYCLOAK_REALM_ACCESS_ROLES); if (roles == null) { logger.warn("No " + KEYCLOAK_REALM_ACCESS_ROLES + " present in the JWT"); return grantedAuthorities; } roles.forEach( role -> grantedAuthorities.add(new SimpleGrantedAuthority(DEFAULT_AUTHORITY_PREFIX + role))); return grantedAuthorities; } }; JwtAuthenticationConverter jwtAuthenticationConverter = new JwtAuthenticationConverter(); jwtAuthenticationConverter.setJwtGrantedAuthoritiesConverter(keycloakRolesConverter); return jwtAuthenticationConverter; } There are other important configurations done in the AppConfiguration class, like enabling method security and disabling csrf. 
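The AppConfiguration class itself is not reproduced in this article, so the following is only a hedged sketch of what enabling method security, disabling CSRF, and wiring the JWT converter typically looks like with Spring Security 6; the class name is hypothetical and the actual code in the repository may differ.
Java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationConverter;
import org.springframework.security.web.SecurityFilterChain;

// Sketch only: the real AppConfiguration in the repository may be organized differently.
@Configuration
@EnableMethodSecurity // enables @PreAuthorize on controller methods
public class SecurityConfigSketch {

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http,
            JwtAuthenticationConverter jwtAuthenticationConverter) throws Exception {
        http.csrf(csrf -> csrf.disable())
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            // Reuses the Keycloak role converter bean defined earlier in the article.
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(jwt ->
                    jwt.jwtAuthenticationConverter(jwtAuthenticationConverter)));
        return http.build();
    }
}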
Now we can use the annotation org.springframework.security.access.prepost.PreAuthorize in the REST controller to restrict access:
Java
@PostMapping("/author")
@PreAuthorize("hasRole('creator')")
public void addAuthor(@RequestParam String name, @RequestParam String address) {
    authorService.add(new AuthorDto(name, address));
}

@GetMapping("/author")
@PreAuthorize("hasRole('viewer')")
public String getAuthors() {
    return authorService.allInfo();
}
In this way, only users that are authenticated successfully and hold the roles listed in hasRole can call the endpoints; otherwise they will get an HTTP 403 Forbidden error. After the containers have started and are configured, the Java application can start, but not before adding the database password. This can be done with an environment variable (below is a Linux shell example):
Shell
export SPRING_DATASOURCE_PASSWORD=$DOCKER_POSTGRES_PASSWORD
And now, if all is up and running correctly, we can use curl to test our application (all the commands below are Linux shell). Logging in with the previously created user gh_user1 and extracting the authentication token:
Shell
KEYCLOAK_ACCESS_TOKEN=$(curl -d 'client_id=account-console' -d 'username=gh_user1' -d "password=$DOCKER_GH_USER1_PASSWORD" -d 'grant_type=password' 'http://localhost:8080/realms/gh_realm/protocol/openid-connect/token' | grep -oP '"access_token":"\K[^"]*')
Creating a new author (this will test that the creator role works):
Shell
curl -X POST --data-raw 'name="GH_name1"&address="GH_address1"' -H "Authorization: Bearer $KEYCLOAK_ACCESS_TOKEN" 'localhost:8090/library/author'
Retrieving all the authors in the library (this will test that the viewer role works):
Shell
curl -X GET -H "Authorization: Bearer $KEYCLOAK_ACCESS_TOKEN" 'localhost:8090/library/author'
And with this, you should have all that is needed to create your own Java application, expanding and configuring it as needed.
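As a closing illustration of the many-to-many relationship mentioned at the start, a minimal JPA mapping between authors and books might look like the sketch below. Field names and the join table name are assumptions for illustration, not the actual entities from the db_proto repository.
Java
import java.util.HashSet;
import java.util.Set;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.JoinColumn;
import jakarta.persistence.JoinTable;
import jakarta.persistence.ManyToMany;

// Illustrative entities only, shown in one listing for brevity; in a real project each
// entity lives in its own file and the schema itself is created by Flyway migrations.
@Entity
class Author {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
    private String address;

    // Owning side of the many-to-many relationship.
    @ManyToMany
    @JoinTable(name = "author_book",
            joinColumns = @JoinColumn(name = "author_id"),
            inverseJoinColumns = @JoinColumn(name = "book_id"))
    private Set<Book> books = new HashSet<>();
}

@Entity
class Book {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String title;

    // Inverse side, mapped by the "books" field on Author.
    @ManyToMany(mappedBy = "books")
    private Set<Author> authors = new HashSet<>();
}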

By George Pod
Exploring QtJambi: A Java Wrapper for Qt GUI Development—Challenges and Insights

I recently experimented with QtJambi, a Java wrapper for the well-known Qt C++ library used to build GUIs. Here are some initial thoughts, remarks, and observations:
  • Building a QtJambi project can be somewhat challenging. It requires installing the Qt framework, configuring system paths to Qt's native libraries, and setting proper JVM options. Although it is possible to bundle native libraries within the wrapper JARs, I haven't tried this yet.
  • The overall development approach is clean and straightforward. You create windows or dialogs, add layouts, place widgets (components or controls) into those layouts, configure the widgets, and then display the window or dialog to the user. This model should feel familiar to anyone with GUI experience.
  • Diving deeper, QtJambi can become quite complex, comparable to usual Java Swing development. The API sometimes feels overly abstracted, with many layers that could potentially be simplified.
  • There is an abundance of overloaded methods and constructors, which can make it difficult to decide which ones to use. For example, the QShortcut class has 34 different constructors. This likely comes from a direct and not fully optimized mapping of the C++ Qt API.
  • Like Swing, QtJambi is not thread-safe. All GUI updates must occur on the QtJambi UI thread only. Ignoring this can cause crashes, not just improper UI refreshes as in Swing.
  • There is no code reuse between Java Swing and QtJambi. Even concepts that appear close and reusable are not shared. QtJambi is essentially a projection of C++ Qt's architecture and design patterns into Java, so learning it from scratch is necessary even for experienced Swing developers.
  • Using AI tools to learn QtJambi can be tricky. AI often mixes Java Swing concepts with QtJambi, resulting in code that won't compile. It can also confuse Qt's C++ idioms when translating them directly to Java, which doesn't always fit.
  • Despite being a native wrapper, QtJambi has some integration challenges, especially on macOS. For example, handling the application Quit event works differently, and only catching window-close events behaves properly out of the box. In contrast, native Java QuitHandler support is easier and more reliable there, but it doesn't work with QtJambi.
  • Mixing Java AWT with QtJambi is problematic. It may lead to odd behaviors or crashes, and the java.awt.Desktop class does not function in this context.
  • If you want a sometimes challenging Java GUI framework with crashes and quirks, QtJambi fits the bill! It brings a lot of power but also some complexity and instability compared to standard Java UI options.
  • There is a GUI builder that works with Qt, and it is possible to use its designs in QtJambi, either by generating source code or by loading designs at runtime. The only issue: the cost starts from $600 per year for small businesses to over $5,000 per year for larger companies.
Notable Applications Built With QtJambi
Notable applications built with QtJambi are few. One example is the Interactive Brokers desktop trading platform (IBKR Desktop), which uses QtJambi for its user interface. Beyond this, well-known commercial or open-source projects created specifically with QtJambi are scarce and often not widely publicized. Most QtJambi usage tends to be in smaller-scale or internal tools rather than major flagship applications. This limited visibility can make it challenging to pitch QtJambi adoption to decision-makers.
QtJambi Licensing
QtJambi doesn't have a separate commercial license; it inherits Qt's licensing model. Qt can be used under free LGPL/GPL licenses if you comply with their terms, or under paid commercial licenses that provide additional advantages and fewer restrictions. Make sure to check your ability to comply with LGPL/GPL or your need for commercial licensing before proceeding.
Should You Consider QtJambi for Your Desktop Apps?
There are three strong contenders for desktop applications: Java Swing, JavaFX, and QtJambi. There is also SWT, but I would prefer to avoid it. If you already have stable, well-functioning Java Swing applications and lack the resources or justification to rewrite them, staying with Swing is usually the best approach. The effort and risk of migrating large, mature codebases often outweigh the benefits unless there is a strong business case. For new desktop projects, both JavaFX and QtJambi are worth evaluating.
JavaFX is typically safest when you want:
  • A well-supported, familiar Java-based framework.
  • Easier development, packaging, and deployment.
  • Powerful animations, modern UIs, and broad tooling and community support.
  • Reliable, long-term support for business needs without high-performance or native integration demands.
QtJambi is a strong choice if your application requires:
  • Superior graphics performance and efficient rendering.
  • A native look and feel across platforms.
  • Responsive, complex interfaces, or advanced custom widget support.
Be prepared for a steeper learning curve, more complex build processes, possible native library management, and licensing issues.
Summary
QtJambi is a performant and powerful, yet sometimes complex, Java wrapper for the Qt GUI framework, providing a native look and feel along with a wide range of advanced widgets. While it is well-suited for high-performance, native-like applications, it comes with a steep learning curve, more intricate setup requirements, and limited community support compared to JavaFX or Swing. Despite these challenges, QtJambi is worth considering for developers who need cross-platform consistency, efficient rendering, and access to Qt's rich feature set.
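To give a feel for the window/layout/widget flow described above, here is a minimal, hedged example. It assumes a recent QtJambi release where the widget classes live in the io.qt.widgets package and that the Qt native libraries are already configured; double-check class and method names against the QtJambi version you actually use.
Java
import io.qt.widgets.QApplication;
import io.qt.widgets.QLabel;
import io.qt.widgets.QPushButton;
import io.qt.widgets.QVBoxLayout;
import io.qt.widgets.QWidget;

// Minimal window: create a widget, add a layout, place controls, then show it.
public class HelloQtJambi {
    public static void main(String[] args) {
        QApplication.initialize(args);          // start the Qt runtime

        QWidget window = new QWidget();
        window.setWindowTitle("Hello QtJambi");

        QVBoxLayout layout = new QVBoxLayout(window);
        layout.addWidget(new QLabel("A label"));
        layout.addWidget(new QPushButton("A button"));

        window.show();                          // display the window
        QApplication.exec();                    // enter the Qt event loop
        QApplication.shutdown();                // release native resources
    }
}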

By Gregory Ledenev
Spring Cloud Gateway With Service Discovery Using HashiCorp Consul

This article will explain some basics of the HashiCorp Consul service and its configurations. It is a service networking solution that provides service registry and discovery capabilities, which integrate seamlessly with Spring Boot. You may have heard of Netflix Eureka; here, Consul works similarly but offers many additional features. Notably, it supports the modern reactive programming paradigm. I will walk you through with the help of some applications. Used Libraries Spring BootSpring Cloud GatewaySpring Cloud ConsulSpring Boot Actuator The architecture includes three main components: ConsulService applicationGateway 1. Consul We have to download and install the Consul service in the system from the Hashicorp Consul official website. For development purposes, we have to start it using a command in PowerShell (in Windows). PowerShell consul agent -dev Consul Dashboard This is the place where we can see all the applications registered with Consul. The default port for accessing the Consul dashboard is 8500. Once it starts successfully, you will see something like below. The next step is to register the Gateway and Service applications to Consul. Once those are added, they will appear in this same dashboard. When multiple instances of the same service are running, Consul continuously monitors their health using "Actuator." If any of them report an unhealthy status, Consul will automatically deregister them from the registry. 2. Service Application It is a simple service application for exposing the APIs. We added an @EnableDiscoveryClient annotation in the main class to register the service in Consul for service discovery. If you run the application under multiple ports then you can see multiple instances in consul dashboard. Used the Actuator to expose the health status. Main Class Java @SpringBootApplication @EnableDiscoveryClient public class ServiceApp { public static void main(String[] args) { SpringApplication.run(ServiceApp.class, args); } } Maven Configuration XML <properties> <java.version>21</java.version> <spring.cloud.version>2023.0.4</spring.cloud.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-webflux</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-consul-all</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> </dependencies> Application Property File Properties files # Assigning a unique name for the service spring.application.name=service-app # Application will use random ports server.port=0 spring.webflux.base-path=/userService logback.log.file.path=./logs/service # ~~~ Consul Configuration ~~~ # It assigns a unique ID to each instance of the service when running multiple instances, # allowing them to be registered individually in Consul for service discovery. spring.cloud.consul.discovery.instance-id=${spring.application.name}-${server.port}-${random.int[1,99]} # To access centralized configuration data from Consul spring.cloud.consul.config.enabled=false # To register the service in Consul using its IP address instead of the hostname. 
spring.cloud.consul.discovery.prefer-ip-address=true # The service will register itself in Consul under this name, which the gateway will use for service discovery while routing requests. spring.cloud.consul.discovery.service-name=${spring.application.name} # Ip to communicate with consul server spring.cloud.consul.host=localhost # Consul runs on port 8500 by default, unless it is explicitly overridden in the configuration. spring.cloud.consul.port=8500 # Remapping the Actuator URL in Consul since a base path has been added. spring.cloud.consul.discovery.health-check-path=${spring.webflux.base-path}/actuator/health # Time interval to check the health of service. spring.cloud.consul.discovery.health-check-interval=5s # Time need to wait for the health check response before considering it as timed out spring.cloud.consul.discovery.health-check-timeout=5s # The maximum amount of time a service can remain in an unhealthy state before Consul marks it as critical and removes it from the service catalog. #spring.cloud.consul.discovery.health-check-critical-timeout=1m Sample API Java @GetMapping(value = "getStatus", produces = MediaType.APPLICATION_JSON_VALUE) public Mono<ResponseEntity<Object>> healthCheck() { logger.info("<--- Service to get status request : received --->"); logger.info("<--- Service to get status response : given --->"); return Mono.just(ResponseEntity.ok("Success from : " + portListener.getPort())); } 3. Gateway It is developed with the help of Spring Cloud Gateway. And it consists of the same libraries as the Service application. Consul is used for registering and service discovery of the application. Used the Actuator to expose the health status. Main Class Java @SpringBootApplication @EnableDiscoveryClient public class GatewayApp { public static void main(String[] args) { SpringApplication.run(GatewayApp.class, args); } } Maven Configuration Java <properties> <java.version>21</java.version> <spring.cloud.version>2023.0.4</spring.cloud.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-webflux</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-consul-all</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-gateway</artifactId> </dependency> </dependencies> Application Property File Properties files # Assigning a unique name for the service spring.application.name=gateway-app server.port=3000 logback.log.file.path=./logs/gateway # ~~~ Consul Configuration ~~~ # It is used in Spring Cloud Gateway to handle automatic route discovery from a service registry # When we are configuring as false, we have to explicitly configure routing of each API requests. 
spring.cloud.gateway.discovery.locator.enabled=false spring.cloud.consul.discovery.instance-id=${spring.application.name}-${server.port}-${random.int[1,99]} spring.cloud.consul.config.enabled=false spring.cloud.consul.discovery.prefer-ip-address=true spring.cloud.consul.discovery.service-name=${spring.application.name} spring.cloud.consul.host=localhost spring.cloud.consul.port=8500 Since we have set spring.cloud.gateway.discovery.locator.enabled to false, we need to explicitly configure the routing for each API request as shown below. For the routing destination URL, instead of specifying the actual URL of the service application, we map it to the load-balanced (lb) URL provided by Consul using the service name. In normal gateway spring.cloud.gateway.routes[0].uri=http://192.168.1.10:5000In service discovery enabled gateway spring.cloud.gateway.routes[0].uri=lb://service-app Properties files #~~~ Example for a url routing ~~~ spring.cloud.gateway.routes[0].id=0 # Instead of configuring the actual url of service application, we are mapping in to the lb url of "consul" with service name. spring.cloud.gateway.routes[0].uri=lb://service-app # Rest of the configuration will keep as same as spring cloud gateway configuration spring.cloud.gateway.routes[0].predicates[0]=Path=/userService/** spring.cloud.gateway.routes[0].filters[0]=RewritePath=/userService/(?<segment>.*), /userService/${segment} spring.cloud.gateway.routes[0].filters[1]=PreserveHostHeader Final Consul Dashboard Here, we can see one instance of gateway-app and two instances of service-app, as I am running two instances of the service app under different ports. Testing Let's test it by calling a sample API through the gateway to verify that it's working. Upon the first API call: Upon the second API call:We can see that each time the API returns a response from a different instance. GitHub Please check here to get the full project. Thanks for reading!
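One closing note on the routing configuration: Spring Cloud Gateway can express the same route with its Java DSL instead of properties. The sketch below mirrors the property-based route shown earlier (path predicate, rewrite and host-header filters, lb://service-app URI); treat it as an illustration rather than a drop-in replacement for the sample project's configuration.
Java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator userServiceRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                // "lb://service-app" resolves instances of service-app registered in Consul.
                .route("user-service", r -> r
                        .path("/userService/**")
                        .filters(f -> f
                                .rewritePath("/userService/(?<segment>.*)", "/userService/${segment}")
                                .preserveHostHeader())
                        .uri("lb://service-app"))
                .build();
    }
}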

By Vishnu Viswambharan
Java 21 Virtual Threads vs Cached and Fixed Threads

Introduction Concurrent programming remains a crucial part of building scalable, responsive Java applications. Over the years, Java has steadily enhanced its multithreaded programming capabilities. This article reviews the evolution of concurrency from Java 8 through Java 21, highlighting important improvements and the impactful addition of virtual threads introduced in Java 21. Starting with Java 8, the concurrency API saw significant enhancements such as Atomic Variables, Concurrent Maps, and the integration of lambda expressions to enable more expressive parallel programming. Key improvements introduced in Java 8 include: Threads and Executors, Synchronization and Locks, and Atomic Variables and ConcurrentMap. Java 21, released in late 2023, brought a major evolution with virtual threads, fundamentally changing how Java applications can handle large numbers of concurrent tasks. Virtual threads enable higher scalability for server applications, while maintaining the familiar thread-per-request programming model. Probably the most important feature in Java 21 is Virtual Threads. In Java 21, the basic concurrency model of Java remains unchanged, and the Stream API is still the preferred way to process large data sets in parallel. With the introduction of Virtual Threads, the Concurrent API now delivers better performance. In today’s world of microservices and scalable server applications, the number of threads must grow to meet demand. The main goal of Virtual Threads is to enable high scalability for server applications, while still using the simple thread-per-request model. Virtual Threads Before Java 21, the JDK’s thread implementation used thin wrappers around operating system (OS) threads. However, OS threads are expensive: If each request consumes an OS thread for its entire duration, the number of threads quickly becomes a scalability bottleneck. Even when thread pools are used, throughput is still limited because the actual number of threads is capped. The aim of Virtual Threads is to break the 1:1 relationship between Java threads and OS threads. A virtual thread applies a concept similar to virtual memory. Just like virtual memory maps a large address space to a smaller physical memory, Virtual Threads allow the runtime to create the illusion of having many threads by mapping them to a small number of OS threads. Platform threads (traditional threads) are thin wrappers around OS threads. Virtual Threads, on the other hand, are not tied to any specific OS thread. A virtual thread can execute any code that a platform thread can run. This is a major advantage—existing Java code can often run on virtual threads with little or no modification. Virtual threads are hosted by platform threads ("carriers"), which are still scheduled by the OS. For example, you can create an executor with virtual threads like this: Java ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor(); Example With Comparison Virtual threads only consume an OS thread while they are actively executing code. A virtual thread can be mounted on and unmounted from different carrier threads throughout its lifecycle. Typically, a virtual thread will unmount itself when it encounters a blocking operation (such as I/O or a database call). Once that blocking task is complete, the virtual thread resumes execution by being mounted on any available carrier thread. This mounting and unmounting process occurs frequently and transparently—without blocking OS threads. 
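Before moving on to the executor-based benchmark examples below, here is a minimal, self-contained sketch of creating virtual threads directly with the Java 21 thread API; the printed messages and sleep duration are illustrative only.
Java
import java.time.Duration;

public class VirtualThreadHello {

    public static void main(String[] args) throws InterruptedException {
        // Start a virtual thread directly; no executor or pool is required.
        Thread vt = Thread.startVirtualThread(() -> {
            try {
                // While sleeping, the virtual thread unmounts from its carrier,
                // so no OS thread is blocked during the wait.
                Thread.sleep(Duration.ofMillis(100));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("Running on: " + Thread.currentThread());
        });

        // The builder API offers the same thing with naming control.
        Thread named = Thread.ofVirtual().name("worker-1").start(
                () -> System.out.println("Named virtual thread: " + Thread.currentThread()));

        vt.join();
        named.join();
    }
}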
Example — Source Code Example01CachedThreadPool.java In this example, an executor is created using a Cached Thread Pool: Java var executor = Executors.newCachedThreadPool() Java package threads; import java.time.Duration; import java.util.concurrent.Executors; import java.util.stream.IntStream; /** * * @author Milan Karajovic <[email protected]> * */ public class Example01CachedThreadPool { public void executeTasks(final int NUMBER_OF_TASKS) { final int BLOCKING_CALL = 1; System.out.println("Number of tasks which executed using 'newCachedThreadPool()' " + NUMBER_OF_TASKS + " tasks each."); long startTime = System.currentTimeMillis(); try (var executor = Executors.newCachedThreadPool()) { IntStream.range(0, NUMBER_OF_TASKS).forEach(i -> { executor.submit(() -> { // simulate a blocking call (e.g. I/O or db operation) Thread.sleep(Duration.ofSeconds(BLOCKING_CALL)); return i; }); }); } catch (Exception e) { throw new RuntimeException(e); } long endTime = System.currentTimeMillis(); System.out.println("For executing " + NUMBER_OF_TASKS + " tasks duration is: " + (endTime - startTime) + " ms"); } } Java package threads; import org.junit.jupiter.api.MethodOrderer; import org.junit.jupiter.api.Order; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.TestMethodOrder; /** * * @author Milan Karajovic <[email protected]> * */ @TestMethodOrder(MethodOrderer.OrderAnnotation.class) public class Example01CachedThreadPoolTest { @Test @Order(1) public void test_1000_tasks() { Example01CachedThreadPool example01CachedThreadPool = new Example01CachedThreadPool(); example01CachedThreadPool.executeTasks(1000); } @Test @Order(2) public void test_10_000_tasks() { Example01CachedThreadPool example01CachedThreadPool = new Example01CachedThreadPool(); example01CachedThreadPool.executeTasks(10_000); } @Test @Order(3) public void test_100_000_tasks() { Example01CachedThreadPool example01CachedThreadPool = new Example01CachedThreadPool(); example01CachedThreadPool.executeTasks(100_000); } @Test @Order(4) public void test_1_000_000_tasks() { Example01CachedThreadPool example01CachedThreadPool = new Example01CachedThreadPool(); example01CachedThreadPool.executeTasks(1_000_000); } } Test results on my PC: Example02FixedThreadPool.java Executor is created using Fixed Thread Pool: Java var executor = Executors.newFixedThreadPool(500) Java package threads; import java.time.Duration; import java.util.concurrent.Executors; import java.util.stream.IntStream; /** * * @author Milan Karajovic <[email protected]> * */ public class Example02FixedThreadPool { public void executeTasks(final int NUMBER_OF_TASKS) { final int BLOCKING_CALL = 1; System.out.println("Number of tasks which executed using 'newFixedThreadPool(500)' " + NUMBER_OF_TASKS + " tasks each."); long startTime = System.currentTimeMillis(); try (var executor = Executors.newFixedThreadPool(500)) { IntStream.range(0, NUMBER_OF_TASKS).forEach(i -> { executor.submit(() -> { // simulate a blocking call (e.g. 
I/O or db operation) Thread.sleep(Duration.ofSeconds(BLOCKING_CALL)); return i; }); }); } catch (Exception e) { throw new RuntimeException(e); } long endTime = System.currentTimeMillis(); System.out.println("For executing " + NUMBER_OF_TASKS + " tasks duration is: " + (endTime - startTime) + " ms"); } } Java package threads; import org.junit.jupiter.api.MethodOrderer; import org.junit.jupiter.api.Order; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.TestMethodOrder; /** * * @author Milan Karajovic <[email protected]> * */ @TestMethodOrder(MethodOrderer.OrderAnnotation.class) public class Example02FixedThreadPoolTest { @Test @Order(1) public void test_1000_tasks() { Example02FixedThreadPool example02FixedThreadPool = new Example02FixedThreadPool(); example02FixedThreadPool.executeTasks(1000); } @Test @Order(2) public void test_10_000_tasks() { Example02FixedThreadPool example02FixedThreadPool = new Example02FixedThreadPool(); example02FixedThreadPool.executeTasks(10_000); } @Test @Order(3) public void test_100_000_tasks() { Example02FixedThreadPool example02FixedThreadPool = new Example02FixedThreadPool(); example02FixedThreadPool.executeTasks(100_000); } @Test @Order(4) public void test_1_000_000_tasks() { Example02FixedThreadPool example02FixedThreadPool = new Example02FixedThreadPool(); example02FixedThreadPool.executeTasks(1_000_000); } } Test results on my PC: Example03VirtualThread.java Executor is created using Virtual Thread Per Task Executor: Java var executor = Executors.newVirtualThreadPerTaskExecutor() Java package threads; import java.time.Duration; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.stream.IntStream; /** * * @author Milan Karajovic <[email protected]> * */ public class Example03VirtualThread { public void executeTasks(final int NUMBER_OF_TASKS) { final int BLOCKING_CALL = 1; System.out.println("Number of tasks which executed using 'newVirtualThreadPerTaskExecutor()' " + NUMBER_OF_TASKS + " tasks each."); long startTime = System.currentTimeMillis(); try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) { IntStream.range(0, NUMBER_OF_TASKS).forEach(i -> { executor.submit(() -> { // simulate a blocking call (e.g. 
I/O or db operation) Thread.sleep(Duration.ofSeconds(BLOCKING_CALL)); return i; }); }); } catch (Exception e) { throw new RuntimeException(e); } long endTime = System.currentTimeMillis(); System.out.println("For executing " + NUMBER_OF_TASKS + " tasks duration is: " + (endTime - startTime) + " ms"); } } Java package threads; import org.junit.jupiter.api.MethodOrderer; import org.junit.jupiter.api.Order; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.TestMethodOrder; /** * * @author Milan Karajovic <[email protected]> * */ @TestMethodOrder(MethodOrderer.OrderAnnotation.class) public class Example03VirtualThreadTest { @Test @Order(1) public void test_1000_tasks() { Example03VirtualThread example03VirtualThread = new Example03VirtualThread(); example03VirtualThread.executeTasks(1000); } @Test @Order(2) public void test_10_000_tasks() { Example03VirtualThread example03VirtualThread = new Example03VirtualThread(); example03VirtualThread.executeTasks(10_000); } @Test @Order(3) public void test_100_000_tasks() { Example03VirtualThread example03VirtualThread = new Example03VirtualThread(); example03VirtualThread.executeTasks(100_000); } @Test @Order(4) public void test_1_000_000_tasks() { Example03VirtualThread example03VirtualThread = new Example03VirtualThread(); example03VirtualThread.executeTasks(1_000_000); } @Test @Order(5) public void test_2_000_000_tasks() { Example03VirtualThread example03VirtualThread = new Example03VirtualThread(); example03VirtualThread.executeTasks(2_000_000); } } Test results on my PC: Conclusion You can clearly see the difference in execution time (in milliseconds) between the various executor implementations used to process all NUMBER_OF_TASKS. It's worth experimenting with different values for NUMBER_OF_TASKS to observe how performance varies. The advantage of virtual threads becomes especially noticeable with large task counts. When NUMBER_OF_TASKS is set to a high number—such as 1,000,000—the performance gap is significant. Virtual threads are much more efficient at handling a large volume of tasks, as demonstrated in the table below: I'm confident that after this clarification, if your application processes a large number of tasks using the concurrent API, you'll strongly consider moving to Java 21 and taking advantage of virtual threads. In many cases, this shift can significantly improve the performance and scalability of your application. Source code: GitHub Repository – Comparing Threads in Java 21

By Milan Karajovic
Building a Real-Time Data Mesh With Apache Iceberg and Flink

If you’ve ever tried to scale your organization’s data infrastructure beyond a few teams, you know how fast a carefully planned “data lake” can degenerate into an unruly “data swamp.” Pipelines are pushing files nonstop, tables sprout like mushrooms after a rainy day, and no one is quite sure who owns which dataset. Meanwhile, your real-time consumers are impatient for fresh data, your batch pipelines crumble on every schema change, and governance is an afterthought at best. At that point, someone in a meeting inevitably utters the magic word: data mesh. Decentralized data ownership, domain-oriented pipelines, and self-service access all sound perfect on paper. But in practice, it can feel like you’re trying to build an interstate highway system while traffic is already barreling down dirt roads at full speed. This is where Apache Iceberg and Apache Flink come to the rescue. Iceberg delivers database-like reliability on top of your data lake, while Flink offers real-time, event-driven processing at scale. Together, they form the backbone of a Data Mesh that actually works — complete with time travel, schema evolution, and ACID guarantees. Best of all, you don’t need to sign away your soul to a proprietary vendor ecosystem. The Data Mesh Pain Points Before diving into the solution, let’s be brutally honest about what happens when organizations adopt Data Mesh without robust infrastructure: Unclear ownership – Multiple teams write to the same tables, creating chaos. Schema drift – An upstream service silently adds or changes a column, and downstream consumers break without warning. Inconsistent data states – Real-time pipelines read half-written data while batch jobs rewrite partitions mid-flight. Governance nightmares – Regulators ask what data you served last quarter, and your only answer is a nervous shrug. The dream of self-service analytics quickly devolves into constant firefighting. Teams need real-time streams, historical replay, and reproducible datasets, yet traditional data lakes weren’t designed for these requirements. They track files, not logical datasets, and they lack strong consistency or concurrency control. Why Iceberg + Flink Changes the Game Apache Iceberg: Reliability Without Lock-In Time travel lets you query historical table states — no more guesswork about last month’s data. Schema evolution enables adding, renaming, or promoting columns without breaking readers. ACID transactions prevent race conditions and ensure readers never see partial writes. Open table format works with Spark, Flink, Trino, Presto, or even plain SQL — no vendor lock-in. Apache Flink: True Real-Time Processing Exactly-once semantics for event streams ensure clean, accurate writes. Unified streaming and batch in one engine eliminates separate pipeline maintenance. Stateful processing supports building materialized views and aggregations directly over streams. Together, they allow domain-oriented teams to produce real-time, governed data products that behave like versioned datasets rather than fragile event logs. Iceberg Fundamentals for a Real-Time Mesh Time Travel for Debugging and Auditing Iceberg snapshots track every table change. Need to see your sales table during Black Friday? Just run: SQL SELECT * FROM sales_orders FOR SYSTEM_VERSION AS OF 1234567890; This isn’t just a convenience for analysts — it’s essential for regulatory compliance and operational debugging. Schema Evolution Without Breaking Pipelines Iceberg assigns stable column IDs and supports type promotion. 
Adding fields to Flink sink tables won’t disrupt downstream jobs: SQL ALTER TABLE customer_data ADD COLUMN preferred_language STRING; Even renaming columns is safe, since logical identity is decoupled from physical layout. ACID Transactions to Prevent Data Races In a true Data Mesh, multiple teams may publish into adjacent partitions. Iceberg ensures isolation, so readers never see half-written data — even when concurrent Flink jobs perform upserts or CDC ingestion. Flink + Iceberg in Action Consider a real-time product inventory domain: Step 1: Define an Iceberg Table for Product Events SQL CREATE TABLE product_events ( product_id BIGINT, event_type STRING, quantity INT, warehouse STRING, event_time TIMESTAMP, ingestion_time TIMESTAMP ) USING ICEBERG PARTITIONED BY (days(event_time)); Step 2: Stream Updates With Flink Flink ingests from Kafka (or any source), transforms data, and writes directly into Iceberg: Java TableDescriptor icebergSink = TableDescriptor.forConnector("iceberg") .option("catalog-name", "my_catalog") .option("namespace", "inventory") .option("table-name", "product_events") .format("parquet") .build(); table.executeInsert(icebergSink); Every commit becomes an Iceberg snapshot — no more wondering if your table is consistent. Step 3: Build Derived Domain Tables Another Flink job aggregates events into a fresh inventory table (a sketch of the aggregation job itself appears at the end of this article): SQL CREATE TABLE current_inventory ( product_id BIGINT, total_quantity INT, last_update TIMESTAMP ) USING ICEBERG PARTITIONED BY (product_id); Data Mesh Superpowers With Iceberg + Flink Reproducibility – Run analytics against any historical table snapshot. Decentralized ownership – Each domain team owns its tables, yet they remain queryable mesh-wide. Unified real-time and batch – Flink handles both streaming ingestion and historical backfills. Interoperability – Iceberg tables are queryable via Spark, Trino, Presto, or standard SQL engines. Operational Best Practices Partition on real query dimensions (often temporal). Avoid tiny files and over-partitioning. Automate compaction and snapshot cleanup to maintain predictable performance. Validate schema changes in CI/CD pipelines to catch rogue columns early. Monitor metadata – Iceberg exposes metrics on partition pruning, file size, and snapshot lineage. Lessons Learned from Production Start small – Migrate one domain at a time to avoid a “big bang” failure. Automate governance – Use table metadata to track ownership without adding manual overhead. Use snapshot tags for milestones – Quarterly closes, product launches, or audit checkpoints become easy to reproduce. Document partitioning strategies – Your future self will thank you when query performance needs tuning. The Bottom Line Apache Iceberg and Apache Flink give you the building blocks for a real-time Data Mesh that actually scales and stays sane. With time travel, schema evolution, and ACID guarantees, you can replace brittle pipelines and ad hoc governance with a stable, future-proof platform. You no longer need to choose between speed and reliability or sacrifice flexibility for vendor lock-in. The result? Teams deliver data products faster. Analysts trust the numbers.
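To make Step 3 concrete, here is a minimal sketch of that continuous aggregation, expressed as Flink SQL submitted from Java through the Table API (recent Flink versions). The table and catalog names reuse the examples above; the catalog is assumed to be registered already, and the sketch further assumes current_inventory is backed by a sink that accepts updating (upsert) results, so treat it as an outline rather than the article's verbatim job.
Java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class InventoryAggregationJob {

    public static void main(String[] args) {
        // Plain Table API environment; the Iceberg catalog ("my_catalog") is assumed to be
        // registered already, e.g. via a CREATE CATALOG statement or catalog configuration.
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Continuously fold raw product events into the derived inventory table.
        // Note: an updating aggregation like this needs an upsert-capable sink configuration.
        tEnv.executeSql(
                "INSERT INTO my_catalog.inventory.current_inventory "
                        + "SELECT product_id, "
                        + "       CAST(SUM(quantity) AS INT) AS total_quantity, "
                        + "       MAX(event_time) AS last_update "
                        + "FROM my_catalog.inventory.product_events "
                        + "GROUP BY product_id");
    }
}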

By Subrahmanyam Katta
Filtering Java Stack Traces With MgntUtils Library

Introduction: Problem Definition and Suggested Solution Idea This is a technical article for Java developers that suggests a solution for a major pain point: analyzing very long stack traces in search of meaningful information buried in a pile of framework-related stack trace lines. The core idea of the solution is to provide a capability to intelligently filter out irrelevant parts of the stack trace without losing important and meaningful information. The benefits are two-fold: 1. Making the stack trace much easier to read and analyze by making it clearer and more concise 2. Making the stack trace much shorter and saving space A stack trace is a lifesaver when debugging or trying to figure out what went wrong in your application. However, when working with logs on the server side, you can come across a huge stack trace that contains a long, useless tail of various framework and application-server-related packages. And somewhere in this pile, there are several lines of relevant trace, possibly in different segments separated by useless information. It becomes a nightmare to search for the relevant parts. Here is a link, "Filtering the Stack Trace From Hell," that describes the same problem with real-life examples (not for the fainthearted :)). Despite the obvious value of this capability, the Java ecosystem offers very few, if any, libraries with built-in support for stack trace filtering out of the box. Developers often resort to writing custom code or regex filters to parse and shorten stack traces—an ad-hoc and fragile solution that’s hard to maintain and reuse. Some logging frameworks such as Log4J and Logback might provide basic filtering options based on log levels or format, but they don't typically allow for granular control over stack trace content. How the Solution Works and How to Use It The utility is provided as part of an open-source Java library called MgntUtils. It is available on Maven Central as well as on GitHub (including source code and Javadoc). Here is a direct link to the Javadoc. The solution implementation is provided in the class TextUtils in the method getStacktrace() with several overloaded signatures. Here is the direct Javadoc link to the getStacktrace() method with a detailed explanation of the functionality. So the solution is that the user can set a relevant package prefix (or multiple prefixes, starting with MgntUtils library version 1.7.0.0) for the packages that are relevant. The stack trace filtering will work based on the provided prefixes in the following way: 1. The error message is always printed. 2. The first lines of the stack trace are always printed as well until the first line matching one of the prefixes is found. 3. Once the first line matching one of the prefixes is found, this line and all the following lines that ARE matching one of the prefixes will be printed. 4. Once the first line that is NOT matching any of the prefixes is found, this first non-matching line is printed, but all the following non-matching lines are replaced with the single line ". . ." 5. If at some point another line matching one of the prefixes is found, this line and all the following matching lines will be printed, and the logic just keeps looping between points 4 and 5. A stack trace can consist of several parts, such as the main section, the "Caused by" section, and the "Suppressed" section. Each part is filtered as a separate section according to the logic described above. 
Also, the same utility (starting from version 1.5.0.3) has a getStacktrace() method that takes a CharSequence instead of a Throwable and thus allows filtering and shortening a stack trace stored as a string the same way as a stack trace extracted from a Throwable (a short sketch of this string-based variant appears at the end of this article). So, essentially, stack traces can be filtered "on the fly" at run time or later on from any text source such as a log. (Just to clarify: the utility does not support parsing and modifying an entire log file. It supports filtering just a stack trace that is passed as a String. So if anyone wants to filter exceptions in a log file, they would have to parse the log file, extract the stack trace(s) as separate strings, and then use this utility to filter each individual stack trace.) Here is a usage example. Note that the first parameter of the getStacktrace() method in this example is a Throwable. Let's say your company's code always resides in packages that start with "com.plain.*". So you set such a prefix and do this: Java logger.info(TextUtils.getStacktrace(e,true,"com.plain.")); This will filter out all the useless parts of the trace according to the logic described above, leaving you with a very concise stack trace. Also, the user can pre-set the prefix (or multiple prefixes) and then just use the convenience method: Java TextUtils.getStacktrace(e); It will do the same. To preset the prefix, just use the method: Java TextUtils.setRelevantPackage("com.plain."); Method setRelevantPackage() supports setting multiple prefixes, so you can use it like this: Java TextUtils.setRelevantPackage("com.plain.", "com.encrypted."); If you would like to pre-set this value via configuration, then starting with library version 1.1.0.1 you can set the environment variable "MGNT_RELEVANT_PACKAGE" or the System property "mgnt.relevant.package" to the value "com.plain.", and the property will be set to that value without invoking the method TextUtils.setRelevantPackage("com.plain."); explicitly in your code. Note that the System property value takes precedence over the environment variable if both are set. Just a reminder that with a System property you can add it to your command line using the -D flag: "-Dmgnt.relevant.package=com.plain." IMPORTANT: Note that for both the environment variable and the system property, if multiple prefixes need to be set, list them one after another separated by a semicolon (;). For example: "com.plain.;com.encrypted." There is also some flexibility here: if you have pre-set prefixes but for some particular case you wish to filter according to a different set of prefixes, you can use the method signature that takes prefixes as a parameter, and it will override the globally pre-set prefixes just for this call: Java logger.info(TextUtils.getStacktrace(e,true,"org.alternative.")); Here is an example of a filtered vs. unfiltered stack trace. You will get the following filtered stack trace: Plain Text com.plain.BookListNotFoundException: Internal access error at com.plain.BookService.listBooks() at com.plain.BookService$$FastClassByCGLIB$$e7645040.invoke() at net.sf.cglib.proxy.MethodProxy.invoke() ... at com.plain.LoggingAspect.logging() at sun.reflect.NativeMethodAccessorImpl.invoke0() ... 
at com.plain.BookService$$EnhancerByCGLIB$$7cb147e4.listBooks() at com.plain.web.BookController.listBooks() instead of the unfiltered version: Plain Text com.plain.BookListNotFoundException: Internal access error at com.plain.BookService.listBooks() at com.plain.BookService$$FastClassByCGLIB$$e7645040.invoke() at net.sf.cglib.proxy.MethodProxy.invoke() at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint() at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed() at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed() at com.plain.LoggingAspect.logging() at sun.reflect.NativeMethodAccessorImpl.invoke0() at sun.reflect.NativeMethodAccessorImpl.invoke() at sun.reflect.DelegatingMethodAccessorImpl.invoke() at java.lang.reflect.Method.invoke() at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs() at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod() at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke() at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed() at org.springframework.aop.interceptor.AbstractTraceInterceptor.invoke() at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed() at org.springframework.transaction.interceptor.TransactionInterceptor.invoke() at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed() at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke() at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed() at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept() at com.plain.BookService$$EnhancerByCGLIB$$7cb147e4.listBooks() at com.plain.web.BookController.listBooks() In Conclusion The MgntUtils library is written and maintained by me. If you require any support or have any question or would like a short demo you can contact me through LinkedIn - send me a message or request connection. I will do my best to respond
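Following up on the CharSequence-based getStacktrace() variant mentioned earlier, here is a minimal sketch of filtering a stack trace that was captured as plain text (for example, cut out of a log). It assumes the CharSequence overload accepts the same trailing arguments as the Throwable-based call shown above, and that TextUtils lives in the library's com.mgnt.utils package; check the Javadoc for the exact signature in your library version.
Java
import com.mgnt.utils.TextUtils;

public class StacktraceStringFilterDemo {

    public static void main(String[] args) {
        // A stack trace previously captured as plain text (e.g. cut out of a log file).
        String rawTrace = String.join(System.lineSeparator(),
                "com.plain.BookListNotFoundException: Internal access error",
                "at com.plain.BookService.listBooks()",
                "at net.sf.cglib.proxy.MethodProxy.invoke()",
                "at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed()",
                "at com.plain.web.BookController.listBooks()");

        // Assumption: the CharSequence overload mirrors getStacktrace(Throwable, boolean, String...)
        // shown earlier in this article; consult the TextUtils Javadoc for the exact signature.
        CharSequence filtered = TextUtils.getStacktrace(rawTrace, true, "com.plain.");
        System.out.println(filtered);
    }
}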

By Michael Gantman
Java JEP 400 Explained: Why UTF-8 Became the Default Charset

A JDK Enhancement Proposal (JEP) is a formal process used to propose and document improvements to the Java Development Kit. It ensures that enhancements are thoughtfully planned, reviewed, and integrated to keep the JDK modern, consistent, and sustainable over time. Since its inception, many JEPs have introduced significant language and runtime features that shape the evolution of Java. One such important proposal, JEP 400, introduced in JDK 18 in 2022, standardizes UTF-8 as the default charset, addressing long-standing issues with platform-dependent encoding and improving Java’s cross-platform reliability. Traditionally, Java’s I/O API, introduced in JDK 1.1, includes classes like FileReader and FileWriter that read and write text files. These classes rely on a Charset to correctly interpret byte data. When a charset is explicitly passed to the constructor, like in: Java public FileReader(File file, Charset charset) throws IOException public FileWriter(String fileName, Charset charset) throws IOException the API uses that charset for file operations. However, these classes also provide constructors that don’t take a charset: Java public FileReader(String fileName) throws IOException public FileWriter(String filename) throws IOException In these cases, Java defaults to the platform’s character set. As per the JDK 17 documentation: "The default charset is determined during virtual-machine startup and typically depends upon the locale and charset of the underlying operating system." This behavior can lead to bugs when files are written and read using different character sets—especially across environments. To address this inconsistency, JEP 400 proposed using UTF-8 as the default charset when none is explicitly provided. This change makes Java applications more predictable and less error-prone, especially in cross-platform environments. As noted in the JDK 18 API: "The default charset is UTF-8, unless changed in an implementation-specific manner." Importantly, this update doesn’t remove the ability to specify a charset. Developers can still set it via constructors or the JVM flag -Dfile.encoding. Let's see the problem under discussion using an example: Java package com.jep400; import java.io.FileWriter; import java.io.IOException; import java.nio.charset.Charset; public class WritesFiles { public static void main(String[] args) { System.out.println("Current Encoding: " + Charset.defaultCharset().displayName()); writeFile(); } private static void writeFile() { try (FileWriter fw = new FileWriter("fw.txt")){ fw.write("résumé"); System.out.println("Completed file writing."); } catch (IOException e) { e.printStackTrace(); } } } In the method writeFile, we used a FileWriter constructor that does not take a character set as a parameter. As a result, the JDK falls back on the default character set, which is either specified via the -Dfile.encoding JVM argument or derived from the platform’s locale. The program writes a file containing some text. To simulate a character set mismatch, we run the program with a specific encoding: java -Dfile.encoding=ISO-8859-1 com.jep400.WritesFiles Here, we’re explicitly setting the character set to ISO-8859-1 to mimic running the program on a system where the default charset is ISO-8859-1 and no charset is passed programmatically. When executed, the program produces the following output: Java Output: Current Encoding: ISO-8859-1 Completed file writing. 
After the above program completes, it creates a file named fw.txt. Next, let’s look at a program that reads the fw.txt file created by the previous program, but with a different default encoding. Java import java.io.FileReader; import java.io.IOException; import java.nio.charset.Charset; public class ReadsFiles { public static void main(String[] args) { System.out.println("Current Encoding: " + Charset.defaultCharset().displayName()); readFile(); } private static void readFile() { try(FileReader fr = new FileReader("fw.txt")) { int character; while ((character = fr.read()) != -1) { System.out.print((char) character); } } catch (IOException e) { e.printStackTrace(); } } } In the readFile method, we use a FileReader constructor that does not specify a character set. To simulate running the program on a platform with a different default character set, we pass a VM argument: java -Dfile.encoding=UTF-8 com.jep400.ReadsFiles The following output will be displayed when running this command: Java Current Encoding: UTF-8 r�sum� The output shows text that does not match what the first program wrote. This highlights the problem of not explicitly specifying the character set when reading and writing files, instead relying on the platform’s default character set. This mismatch can cause the same incorrect output in the following scenarios: When the programs run on different machines with different default character sets. When upgrading to JDK 18 or later, which changes the default charset behavior. Now, let’s see how the output looks when running the same programs in a JDK 18+ environment. When running the first program, this output is observed: Java Current Encoding: UTF-8 Completed file writing. When the second program is run, the output appears as follows: Java Current Encoding: UTF-8 résumé We can see that the data is written and read using the standard UTF-8 character set, effectively resolving the character-set issues encountered earlier. Conclusion Since its introduction in JDK 18, JEP 400’s adoption of UTF-8 as the default charset has become a foundational improvement for Java applications worldwide. By standardizing on UTF-8, it effectively eliminates many charset-related issues that developers faced when running code across different platforms. While not a new change, its continued impact ensures better consistency and fewer bugs in modern Java projects. Developers should still specify charsets explicitly when necessary (a short sketch follows below), but relying on UTF-8 as the default enhances cross-platform compatibility and helps future-proof applications as the Java ecosystem rapidly evolves. While not always required, aligning with this default supports consistency across diverse environments.
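To make the advice about specifying charsets explicit, here is a minimal sketch that uses the Charset-accepting constructors shown at the beginning of this article (available since JDK 11); the file name and text are illustrative.
Java
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ExplicitCharsetExample {

    public static void main(String[] args) throws IOException {
        // Write and read with an explicit charset so the behavior does not depend
        // on the platform default, regardless of the JDK version in use.
        try (FileWriter writer = new FileWriter("fw.txt", StandardCharsets.UTF_8)) {
            writer.write("résumé");
        }
        try (FileReader reader = new FileReader("fw.txt", StandardCharsets.UTF_8)) {
            int ch;
            while ((ch = reader.read()) != -1) {
                System.out.print((char) ch);
            }
        }
    }
}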

By Ramana Singaperumal
How to Configure a Jenkins Job for a Maven Project

Jenkins is a widely used automation server that plays a big role in modern software development. It helps teams streamline their continuous integration and continuous delivery (CI/CD) processes by automating tasks like building, testing, and deploying applications. One of the key strengths of Jenkins is its flexibility. It easily integrates with a wide range of tools and technologies, making it adaptable to different project needs. In the previous articles, we learnt about setting up Jenkins and a Jenkins agent using Docker Compose. In this tutorial blog, we will learn what Jenkins jobs are and how to set them up for a Maven project. What Are Jenkins Jobs? In Jenkins, a job is simply a task or process, like building, testing, or deploying an application. It can also help in running automated tests for the project with specific steps and conditions. With a Jenkins Job, the tests can be automatically run whenever there’s a code change. Jenkins will clone the source code from version control like Git, compile the code, and run the tests based on the requirements. These jobs can also be scheduled for a later run, providing flexibility to run the tests on demand. This helps make testing faster and more consistent. Jenkins jobs can also be triggered using webhooks whenever a commit is pushed to the remote repository, enabling seamless Continuous Integration and Development. Different Types of Jenkins Jobs Jenkins supports multiple types of job items, each designed for different purposes. Depending on the complexity of the project and its requirements, we can choose the type of Jenkins job that best fits our needs. Let’s quickly discuss the different types of jobs available in Jenkins: Freestyle Project This is the standard job type in Jenkins that is popular and widely used. It pulls code from one SCM, runs the build steps serially, and then does follow-up tasks like saving artifacts and sending email alerts. Pipeline Jenkins Pipeline is a set of tools that help build, test, and deploy code automatically by creating a continuous delivery workflow right inside Jenkins. Pipelines define the entire build and deployment process as code, using Pipeline domain-specific language (DSL) syntax. Pipeline provides the tools to model and manage simple or complex workflows directly with Jenkins. The definition of a Jenkins Pipeline is written into a text file called jenkinsfile that can be committed to a project’s source control repository. Multi-Configuration Project A multi-configuration project is best for projects that require running multiple setups, such as testing on different environments or creating builds for specific platforms. It’s helpful in cases where builds share many similar steps, which would otherwise need to be repeated manually. The Configuration Matrix allows us to define which steps to reuse and automatically creates a multi-axis setup for different build combinations. Multibranch Pipeline The Multibranch Pipeline lets us set up different Jenkinsfiles for each branch of the project. Jenkins automatically searches and runs the correct pipeline for each branch if it has a Jenkinsfile in the code repository. This is very helpful as Jenkins handles managing separate pipelines for each branch. Organization Folders With Organization Folders, Jenkins can watch over an entire organization on GitHub, Bitbucket, GitLab, or Gitea. Whenever it finds a repository with branches or pull requests that include a Jenkinsfile, it automatically sets up a Multibranch Pipeline. 
Maven Project With the Maven Project, Jenkins can seamlessly build a Maven project. Jenkins makes use of the POM file to automatically handle the setup, greatly reducing the need for manual configuration. How to Set Up a Jenkins Job for a Maven Project It is essential to configure and set up the Maven Integration plugin in Jenkins before proceeding with configuring the Jenkins Job for the Maven project. Initially, the option for the Maven project is not displayed unless the Maven Integration plugin is installed. Installing Maven Integration Plugin in Jenkins The Maven Integration plugin can be installed using the following steps: Step 1 Log in to Jenkins and navigate to the Manage Jenkins > Manage Plugins page. Step 2 On the Manage Plugins page, select "Available plugins" from the left-hand menu and search for "Maven integration plugin". Select the plugin and click on the "Install" button on the top right of the page. Step 3 After successful installation, restart Jenkins. Navigate to the Manage Jenkins > Manage Plugins page and select "Installed plugins" from the left-hand menu. The Maven integration plugin should be listed in the installed plugin list. After the Maven integration plugin is installed successfully, we need to configure Maven in Jenkins. Configure Maven in Jenkins Maven can be configured in Jenkins by following these steps: Step 1 Navigate to the Manage Jenkins > Tools window. Step 2 Scroll down to the Maven installations section. Step 3 Click on the Add Maven button. Fill in the mandatory field for Name with an appropriate name (I have updated the name to "Maven_Latest"). Next, tick the "Install automatically" checkbox and select the appropriate version of Maven to install. We will be using the latest version (3.9.10) in the Version dropdown. Click on "Apply" and "Save" to save the configuration. With this, we have configured Maven in Jenkins and are now fully set to create a Jenkins job for the Maven Project. Configuring a Jenkins Job for a Maven Project We will be setting up a Jenkins job for the test automation project that will run API automation tests. This Maven project, available on GitHub, contains API automation tests written using the REST Assured library in Java. A Jenkins job for a Maven project can be configured with the following steps: Step 1 Click on "New Item" on the homepage of Jenkins. Step 2 Select the Maven project from the list of job names and add a name for the job. Click on the OK button to continue. Next, Jenkins will take us to the Job's configuration page. Step 3 Select "Git" in the Source Code Management section. Enter the repository URL (https://github.com/mfaisalkhatri/rest-assured-examples.git); this URL should be the one that we use to clone the repository. Then enter the respective branch name for the job. We'll add the branch name as "*/master" as we need to pull the code from the master branch to run the tests. Make sure we add the correct branch name here. Step 4 In the Pre-Steps section, update the Root POM, Goals, and Options. Root POM: Add the path to the POM.xml file. Generally, it is in the root folder. So, the value should be the default "pom.xml". Goals and Options: In this text box, we need to provide the Maven command to run the project. 
As the API tests are organized in different testng.xml files, which are available in the test-suite folder of the project, we need to provide the following command to execute: Plain Text clean install -Dsuite-xml=test-suite/restfulbookersuitejenkins.xml Decoding the Maven Command clean install: The clean command tells Maven to clean the project. It will delete the target/ directory, erasing all the previously built and compiled artifacts. The install command will compile the code and run the tests. -Dsuite-xml=test-suite/restfulbookersuitejenkins.xml "-D" uses the System Property of "suite-xml" and passes the value "test-suite/restfulbookersuitejenkins.xml" to it. In the POM.xml, the "suite-xml" property is defined within the Maven Surefire plugin. The default value for this "suite-xml" is set to "test-suite/testng.xml" in the Properties section in POM.xml. However, as we have multiple testng.xml files and need to run the tests from a specific "restfulbookersuitejenkins.xml," we will be overwriting the suite-xml using -Dsuite-xml=test-suite/restfulbookersuitejenkins.xml. Given below are the contents of the restfulbookersuitejenkins.xml XML <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd"> <suite name="Restful Booker Test Suite" thread-count="3" parallel="tests"> <listeners> <listener class-name="in.reqres.utility.TestListener"/> </listeners> <test name="Restful Booker tests on Jenkins"> <parameter name="agent" value="jenkins"/> <classes> <class name="com.restfulbooker.RestfulBookerE2ETests"> <methods> <include name="createBookingTest"/> <include name="getBookingTest"/> <include name="testTokenGeneration"/> <include name="updateBookingTest"/> <include name="updatePartialBookingTest"/> <include name="deleteBookingTest"/> <include name="checkBookingIsDeleted"/> </methods> </class> </classes> </test> </suite> This file contains end-to-end tests for the RESTful Booker Demo APIs. Step 5 Click on the Apply and Save button. This will save the configuration for the job. The Jenkins job for the Maven project has been configured successfully. Running the Jenkins Job There are multiple ways to run the Jenkins Job, as mentioned below: By manually clicking on the "Build Now" button. Using Webhooks to run the build as soon as the developer pushes code to the remote repository. Let's run the Jenkins Job manually by clicking on the "Build Now" button. Once the job is started, a progress bar is displayed on the left-hand side of the screen. On clicking the Job #, as shown in the screenshot above, Jenkins will take us to the Job details page, where the Job's details and its progress are displayed. The live console logs of the job can be verified by clicking on the Console Output link on the left-hand menu. Similarly, the Job's more granular details can be viewed by scrolling down on the Console output page. The console output shows that a total of 7 test cases were run and all of them passed. Finally, it shows that the build was generated successfully. This detailed logging allows us to check the minute details of the job execution and its progress. In case a test within the Maven project fails, it can be verified here, and an appropriate action can be taken. Verifying the Job Status The job's status can be checked after the execution is complete. It provides a historical view of the Job runs, which can help in analysing the test failures to check the stability of the project. 
A graphical representation of the historical data is also provided, which shows us that the build failed for the first three runs and passed in the 5th and 6th runs. Job Dashboard Jenkins provides us with the Job Dashboard, which helps us know the current status of the job. Summary Jenkins is a powerful tool for setting up the CI/CD pipeline to automate the different stages of deployment. It offers multiple options to set up jobs, one of which is using the Maven Project. We can easily configure the Jenkins job for a Maven project by installing the Maven integration plugin and setting up Maven in Jenkins. The job pulls the code from the SCM and executes the goals provided by the user in the job's configuration. Jenkins provides detailed job execution status on the dashboard. It provides more granular details in the Job's console output. The historical graph and job run details help stakeholders verify the stability of the build and take further actions.

By Faisal Khatri DZone Core CORE
Introduction to Data-Driven Testing With JUnit 5: A Guide to Efficient and Scalable Testing

When discussing the history of software development, we can observe an increase in software complexity, characterized by more rules and conditions. When it comes to modern applications that rely heavily on databases, testing how the application interacts with its data becomes equally important. This is where data-driven testing plays a crucial role. Data-driven testing helps increase software quality by enabling tests with multiple data sets, which means the same test runs multiple times with different data inputs. Automating these tests also ensures scalability and repeatability across your test suite, reducing human error, boosting productivity, saving time, and guaranteeing that the same mistake doesn't happen twice. Modern applications often depend on databases to store and manipulate critical data; indeed, the data is the soul of any modern application. Thus, it's essential to validate that these operations function correctly across a range of scenarios. Traditional unit tests often fall short because they don't account for the variability of data that real-world applications encounter. This is where data-driven testing shines. Data-driven tests give you the capability to automate those tests with different inputs, covering several cases, to check whether your application keeps up. Exploring this approach ensures that your application handles data consistently and reliably, helping you avoid bugs that may only appear with specific data types, formats, or combinations of data. Data-driven testing is a strategy where the same test is run multiple times with different sets of input data. Rather than writing separate test cases for each data variation, you use one test method and provide different data sets to test against. Beyond reducing redundancy in your test code, data-driven testing also improves test coverage by ensuring the system behaves as expected across all types of data. Data-driven test flow In this article, we will explore this capability with Java and Jupiter. Live Session: Implementing Data-Driven Testing With Jakarta NoSQL and Jakarta Data In this section, we will walk through a live example using Java SE, Jakarta NoSQL, and Jakarta Data to demonstrate data-driven testing in action. For our example, we will build a simple hotel management system that tracks room status and integrates with Oracle NoSQL as the database. Prerequisites Before diving into the code, ensure you have Oracle NoSQL running either on the cloud or locally using Docker. You can quickly start Oracle NoSQL by running the following command: Shell docker run -d --name oracle-instance -p 8080:8080 ghcr.io/oracle/nosql:latest-ce Once the database is up and running, we're ready to start building the project. You can also find the full project on GitHub: Data-Driven Test with Oracle NoSQL Step 1: Structure the Entity We begin by defining the Room entity, which represents a hotel room in our system. 
This entity is mapped to the database using the @Entity annotation, and each field corresponds to a column in the database: Java @Entity public class Room { @Id private String id; @Column private int number; @Column private RoomType type; @Column private RoomStatus status; @Column private CleanStatus cleanStatus; @Column private boolean smokingAllowed; @Column private boolean underMaintenance; } Step 2: Room Repository Next, we create the RoomRepository interface, which uses Jakarta Data and NoSQL annotations to define queries for various room-related operations: Java @Repository public interface RoomRepository { @Query("WHERE type = 'VIP_SUITE' AND status = 'AVAILABLE' AND underMaintenance = false") List<Room> findVipRoomsReadyForGuests(); @Query("WHERE type <> 'VIP_SUITE' AND status = 'AVAILABLE' AND cleanStatus = 'CLEAN'") List<Room> findAvailableStandardRooms(); @Query("WHERE cleanStatus <> 'CLEAN' AND status <> 'OUT_OF_SERVICE'") List<Room> findRoomsNeedingCleaning(); @Query("WHERE smokingAllowed = true AND status = 'AVAILABLE'") List<Room> findAvailableSmokingRooms(); @Save void save(List<Room> rooms); @Save Room newRoom(Room room); void deleteBy(); @Query("WHERE type = :type") List<Room> findByType(@Param("type") String type); } In this repository, we define several queries to retrieve rooms based on different conditions, such as finding available rooms, rooms that need cleaning, or rooms that allow smoking. We also include methods for saving, deleting, and querying rooms by type. To test our repository, we want to ensure that we are using a test container instead of a production environment. For this, we set up a DatabaseContainer singleton that starts the Oracle NoSQL container for testing purposes: Java public enum DatabaseContainer { INSTANCE; private final GenericContainer<?> container = new GenericContainer<> (DockerImageName.parse("ghcr.io/oracle/nosql:latest-ce")) .withExposedPorts(8080); { container.start(); } public DatabaseManager get(String database) { DatabaseManagerFactory factory = managerFactory(); return factory.apply(database); } public DatabaseManagerFactory managerFactory() { var configuration = DatabaseConfiguration.getConfiguration(); Settings settings = Settings.builder() .put(OracleNoSQLConfigurations.HOST, host()) .build(); return configuration.apply(settings); } public String host() { return "http://" + container.getHost() + ":" + container.getFirstMappedPort(); } } This container ensures that we’re using the Oracle NoSQL database, which is running inside a Docker container, thereby mimicking a production-like environment while remaining fully isolated for testing purposes. Step 4: Injecting the DatabaseManager We need to inject the DatabaseManager into our CDI context. For this, we create a ManagerSupplier class that ensures the DatabaseManager is available to our application: Java @ApplicationScoped @Alternative @Priority(Interceptor.Priority.APPLICATION) public class ManagerSupplier implements Supplier<DatabaseManager> { @Produces @Database(DatabaseType.DOCUMENT) @Default public DatabaseManager get() { return DatabaseContainer.INSTANCE.get("hotel"); } } Step 5: Writing Data-Driven Tests With @ParameterizedTest in JUnit 5 In this step, we focus on how to write data-driven tests using JUnit 5's @ParameterizedTest annotation, and specifically dive into the types used in the RoomServiceTest. We’ll explore the @EnumSource and @MethodSource annotations, all of which help run the same test method multiple times with different sets of input data. 
Let’s look at the types used in the RoomServiceTest class in detail: Java @ParameterizedTest(name = "should find rooms by type {0}") @EnumSource(RoomType.class) void shouldFindRoomByType(RoomType type) { List<Room> rooms = this.repository.findByType(type.name()); SoftAssertions.assertSoftly(softly -> softly.assertThat(rooms).allMatch(room -> room.getType().equals(type))); } The @EnumSource(RoomType.class) annotation is used to automatically provide each enum constant from the RoomType enum to the test method. In this case, the RoomType enum contains values like VIP_SUITE, STANDARD, SUITE, etc. This annotation causes the test method to run once for each value in the RoomType enum. Each time the test runs, the type parameter is assigned one of the enum values, and the test checks that all rooms returned by the repository match the RoomType provided. This is especially useful when you want to run the same test logic for all possible values of an enum. It ensures that your code works consistently across all variants of the enum type, minimizing redundant test cases. Java @ParameterizedTest @MethodSource("room") void shouldSaveRoom(Room room) { Room updateRoom = this.repository.newRoom(room); SoftAssertions.assertSoftly(softly -> { softly.assertThat(updateRoom).isNotNull(); softly.assertThat(updateRoom.getId()).isNotNull(); softly.assertThat(updateRoom.getNumber()).isEqualTo(room.getNumber()); softly.assertThat(updateRoom.getType()).isEqualTo(room.getType()); softly.assertThat(updateRoom.getStatus()).isEqualTo(room.getStatus()); softly.assertThat(updateRoom.getCleanStatus()).isEqualTo(room.getCleanStatus()); softly.assertThat(updateRoom.isSmokingAllowed()).isEqualTo(room.isSmokingAllowed()); }); } The @MethodSource("room") annotation specifies that the test method should be run with data provided by the room() method. This method returns a stream of Arguments containing different Room objects. The room() method generates random room data using Faker and assigns random values to room attributes like roomNumber, type, status, etc. These randomly generated rooms are passed to the test method one at a time. The test checks that the room saved in the repository matches the original room’s attributes, ensuring that the save operation works as expected. @MethodSource is a great choice when you need to provide complex or custom test data. In this case, we use random data generation to simulate different room configurations, ensuring our code can handle a wide range of inputs without redundancy. Conclusion In this article, we've explored the importance of data-driven testing and how to implement it effectively using JUnit 5 (Jupiter). We demonstrated how to leverage parameterized tests to run the same test multiple times with different inputs, making our testing process more efficient, comprehensive, and scalable. By using annotations like @EnumSource, @MethodSource, and @ArgumentsSource, we can easily pass multiple sets of data to our test methods, ensuring that our application works as expected across a wide range of input conditions. We focused on @EnumSource iterating over enum constants and @MethodSource generating custom data for our tests. These tools, alongside JUnit 5’s rich variety of parameterized test sources, such as @ValueSource, @CsvSource, and @ArgumentsSource, give us the flexibility to design tests that cover a broader spectrum of data variations. 
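The examples above use @EnumSource and @MethodSource; for completeness, here is a minimal, self-contained sketch of @CsvSource and @ValueSource, two of the other providers mentioned in this conclusion. The values and assertions are illustrative only and are not part of the hotel project.
Java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import org.junit.jupiter.params.provider.ValueSource;

class OtherProvidersSketchTest {

    // Each CSV row becomes one test invocation; JUnit converts the columns to the parameter types.
    @ParameterizedTest(name = "room {0} has a capacity of {1}")
    @CsvSource({
            "101, 2",
            "102, 4",
            "301, 6"
    })
    void shouldHaveValidRoomNumberAndCapacity(int roomNumber, int capacity) {
        assertTrue(roomNumber >= 100, "room numbers start at 100 in this sketch");
        assertTrue(capacity > 0 && capacity <= 6, "capacity should stay within a sensible range");
    }

    // A single-argument provider for simple literal values.
    @ParameterizedTest
    @ValueSource(strings = {"CLEAN", "DIRTY", "IN_PROGRESS"})
    void shouldUseUpperSnakeCaseStatusLabels(String label) {
        assertTrue(label.chars().allMatch(c -> Character.isUpperCase(c) || c == '_'));
    }
}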
By incorporating these techniques, we ensure that our repository methods (and other components) are robust, adaptable, and thoroughly tested with diverse real-world data. This approach significantly improves software quality, reduces test code duplication, and accelerates the testing process. Data-driven testing isn’t just about automating tests; it’s about making those tests more meaningful by accounting for the variety of real-world conditions your software might face. It’s a valuable strategy for building resilient applications, and with JUnit 5, the possibilities for enhancing test coverage are vast and customizable.

By Otavio Santana DZone Core CORE

Top Java Experts


Shai Almog

OSS Hacker, Developer Advocate and Entrepreneur,
Codename One

Software developer with ~30 years of professional experience in a multitude of platforms/languages. JavaOne rockstar/highly rated speaker, author, blogger and open source hacker. Shai has extensive experience in the full stack of backend, desktop and mobile. This includes going all the way into the internals of VM implementation, debuggers etc. Shai started working with Java in 96 (the first public beta) and later on moved to VM porting/authoring/internals and development tools. Shai is the co-founder of Codename One, an Open Source project allowing Java developers to build native applications for all mobile platforms in Java. He's the coauthor of the open source LWUIT project from Sun Microsystems and has developed/worked on countless other projects both open source and closed source. Shai is also a developer advocate at Lightrun.

Ram Lakshmanan

yCrash - Chief Architect

Want to become Java Performance Expert? Attend my master class: https://ycrash.io/java-performance-training

The Latest Java Topics

Infusing AI into Your Java Applications
An introductory tutorial for Java developers on writing AI-infused applications using Quarkus with LangChain4j. You don't need Python to write AI apps.
October 10, 2025
by Don Bourne
· 2,741 Views · 8 Likes
Diving into JNI: My Messy Adventures With C++ in Android
JNI is powerful but tricky. Automate boilerplate with generators, carefully manage references, test with CheckJNI, and embrace the chaos; it gets satisfying.
October 10, 2025
by Ruslan Vidzert
· 1,163 Views · 1 Like
Introduction to Spring Data Elasticsearch 5.5
Getting started with the latest version of Spring Data Elasticsearch 5.5 and Elasticsearch 8.18 as a NoSQL database for our data storage.
October 10, 2025
by Arnošt Havelka DZone Core
· 1,425 Views · 2 Likes
Building Realistic Test Data in Java: A Hands-On Guide for Developers
Learn how to build a simple API that delivers believable fake users, perfect for testing, demos, or UI prototyping. No more “John Doe” data; finally, real-feel mocks.
October 10, 2025
by Wallace Espindola
· 1,141 Views · 5 Likes
Efficiently Reading Large Excel Files (Over 1 Million Rows) Using the Open-Source Sjxlsx Java API
The primary objective of this article is to prevent out-of-memory (Java heap) errors when reading large Excel files using the open-source "sjxlsx" library.
October 9, 2025
by Mahendran Chinnaiah
· 1,568 Views · 6 Likes
Converting ActiveMQ to Jakarta (Part III: Final)
This is the final blog post in a series covering the conversion of Apache ActiveMQ to Jakarta EE and JDK 17 to share best practices with enterprise software developers.
October 8, 2025
by Matt Pavlovich
· 1,170 Views
Building a Real-Time Data Mesh With Apache Iceberg and Flink
Build a real-time data mesh using Apache Iceberg for scalable, versioned table storage and Apache Flink for continuous stream processing across domains.
September 26, 2025
by Subrahmanyam Katta
· 1,709 Views · 1 Like
Top 7 Mistakes When Testing JavaFX Applications
Testing JavaFX programs may seem non-trivial at first. This article describes the most common mistakes when testing desktop apps, their causes, and solutions.
September 24, 2025
by Catherine Edelveis
· 2,637 Views
Think in Graphs, Not Just Chains: JGraphlet for TaskPipelines
JGraphlet is a tiny, zero-dependency Java library for building task pipelines. It uses a graph model where you define tasks as nodes and connect them.
September 22, 2025
by Shaaf Syed
· 1,008 Views · 1 Like
Spring Boot WebSocket: Building a Multichannel Chat in Java
This is a step‑by‑step guide to a reactive Spring Boot WebSocket chat with WebFlux and MongoDB, including config, handlers, and manual tests.
September 19, 2025
by Bartłomiej Żyliński DZone Core
· 2,303 Views · 4 Likes
How to Migrate from Java 8 to Java 17+ Using Amazon Q Developer
Learn how Amazon Q Developer speeds up application modernization and what other benefits it offers to enterprises and developers.
September 16, 2025
by Prabhakar Mishra
· 2,574 Views · 1 Like
Spring Cloud Gateway With Service Discovery Using HashiCorp Consul
This article introduces HashiCorp Consul, a service registry and discovery tool that integrates well with Spring Boot and supports reactive programming.
September 15, 2025
by Vishnu Viswambharan
· 3,480 Views · 4 Likes
Secure Your Spring Boot Apps Using Keycloak and OIDC
This blog explores integrating Spring Security with Keycloak using OpenID Connect. It also provides examples and unit tests.
September 9, 2025
by Gunter Rotsaert DZone Core
· 2,443 Views · 4 Likes
Monitoring Java Microservices on EKS Using New Relic APM and Kubernetes Metrics
Monitor Java microservices on Amazon EKS using New Relic APM. Set up JVM agents, tune GC settings, and track Kubernetes metrics with dashboards and alerts.
September 2, 2025
by Praveen Chaitanya Jakku
· 1,832 Views · 2 Likes
Prototype for a Java Database Application With REST and Security
Prototype for a Java database application with REST and security using Spring Boot and containers for testing, using Keycloak for security and PostgreSQL for persistence.
September 2, 2025
by George Pod
· 2,958 Views · 5 Likes
Exploring QtJambi: A Java Wrapper for Qt GUI Development—Challenges and Insights
This article shares initial impressions, remarks, and observations on QtJambi, a Java wrapper for Qt library used for building graphical user interfaces.
September 1, 2025
by Gregory Ledenev
· 1,563 Views · 2 Likes
Java 21 Virtual Threads vs Cached and Fixed Threads
Discover how Java concurrency improved from Java 8’s enhancements to Java 21’s virtual threads, enabling lightweight, scalable, and efficient multithreading.
August 26, 2025
by Milan Karajovic
· 6,069 Views · 9 Likes
Filtering Java Stack Traces With MgntUtils Library
Long stack traces make debugging painful. MgntUtils offers a simple Java utility to filter irrelevant lines, letting you focus on what matters in your logs.
August 20, 2025
by Michael Gantman
· 2,749 Views · 4 Likes
Java JEP 400 Explained: Why UTF-8 Became the Default Charset
JEP 400 standardizes UTF-8 as Java’s default charset from JDK 18 onward, ensuring consistent file encoding across platforms and fewer cross-OS bugs.
August 15, 2025
by Ramana Singaperumal
· 2,327 Views · 5 Likes
Scoped Values: Revolutionizing Java Context Management
ScopedValue in Java offers safe, immutable context propagation with clear scoping and minimal overhead—ideal for structured concurrency and virtual threads.
August 12, 2025
by Ammar Husain
· 3,270 Views · 5 Likes
