
I was doing an analysis comparing R2DBC and JDBC with a Spring Boot application.
Spring Boot 3.2 + jOOQ 3.18 + R2DBC + R2DBC Pool + PostgreSQL 16.1

  • implementation("io.r2dbc:r2dbc-spi:1.0.0.RELEASE")

  • implementation("io.r2dbc:r2dbc-pool:1.0.1.RELEASE")

  • implementation("org.postgresql:r2dbc-postgresql:1.0.2.RELEASE")

  • I have created an end-to-end non-blocking Spring Boot application

  • which uses suspend functions from the controller down to the repository (see the sketch after this list)
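
For illustration only (the controller is not shown in this post, and the names below such as ProductController and the /products route are hypothetical, not necessarily what the linked repository uses), the suspend chain from controller to repository might look like this:

package com.example.kotlin.controller

import com.example.kotlin.model.Product
import com.example.kotlin.repository.ProductRepository
import org.springframework.web.bind.annotation.PostMapping
import org.springframework.web.bind.annotation.RequestBody
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RestController
import java.util.UUID

@RestController
@RequestMapping("/products")
class ProductController(
    private val productRepository: ProductRepository
) {
    // WebFlux invokes suspend handler functions on its event loop, so the request
    // stays non-blocking from here down to the R2DBC driver.
    @PostMapping
    suspend fun create(@RequestBody product: Product): UUID =
        productRepository.save(product)
}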

R2DBC configuration

package com.example.kotlin.config

import io.r2dbc.pool.PoolingConnectionFactoryProvider.*
import io.r2dbc.postgresql.PostgresqlConnectionFactoryProvider.*
import io.r2dbc.spi.ConnectionFactories
import io.r2dbc.spi.ConnectionFactory
import io.r2dbc.spi.ConnectionFactoryOptions
import io.r2dbc.spi.ConnectionFactoryOptions.*
import org.springframework.boot.autoconfigure.r2dbc.R2dbcProperties
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.context.annotation.Primary
import org.springframework.r2dbc.connection.R2dbcTransactionManager
import org.springframework.r2dbc.core.DatabaseClient
import org.springframework.transaction.ReactiveTransactionManager
import org.springframework.transaction.annotation.EnableTransactionManagement
import java.time.Duration


@Configuration
class R2DBCConfiguration(
    private val properties: R2dbcProperties
) {

    @Bean
    fun databaseClient(
        connectionFactory: ConnectionFactory
    ): DatabaseClient = DatabaseClient.builder()
        .connectionFactory(connectionFactory)
        .build()

    @Bean
    @Primary
    fun connectionFactory(properties: R2dbcProperties): ConnectionFactory {
        return ConnectionFactories.get(
            ConnectionFactoryOptions.builder()
                .option(PROTOCOL, "postgresql")
                .option<String>(DRIVER, "pool")
                .option<String>(HOST, "postgres")
                .option<Int>(PORT, 5432)
                .option<String>(USER, "postgres")
                .option(PASSWORD, "pass123")
                .option<String>(DATABASE, "postgres")
                .option(MAX_SIZE, 20)
                .option(INITIAL_SIZE, 20)
                .option(SCHEMA, "public")
                .option(MAX_ACQUIRE_TIME, Duration.ofSeconds(30))
                .option(MAX_IDLE_TIME, Duration.ofSeconds(30))
                .build()
        )
    }

    @Bean
    fun transactionManager(connectionFactory: ConnectionFactory): ReactiveTransactionManager {
        return R2dbcTransactionManager(connectionFactory)
    }
}
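
The post does not show how the jOOQ DSLContext injected into the repository below is created. As a sketch only, assuming jOOQ 3.18 with its built-in R2DBC support (the linked repository may wire this differently), a reactive DSLContext can be backed by the R2DBC ConnectionFactory defined above:

import org.jooq.DSLContext
import org.jooq.SQLDialect
import org.jooq.impl.DSL
import org.jooq.impl.DefaultConfiguration

// Assumption: an additional bean (e.g. inside the R2DBCConfiguration class above)
// exposing a DSLContext whose queries run through the reactive ConnectionFactory.
@Bean
fun dslContext(connectionFactory: ConnectionFactory): DSLContext =
    DSL.using(
        DefaultConfiguration()
            .set(connectionFactory)
            .set(SQLDialect.POSTGRES)
    )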

Repository

package com.example.kotlin.repository

import com.example.generated.Tables.PRODUCT
import com.example.kotlin.model.Product
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map
import kotlinx.coroutines.reactive.asFlow
import kotlinx.coroutines.reactive.awaitFirst
import kotlinx.coroutines.reactive.awaitFirstOrNull
import org.jooq.DSLContext
import org.springframework.cache.annotation.Cacheable
import org.springframework.context.annotation.ComponentScan
import org.springframework.stereotype.Repository
import org.springframework.transaction.reactive.TransactionalOperator
import org.springframework.transaction.reactive.executeAndAwait
import java.util.*

@Repository
@ComponentScan("com.example.generated.tables.daos")
class ProductRepository(
    private val jooq: DSLContext,
    private val transactionalOperator: TransactionalOperator
) {
    suspend fun save(product: Product): UUID =
        transactionalOperator.executeAndAwait { reactiveTransaction ->
            runCatching {
                println("******reached here********")
                jooq
                    .insertInto(PRODUCT)
                    .columns(PRODUCT.ID, PRODUCT.NAME)
                    .values(product.id, product.name)
                    .awaitFirst()
            }.onFailure {
                // Rollback the transaction in case of an exception
                reactiveTransaction.setRollbackOnly()
            }.getOrThrow().also {
                println("******completed********")
            }
            product.id
        }
}

Analysis
I have done load testing using k6 to see how many parallel requests the application can process.

R2DBC analysis

  • When configured with maxSize=20 and initialSize=10,
  • the application is only able to process 20 parallel requests at a time.
  • With anything above twenty, the success rate started to decline,
  • and CPU usage spiked sharply (screenshot attached below).

In contrast, JDBC with a Hikari pool was able to handle 120+ parallel requests smoothly with the same configuration.

Repository: https://github.com/arjunbalussery/kotlin-async

Run the setup.sh script to start the application:

./setup.sh

Run the load test with k6:

k6 run k6/loadtest.js

Run the stop.sh script to stop the application:

./stop.sh

Please let me know if any further information is needed.
[Screenshot: CPU usage spike during the k6 load test, 2024-01-01 17:38]

Replies: 1 comment · 3 replies

@mp911de

Generally speaking, your observations are in line with expectations. R2DBC uses a non-blocking API, hence the infrastructure maximizes CPU utilization for the available work.

Also, please note that reactive programming adds a lot of overhead. Any processes that yield zero or one result are at least 25% slower than their imperative counterpart. Reactive programming brings performance to applications that consume large result sets though. We've seen similar results with Imperative MongoDB vs Reactive MongoDB.

JDBC with a Hikari pool was able to handle 120+ parallel requests smoothly with the same configuration

The question is really how much of that load actually reaches the database, since databases cannot scale indefinitely; an ideal configuration permits a maximum concurrency in the area of the number of CPU cores plus a discounted number of drives in a RAID array. I have my doubts about whether your test system runs on 120 CPU cores.
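
As a rough illustration of that sizing rule (an editorial sketch, not part of the original reply; the 0.5 drive discount is an assumption):

// Rule of thumb from the comment above: useful database concurrency is roughly
// the number of CPU cores plus a discounted number of drives in the RAID array.
fun suggestedMaxConcurrency(cpuCores: Int, raidDrives: Int, driveDiscount: Double = 0.5): Int =
    cpuCores + (raidDrives * driveDiscount).toInt()

fun main() {
    // e.g. an 8-core database host with a 4-drive array suggests roughly 10
    // concurrent queries, far below 120 parallel requests.
    println(suggestedMaxConcurrency(cpuCores = 8, raidDrives = 4)) // prints 10
}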

3 replies
@arjunbalussery

@mp911de
Thanks for your answer, it clarifies a lot. Sorry for the delayed reply.

Also, please note that reactive programming adds a lot of overhead. Any processes that yield zero or one result are at least 25% slower than their imperative counterpart. Reactive programming brings performance to applications that consume large result sets though. We've seen similar results with Imperative MongoDB vs Reactive MongoDB.

I have some doubts here:

  • So you are saying that it is always better to use a blocking connection if the result set is small or the query executes quickly?
  • And if there are queries that take considerable time, or that fetch a huge result set, it is better to use a reactive connection?
  • Am I missing some perspective, or are there other factors to consider when choosing between JDBC and R2DBC?

I have my doubts about whether your test system runs on 120 CPU cores.

I don't think so. Both applications run in Docker; I ran them in a multi-container setup, so the infrastructure is the same for both the reactive and the blocking application.

@mp911de

So you are saying that it is always better to use a blocking connection if the result set is small or the query executes quickly?

No. I'm saying that R2DBC introduces more overhead compared to the blocking API, and therefore with small result sizes the overhead carries the most weight when benchmarking.
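
To illustrate the distinction (an editorial sketch, not from the discussion): a single-row call such as save above pays the reactive overhead on every invocation, whereas streaming a large result set as a Flow is where the non-blocking pipeline pays off. A hypothetical addition to the ProductRepository shown earlier might look like:

import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map
import kotlinx.coroutines.reactive.asFlow

// Hypothetical repository method (not in the original post): streams a large
// result set with backpressure instead of materializing it in memory.
fun findAll(): Flow<Product> =
    jooq.selectFrom(PRODUCT)
        .asFlow()
        .map { record -> Product(record.get(PRODUCT.ID), record.get(PRODUCT.NAME)) }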

@yarosman

We've seen similar results with Imperative MongoDB vs Reactive MongoDB.

@mp911de You mentioned imperative MongoDB versus reactive MongoDB. Can you provide posts, articles, or other resources that discuss these cases?
