Conversation

@srh (Contributor) commented Jun 13, 2017

Description

This changes the cache's flush strategy for soft-durability writes: changes now accumulate in memory and are flushed every t seconds. This greatly reduces the number of write operations.

The feature is described in the notes for https://github.com/srh/rethinkdb/releases/tag/v2.3.5-srh-extra, except that the configuration option has moved to a top-level flush_interval field in the table config, and the default interval is 1.0 seconds.

This addresses some suggestions in #1771. It's easy to imagine people finding something to complain about here, so I'm going to leave this up for a while.
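
A minimal sketch of how the option would be set from the Python driver, assuming the field is exposed through the usual table-config update path as in the srh-extra release notes (the database and table names here are hypothetical):

```python
import rethinkdb as r

conn = r.connect('localhost', 28015)

# 'flush_interval' is the top-level table-config field this PR adds;
# per the description above it defaults to 1.0 seconds. Raising it
# lets more soft-durability writes coalesce into a single flush.
r.db('mydb').table('events').config().update(
    {'flush_interval': 5.0}
).run(conn)
```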

@danielmewes (Member) commented:

Haven't looked at the code, but from the description of it, this should be awesome for reducing the amount of write amplification! I've always been somewhat unhappy with how bad RethinkDB was at that in some scenarios.

@srh mentioned this pull request Jun 22, 2017
@adallow commented Jul 11, 2017

This would be very useful! Can it be part of the next release?

@srh (Contributor, Author) commented Jul 11, 2017

It's released as 2.3.5-extra (link in the OP). 2.4 won't have it, but I will release a 2.4-extra; then 2.5 will (hopefully) have it.

@AtnNn AtnNn added this to the 2.5 milestone Jul 13, 2017
srh added 5 commits October 13, 2017 08:11
This adds a "flush_interval" top-level config to the table config,
telling how much time should pass between flushes.

This allows multiple writes to be combined into a single flush,
reducing disk bandwidth usage, extending disk lifetime, and, for many
workloads, improving write throughput.

Eviction now works such that if a table shard is using too much memory
(and there are no evictable bufs), the whole shard initiates a flush.
(Previously, it _waited_ for enough active flushes to complete -- now
it must initiate the flush.)  It can't yet incrementally evict bufs,
because that is trickier to implement correctly than you'd think.
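
Not the RethinkDB implementation, but a minimal Python sketch of the accumulate-then-flush pattern the commit describes: soft-durability writes land in an in-memory buffer, and a timer flushes the accumulated dirty state once per interval (the class, method names, and backing store here are all illustrative assumptions):

```python
import threading
import time

class DelayedFlushCache:
    """Toy write-behind cache: coalesce writes, flush every `interval` seconds.

    A sketch of the pattern only; the real cache tracks dirty bufs per
    shard and can also initiate a flush early under memory pressure
    (eviction), as described in the commit message above.
    """

    def __init__(self, backing_store, interval=1.0):
        self.store = backing_store      # dict-like durable store (assumed)
        self.interval = interval        # analogous to flush_interval
        self.dirty = {}                 # writes accumulated since last flush
        self.lock = threading.Lock()
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def write(self, key, value):
        # Soft-durability write: touches only memory, returns immediately.
        with self.lock:
            self.dirty[key] = value

    def flush_now(self):
        # One flush covers every write accumulated since the last flush.
        with self.lock:
            batch, self.dirty = self.dirty, {}
        for key, value in batch.items():
            self.store[key] = value     # single coalesced write per key

    def _flush_loop(self):
        while True:
            time.sleep(self.interval)
            self.flush_now()
```

Repeated writes to the same key between flushes collapse into one store write, which is where the reduction in write operations comes from.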
@srh srh force-pushed the sam/nextdelayed branch from b72700e to 011bea8 Compare October 13, 2017 17:00
@srh srh merged commit 011bea8 into rethinkdb:next Oct 13, 2017
@srh (Contributor, Author) commented Oct 13, 2017

I see no complaints -- in next with commit 011bea8.

@srh srh deleted the sam/nextdelayed branch October 13, 2017 17:02
adamierymenko added a commit to adamierymenko/rethinkdb that referenced this pull request Nov 8, 2017
Add credits for PR rethinkdb#6392
@adamierymenko mentioned this pull request Nov 8, 2017
srh pushed a commit that referenced this pull request Nov 17, 2017
Add credits for PR #6392
@adamierymenko commented:
FYI -- we finally got around to taking this live. Our writes dropped from ~200 MEGABYTES per second to around 400 KILOBYTES per second. That's with only two tables configured to have delayed flush, so the gain is not entirely from configuring that. This patch improves I/O write load so enormously that the previous level of write load could almost be considered a bug.

@adallow commented Jan 30, 2018

Hi, any idea when this will be released officially for RethinkDB? (Not sure if this is the right place to ask, but we would love this feature!) I assume it will come when what's in next becomes a release, but do you have any indication of when that may be? Many thanks for the great work on this.

@lbguilherme (Contributor) commented:

Looks like the current next will become v2.5. Since v2.4 has no ETA yet... this is the only way to go: https://github.com/srh/rethinkdb/releases/tag/v2.3.6-srh-extra

@srh (Contributor, Author) commented Jan 30, 2018

Note that a pretty nasty memory leak has been reported, so you might want to hold off trying it until an update.

@adallow commented Jan 30, 2018

@srh thanks for the heads up, will keep an eye out for updates. Appreciate your work.

@thelinuxlich commented:
Does this help with single-write performance? And for those who are using it in production, did you run into the memory leak?

@adallow commented May 17, 2018

Any news on this feature? It would be cool to have it in the next release.
