beaver dramatically falls behind watched logfile  #432

@nburunova

Description


I have a log file that is updated about 60 times per second with large lines, about 160 KB per line. I increased the read chunk size from 4096 to 40960 bytes (10x, in file tail.py, method _run_pass); beaver then read new lines faster, but not fast enough. The lag between the last log entry and beaver's processing of that line keeps growing, roughly 10 seconds of additional delay per minute of operation.
I've tested with transport type = stdout, format: raw.
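
For context, at ~60 lines/s x ~160 KB/line beaver has to sustain roughly 9-10 MB/s, so any per-chunk overhead adds up fast. Below is a minimal sketch of the general shape of such a chunked tail loop, not beaver's actual _run_pass implementation (the function and callback names are hypothetical):

```python
import time

CHUNK_SIZE = 40960  # raised from the 4096-byte default


def tail_pass(fh, callback):
    """Simplified chunked tail loop (a sketch, not beaver's real code).

    With ~160 KB lines and 4096-byte chunks, each line needs ~40 read()
    calls and ~40 buffer concatenations before it can be emitted.
    """
    buffer = b""
    while True:
        chunk = fh.read(CHUNK_SIZE)
        if not chunk:
            time.sleep(0.1)  # at EOF; wait for the writer to append more
            continue
        # bytes are immutable, so this copies the whole buffer each time;
        # with very long lines the cost per line grows quadratically
        buffer += chunk
        lines = buffer.split(b"\n")
        buffer = lines.pop()  # keep the trailing partial line for next pass
        for line in lines:
            callback(line)
```

If the real loop rebuilds an immutable buffer per chunk like this, a larger chunk size reduces the number of copies but does not remove the quadratic term; accumulating chunks in a list or bytearray and joining once per line would.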

Increasing number_of_consumer_processes to 2 or 4 doesn't help.
Increasing the read chunk size to 500 KB doesn't help either.

It is not a disk issue (the server has an SSD, and the I/O stats in Grafana look fine).

Why is beaver so slow at reading lines from the log?
Any hints on how to speed it up?
Could increasing discover_interval help?
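
For reference, the settings described above correspond to a config along these lines (a hypothetical beaver.conf; the watched path is made up and exact option placement may differ):

```ini
; Hypothetical beaver.conf reflecting the settings tried in this report
[beaver]
transport: stdout
format: raw
number_of_consumer_processes: 4   ; tried 2 and 4 with no improvement

[/var/log/app/big-lines.log]      ; made-up path to the watched file
```

On the last question: if discover_interval behaves like its name suggests, it governs how often beaver scans for new or rotated files to watch, not how fast an already-watched file is read, so increasing it is unlikely to help with this lag.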
