Poor performance in MultiProcessCollector with frequently changing PIDs #204

@manics


I'm using django-prometheus in multiprocess mode, and I've noticed the time to fetch metrics increases the longer the server has been running. Currently I've got 5414 *.db files in prometheus_multiproc_dir.

MultiProcessCollector.collect reads all of the db files, which I suspect is the bottleneck.

mark_process_dead only removes gauge files.

Do you think it'd be feasible to remove all files by copying the contents of type_{pid}.db files into a single type_old.db file? Or is this something that should be solved in django-prometheus?
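The growth described above can be illustrated with a small stdlib-only sketch. Note that `count_metric_files` is a hypothetical helper, and the simulated file names merely mimic the `type_{pid}.db` naming scheme in `prometheus_multiproc_dir`; the real files are mmap-backed and written by prometheus_client itself:

```python
import glob
import os
import tempfile

def count_metric_files(multiproc_dir):
    """Count the per-PID *.db files that MultiProcessCollector.collect()
    must open and merge on every single scrape."""
    return len(glob.glob(os.path.join(multiproc_dir, "*.db")))

# Simulate a server whose worker PIDs churn: each short-lived worker
# leaves behind files that mark_process_dead never cleans up (it only
# removes the gauge files).
multiproc_dir = tempfile.mkdtemp()
for pid in range(100, 110):
    for metric_type in ("counter", "histogram", "gauge_livesum"):
        path = os.path.join(multiproc_dir, f"{metric_type}_{pid}.db")
        open(path, "w").close()

print(count_metric_files(multiproc_dir))  # 30 files after only 10 dead workers
```

Since counter and histogram values only ever accumulate, folding each dead PID's files into a single `type_old.db` accumulator (as proposed above) would keep this count bounded regardless of how many workers have come and gone.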


