Merged
Finite automaton conversion #230
masklinn merged 6 commits into ua-parser:master from masklinn:finite-automaton-conversion on Oct 29, 2024
Conversation
We're not handling multiple versions of pypy (anymore) or graal, so don't bother with version numbers.
We're supposed to use `--extra-checks` instead, but it doesn't have a documented config setting. Just remove it. Cf. python/mypy#16189
As I discovered a while ago, finite-automaton engines are not very fond of large bounded repetitions. In re2 and regex, that mostly translates to increased memory consumption (e.g. in their default modes, converting `.*` to `.{0,500}` increases the pattern's size by 115x in re2 and 84x in regex; if a capture group is added on top, regex balloons to 219x). There is a performance impact as well, but it's high single digits to low double digits, in regex at least (didn't test re2).

However, as it turns out, Graal uses a JIT-ed DFA, and it *really* doesn't like these patterns: it spends a lot of time JIT-compiling them with no gain in performance (this is apparently the source of the extra 300% CPU use I could observe on what are purely single-threaded workloads, the JIT desperately trying to optimise regexes). Down-converting the regexes back to the sensible unbounded form increases performance by ~25%, though it doesn't seem to impact memory use.

So... do that: `fa_simplifier` is the same idea as ua-parser/uap-rust@29b9195 but from the Python side, and applied to graal and re2 (not regex, because it does that internally as linked above).

Also switch Graal over to the lazy builtins. It kinda spreads the cost, but it seems stupid to compile the regexes only to immediately swap (`fa_simplifier`) and recompile them... so don't do that, especially as I couldn't be arsed to make the replacement conditional (so every eager regex is recompiled, even though only those which actually got modified by `fa_simplifier` need it...).

Fixes ua-parser#228
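For illustration, here is a minimal sketch of the kind of rewrite `fa_simplifier` performs, not the package's actual implementation: large bounded repetitions of "match almost anything" atoms get folded back into `*`/`+`. The threshold and the atom classes handled below are assumptions made for the example.

```python
import re

# Bounded repetitions on a dot or character class, e.g. ".{0,500}" or
# "[^;]{1,250}", are what blow up DFA construction.  The threshold and the
# (naive) character-class matching are illustrative, not the library's rules.
_BOUNDED = re.compile(
    r"""
    (?P<atom>\.|\[[^\]]+\])        # a dot or a character class
    \{(?P<lo>\d+),(?P<hi>\d+)\}    # followed by a bounded repetition {lo,hi}
    """,
    re.VERBOSE,
)

def fa_simplifier(pattern: str, threshold: int = 100) -> str:
    """Replace large bounded repetitions with their * / + equivalents."""
    def repl(m: re.Match) -> str:
        lo, hi = int(m["lo"]), int(m["hi"])
        if hi - lo < threshold or lo > 1:
            return m.group(0)              # small or non-trivial lower bound: leave as-is
        return m["atom"] + ("*" if lo == 0 else "+")
    return _BOUNDED.sub(repl, pattern)

# ".{0,500}" -> ".*", "[^;]{1,250}" -> "[^;]+"
assert fa_simplifier(r"Foo/.{0,500} Bar") == r"Foo/.* Bar"
assert fa_simplifier(r"ver/[^;]{1,250};") == r"ver/[^;]+;"
```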
TRegex JIT-ing really doesn't like the UAP workload one bit. It basically uses 3 cores' worth of CPU to do nothing, as the runtime is the same with and without compilation. Sadge. Might as well cut it off, no sense wasting CPU time on the runners.
Force-pushed from ea735eb to 2588f8b
Not sure why I didn't do that when I merged it, but I think it's the best default, if available, which is unlikely since it requires a completely bespoke dep.
Also bump the default cache size up: on `bench`, the `basic` resolver's high-water marks are:

- 40MB with no cache, averaging 455µs/line
- 40.7MB with a 200-entry s3fifo, averaging 324µs/line
- 42.4MB with a 2000-entry s3fifo, averaging 191µs/line
- 44.2MB with a 5000-entry s3fifo, averaging 155µs/line
- 47.2MB with a 10000-entry s3fifo, averaging 134µs/line
- 53MB with a 20000-entry s3fifo, averaging 123µs/line

Either 2000 or 5000 seems like a pretty good default; the gains taper off afterwards as memory use increases sharply. Bump to 2000 to stay on the conservative side.
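The cache in question is an S3-FIFO, but the memory-vs-latency tradeoff in the numbers above can be illustrated with a plain bounded LRU wrapper around an arbitrary resolver. All names below (`LruCachingResolver`, `Resolver`) are invented for this sketch and are not ua-parser's API.

```python
from collections import OrderedDict
from typing import Callable

# `Resolver` stands in for anything expensive to call per user-agent string
# (pure-Python matcher, re2/regex filter tree, ...).  Illustrative only.
Resolver = Callable[[str], object]

class LruCachingResolver:
    """Bounded cache in front of a resolver: a bigger cache trades memory
    for a higher hit rate and therefore a lower average per-line cost."""

    def __init__(self, inner: Resolver, maxsize: int = 2000) -> None:
        self.inner = inner
        self.maxsize = maxsize
        self._entries: OrderedDict[str, object] = OrderedDict()

    def __call__(self, ua: str) -> object:
        if ua in self._entries:
            self._entries.move_to_end(ua)      # keep recently used entries hot
            return self._entries[ua]
        result = self.inner(ua)
        self._entries[ua] = result
        if len(self._entries) > self.maxsize:  # evict the least recently used
            self._entries.popitem(last=False)
        return result
```

On a repetitive workload like `bench`, raising `maxsize` mostly buys hit rate; past a few thousand entries the gains taper off while memory keeps growing, which is why 2000 is a reasonable conservative default.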
masklinn added a commit to masklinn/uap-python that referenced this pull request on Feb 1, 2025
Reported by @Rafiot: the lazy parser is not memoised. This has limited effect on the basic / pure-Python parser, as its initialisation is trivial, but it *significantly* impacts the re2 and regex parsers, as they need to process the regexes into a filter tree. The memoisation was mistakenly removed in ua-parser#230: while refactoring initialisation I removed the setting of the `parser` global.

- add a test to ensure the parser is correctly memoised, not re-instantiated every time
- reinstate setting the global
- add a mutex on `__getattr__`: it should only be hit on first access, and it avoids two threads creating an expensive parser at the same time (which is a waste of CPU)

Fixes ua-parser#253
masklinn added a commit that referenced this pull request on Feb 1, 2025
Reported by @Rafiot: the lazy parser is not memoised. This has limited effect on the basic / pure-Python parser, as its initialisation is trivial, but it *significantly* impacts the re2 and regex parsers, as they need to process the regexes into a filter tree. The memoisation was mistakenly removed in #230: while refactoring initialisation I removed the setting of the `parser` global.

- add a test to ensure the parser is correctly memoised, not re-instantiated every time
- reinstate setting the global
- add a mutex on `__getattr__`: it should only be hit on first access, and it avoids two threads creating an expensive parser at the same time (which is a waste of CPU)

Fixes #253
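A rough sketch of the shape of the fix described above, assuming a module-level `__getattr__` (PEP 562) that builds the parser lazily; the helper names and placeholder construction are illustrative, not the package's actual code.

```python
import threading

_lock = threading.Lock()

def _build_parser():
    """Stand-in for the expensive construction (e.g. processing the regexes
    into a re2/regex filter tree)."""
    return object()  # placeholder; the real code would build a Parser here

def __getattr__(name: str):
    # PEP 562: this is only called when normal module attribute lookup fails,
    # i.e. before `parser` has been set.  Setting the global memoises the
    # instance, so later accesses never come back through __getattr__.
    if name == "parser":
        with _lock:  # avoid two threads racing to build the parser on first access
            if "parser" not in globals():
                globals()["parser"] = _build_parser()
        return globals()["parser"]
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```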