Add MessagePackPrimitives class #1986
Conversation
Force-pushed from 8a63bce to 97a95c3
The perf is significantly slower after this change. I haven't studied the ETL trace, but my guess is we won't find an efficient way to call from the reader/writer structs into the …

@neuecc Do you have any time and interest in mitigating the perf regressions here? Should we abandon the PR?
The … It's the …
Thanks, I'll check now. |
I added:

```csharp
[Benchmark(OperationsPerInvoke = BufferLength)]
[BenchmarkCategory("2.0")]
public void ReadBytePrimitiveInt32()
{
    int offset = 0;
    for (int i = 0; i < this.buffer.Length; i++)
    {
        newmsgpack::MessagePack.MessagePackPrimitives.TryReadInt32(this.buffer.AsSpan(offset), out int value, out var readSize);
        offset += readSize;
    }
}
```

I think the overhead simply doubled. v1 pushed the branching into an array lookup and a virtual method call, which kept the method body very small:

```csharp
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int ReadInt32(byte[] bytes, int offset, out int readSize)
{
    return int32Decoders[bytes[offset]].Read(bytes, offset, out readSize);
}
```

I tried various approaches, but it seems difficult to achieve performance equivalent to this.

Note that for the benchmark measurements I used WarmupCount 1 and IterationCount 1 to speed up the iterations:

```csharp
class BenchmarkConfig : ManualConfig
{
    public BenchmarkConfig()
    {
        AddJob(Job.ShortRun.WithWarmupCount(1).WithIterationCount(1));
    }
}

[Config(typeof(BenchmarkConfig))]
[GroupBenchmarksBy(BenchmarkLogicalGroupRule.ByCategory)]
[CategoriesColumn]
public class MessagePackReaderBenchmark
```

By the way, the excessive number of overloads was difficult to work with.
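For context, the v1 trick quoted above amounts to a 256-entry table indexed by the msgpack format byte, so the public method compiles down to one array load plus one interface call, with all format branching baked into the table at startup. The following is a minimal sketch of that shape only; the type names (`IInt32Decoder`, `FixInt32Decoder`, etc.) are illustrative and not the library's actual types, and only two format bytes are wired up:

```csharp
// Sketch of a v1-style decoder jump table (illustrative names, not the real API).
interface IInt32Decoder
{
    int Read(byte[] bytes, int offset, out int readSize);
}

sealed class FixInt32Decoder : IInt32Decoder
{
    // Positive fixint (0x00-0x7f): the format byte is the value itself.
    public int Read(byte[] bytes, int offset, out int readSize)
    {
        readSize = 1;
        return bytes[offset];
    }
}

sealed class Int32Decoder : IInt32Decoder
{
    // 0xd2: a big-endian 4-byte signed integer follows the format byte.
    public int Read(byte[] bytes, int offset, out int readSize)
    {
        readSize = 5;
        return (bytes[offset + 1] << 24) | (bytes[offset + 2] << 16)
             | (bytes[offset + 3] << 8) | bytes[offset + 4];
    }
}

static class PrimitivesSketch
{
    static readonly IInt32Decoder[] int32Decoders = new IInt32Decoder[256];

    static PrimitivesSketch()
    {
        var fix = new FixInt32Decoder();
        for (int i = 0x00; i <= 0x7f; i++)
        {
            int32Decoders[i] = fix; // positive fixint
        }
        int32Decoders[0xd2] = new Int32Decoder(); // int32
        // The remaining format bytes would map to other decoders or throw.
    }

    // The branching lives in the table, so this body stays tiny and inlines well.
    public static int ReadInt32(byte[] bytes, int offset, out int readSize)
        => int32Decoders[bytes[offset]].Read(bytes, offset, out readSize);
}
```

The design point is that the hot method has no data-dependent branches of its own; the format byte selects the decoder directly, trading a predictable indirect call for a chain of comparisons.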
@neuecc Excellent perf hints. Thank you. I have a prototype with these numbers:
That's better than v2's original perf before my change. Yay.
| Method                 | Categories | Mean     | Error     | StdDev    | Ratio | RatioSD |
|----------------------- |----------- |---------:|----------:|----------:|------:|--------:|
| ReadByte10             | 1.0        | 1.085 ns | 0.0053 ns | 0.0047 ns |     ? |       ? |
|                        |            |          |           |           |       |         |
| ReadBytePrimitiveInt32 | 2.0        | 1.243 ns | 0.0025 ns | 0.0019 ns |  0.48 |    0.00 |
| ReadByte20             | 2.0        | 2.582 ns | 0.0072 ns | 0.0060 ns |  1.00 |    0.00 |
I've updated the benchmark tables in the PR description. Read speed is awesome. Write speed is regressed. Based on what I've learned in speeding up read speeds, I think I know why.
I think this is good to go at this point. Some of the higher-level perf tables suggest regressions in "after", but note that the whole table is regressed, including rows irrelevant to these changes such as Hyperion; if you take that into account and adjust, it looks about right. And the microbenchmarks look either better than or close to the original.
This avoids a name conflict with the result of `PipeReader.ReadAsync`
Perfect!
I was thinking of not doing this rewrite since it would be a hassle, so I'm happy to see it implemented.
If this is going in, then there's probably no need to rush to include #2054, which has some incomplete behavior (although Span support should be added eventually).
Closes #1985

And use it from `MessagePackWriter` and `MessagePackReader`.

Trivia: 16 files changed, 3301 insertions(+), 1139 deletions(-)
Perf comparisons

All of these are on .NET 8. Before/after benchmark tables were included for each of: Deserialize, Serialize, MessagePackReader, and MessagePackWriter.