BUG: Converting large integer to np.longdouble can raise ValueError #28722
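For context, a minimal reproduction sketch of the reported behavior, assuming the old conversion went through str() and so could hit CPython's default 4300-digit integer-to-string limit (the digit threshold and exact message depend on the Python version and configuration):

```python
import numpy as np

# Sketch of the reported bug: before this PR, converting a very large
# Python int to np.longdouble could raise ValueError instead of a
# numeric result or OverflowError.
big = 10**5000  # well over 4300 decimal digits

try:
    print(np.longdouble(big))
except ValueError as exc:
    # pre-fix failure mode on CPython versions with the int-to-str limit
    print("ValueError:", exc)
except OverflowError as exc:
    # behavior after this PR when the value exceeds the platform's range
    print("OverflowError:", exc)
```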
base: main
Conversation
I don't know why some of the tests are failing; I think it is because of rounding errors, but I didn't change anything that would affect what the tests are checking.
numpy/_core/tests/test_longdouble.py (outdated)

```python
def test_longdouble_from_int_rounding(sign, int_val):
    int_val *= sign
    flt_val = float(int_val)
    assert np.isclose(np.longdouble(int_val), flt_val)
```
Just a minor comment in passing: for testing, assert_allclose() will typically give you better error messages when there is a failure (since it is purpose-built for testing, of course).
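For reference, a minimal sketch of the test with that suggestion applied; the parametrization values here are illustrative assumptions, not the PR's actual parameters:

```python
import numpy as np
import pytest
from numpy.testing import assert_allclose

# Illustrative parameters only; the real test is parametrized differently.
@pytest.mark.parametrize("sign", [1, -1])
@pytest.mark.parametrize("int_val", [2**64 + 1, 10**30])
def test_longdouble_from_int_rounding(sign, int_val):
    int_val *= sign
    flt_val = float(int_val)
    # assert_allclose reports both values on failure, unlike a bare
    # assert on np.isclose
    assert_allclose(np.longdouble(int_val), flt_val)
```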
Thanks for the suggestion, will do
Thanks, but to be honest, this all looks overly complicated to me for something where interest isn't even very high. Why is it so complicated? Do you even need to do anything at the bit level? I can understand working with the mantissa and exponent of the integer, but I would hope we can keep it at that at most. If this doesn't condense down to a fraction of the complexity, I think this will be a very hard to impossible sell. (There are other issues in the mantissa code as well, but we can work on that later.)
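To illustrate the mantissa/exponent idea in plain Python (this is only a rough sketch of the concept, not this PR's implementation; the 64-bit mantissa width assumes x86 extended precision, and the sketch truncates rather than rounds):

```python
import numpy as np

def int_to_longdouble_sketch(value, mant_bits=64):
    """Convert a Python int to np.longdouble via mantissa + exponent."""
    if value == 0:
        return np.longdouble(0)
    sign = -1 if value < 0 else 1
    value = abs(value)
    # Keep only as many leading bits as the target mantissa can hold exactly.
    shift = max(value.bit_length() - mant_bits, 0)
    mantissa = value >> shift
    # Scale back up; ldexp(m, e) computes m * 2**e in the target precision.
    return sign * np.ldexp(np.longdouble(mantissa), shift)

print(int_to_longdouble_sketch(10**40))
print(np.longdouble(10**40))
```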
I understand the interest is not that high, but it was something that I have never worked with before and I wanted to challenge myself.
I also understand that the code is complicated, because this is a complicated problem. Most of my issues came from reading the
I don't understand this question. I do use bit-level operations, although the general case doesn't, because there is no standard for what a long double looks like across platforms.
I understand, but it would be a big help if you could point out the more complex parts. But I do feel this is less complex than the code before.
I would love to know the issues you've found, as I know the code is not perfect. Side comment: the bitwise code was made before the general one, as I made it to try to understand how it all works; then during testing I found it was up to 3x faster than the general code. Thanks for the feedback @seberg, I'm still kind of new to contributing to numpy, so all feedback is welcome.
For one, you are neither checking for Python errors, nor doing reference counting.
OK, I see you have a generic version, but I suspect the takeaway should be that the generic version is doing something odd if it is 3x slower. For small numbers it's (basically) a trivial cast, and for large numbers all that Python work you are doing should be far slower anyway. I.e., I would expect the Python computation there to be the interesting part that we should spend time thinking about. For example, use ...
@seberg I can remove the part that has the bitwise operations and just leave the general case, if that would be best?
Yes, I think I was clear that I don't want to consider any non-generic solution; it'll just be a potential bug magnet with very little gain. You might convince someone in a follow-up PR if there is a massive speed-up for a real-world use case.
@seberg I removed all non-generic solutions.
Fixes issue #28639

np.longdouble now raises OverflowError when converting from an integer that is too big for the platform's long double. Platforms with 80- and 128-bit long doubles can handle integers with more than 4300 digits. The conversion no longer goes through a string, so it is faster. All Python functions used are from the CPython Stable ABI.