BUG: Converting large integer to np.longdouble can raise ValueError #28722

Open · wants to merge 17 commits into base: main
Conversation

Tontonio3 (Contributor)

Fixes issue #28639

np.longdouble now raises OverflowError when converting an integer that is too big for the platform's long double. On platforms with 80-bit or 128-bit long doubles, it can handle integers with more than 4300 digits. The integer is no longer converted to a string first, so the conversion is also faster.

All Python functions that were used are from the CPython Stable ABI.
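
A minimal sketch of the intended behaviour (illustrative only, not code from this PR), assuming a platform whose long double is 80-bit extended or 128-bit:

import numpy as np

# More than 4300 decimal digits: the old path went through str() and could
# raise ValueError via CPython's int-to-str digit limit; the new path is
# expected to convert directly as long as the value fits in a long double.
print(np.longdouble(10**4500))

# Too big for an 80-bit or 128-bit long double (max is about 1.19e4932):
try:
    np.longdouble(10**5000)
except OverflowError as exc:
    print("OverflowError:", exc)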

Tontonio3 (Contributor, Author) commented Apr 16, 2025

I don't know why some of the tests are failing. I think it is because of rounding errors, but I didn't change anything that would affect what the test is checking:

import numpy as np

def test_longdouble_from_int_rounding(sign, int_val):
    int_val *= sign
    flt_val = float(int_val)
    assert np.isclose(np.longdouble(int_val), flt_val)

Contributor
Just a minor comment in passing: for testing, using assert_allclose() will typically give you better error messages when there is a failure (since it is purpose-built for testing, of course).
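
A sketch of that suggestion applied to the test above (assuming the same parametrized signature; illustrative, not the PR's final test):

import numpy as np
from numpy.testing import assert_allclose

def test_longdouble_from_int_rounding(sign, int_val):
    int_val *= sign
    flt_val = float(int_val)
    # On failure, assert_allclose reports both values and the mismatch,
    # which is more informative than a bare assert on np.isclose().
    assert_allclose(np.longdouble(int_val), flt_val)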

Tontonio3 (Contributor, Author)

Thanks for the suggestion, will do

seberg (Member) commented Apr 24, 2025

Thanks, but to be honest, this all looks overly complicated to me for something where the interest isn't even very high.

Why is it so complicated? Do you even need to do anything at the bit level? I can understand working with the mantissa and exponent of the integer, but I would hope we can keep it to that at most. If this doesn't condense down to a fraction of the complexity, I think this will be a very hard, maybe impossible, sell.

(There are also other issues in the mantissa code, but we can work on that later.)
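
For illustration, here is a Python-level sketch of the mantissa-and-exponent idea described above (not the code in this PR; the helper name and the 64-bit mantissa width are assumptions): keep only the top bits of the integer and scale back up with ldexp.

import numpy as np

def int_to_longdouble_sketch(n, mant_bits=64):
    # Truncate n to its top mant_bits bits (dropped bits are not rounded)
    # and remember how far the value was shifted down.
    sign = -1 if n < 0 else 1
    n = abs(n)
    shift = max(n.bit_length() - mant_bits, 0)
    mantissa = n >> shift
    # Scale back up in long double precision: ldexp(m, e) computes m * 2**e.
    return sign * np.ldexp(np.longdouble(mantissa), shift)

# On an 80-bit long double platform this handles values far past 4300 digits:
print(int_to_longdouble_sketch(10**4400))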

Tontonio3 (Contributor, Author)

> Thanks, but to be honest, this all looks overly complicated to me for something where the interest isn't even very high.

I understand the interest is not that high, but it was something I had never worked with before and I wanted to challenge myself.

> Why is it so complicated?

I also understand that the code is complicated, but that is because this is a complicated problem. Most of my issues came from reading the Python long and extracting the correct information from an arbitrarily sized number (I've seen a Python long of 3 kilobytes). That's why each helper function has a clear purpose.
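
For a sense of scale, a small Python-level illustration (not the PR's C code) of the kind of object those helpers have to read:

n = 10**4000                               # an arbitrarily sized Python int
nbits = n.bit_length()                     # 13288 bits for this value
raw = n.to_bytes((nbits + 7) // 8, "little")
print(nbits, len(raw))                     # 13288 bits, 1661 bytes of magnitude data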

> Do you even need to do anything at the bit level?

I don't quite understand this question. I do use bit-level operations, although the general case doesn't, because there is no standard for what a long double is or how it is stored (it just means something bigger than a double).

> If this doesn't condense down to a fraction of the complexity, I think this will be a very hard, maybe impossible, sell.

I understand, but it would be a big help if you could point out the more complex parts. I do feel this is less complex than the code that was there before.

> (There are also other issues in the mantissa code, but we can work on that later.)

I would love to know the issues you've found, as I know the code is not perfect.

Side comment: the bitwise code was written before the general one; I made it to try to understand how it all works. Then during testing I found it was up to 3x faster than the general code.

Thanks for the feedback @seberg. I'm still fairly new to contributing to numpy, so all feedback is welcome.

seberg (Member) commented Apr 25, 2025

> I would love to know the issues you've found, as I know the code is not perfect.

For one, you are neither checking for Python errors nor doing reference counting.

> Then during testing I found it was up to 3x faster than the general code.

OK, I see you have a generic version, but I suspect the takeaway should be that the generic version is doing something odd if it is 3x slower. For small numbers it's basically a trivial cast, and for large numbers all the Python work you are doing should be far slower anyway. That is, I would expect the Python computation there to be the interesting part that we should spend time thinking about.

For example, use ldexpl rather than expl. But even if huge numbers were 3x slower, I wouldn't care; the goal is for this to be much faster and to avoid a crash. But if it is much more complex, then there is little point.
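
A small Python-level illustration of why ldexp-style scaling is preferable to going through exp/log (the values here are arbitrary, chosen only for the demonstration):

import numpy as np

m = np.longdouble(12345678901234567)
e = 900

scaled = np.ldexp(m, e)                    # m * 2**e via direct exponent adjustment
via_exp = m * np.exp(np.longdouble(e) * np.log(np.longdouble(2)))  # rounds at each step

print(scaled == via_exp)                   # typically False: the exp/log route loses low-order bits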

Tontonio3 (Contributor, Author)

@seberg I can remove the part that has the bitwise operations and just leave the general case, if that would be best?

seberg (Member) commented May 13, 2025

> @seberg I can remove the part that has the bitwise operations and just leave the general case, if that would be best?

Yes, I think I was clear that I don't want to consider any non-generic solution; it'll just be a potential bug magnet with very little gain. You might convince someone in a follow-up PR if there is a massive speed-up for a real-world use case.

Tontonio3 (Contributor, Author)

@seberg I removed any non-generic solutions.
