ENH: Implement np.floating.as_integer_ratio
#10741
Conversation
Force-pushed from 61cbcf1 to 38924fb
Beyond the comment about the test, I think this needs an entry in the release notes - I presume this is mostly there for compatibility with CPython's float?
Force-pushed from 38924fb to 49455c9
Updated with docstring and release notes.
I assume this is inspired by the debate raging in python core: … The argument there is actually about adding `int.as_integer_ratio`.

The whole topic seems very unimportant to me, because who even cares about `as_integer_ratio`? I'm wincing at adding more incompatibilities between scalars and 0d arrays. OTOH I don't know that eliminating those is actually realistic, and we already have this on `np.float64`, which inherits it from the builtin `float`.

It's also unusual and surprising that this returns Python ints rather than numpy ints, but OTOH I guess this is necessary because the returned values might not fit into an `int64`.

So kinda mixed feelings about this overall. I guess my overall impression is an emphatic meh? Am I missing important use cases?
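To make the int-width point concrete, here is a quick sketch using the builtin `float`, whose `as_integer_ratio` this PR mirrors; for large magnitudes the numerator needs far more bits than any fixed-width numpy integer offers:

```python
>>> n, d = (2.0 ** 1000).as_integer_ratio()
>>> (n, d) == (2 ** 1000, 1)
True
>>> n.bit_length()  # far too wide for an int64
1001
```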
Oh, on further examination it looks like that started out as a debate about the …
I saw something like that elsewhere, but decided to wait to see what CPython does first. As they point out, most of the value is in consistency - and what we do right now is consistent with CPython!
I don't think this is true for …
Yep, otherwise we may as well implement …
I'm hoping that eventually we'll solve the dtype-specific-methods-on-arrays problem.
Wait, but wasn't that the whole point of the Dragon4 work, that …
I don't see how any solution is really possible, since if we allow arbitrary methods to be added to arrays then it means we can never add any new methods to arrays ourselves without potentially breaking backcompat :-/.
Sorry, I realized this is probably just spitting out noise, but since I had started writing, I did not want to delete it again ;). My (probably naive) gut feeling on dtype specific methods is to allow something like an …

The main "oddity" about compatibility is in my opinion not that 0D arrays do not behave like scalars, but that scalars have to pretend they are 0D arrays, because we convert 0D arrays to scalars for good reason almost everywhere. It seems to me that this incompatibility is the other way around though, so I do not really see much of an issue ;). I would almost like something like: …
Now, will people be able to block our methods? Sure, but only if they subclass our types, and then we could possibly even implement a warning that a new dtype method is being/will be shadowed.
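As a purely hypothetical sketch, with an invented helper name, of how such dtype-specific dispatch might look (this is not NumPy API, just an illustration of the idea):

```python
import numpy as np

def call_dtype_method(arr, name, *args, **kwargs):
    # Hypothetical helper: look the method up on the dtype's scalar type
    # and apply it to the unwrapped 0-d value.
    method = getattr(arr.dtype.type, name)   # e.g. np.float16.as_integer_ratio
    return method(arr[()], *args, **kwargs)  # arr[()] unwraps a 0-d array to a scalar

# e.g. call_dtype_method(np.array(0.875, dtype=np.float16), "as_integer_ratio")
# would return (7, 8) on a NumPy that ships this PR.
```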
No, I think the point was that `str` gives the shortest string that uniquely round-trips, not the exact decimal value. For example:

```python
>>> import numpy as np
>>> from fractions import Fraction
>>> f16 = np.float16(2.1)
>>> str(f16)
'2.1'
>>> str(np.longdouble(f16))  # the exact value; '2.1' above was enough for uniqueness
'2.099609375'
>>> f_bad = Fraction(str(f16)); f_bad
Fraction(21, 10)
>>> f_bad == f16
False
>>> f_good = Fraction(*f16.as_integer_ratio()); f_good
Fraction(1075, 512)
>>> f_good == f16
True
```
One of the incentives behind this PR was to be able to write better tests for …
Any more thoughts on this? I don't think that thinking about dtype-specific array methods is relevant here - we already have … The only argument I can see against this is maintenance cost, and I think that's paid for by a better ability to debug precision issues with …
I think we might as well have it.
@charris, any opinions? The CI failure is garbage here.
Should/Did this hit the mailing list? Seems like a simple case of adapting to CPython's float type. In any case it needs updating for 1.16.
Force-pushed from 69fd573 to 0a015e5
Python is now getting `int.as_integer_ratio`.
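For reference, the builtin behavior that landed in Python 3.8:

```python
>>> (10).as_integer_ratio()
(10, 1)
>>> (-3).as_integer_ratio()
(-3, 1)
```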
Guess I now need to update this for 1.17...
I'd be quite happy to merge this once rebased.
This matches the builtin `float.as_integer_ratio` and (in recent Python versions) `int.as_integer_ratio`.
Force-pushed from 47c8700 to 79799b3
Rebased, let's see if tests still pass.
Implementation looks all OK. For the tests, I'd prefer replacing the 1000 always-the-same random samples with a few well-chosen ones.
I started refactoring the test a little locally based on Marten's feedback. Maybe I'll push a separate commit for Eric to revise.
I pushed in a draft commit that breaks the huge nested-loop test into smaller tests, for clearer failure reporting and less nesting in a single test. Not sure if Eric will like that, but we can always just gut the commit. I didn't try to tackle the case generation yet though.
I think a lot of the test structure was designed to resemble the upstream tests this emulates. I'll have to check how much I modified them.
This is the test I copied from:

```python
def test_floatasratio(self):
    for f, ratio in [
            (0.875, (7, 8)),
            (-0.875, (-7, 8)),
            (0.0, (0, 1)),
            (11.5, (23, 2)),
    ]:
        self.assertEqual(f.as_integer_ratio(), ratio)

    for i in range(10000):
        f = random.random()
        f *= 10 ** random.randint(-100, 100)
        n, d = f.as_integer_ratio()
        self.assertEqual(float(n).__truediv__(d), f)

    R = fractions.Fraction
    self.assertEqual(R(0, 1),
                     R(*float(0.0).as_integer_ratio()))
    self.assertEqual(R(5, 2),
                     R(*float(2.5).as_integer_ratio()))
    self.assertEqual(R(1, 2),
                     R(*float(0.5).as_integer_ratio()))
    self.assertEqual(R(4728779608739021, 2251799813685248),
                     R(*float(2.1).as_integer_ratio()))
    self.assertEqual(R(-4728779608739021, 2251799813685248),
                     R(*float(-2.1).as_integer_ratio()))
    self.assertEqual(R(-2100, 1),
                     R(*float(-2100.0).as_integer_ratio()))

    self.assertRaises(OverflowError, float('inf').as_integer_ratio)
    self.assertRaises(OverflowError, float('-inf').as_integer_ratio)
    self.assertRaises(ValueError, float('nan').as_integer_ratio)
```
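A minimal sketch of the "few well-chosen samples" direction, assuming a NumPy build that already ships `floating.as_integer_ratio`; the test names and structure here are illustrative, not the PR's actual code:

```python
import numpy as np
import pytest

FLOAT_TYPES = [np.float16, np.float32, np.float64, np.longdouble]

@pytest.mark.parametrize("ftype", FLOAT_TYPES)
@pytest.mark.parametrize("f, ratio", [
    (0.875, (7, 8)),     # exactly representable even in float16
    (-0.875, (-7, 8)),
    (0.0, (0, 1)),
    (11.5, (23, 2)),
])
def test_small_exact_values(ftype, f, ratio):
    assert ftype(f).as_integer_ratio() == ratio

@pytest.mark.parametrize("ftype", FLOAT_TYPES)
def test_special_values(ftype):
    # mirrors the inf/nan cases at the end of the CPython test above
    with pytest.raises(OverflowError):
        ftype(np.inf).as_integer_ratio()
    with pytest.raises(OverflowError):
        ftype(-np.inf).as_integer_ratio()
    with pytest.raises(ValueError):
        ftype(np.nan).as_integer_ratio()
```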
Hmm, yes, I think I complained about the random number test before ;) which is probably why you have only 1000, not 10000. Still do not see any point in it.
If we're breaking up the test, it would make sense to me to create a …
Force-pushed from 229fecc to ddea169
I've done this too now. I suspect Eric or Marten will still have some thoughts on those hard-coded test examples I've added in now though. At first I thought the integer cases didn't seem well-sampled, but I think it is just deceptive because of …
Ah, Windows failures on the hardcoded test values.
Force-pushed from ddea169 to 92f0187
Well, at least we're seeing something useful from the ppc64le CI here.
The case that fails: …
Force-pushed from 92f0187 to 4bebf05
Thanks for the test cleanup @tylerjereddy - I should do some more reading on the hypothesis module you're using to help with these. I amended your last commit to copy the … Feel free to tweak them some more, but I think either way this looks good to me.
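For anyone else unfamiliar with hypothesis, a property-based version of the exactness check might look roughly like this; a sketch under assumed strategy settings, not the code that was actually committed:

```python
from fractions import Fraction

import numpy as np
from hypothesis import given
import hypothesis.strategies as st

# width=32 restricts generated values to ones exactly representable as
# float32, so the Fraction comparison below must hold exactly.
@given(st.floats(allow_nan=False, allow_infinity=False, width=32))
def test_float32_ratio_is_exact(x):
    n, d = np.float32(x).as_integer_ratio()
    assert Fraction(n, d) == Fraction(float(np.float32(x)))
```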
Ok, the code changes were already approved by a core dev, modulo testing refinements, which are now complete. So, in it goes. Thanks Eric!
Thanks Tyler!
The original incentive here was to port tests from CPython for #9963
Fixes #9969