Conversation


@Fokko Fokko commented Jan 21, 2025

Second attempt of #1539

This was already being discussed back here: #208 (comment)

This PR changes from doing a sort and then a single pass over the table to an approach where we determine the unique partition tuples and then filter on them individually.

Fixes #1491

This fixes the issue because the sort caused buffers to be joined to the point where they would overflow in Arrow. I think this is an issue on the Arrow side: it should automatically break the data up into smaller buffers. The `combine_chunks` method does this correctly.
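For illustration, a minimal sketch of that approach in plain PyArrow (this is not the actual PyIceberg code; the function name and the simplified null handling are mine):

```python
import pyarrow as pa
import pyarrow.compute as pc

def split_by_partition(table: pa.Table, partition_cols: list[str]) -> list[pa.Table]:
    # Determine the unique partition tuples with a group_by and no aggregates.
    unique = table.group_by(partition_cols).aggregate([])
    parts = []
    for row in unique.to_pylist():
        # Build a mask selecting exactly this partition tuple.
        # (Null partition values would need pc.is_null handling; omitted here.)
        mask = None
        for col, value in row.items():
            cond = pc.equal(table[col], pa.scalar(value))
            mask = cond if mask is None else pc.and_(mask, cond)
        parts.append(table.filter(mask))
    return parts
```

Each filtered sub-table keeps the original chunking, so no oversized contiguous buffer is ever materialized.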

Now:

```
Run 0 took: 0.42877754200890195
Run 1 took: 0.2507691659993725
Run 2 took: 0.24833179199777078
Run 3 took: 0.24401691700040828
Run 4 took: 0.2419595829996979
Average runtime of 0.28 seconds
```

Before:

```
Run 0 took: 1.0768639159941813
Run 1 took: 0.8784021250030492
Run 2 took: 0.8486490420036716
Run 3 took: 0.8614017910003895
Run 4 took: 0.8497851670108503
Average runtime of 0.9 seconds
```

So it comes with a nice speedup as well :)
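For reference, a tiny harness along these lines would produce output in that shape (a hypothetical sketch, not the actual benchmark added in this PR):

```python
import statistics
import time

def bench(fn, runs=5):
    times = []
    for i in range(runs):
        start = time.perf_counter()
        fn()  # the workload under test
        times.append(time.perf_counter() - start)
        print(f"Run {i} took: {times[-1]}")
    print(f"Average runtime of {statistics.mean(times):.2} seconds")

# bench(lambda: tbl.overwrite(df))  # hypothetical partitioned-write workload
```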

@Fokko Fokko requested a review from kevinjqliu January 21, 2025 14:30
@Fokko Fokko force-pushed the fd-fix-overflowing-buffer branch from 855d121 to cafd39d on January 21, 2025 14:32
@kevinjqliu commented:

```
make: *** [Makefile:55: test-integration] Aborted (core dumped)
```

uh oh

@kevinjqliu left a comment:

LGTM

@Fokko Fokko force-pushed the fd-fix-overflowing-buffer branch from 19eeb45 to 9bf6cda on January 21, 2025 19:26
@Fokko Fokko force-pushed the fd-fix-overflowing-buffer branch from 9bf6cda to 7442c41 on January 21, 2025 19:29

Fokko commented Jan 21, 2025

@kevinjqliu I think the test is a bit too much; according to your comment here #1539 (comment), the test allocates almost 5 GB 😀

@kevinjqliu commented:

2^32 (4_294_967_296) is around 4 GB; we just need to test a scenario greater than that.
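A minimal sketch of a scenario in that range, in plain PyArrow (sizes are illustrative; the five chunks share one backing buffer, so resident memory stays near 1 GiB):

```python
import pyarrow as pa

chunk = pa.array(["x" * 1024] * (1024 * 1024))  # ~1 GiB of string values
big = pa.chunked_array([chunk] * 5)             # ~5 GiB logically; chunks share memory
table = pa.table({"data": big})

# pa.string() stores offsets as signed 32-bit integers, so fusing every chunk
# into one contiguous array overflows:
# pa.concat_arrays(big.chunks)  # raises pyarrow.lib.ArrowInvalid (offset overflow)

# combine_chunks, as noted in the PR description, breaks the result up correctly:
table.combine_chunks()
```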

Comment on lines 416 to 419:

```python
if not isinstance(value, int):
    # When adding files, it can be that we still need to convert from logical types to physical types
    value = _to_partition_representation(iceberg_type, value)
transformed_value = partition_field.transform.transform(iceberg_type)(value)
```
@Fokko (author) commented:

This is causing bugs; I'm going to revisit this to fix it properly.

@Fokko (author) commented:

Yes, so I got to the bottom of it. It has to do with the return types of the transforms: e.g. when we apply the bucket transform, the result is always an int, which is great. The problem is with the identity transform, where the destination type is equal to the source type. So when a date comes in, it also comes out.

I think in the end it is better to remove `_to_partition_representation` and see if we can consolidate this somewhere, but that's a different PR.
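A small sketch of that difference, reusing the `transform(iceberg_type)(value)` call pattern from the snippet above (the values and bucket count are illustrative):

```python
from pyiceberg.transforms import BucketTransform, IdentityTransform
from pyiceberg.types import DateType

days = 19734  # a date in its physical representation: days since epoch

# The bucket transform always returns an int, regardless of the source type.
bucket = BucketTransform(num_buckets=16).transform(DateType())(days)
print(type(bucket))  # <class 'int'>

# The identity transform returns its input unchanged, so whatever comes in
# (e.g. a datetime.date instead of an int) also comes out.
identity = IdentityTransform().transform(DateType())(days)
print(identity)  # 19734
```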

@kevinjqliu commented:

> So when a date comes in, it also comes out.

Is it due to not having support for a datetime literal? #1542

@kevinjqliu commented:

Also, if it's just for adding files, perhaps we can do something special just for that path.

@Fokko (author) commented:

> Also, if it's just for adding files, perhaps we can do something special just for that path.

Yes, that's exactly what I went for. I think we can simplify the logic in subsequent PRs :)

@Fokko Fokko merged commit 36d383d into apache:main Jan 23, 2025
7 checks passed
@Fokko Fokko deleted the fd-fix-overflowing-buffer branch January 23, 2025 06:50
Fokko pushed a commit that referenced this pull request Jan 24, 2025
Following up #1555, which commented out tests in
`tests/integration/test_partitioning_key.py`
This PR uncomments those tests; they can now run successfully.
Fokko added a commit that referenced this pull request Apr 16, 2025
# Rationale for this change

Found out I broke this myself after doing a `git bisect`:

```
36d383d is the first bad commit
commit 36d383d
Author: Fokko Driesprong <fokko@apache.org>
Date:   Thu Jan 23 07:50:54 2025 +0100

    PyArrow: Avoid buffer-overflow by avoid doing a sort (#1555)
    
    Second attempt of #1539
    
    This was already being discussed back here:
    #208 (comment)
    
    This PR changes from doing a sort, and then a single pass over the table
    to the approach where we determine the unique partition tuples filter on
    them individually.
    
    Fixes #1491
    
    Because the sort caused buffers to be joined where it would overflow in
    Arrow. I think this is an issue on the Arrow side, and it should
    automatically break up into smaller buffers. The `combine_chunks` method
    does this correctly.
    
    Now:
    
    ```
    0.42877754200890195
    Run 1 took: 0.2507691659993725
    Run 2 took: 0.24833179199777078
    Run 3 took: 0.24401691700040828
    Run 4 took: 0.2419595829996979
    Average runtime of 0.28 seconds
    ```
    
    Before:
    
    ```
    Run 0 took: 1.0768639159941813
    Run 1 took: 0.8784021250030492
    Run 2 took: 0.8486490420036716
    Run 3 took: 0.8614017910003895
    Run 4 took: 0.8497851670108503
    Average runtime of 0.9 seconds
    ```
    
    So it comes with a nice speedup as well :)
    
    ---------
    
    Co-authored-by: Kevin Liu <kevinjqliu@users.noreply.github.com>

 pyiceberg/io/pyarrow.py                    |  129 ++-
 pyiceberg/partitioning.py                  |   39 +-
 pyiceberg/table/__init__.py                |    6 +-
 pyproject.toml                             |    1 +
 tests/benchmark/test_benchmark.py          |   72 ++
 tests/integration/test_partitioning_key.py | 1299 ++++++++++++++--------------
 tests/table/test_locations.py              |    2 +-
 7 files changed, 805 insertions(+), 743 deletions(-)
 create mode 100644 tests/benchmark/test_benchmark.py
```

Closes #1917


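For context, a `git bisect` session along these lines produces the "first bad commit" line quoted above (the good ref below is a placeholder, not the one actually used):

```
git bisect start
git bisect bad HEAD               # a commit where the overwrite fails
git bisect good <known-good-ref>  # placeholder for the last working commit
# rerun the failing case at each step and mark it:
#   git bisect good    # or: git bisect bad
# ...until git prints: 36d383d is the first bad commit
git bisect reset
```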
Fokko added a commit that referenced this pull request Apr 17, 2025
<!--
Thanks for opening a pull request!
-->

<!-- In the case this PR will resolve an issue, please replace
${GITHUB_ISSUE_ID} below with the actual Github issue id. -->
<!-- Closes #${GITHUB_ISSUE_ID} -->

Found out I broke this myself after doing a `git bisect`:

```
36d383d is the first bad commit
commit 36d383d
Author: Fokko Driesprong <fokko@apache.org>
Date:   Thu Jan 23 07:50:54 2025 +0100

    PyArrow: Avoid buffer-overflow by avoid doing a sort (#1555)

    Second attempt of #1539

    This was already being discussed back here:
    #208 (comment)

    This PR changes from doing a sort, and then a single pass over the table
    to the approach where we determine the unique partition tuples filter on
    them individually.

    Fixes #1491

    Because the sort caused buffers to be joined where it would overflow in
    Arrow. I think this is an issue on the Arrow side, and it should
    automatically break up into smaller buffers. The `combine_chunks` method
    does this correctly.

    Now:

    ```
    0.42877754200890195
    Run 1 took: 0.2507691659993725
    Run 2 took: 0.24833179199777078
    Run 3 took: 0.24401691700040828
    Run 4 took: 0.2419595829996979
    Average runtime of 0.28 seconds
    ```

    Before:

    ```
    Run 0 took: 1.0768639159941813
    Run 1 took: 0.8784021250030492
    Run 2 took: 0.8486490420036716
    Run 3 took: 0.8614017910003895
    Run 4 took: 0.8497851670108503
    Average runtime of 0.9 seconds
    ```

    So it comes with a nice speedup as well :)

    ---------

    Co-authored-by: Kevin Liu <kevinjqliu@users.noreply.github.com>

 pyiceberg/io/pyarrow.py                    |  129 ++-
 pyiceberg/partitioning.py                  |   39 +-
 pyiceberg/table/__init__.py                |    6 +-
 pyproject.toml                             |    1 +
 tests/benchmark/test_benchmark.py          |   72 ++
 tests/integration/test_partitioning_key.py | 1299 ++++++++++++++--------------
 tests/table/test_locations.py              |    2 +-
 7 files changed, 805 insertions(+), 743 deletions(-)
 create mode 100644 tests/benchmark/test_benchmark.py
```

Closes #1917

<!-- In the case of user-facing changes, please add the changelog label.
-->
Successfully merging this pull request may close these issues.

[Bug] Error in overwrite(): pyarrow.lib.ArrowInvalid: offset overflow with large dataset (~3M rows)

2 participants

Morty Proxy This is a proxified and sanitized view of the page, visit original site.