### Pandas version checks

- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] I have confirmed this bug exists on the main branch of pandas.
### Reproducible Example

```python
import random
import string

import pandas as pd
import pyarrow as pa

# Generate ~1 MB of text
txt = "".join(random.choices(string.printable, k=int(1e6)))

# Build one DataFrame per string dtype backend
data = {"c": [txt] * int(5e3)}
df_v0 = pd.DataFrame(data, dtype="string[python]")
df_v1 = pd.DataFrame(data, dtype="string[pyarrow]")

# Write both to parquet using an explicit schema
schema = pa.schema([pa.field("c", pa.string())])
df_v0.to_parquet(path="df_v0.parquet", schema=schema)
df_v1.to_parquet(path="df_v1.parquet", schema=schema)  # fails
```
### Issue Description

Writing to a parquet file fails when the dtype is `string[pyarrow]`, but succeeds when it is `string[python]`.
### Expected Behavior

I believe `df_v1` should write to a parquet file just as `df_v0` does.
### Installed Versions

<details>

```
INSTALLED VERSIONS
------------------
commit                : 0691c5c
python                : 3.10.16
python-bits           : 64
OS                    : Linux
OS-release            : 6.11.0-1012-azure
Version               : #12~24.04.1-Ubuntu SMP Mon Mar 10 19:00:39 UTC 2025
machine               : x86_64
processor             : x86_64
byteorder             : little
LC_ALL                : None
LANG                  : C.UTF-8
LOCALE                : en_US.UTF-8

pandas                : 2.2.3
numpy                 : 2.2.2
pytz                  : 2025.2
dateutil              : 2.9.0.post0
pip                   : 25.0
Cython                : None
sphinx                : None
IPython               : 8.35.0
adbc-driver-postgresql: None
adbc-driver-sqlite    : None
bs4                   : 4.13.4
blosc                 : None
bottleneck            : None
dataframe-api-compat  : None
fastparquet           : None
fsspec                : 2025.3.2
html5lib              : None
hypothesis            : None
gcsfs                 : None
jinja2                : 3.1.6
lxml.etree            : 5.3.1
matplotlib            : None
numba                 : None
numexpr               : None
odfpy                 : None
openpyxl              : 3.1.5
pandas_gbq            : None
psycopg2              : None
pymysql               : None
pyarrow               : 19.0.1
pyreadstat            : None
pytest                : None
python-calamine       : None
pyxlsb                : None
s3fs                  : None
scipy                 : None
sqlalchemy            : 2.0.39
tables                : None
tabulate              : None
xarray                : None
xlrd                  : 2.0.1
xlsxwriter            : None
zstandard             : None
tzdata                : 2025.2
qtpy                  : None
pyqt5                 : None
```

</details>