id: int64
number: int64
title: string
state: string
created_at: timestamp[s]
updated_at: timestamp[s]
closed_at: timestamp[s]
html_url: string
is_pull_request: bool
pull_request_url: string
pull_request_html_url: string
user_login: string
comments_count: int64
body: string
labels: list
reactions_plus1: int64
reactions_minus1: int64
reactions_laugh: int64
reactions_hooray: int64
reactions_confused: int64
reactions_heart: int64
reactions_rocket: int64
reactions_eyes: int64
comments: list
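The 24 fields above are flattened one value per line in the records that follow; a record is simply the next run of 24 values in schema order. A minimal sketch (field names taken from the schema above; the helper name is ours) that regroups such a flat dump into labeled records:

```python
# Field order from the schema above; each record in the dump is a flat
# run of values in exactly this order.
FIELDS = [
    "id", "number", "title", "state", "created_at", "updated_at",
    "closed_at", "html_url", "is_pull_request", "pull_request_url",
    "pull_request_html_url", "user_login", "comments_count", "body",
    "labels", "reactions_plus1", "reactions_minus1", "reactions_laugh",
    "reactions_hooray", "reactions_confused", "reactions_heart",
    "reactions_rocket", "reactions_eyes", "comments",
]


def parse_records(values):
    """Group a flat list of values into dicts keyed by field name."""
    if len(values) % len(FIELDS) != 0:
        raise ValueError("value count is not a multiple of the field count")
    return [
        dict(zip(FIELDS, values[i:i + len(FIELDS)]))
        for i in range(0, len(values), len(FIELDS))
    ]
```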
id: 3237012125
number: 61879
title: DOC: Document that str.match accepts a regular expression
state: open
created_at: 2025-07-16T18:52:17
updated_at: 2025-07-26T07:35:54
closed_at: null
html_url: https://github.com/pandas-dev/pandas/pull/61879
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61879
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61879
user_login: hamdanal
comments_count: 0
Similar to str.fullmatch and other methods that accept regular expressions - [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
labels: ["Docs", "API Design", "API - Consistency"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[]
id: 3236918446
number: 61878
title: DOC: update Parquet IO user guide on index handling and type support across engines
state: closed
created_at: 2025-07-16T18:16:55
updated_at: 2025-07-17T05:33:35
closed_at: 2025-07-16T21:29:00
html_url: https://github.com/pandas-dev/pandas/pull/61878
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61878
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61878
user_login: jorisvandenbossche
comments_count: 1
It seems this section of our documentation was quite outdated. Have updated it to the best of my knowledge and based on some testing.
labels: ["Docs", "IO Parquet"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Thanks @jorisvandenbossche " ]
id: 3236839383
number: 61877
title: DOC: show Parquet examples with default engine (without explicit pyarrow/fastparquet engine keyword)
state: closed
created_at: 2025-07-16T17:54:14
updated_at: 2025-07-17T05:33:38
closed_at: 2025-07-16T21:27:45
html_url: https://github.com/pandas-dev/pandas/pull/61877
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61877
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61877
user_login: jorisvandenbossche
comments_count: 1
Encountered this in https://github.com/pandas-dev/pandas/pull/61864, but in general for the readability of our doc page, I feel that it is not needed to show every single code example in this section with both pyarrow and fastparquet (certainly because in practice the fastparquet result is then ignored, and we only show the resulting dtypes for the pyarrow one). We already mention in the text itself the engine keyword and the different options.
labels: ["Docs", "IO Parquet"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Thanks @jorisvandenbossche " ]
id: 3236796639
number: 61876
title: ERR: improve exception message from timedelta64-datetime64
state: closed
created_at: 2025-07-16T17:41:05
updated_at: 2025-07-16T21:32:56
closed_at: 2025-07-16T21:30:16
html_url: https://github.com/pandas-dev/pandas/pull/61876
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61876
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61876
user_login: jbrockmendel
comments_count: 1
- [x] closes #59571 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
labels: ["Error Reporting"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Thanks @jbrockmendel " ]
id: 3236633105
number: 61875
title: API: IncompatibleFrequency subclass TypeError
state: closed
created_at: 2025-07-16T16:42:14
updated_at: 2025-07-18T14:37:42
closed_at: 2025-07-18T00:53:24
html_url: https://github.com/pandas-dev/pandas/pull/61875
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61875
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61875
user_login: jbrockmendel
comments_count: 1
- [x] closes #55782 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
labels: ["Error Reporting"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Thanks @jbrockmendel " ]
id: 3236428918
number: 61874
title: API: np.isinf on Index return Index[bool]
state: closed
created_at: 2025-07-16T15:32:29
updated_at: 2025-07-16T17:09:03
closed_at: 2025-07-16T16:26:06
html_url: https://github.com/pandas-dev/pandas/pull/61874
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61874
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61874
user_login: jbrockmendel
comments_count: 1
- [x] closes #52676 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
labels: ["Index"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Thanks @jbrockmendel " ]
id: 3235888714
number: 61873
title: BUG:float_precision type hints differ in release version from github and docs pandas==2.3.1
state: closed
created_at: 2025-07-16T13:01:08
updated_at: 2025-07-22T12:26:03
closed_at: 2025-07-22T12:26:03
html_url: https://github.com/pandas-dev/pandas/issues/61873
is_pull_request: false
pull_request_url: null
pull_request_html_url: null
user_login: mas-4
comments_count: 1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python python -m venv venv source ./venv/scripts/activate python -m pip install pandas==2.3.1 Open a modern ide like pycharm and type pd.read_csv(path, float_precision='round_trip') and you will see type check erroring because the code is different. ``` ### Issue Description This is probably a bug in distribution. I currently have installed on my windows system pandas 2.3.1. When I open ``` .venv/Lib/site-packages/pandas/io/parsers/readers.py ``` I see the following line in 3 different definitions for read_csv: ``` float_precision: Literal["high", "legacy"] | None = None, ``` However, the documentation specifies a third option, 'round_trip', and so does the code here on github https://github.com/pandas-dev/pandas/blob/1d153bb1a4c6549958a20e04508967e2ed45159f/pandas/io/parsers/readers.py#L141 I don't understand how this line is different in a pip installed latest version, but not on github.com. This code was fixed back at the beginning of 2024, 18+ months ago. https://github.com/pandas-dev/pandas/commit/37d7db4a1a1f6928a1541eaab05f51318d1d3344 Why does it not appear in pip installable distributions? 
### Expected Behavior I expect the line float_precision: Literal["high", "legacy"] | None = None, in pandas/io/parsers/readers.py to read float_precision: Literal["high", "legacy", "round_trip"] | None = ..., ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6 python : 3.13.2 python-bits : 64 OS : Windows OS-release : 11 Version : 10.0.26100 machine : AMD64 processor : Intel64 Family 6 Model 170 Stepping 4, GenuineIntel byteorder : little LC_ALL : None LANG : None LOCALE : English_United States.1252 pandas : 2.3.1 numpy : 2.2.1 pytz : 2024.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : None IPython : 9.2.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.12.3 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.5 lxml.etree : 5.3.0 matplotlib : 3.10.0 numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : 2.9.10 pymysql : None pyarrow : None pyreadstat : None pytest : 8.3.4 python-calamine : None pyxlsb : None s3fs : None scipy : 1.14.1 sqlalchemy : 2.0.36 tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2024.2 qtpy : None pyqt5 : None </details>
labels: ["Bug", "IO CSV", "Typing", "Closing Candidate"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "The referenced commit will not be released until pandas 3.0.\n\nhttps://github.com/pandas-dev/pandas/pull/56915\n\nI believe this will resolve the issue." ]
id: 3235305025
number: 61872
title: TST: add test for `dtype` argument in `str.decode`
state: closed
created_at: 2025-07-16T10:08:10
updated_at: 2025-07-28T17:24:11
closed_at: 2025-07-28T17:24:05
html_url: https://github.com/pandas-dev/pandas/pull/61872
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61872
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61872
user_login: hippowm
comments_count: 1
This PR adds a test case for `str.decode` and ensures it correctly infers the string datatype, when `dtype=None` and the option `future.infer_string` is used. This argument was introduced in PR https://github.com/pandas-dev/pandas/pull/60940 but has no test for None. This test adds coverage to the following line: ```py def decode( self, encoding, errors: str = "strict", dtype: str | DtypeObj | None = None ): ... if dtype is not None and not is_string_dtype(dtype): raise ValueError(f"dtype must be string or object, got {dtype=}") if dtype is None and get_option("future.infer_string"): dtype = "str" #✅ NOW COVERED # TODO: Add a similar _bytes interface. if encoding in _cpython_optimized_decoders: # CPython optimized implementation f = lambda x: x.decode(encoding, errors) ... ``` Note: Parts of this test have been automatically generated by a novel technique that we're currently developing as part of an academic research project aiming at improving test coverage. *To not waste developer time, two researchers manually checked the test before submitting it.* We appreciate the developers' time and any feedback is welcomed.
labels: ["Testing"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Thanks @hippowm " ]
id: 3234645403
number: 61871
title: BUG FIX: None of the included dtypes present in df will raise ValueError with clear error message.
state: open
created_at: 2025-07-16T06:22:56
updated_at: 2025-07-26T16:36:32
closed_at: null
html_url: https://github.com/pandas-dev/pandas/pull/61871
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61871
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61871
user_login: khemkaran10
comments_count: 3
Before Fix: ```python >>> df = pd.DataFrame({"a": [1, 2, 3]}) >>> df.describe(include=["datetime"]) ... ValueError: No objects to concatenate ``` After Fix: ```python >>> df = pd.DataFrame({"a": [1, 2, 3]}) >>> df.describe(include=["datetime"]) ... ValueError: No columns match the specified include or exclude data types ``` - [x] closes #61863 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
labels: ["Error Reporting"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "pre-commit.ci autofix", "```\r\ndf = pd.DataFrame({\"a\": [1, 2, 3]})\r\ndf.describe(exclude=[np.int64])\r\n...\r\nValueError: None of the included dtypes are present in the DataFrame\r\n```\r\n\r\nAfter the fix, this example will also show the same error. Should we change the error message to something like, \r\n_No columns match the specified include or exclude data types_", "@mroeschke I have made the requested changes. can you please review it." ]
id: 3234406455
number: 61870
title: BUG: Fix inconsistency with DateOffset near DST
state: open
created_at: 2025-07-16T04:29:43
updated_at: 2025-08-21T21:17:38
closed_at: null
html_url: https://github.com/pandas-dev/pandas/pull/61870
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61870
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61870
user_login: arthurlw
comments_count: 3
- [x] closes #61862 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~ - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. This fix ensures that `pd.offsets.DateOffset(1)` and `pd.offsets.DateOffset(days=1)` return the same value near a DST transition.
labels: ["Frequency", "Stale"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "can you add a whatsnew note", "pre-commit.ci autofix", "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this." ]
id: 3234352456
number: 61869
title: BUG: Fix logical method Non 1D Extension Arrays
state: closed
created_at: 2025-07-16T03:54:25
updated_at: 2025-07-16T15:57:22
closed_at: 2025-07-16T15:41:18
html_url: https://github.com/pandas-dev/pandas/pull/61869
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61869
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61869
user_login: tisjayy
comments_count: 1
- [x] closes #61866 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
labels: []
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Now, ExtensionArrays handle these cases like NumPy arrays do by returning NotImplemented when they can’t perform the operation.\r\nSo if the other operand can handle it, it takes over and the operation succeeds instead of just showing error and stopping execution for the entire code.\r\n\r\n\r\n" ]
id: 3233935527
number: 61868
title: DOC: Add Raises section to to_numeric docstring
state: closed
created_at: 2025-07-15T23:14:46
updated_at: 2025-07-16T16:29:53
closed_at: 2025-07-16T16:29:47
html_url: https://github.com/pandas-dev/pandas/pull/61868
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61868
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61868
user_login: tisjayy
comments_count: 2
- [x] closes #61811 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
labels: ["Docs"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "I don't think any problem should occur with the checks. ", "Thanks @tisjayy " ]
id: 3233725225
number: 61867
title: Fix logical operations broadcasting for 2D ExtensionArrays
state: closed
created_at: 2025-07-15T21:16:38
updated_at: 2025-07-15T21:44:05
closed_at: 2025-07-15T21:39:48
html_url: https://github.com/pandas-dev/pandas/pull/61867
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61867
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61867
user_login: tisjayy
comments_count: 1
- [ ] closes #61866 - [ ] Tests added and passed (local tests not run due to environment setup, but CI will run them) - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] No entry added to whatsnew since this is an internal bug fix
labels: []
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "GitHub says the diff is almost 9 million lines. What’s happening here?" ]
id: 3233525321
number: 61866
title: BUG: Operations not implemented for non-1D ExtensionArrays
state: closed
created_at: 2025-07-15T19:55:29
updated_at: 2025-08-14T22:39:02
closed_at: 2025-08-14T22:39:02
html_url: https://github.com/pandas-dev/pandas/issues/61866
is_pull_request: false
pull_request_url: null
pull_request_html_url: null
user_login: eicchen
comments_count: 2
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python # mypy: ignore-errors import pandas as pd import numpy as np import pandas._testing as tm df = pd.DataFrame(np.arange(50).reshape(10, 5)).notna().values # -> works NP_array = pd.array([i for i in range(10)], dtype=tm.SIGNED_INT_NUMPY_DTYPES[0]).reshape(10,1) #dtype: NumpyExtensionArray # -> doesnt work (NotImplemented) EA_array = pd.array([i for i in range(10)], dtype=tm.SIGNED_INT_EA_DTYPES[0]).reshape(10,1) #dtype: IntExtensionArray print(df * NP_array) # NotImplementedError: can only perform ops with 1-d structures print(df * EA_array) ``` ### Issue Description I was working on creating test cases for ExtensionArrays following comments on PR #61828 when I realized that I could not use the '&' operation on EAs like I could with NP arrays. After a bit of digging around, it appears they both call self._logical_method, but whereas NP returns NotImplemented and continues operation, EA raises an error. If someone wants to take a look while I work on something else, they are more than welcome to, otherwise I can work out a fix when I come back to it. 
I have found that to be the case for both '*' and '&' so it's probably something deeper there ### Expected Behavior - Operators should function the same for both Numpy arrays and other ExtensionArrays ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 8a1d5a06f9fb3c232249e3ed301932053efb06d8 python : 3.10.17 python-bits : 64 OS : Linux OS-release : 6.11.0-29-generic Version : #29~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jun 26 14:16:59 UTC 2 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 3.0.0.dev0+2177.g8a1d5a06f9 numpy : 2.2.5 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : 3.0.12 sphinx : 8.1.3 IPython : 8.36.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 bottleneck : 1.4.2 fastparquet : 2024.11.0 fsspec : 2025.3.2 html5lib : 1.1 hypothesis : 6.131.15 gcsfs : 2025.3.2 jinja2 : 3.1.6 lxml.etree : 5.4.0 matplotlib : 3.10.3 numba : 0.61.2 numexpr : 2.10.2 odfpy : None openpyxl : 3.1.5 psycopg2 : 2.9.9 pymysql : 1.4.6 pyarrow : 20.0.0 pyiceberg : None pyreadstat : 1.2.8 pytest : 8.3.5 python-calamine : None pytz : 2025.2 pyxlsb : 1.0.10 s3fs : 2025.3.2 scipy : 1.15.2 sqlalchemy : 2.0.40 tables : 3.10.1 tabulate : 0.9.0 xarray : 2025.4.0 xlrd : 2.0.1 xlsxwriter : 3.2.3 zstandard : 0.23.0 qtpy : None pyqt5 : None None </details>
labels: ["Bug", "Numeric Operations", "NA - MaskedArrays"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "I changed the logical method in boolean.py so ExtensionArrays handle these cases like NumPy arrays do by returning NotImplemented when they can’t perform the operation. So Python can try other ways instead of showing error. Lets see if it passes the code checks. ", "@jbrockmendel this pr should fix it, please review the code changes." ]
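The fix described in the comments relies on Python's binary-operator protocol: returning `NotImplemented` (rather than raising) lets the interpreter try the other operand's reflected method. A minimal sketch with toy classes (the names are ours, not pandas internals):

```python
class Wrapped:
    """Toy operand that only knows how to multiply by scalars."""

    def __init__(self, value):
        self.value = value

    def __mul__(self, other):
        if isinstance(other, (int, float)):
            return Wrapped(self.value * other)
        # Deferring instead of raising lets Python try other.__rmul__.
        return NotImplemented


class Doubler:
    """Toy operand that handles Wrapped via the reflected method."""

    def __rmul__(self, other):
        if isinstance(other, Wrapped):
            return Wrapped(other.value * 2)
        return NotImplemented


# Wrapped.__mul__ defers, Python falls back to Doubler.__rmul__,
# and the operation succeeds instead of stopping with an error.
result = Wrapped(3) * Doubler()
```

Had `Wrapped.__mul__` raised instead of returning `NotImplemented`, the fallback would never run — which is the behavioral difference between the NumPy and ExtensionArray paths the issue describes.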
id: 3232796055
number: 61865
title: DOC: Simplify footer wording in documentation (#51536)
state: closed
created_at: 2025-07-15T15:45:11
updated_at: 2025-07-16T16:32:35
closed_at: 2025-07-16T16:32:35
html_url: https://github.com/pandas-dev/pandas/pull/61865
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61865
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61865
user_login: hriday-goyal
comments_count: 1
This PR simplifies the wording in the pandas documentation footer for improved readability and clarity. Fixes: #51536
labels: []
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Thanks for this PR but this is already being worked on in https://github.com/pandas-dev/pandas/pull/61859 so closing" ]
id: 3232666176
number: 61864
title: DOC: make doc build run with string dtype enabled
state: closed
created_at: 2025-07-15T15:09:32
updated_at: 2025-07-17T08:31:12
closed_at: 2025-07-17T08:31:08
html_url: https://github.com/pandas-dev/pandas/pull/61864
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61864
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61864
user_login: jorisvandenbossche
comments_count: 1
~First commit is from https://github.com/pandas-dev/pandas/pull/61722 (will be removed here after that PR is merged and this one is rebased)~, then subsequent commits enable errors on the doc build again and fix issues.
labels: ["Docs"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Going to merge this already to make sure the dev docs are back up. But happy to follow-up with any comment." ]
id: 3232435167
number: 61863
title: BUG: describe(include=..) fails with unrelated error if provided data types are not present
state: open
created_at: 2025-07-15T14:09:59
updated_at: 2025-07-16T06:42:59
closed_at: null
html_url: https://github.com/pandas-dev/pandas/issues/61863
is_pull_request: false
pull_request_url: null
pull_request_html_url: null
user_login: jorisvandenbossche
comments_count: 3
Example with current main: ``` >>> df = pd.DataFrame({"a": [1, 2, 3]}) >>> df.describe(include=["datetime"]) ... ValueError: No objects to concatenate ``` I assume the error comes from under the hood trying to concatenate the results of calculating the describe results for each of the incluced dtype groups, and in this case for datetime there is no content, so nothing to concatenate. But we shouldn't propagate that error message to the user, I think. Either we should provide a better error message about none of the included dtypes being present, or just return an empty DataFrame.
labels: ["Bug", "Error Reporting"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "@jorisvandenbossche we can wrap [concat](https://github.com/pandas-dev/pandas/blob/bc6ad140daf230c470fef92bec598831d4f94a16/pandas/core/methods/describe.py#L175C9-L180C10) call in a try-except block and raise a clear error for this case. or we can add a check for \"ldesc\" . I'm happy to open a PR for this.", "Sounds good! Can probably add a check for `ldesc` being an empty list before passing it to concat", "take" ]
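The guard suggested in the comments — check `ldesc` for emptiness before handing it to `concat` — can be sketched generically. Pure Python; `describe_matching`, the merge step, and the message wording are illustrative, not the pandas implementation:

```python
def describe_matching(column_summaries):
    """Combine per-column summary dicts, failing early with a clear message."""
    if not column_summaries:
        # Without this guard an empty list would reach the concat step
        # and surface as the unrelated "No objects to concatenate".
        raise ValueError(
            "None of the included dtypes are present in the DataFrame"
        )
    combined = {}
    for summary in column_summaries:
        combined.update(summary)
    return combined
```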
id: 3232408118
number: 61862
title: BUG: DateOffset default temporal pattern does not work as expected with DST
state: open
created_at: 2025-07-15T14:02:55
updated_at: 2025-07-17T18:30:22
closed_at: null
html_url: https://github.com/pandas-dev/pandas/issues/61862
is_pull_request: false
pull_request_url: null
pull_request_html_url: null
user_login: ValentinBilla
comments_count: 0
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd # https://www.timeanddate.com/time/change/belgium/brussels?year=2022 ts = pd.Timestamp("2022-10-30", tz="Europe/Brussels") offset_explicit_keyword = ts + pd.offsets.DateOffset(days=1) offset_default_value = ts + pd.offsets.DateOffset(1) assert offset_explicit_keyword == offset_default_value ``` ### Issue Description According to the [documentation](https://github.com/pandas-dev/pandas/blob/bc6ad140daf230c470fef92bec598831d4f94a16/pandas/_libs/tslibs/offsets.pyx#L1687-L1689), `pd.offsets.DateOffset(1)` should be the same as `pd.offsets.DateOffset(days=1)` During instanciation the following function is called : https://github.com/pandas-dev/pandas/blob/bc6ad140daf230c470fef92bec598831d4f94a16/pandas/_libs/tslibs/offsets.pyx#L284-L334 For `pd.offsets.DateOffset(1)` it returns `timedelta(days=1), False`, whereas for `pd.offsets.DateOffset(days=1)` this returns `relativedelta(days=1), True`. This causes inconsistencies in the behavior of the two near DST transitions. 
### Expected Behavior Either the doc can be changed or we ensure `pd.offsets.DateOffset(1)` equals `pd.offsets.DateOffset(days=1)` by my understanding this could be done in the following way ```diff cdef _determine_offset(kwds): if not kwds: + from dateutil.relativedelta import relativedelta + # GH 45643/45890: (historically) defaults to 1 day - return timedelta(days=1), False + return relativedelta(days=1), True ``` ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6 python : 3.12.10 python-bits : 64 OS : Darwin OS-release : 24.5.0 Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:54:25 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6020 machine : arm64 processor : arm byteorder : little LC_ALL : None LANG : None LOCALE : en_US.UTF-8 pandas : 2.3.1 numpy : 2.3.1 pytz : 2025.2 dateutil : 2.9.0.post0 pip : None Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
labels: ["Bug", "Frequency"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[]
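The distinction underlying this report — absolute (`timedelta`) versus calendar (`relativedelta`) arithmetic — is easiest to see by contrasting `Timedelta` with an explicit `DateOffset(days=1)` across the same transition. A sketch assuming pandas with timezone data installed:

```python
import pandas as pd

# Brussels leaves DST on 2022-10-30: clocks fall back, the day has 25 hours.
ts = pd.Timestamp("2022-10-30", tz="Europe/Brussels")

absolute = ts + pd.Timedelta(days=1)           # exactly 24 elapsed hours
calendar_day = ts + pd.offsets.DateOffset(days=1)  # same wall time, next day

# 24 absolute hours land at 23:00 on the same calendar day...
# ...while the calendar offset preserves midnight on the next day.
```

The bug is that `DateOffset(1)` takes the absolute path internally while `DateOffset(days=1)` takes the calendar path, so the two disagree exactly on days like this one.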
id: 3231339659
number: 61861
title: BUG: pd.eval raises AttributeError: 'BinOp' object has no attribute 'value'
state: closed
created_at: 2025-07-15T08:32:29
updated_at: 2025-07-15T08:47:04
closed_at: 2025-07-15T08:47:03
html_url: https://github.com/pandas-dev/pandas/issues/61861
is_pull_request: false
pull_request_url: null
pull_request_html_url: null
user_login: auderson
comments_count: 1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import numpy as np import pandas as pd x = pd.DataFrame(np.empty((3, 4))) y = pd.DataFrame(np.empty((3, 4))) pd.eval("(x * y).sum()") ``` ### Issue Description The above code raises this error: <img width="1384" height="203" alt="Image" src="https://github.com/user-attachments/assets/38af2da0-9437-4638-8215-fe8a0f699fd3" /> Related issue: #61175 ### Expected Behavior . ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.12.11 python-bits : 64 OS : Linux OS-release : 5.15.0-122-generic Version : #132-Ubuntu SMP Thu Aug 29 13:45:52 UTC 2024 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.0 numpy : 1.26.4 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : 8.2.3 IPython : 9.3.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : 2025.5.1 html5lib : None hypothesis : 6.135.0 gcsfs : None jinja2 : 3.1.6 lxml.etree : None matplotlib : 3.10.3 numba : 0.61.2 numexpr : 2.10.2 odfpy : None openpyxl : None pandas_gbq : None psycopg2 : 2.9.10 pymysql : 1.4.6 pyarrow : 20.0.0 pyreadstat : None pytest : 8.4.0 python-calamine : None pyxlsb : None s3fs : None scipy : 1.15.2 sqlalchemy : 2.0.41 tables : 3.10.2 tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : 0.23.0 tzdata : 2025.2 qtpy : None pyqt5 : None </details>
labels: ["Bug", "Needs Triage"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Looks like it's fixed on main, but not merged yet. Closing." ]
id: 3231184894
number: 61860
title: ENH: New method "ends" as a combination of “head” and "tail"
state: closed
created_at: 2025-07-15T07:43:12
updated_at: 2025-08-05T16:28:53
closed_at: 2025-08-05T16:28:53
html_url: https://github.com/pandas-dev/pandas/issues/61860
is_pull_request: false
pull_request_url: null
pull_request_html_url: null
user_login: JoergVanAken
comments_count: 3
### Feature Type - [x] Adding new functionality to pandas - [ ] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description I often work with time series and want to see at a glance where and how they begin and end. ### Feature Description That's why I registered an "ends" accessor, which provides me with both ends in one call as a combination of "head" and "tail". It's really simple, but very usefull to me: ``` class _EndsAccessor: def __init__(self, pandas_obj): self._obj = pandas_obj def __call__(self, n=2): return pd.concat([self._obj.head(n), self._obj.tail(n)]) @pd.api.extensions.register_dataframe_accessor("ends") class EndsAccessorDataframe(_EndsAccessor): pass @pd.api.extensions.register_series_accessor("ends") class EndsAccessorSeries(_EndsAccessor): pass ``` ### Alternative Solutions We leave it as it is and I continue using the solution shown above. ### Additional Context _No response_
labels: ["Enhancement", "Needs Triage", "Closing Candidate"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Im skeptical of this. The API is already big and we generally avoid adding methods with an easy alternative already available. Will leave this open in case others disagree.", "agree with @jbrockmendel and that we should prioritize minimal APIs and expect users to compose functionality using existing methods.\n\n`pd.concat([df.head(n), df.tail(n)])` is simple and readable, so no reason IMO that developers should not opt for that pattern directly.", "Seems like there not much buy in for this so closing" ]
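The maintainers' preferred alternative composes directly from existing methods. A quick sketch of that pattern as a plain function (the name `ends` follows the proposal; assumes plain pandas):

```python
import pandas as pd


def ends(obj, n=2):
    """Both ends of a Series/DataFrame in one frame: head(n) + tail(n)."""
    return pd.concat([obj.head(n), obj.tail(n)])


df = pd.DataFrame({"a": range(10)})
both = ends(df)  # first two and last two rows
```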
id: 3230907178
number: 61859
title: Doc simplify footer
state: closed
created_at: 2025-07-15T06:04:19
updated_at: 2025-07-28T17:24:56
closed_at: 2025-07-28T17:24:56
html_url: https://github.com/pandas-dev/pandas/pull/61859
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61859
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61859
user_login: Siryoos
comments_count: 1
[x] closes #51536 - Simplified pandas theme footer by removing custom template dependency [x] Tests added and passed - Documentation changes don't require additional tests, but the build process validates the changes [x] All code checks passed - Changes follow pandas documentation standards and use proper reStructuredText formatting [x] Added type annotations - Not applicable for documentation-only changes [x] Added an entry in the latest doc/source/whatsnew/vX.X.X.rst file - Entry already exists in doc/source/whatsnew/v3.0.0.rst under "Documentation changes" section
labels: ["Docs"]
reactions: plus1 0, minus1 0, laugh 0, hooray 0, confused 0, heart 0, rocket 0, eyes 0
[ "Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen." ]
id: 3230132228
number: 61858
title: Upgraded README.md
state: closed
created_at: 2025-07-14T21:50:23
updated_at: 2025-07-14T23:53:48
closed_at: 2025-07-14T23:53:48
html_url: https://github.com/pandas-dev/pandas/pull/61858
is_pull_request: true
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61858
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61858
user_login: kr1shnasomani
comments_count: 1
Enhanced readability of the file's content - [x] closes #xxxx (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[]
0
0
0
0
0
0
0
0
[ "Thanks but I don't think this improves the readability of this file much so closing" ]
3,229,553,781
61,857
CI: Add testing for Window ARM
closed
2025-07-14T17:52:47
2025-07-14T17:58:27
2025-07-14T17:58:22
https://github.com/pandas-dev/pandas/pull/61857
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61857
https://github.com/pandas-dev/pandas/pull/61857
mroeschke
1
We added wheel support for `win_arm64` in https://github.com/pandas-dev/pandas/pull/61463, so we might as well be regularly testing this platform on CI. Additionally "pins" the runner images we use to test Windows and Mac
[ "CI" ]
0
0
0
0
0
0
0
0
[ "Darn micromamba doesnt support `win_arm64` so I guess this is a non-starter" ]
3,229,451,904
61,856
BUG: Inconsistent .values NA/NaN
open
2025-07-14T17:15:04
2025-07-30T15:56:17
null
https://github.com/pandas-dev/pandas/issues/61856
true
null
null
jbrockmendel
2
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd ser = pd.Series([1, pd.NA], dtype=pd.Float64Dtype()) df = pd.DataFrame({"A": ser, "B": ["foo", "bar"]}) >>> df[["A"]].values[1,0] np.float64(nan) >>> df.values[1,0] <NA> ``` ### Issue Description When another column forces .values to be object dtype we retain pd.NA, otherwise we cast to NaN. Same behavior with Int64Dtype. ### Expected Behavior This should be consistent. ### Installed Versions <details> Replace this line with the output of pd.show_versions() </details>
[ "PDEP missing values" ]
0
0
0
0
0
0
0
0
[ "Hi! Is anyone working on this issue? I'd like to take it up if it's available.\n", "This is not ready to be worked on. We suggest looking for an issue with the \"good first issue\" label" ]
3,229,434,913
61,855
BUG: If both index and axis are passed to DataFrame.drop, raise a clear error
closed
2025-07-14T17:08:35
2025-07-18T02:18:24
2025-07-18T02:18:16
https://github.com/pandas-dev/pandas/pull/61855
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61855
https://github.com/pandas-dev/pandas/pull/61855
khemkaran10
5
- [x] closes #61823
[ "Bug", "Error Reporting" ]
0
0
0
0
0
0
0
0
[ "pre-commit.ci autofix", "@camriddell I have made the changes.", "Can @rhshadrach take a look and provide some guidance? If we want to maintain backwards compat. then this is likely the final form of this PR.\r\n\r\nHowever there is a deeper issue that is not present in either (thanks for finding these @khemkaran10) [DataFrame.reindex](https://github.com/pandas-dev/pandas/blob/bc6ad140daf230c470fef92bec598831d4f94a16/pandas/core/generic.py#L5368), [DataFrame.rename](https://github.com/pandas-dev/pandas/blob/bc6ad140daf230c470fef92bec598831d4f94a16/pandas/core/generic.py#L1019C13-L1022C18). Namely, those methods have a default of `axis=None` and can therefore check when that value is passed, whereas in `.drop` the axis argument defaults to `axis=0` thus making it always \"set\". So we can check for `axis==1` as @khemkaran10 did in the PR (or `axis != 0`). ", "LGTM cc @rhshadrach ", "Thanks @khemkaran10!" ]
3,228,706,921
61,854
BUG: Reassigning .rolling().mean() returns NaNs (pandas-dev#61841)
closed
2025-07-14T13:03:22
2025-07-14T13:07:11
2025-07-14T13:07:11
https://github.com/pandas-dev/pandas/pull/61854
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61854
https://github.com/pandas-dev/pandas/pull/61854
abujabarmubarak
0
This pull request resolves a bug highlighted in issue [[#61841](https://github.com/pandas-dev/pandas/issues/61841)](https://github.com/pandas-dev/pandas/issues/61841), where reassigning the result of `.rolling().mean()` to the same column in a DataFrame results in all-NaN values after the first assignment. #### 🔜 Root Cause: The root cause was improper alignment when using the `step` parameter within the `Window._apply()` function. The rolling results were sliced using `self.step` before being fully aligned with the original index, which caused mismatches in the returned Series/DataFrame. #### 🔧 Fix Implemented: * Adjusted the logic in `Window._apply()` to apply `self.step` only after the result is completely constructed and aligned. * Moved `Series` and `DataFrame` imports from inside a type-checking block (`if TYPE_CHECKING`) to the top of the file. This eliminates pre-commit CI errors related to inconsistent namespace usage. #### 📄 Verification: The fix was verified by executing: ```python import pandas as pd import numpy as np df = pd.DataFrame({"Close": np.arange(1, 31)}) df = df.copy() df["SMA20"] = df["Close"].rolling(20).mean() df["SMA20"] = df["Close"].rolling(20).mean() print(df.tail()) ``` This now works as expected, and outputs the correct rolling mean values. All relevant pre-commit hooks and CI checks pass after the changes. --- Thank you for reviewing this fix! <img width="1269" height="377" alt="Screenshot 2025-07-13 201135" src="https://github.com/user-attachments/assets/0fb467e4-fb48-4d29-b294-c74abc7177f0" />
[]
0
0
0
0
0
0
0
0
[]
3,228,508,450
61,853
fix extension type check for ArrowDtype
closed
2025-07-14T11:58:50
2025-07-14T11:59:51
2025-07-14T11:59:51
https://github.com/pandas-dev/pandas/pull/61853
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61853
https://github.com/pandas-dev/pandas/pull/61853
rohanjain101
0
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[]
0
0
0
0
0
0
0
0
[]
3,227,554,421
61,852
BUG: Fix .rolling().mean() reassignment returning NaNs (pandas-dev#61841)
closed
2025-07-14T06:39:11
2025-07-25T17:57:46
2025-07-25T17:57:46
https://github.com/pandas-dev/pandas/pull/61852
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61852
https://github.com/pandas-dev/pandas/pull/61852
abujabarmubarak
0
This pull request resolves a bug highlighted in issue [[#61841](https://github.com/pandas-dev/pandas/issues/61841)](https://github.com/pandas-dev/pandas/issues/61841), where reassigning the result of `.rolling().mean()` to the same column in a DataFrame results in all-NaN values after the first assignment. #### 🔜 Root Cause: The root cause was improper alignment when using the `step` parameter within the `Window._apply()` function. The rolling results were sliced using `self.step` before being fully aligned with the original index, which caused mismatches in the returned Series/DataFrame. #### 🔧 Fix Implemented: * Adjusted the logic in `Window._apply()` to apply `self.step` only after the result is completely constructed and aligned. * Moved `Series` and `DataFrame` imports from inside a type-checking block (`if TYPE_CHECKING`) to the top of the file. This eliminates pre-commit CI errors related to inconsistent namespace usage. #### 📄 Verification: The fix was verified by executing: ```python import pandas as pd import numpy as np df = pd.DataFrame({"Close": np.arange(1, 31)}) df = df.copy() df["SMA20"] = df["Close"].rolling(20).mean() df["SMA20"] = df["Close"].rolling(20).mean() print(df.tail()) ``` This now works as expected, and outputs the correct rolling mean values. All relevant pre-commit hooks and CI checks pass after the changes. --- Thank you for reviewing this fix!
[ "Bug", "Window" ]
0
0
0
0
0
0
0
0
[]
3,227,483,722
61,851
BUG: Fix .rolling().mean() reassignment returning NaNs (pandas-dev#61841)
closed
2025-07-14T06:08:20
2025-07-14T16:39:18
2025-07-14T16:39:17
https://github.com/pandas-dev/pandas/pull/61851
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61851
https://github.com/pandas-dev/pandas/pull/61851
abujabarmubarak
1
This pull request resolves a bug highlighted in issue [[#61841](https://github.com/pandas-dev/pandas/issues/61841)](https://github.com/pandas-dev/pandas/issues/61841), where reassigning the result of `.rolling().mean()` to the same column in a DataFrame results in all-NaN values after the first assignment. #### 🔜 Root Cause: The root cause was improper alignment when using the `step` parameter within the `Window._apply()` function. The rolling results were sliced using `self.step` before being fully aligned with the original index, which caused mismatches in the returned Series/DataFrame. #### 🔧 Fix Implemented: * Adjusted the logic in `Window._apply()` to apply `self.step` only after the result is completely constructed and aligned. * Moved `Series` and `DataFrame` imports from inside a type-checking block (`if TYPE_CHECKING`) to the top of the file. This eliminates pre-commit CI errors related to inconsistent namespace usage. #### 📄 Verification: The fix was verified by executing: ```python import pandas as pd import numpy as np df = pd.DataFrame({"Close": np.arange(1, 31)}) df = df.copy() df["SMA20"] = df["Close"].rolling(20).mean() df["SMA20"] = df["Close"].rolling(20).mean() print(df.tail()) ``` This now works as expected, and outputs the correct rolling mean values. All relevant pre-commit hooks and CI checks pass after the changes. --- Thank you for reviewing this fix!
[]
0
0
0
0
0
0
0
0
[ "Thanks for the PR, but please contain the changes to one open pull request at a time" ]
3,227,448,573
61,850
BUG: Fix issue #61841 - .rolling().mean() returns NaNs on reassignment
closed
2025-07-14T05:50:42
2025-07-14T06:25:58
2025-07-14T05:54:19
https://github.com/pandas-dev/pandas/pull/61850
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61850
https://github.com/pandas-dev/pandas/pull/61850
abujabarmubarak
0
This pull request fixes **issue #61841**, where reassigning a `.rolling().mean()` result unexpectedly returns a Series of all NaNs, even after copying the DataFrame. --- ### 🐛 Bug Reproduction Example ```python import pandas as pd import numpy as np df = pd.DataFrame({"Close": np.arange(1, 31)}) df = df.copy() df["SMA20"] = df["Close"].rolling(20).mean() df["SMA20"] = df["Close"].rolling(20).mean() # ❌ Returns NaNs ``` --- ### 🔧 What Was Fixed * Modified logic in `Window._apply()`: * Previously, result slicing (`[:: self.step]`) broke shape/index alignment. * Now it checks for `self.step` and slices only *after* full shape result is returned. ```python # ✅ Fixed result = self._apply_columnwise(...) if self.step is not None and self.step > 1: if isinstance(result, Series): result = result.iloc[:: self.step] elif isinstance(result, DataFrame): result = result.iloc[:: self.step, :] return result ``` * Moved `Series` and `DataFrame` imports to the top level of `rolling.py` to fix pre-commit check failures related to inconsistent namespace usage. --- ### 🧪 How Verified ```python import pandas as pd import numpy as np df = pd.DataFrame({"Close": np.arange(1, 31)}) df = df.copy() df["SMA20"] = df["Close"].rolling(20).mean() df["SMA20"] = df["Close"].rolling(20).mean() print(df.tail()) # ✅ Correct output ``` --- ### ✅ Status * [x] Bug fixed * [x] Code passes all CI and pre-commit checks * [x] Imports are consistently handled Thanks for reviewing this PR!
[]
0
0
0
0
0
0
0
0
[]
3,227,120,464
61,849
Remove incorrect line in Series init docstring
closed
2025-07-14T02:01:35
2025-07-17T03:36:38
2025-07-14T15:48:25
https://github.com/pandas-dev/pandas/pull/61849
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61849
https://github.com/pandas-dev/pandas/pull/61849
petern48
1
- [x] closes #61848 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Copied over from the issue for convenience ``` dtype : str, numpy.dtype, or ExtensionDtype, optional Data type for the output Series. If not specified, this will be inferred from `data`. See the :ref:`user guide <basics.dtypes>` for more usages. If ``data`` is Series then is ignored. ``` The last line here is incorrect. specifying a dtype will override the default behavior. See this example ``` >>> import pandas as pd >>> ser = pd.Series([1,2,3]) >>> ser 0 1 1 2 2 3 dtype: int64 >>> pd.Series(ser, dtype=float) 0 1.0 1 2.0 2 3.0 dtype: float64 ```
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "thanks @petern48 " ]
3,227,118,813
61,848
DOC: Series.__init__ doc incorrectly says dtype is ignored if data is a Series
closed
2025-07-14T02:00:43
2025-07-14T15:48:26
2025-07-14T15:48:26
https://github.com/pandas-dev/pandas/issues/61848
true
null
null
petern48
0
### Pandas version checks - [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/docs/dev/reference/api/pandas.Series.html ### Documentation problem ``` dtype : str, numpy.dtype, or ExtensionDtype, optional Data type for the output Series. If not specified, this will be inferred from `data`. See the :ref:`user guide <basics.dtypes>` for more usages. If ``data`` is Series then is ignored. ``` The last line here is incorrect. specifying a dtype will override the default behavior. See this example ``` >>> import pandas as pd >>> ser = pd.Series([1,2,3]) >>> ser 0 1 1 2 2 3 dtype: int64 >>> pd.Series(ser, dtype=float) 0 1.0 1 2.0 2 3.0 dtype: float64 ``` ### Suggested fix for documentation Just remove that line
[ "Docs", "Series" ]
0
0
0
0
0
0
0
0
[]
3,226,691,653
61,847
BUG: Fix .rolling().mean() returning NaNs on reassignment (#61841)
closed
2025-07-13T17:48:15
2025-07-13T18:31:45
2025-07-13T18:31:45
https://github.com/pandas-dev/pandas/pull/61847
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61847
https://github.com/pandas-dev/pandas/pull/61847
abujabarmubarak
0
### What does this PR do? Fixes issue #61841 where `.rolling().mean()` unexpectedly returns all NaNs when the same assignment is executed more than once, even with `.copy()` used on the DataFrame. --- ### Problem When using: ```python df = pd.DataFrame({"Close": range(1, 31)}) df = df.copy() df["SMA20"] = df["Close"].rolling(20).mean() df["SMA20"] = df["Close"].rolling(20).mean() # ❌ Unexpectedly returns all NaNs ``` Only the first assignment works as expected. The second assignment results in a column full of NaNs. This bug is caused by slicing the output with `[:: self.step]` inside `_apply()`, which alters the result's shape and breaks alignment during reassignment. --- ### Fix In `Window._apply()`, we updated the logic to apply slicing only when needed and only after the result is correctly shaped: **Before (buggy):** ```python return self._apply_columnwise(...)[:: self.step] ``` **After (fixed):** ```python result = self._apply_columnwise(...) if self.step is not None and self.step > 1: if isinstance(result, pd.Series): result = result.iloc[::self.step] elif isinstance(result, pd.DataFrame): result = result.iloc[::self.step, :] return result ``` This change: * Preserves result shape and index alignment * Ensures `.rolling().mean()` works even on repeated assignment * Matches behavior in Pandas 2.3.x and above --- ### Testing Reproduced and verified the fix using both real-world and synthetic data: ```python import pandas as pd import numpy as np df = pd.DataFrame({"Close": np.arange(1, 31)}) df = df.copy() df["SMA20"] = df["Close"].rolling(20).mean() print(df["SMA20"].tail()) df["SMA20"] = df["Close"].rolling(20).mean() print(df["SMA20"].tail()) # ✅ Now works correctly ``` --- ### Notes * This was confirmed to be broken in Pandas 2.2.x and was still reproducible in `main` without this patch. * Newer versions avoid the issue due to deeper internal refactors, but this fix explicitly prevents the bug in current code. --- Let me know if anything needs improvement. Thanks for reviewing!
[]
0
0
0
0
0
0
0
0
[]
3,226,644,773
61,846
BUG: Fix .rolling().mean() returning NaNs on reassignment (#61841)
closed
2025-07-13T16:39:51
2025-07-13T17:45:12
2025-07-13T17:45:12
https://github.com/pandas-dev/pandas/pull/61846
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61846
https://github.com/pandas-dev/pandas/pull/61846
abujabarmubarak
0
### What does this PR do? Fixes issue #61841 where `.rolling().mean()` unexpectedly returns all NaNs when the same assignment is executed more than once, even with `.copy()` used on the DataFrame. --- ### Problem When using: ```python df["SMA20"] = df["Close"].rolling(20).mean() df["SMA20"] = df["Close"].rolling(20).mean() # Unexpectedly returns all NaNs ``` Only the first assignment works as expected. The second assignment results in a column full of NaNs. This bug is caused by slicing the output with `[:: self.step]` inside `_apply_columnwise()`, which alters the result's shape and breaks alignment during reassignment. --- ### Fix This PR removes the problematic slicing from `_apply_columnwise()`: **Before (buggy):** ```python return self._apply_columnwise(...)[:: self.step] ``` **After (fixed):** ```python result = self._apply_columnwise(...) return result ``` This change: * Preserves result shape and index alignment * Ensures `.rolling().mean()` works even on repeated assignment * Matches behavior in Pandas 2.3.x and above --- ### Testing Reproduced and verified the fix using both real-world and synthetic data: ```python import pandas as pd df = pd.DataFrame({"Close": range(1, 31)}) df = df.copy() df["SMA20"] = df["Close"].rolling(20).mean() df["SMA20"] = df["Close"].rolling(20).mean() # ✅ Now works correctly ``` --- ### Notes * This was confirmed to be broken in Pandas 2.2.x and still reproducible in `main` without this patch. * Newer versions avoid the issue due to deeper internal refactors, but this fix explicitly prevents the bug in current code. --- Let me know if anything needs improvement. Thanks for reviewing! <img width="1269" height="377" alt="Screenshot 2025-07-13 201135" src="https://github.com/user-attachments/assets/b1d9bf2b-9faa-4e28-83be-ecac8bf18934" />
[]
0
0
0
0
0
0
0
0
[]
3,226,557,213
61,845
BUG: Fix rolling().mean() returning NaNs on reassignment (#61841)
closed
2025-07-13T14:42:18
2025-07-14T16:39:10
2025-07-14T16:39:10
https://github.com/pandas-dev/pandas/pull/61845
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61845
https://github.com/pandas-dev/pandas/pull/61845
abujabarmubarak
1
### Problem Fixes issue #61841 — calling `.rolling().mean()` twice on a copied DataFrame was returning all NaNs on the second run. This happened due to a slicing operation (`[::self.step]`) in `_apply_columnwise`, which broke result alignment when overwriting the same column. ### Solution Removed the `[:: self.step]` slicing from the return statement. This restores full alignment and fixes the regression. ### Test Case Tested locally with this code: ```python df = pd.DataFrame({"Close": list(range(1, 31))}) df = df.copy() df["SMA20"] = df["Close"].rolling(20).mean() df["SMA20"] = df["Close"].rolling(20).mean() print(df.tail()) <img width="1269" height="377" alt="Screenshot 2025-07-13 201135" src="https://github.com/user-attachments/assets/c1cbe325-28c6-4a39-bf98-861492d9295c" />
[]
0
0
0
0
0
0
0
0
[ "Thanks for the PR, but please contain the changes to one open pull request at a time" ]
3,226,437,589
61,844
ENH: Backport free-threading support to 2.3
open
2025-07-13T11:45:27
2025-07-14T20:35:18
null
https://github.com/pandas-dev/pandas/issues/61844
true
null
null
crusaderky
2
Follow-up from https://github.com/pandas-dev/pandas/issues/59057 When I import the pandas-2.3.1 wheel from pypi in Python 3.13t, I get > E RuntimeWarning: The global interpreter lock (GIL) has been enabled to load module 'pandas._libs.pandas_parser', which has not declared that it can run safely without the GIL. To override this behavior and keep the GIL disabled (at your own risk), run with PYTHON_GIL=0 or -Xgil=0. Additionally, there are no conda-forge packages at all for 2.3.1 3.13t. Nightly 2.4 wheels (https://pypi.anaconda.org/scientific-python-nightly-wheels/simple) work as expected. ### Environment Linux x86_64
[ "Enhancement", "Closing Candidate", "Python 3.13" ]
0
0
0
0
0
0
0
0
[ "Assuming #59057 is complete, it would be helpful to understand the set of PRs that would need to be backported to support this. While no strong objection assuming the changes are small, I think I'd still prefer to just release 3.0 which hopefully will not be a long wait.\n\ncc @mroeschke ", "Yeah I've lost track of the various PRs associated with #59057, but I would also be more comfortable waiting for pandas 3.0 (hopefully in a few months) instead of backporting all the free threading changes" ]
3,226,430,650
61,843
DOC: Simplify pandas theme footer
closed
2025-07-13T11:34:18
2025-07-14T16:37:16
2025-07-14T16:37:15
https://github.com/pandas-dev/pandas/pull/61843
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61843
https://github.com/pandas-dev/pandas/pull/61843
Siryoos
2
# 🎯 DOC: Simplify pandas theme footer ## 📝 Description This pull request refactors the pandas documentation footer by tapping into the built-in templates in **pydata-sphinx-theme** v0.16. The result is a leaner, more maintainable setup with zero visual regressions—science approved! ## 🔧 Changes Made 1. **`doc/source/conf.py`** - Updated the copyright line to include “pandas” - Swapped out custom footer bits for the theme’s built-in templates: ```python html_theme_options = { "footer_start": [ "copyright", "pandas_footer", "sphinx-version", ], … } ``` 2. **`doc/_templates/pandas_footer.html`** - Removed the now-redundant copyright snippet - Kept only the NumFOCUS & OVHcloud sponsor links 3. **`doc/source/_static/css/pandas.css`** - Added horizontal layout rules for the new footer items - Ensured consistent spacing & alignment 4. **`doc/source/whatsnew/v3.0.0.rst`** - Added a “Documentation changes” section - Linked to issue [#51536](https://github.com/pandas-dev/pandas/issues/51536) ## ✅ Benefits - **DRY**: Eliminates duplicated footer code - **Maintainable**: Leverages standard theme hooks - **Consistent**: Visual appearance remains identical - **Future-proof**: Automatically picks up theme updates --- 🚀 All checks have passed and this PR is ready for merge! 🎉
[]
0
0
0
0
0
0
0
0
[ "this is AI generated?", "Thanks for the PR, but we discourage the use of AI generated pull requests so closing" ]
3,226,180,052
61,842
Create Vix
closed
2025-07-13T06:20:40
2025-07-14T15:46:56
2025-07-14T15:46:56
https://github.com/pandas-dev/pandas/pull/61842
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61842
https://github.com/pandas-dev/pandas/pull/61842
vagdale
1
Want backtest data Nifty 50 index data rsi - [x] closes #xxxx (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[]
0
0
0
0
0
0
0
0
[ "This doesn't belong here." ]
3,225,929,050
61,841
BUG: .rolling().mean() returns all NaNs on re-execution, despite .copy() use
open
2025-07-12T23:36:26
2025-07-14T20:45:05
null
https://github.com/pandas-dev/pandas/issues/61841
true
null
null
DavidZatica
5
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import yfinance as yf import pandas as pd # Step 1: Getting the data ticker = "AAPL" data = yf.download(ticker, start="2020-01-01", end="2025-07-13", auto_adjust=True, progress=False) # Step 3: Reduce to "Close" and copy data = data[["Close"]].copy() # Note the use of .copy() # Step 3: Calculate rolling averages shortWindow = 20 # NOTE: Re-running the following line (in e.g. a Jupyter notebook cell) results in a column full of NaNs. data[f"SMA{shortWindow}"] = data["Close"].rolling(window=shortWindow).mean() ``` ### Issue Description When running a simple rolling mean assignment on a copied DataFrame, the operation works as expected on first execution, but subsequent executions result in columns full of NaNs, even though `.copy()` was used explicitly to break view ties. This behavior suggests that `.rolling()` or the assignment mechanism is not fully stateless or clean between executions, which violates expectations around `.copy()` providing safe memory isolation. ### Expected Behavior After using `.copy()` on the sliced DataFrame—`data = data[['Close']].copy()`—I expect the object to be fully decoupled from its original state, and for repeated assignments to `data['SMA20'] = data['Close'].rolling(20).mean()` to behave identically and reliably, regardless of how many times the line is executed. Instead, the observed behaviour is as follows: - On the first execution, `.rolling().mean()` works correctly. - On any subsequent execution, the assigned columns become full of NaNs. - This occurs even though `.copy()` was used on the entire DataFrame. ### Confirmed Environment Behavior This issue was tested in: - ✅ **Pandas 1.5.3** — No issue: repeated `.rolling().mean()` assignments behave as expected. - ❌ **Pandas 2.3.1** (Replit script & JupyterLab) — Repeated assignment results in columns filled with NaNs. This confirms the issue is a **regression introduced in Pandas 2.x**. The behavior is reproducible across multiple environments and interfaces. ### 🔄 Update: Bug Persists Even with `.copy()` on the Input Series I believe I have further confirmed that the issue is **not due to view/copy ambiguity** of the input data. I tested the following pattern, using `.copy()` explicitly on the input Series before applying `.rolling()`, as shown below: ```python # Create a clean copy of the 'Close' column closeData = data["Close"].copy() # Assign rolling mean result to new column data[f"SMA{shortWindow}"] = closeData.rolling(window=shortWindow).mean() ``` However, **re-running this assignment line a second time still results in the `SMA20` column being filled with NaNs**. This happens **even though `closeData` is a deep copy**, isolated from any previous DataFrame state. This suggests that: * The issue is **not caused by shared views** or copy/reference issues in the input. * The bug may instead be related to **reassigning the same target column** (`SMA20`) multiple times with a rolling result — possibly due to internal caching, memory reuse, or stale index alignment in Pandas’ internal `BlockManager`. The bug continues to occur in both: * ✅ Replit script execution (not as a notebook) * ✅ JupyterLab environment This further supports the conclusion that the bug is **environment-independent**, and likely a **regression in core Pandas 2.x logic** for repeated rolling assignments. ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6 python : 3.11.3 python-bits : 64 OS : Darwin OS-release : 24.0.0 Version : Darwin Kernel Version 24.0.0: Mon Aug 12 20:51:54 PDT 2024; root:xnu-11215.1.10~2/RELEASE_ARM64_T6000 machine : arm64 processor : arm byteorder : little LC_ALL : None LANG : en_GB.UTF-8 LOCALE : en_GB.UTF-8 pandas : 2.3.1 numpy : 2.3.1 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 22.3.1 Cython : None sphinx : None IPython : 9.4.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.6 lxml.etree : None matplotlib : 3.10.3 numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Indexing", "MultiIndex", "Window" ]
0
0
0
0
0
0
0
0
[ "I also faced this issue in Pandas 2.3.1.\n\nEven on the first run, using `.rolling().mean()` on a `.copy()` of the column gives all NaN values.\n\n---\n\n### ✅ Reproducible Code\n\n```python\nimport pandas as pd\nimport yfinance as yf\n\ndata = yf.download(\"AAPL\", start=\"2020-01-01\", end=\"2020-05-01\", auto_adjust=True, progress=False)\ndata = data[[\"Close\"]].copy()\ndata[\"SMA20\"] = data[\"Close\"].rolling(20).mean()\nprint(data.tail())\n\nMy Environment\n\nPandas 2.3.1 → ❌ All NaNs\n\nPandas 2.2.2 → ❌ All NaNs\n\nPandas 1.5.3 → ✅ Works correctly\n\nPython 3.x\n\nI also debugged using monkey-patching and confirmed:\n\nInput values are valid\n\nOutput is still all NaNs\n\nThis happens even on the first execution\n\nI’m happy to help test or provide more info if needed.", "✅ Update from my side:\n\nI cloned the Pandas repository and built version 2.2.2 from source on my local machine.\n\nAfter testing `.rolling().mean()` on a copied DataFrame, it works correctly and gives valid moving averages.\n\nSo this issue does not happen in Pandas 2.2.2 — the bug only appears in 2.3.x versions.\n\nThis confirms it's a regression. I can help test or trace it further if needed.\n", "@abujabarmubarak - it would be helpful to provide a reproducible example that does not depend on third party packages where possible. Does this bug exists if you just create a DataFrame from scratch, e.g. `df = pd.DataFrame({...})`? If so, can you update the OP with this.", "This seems related to aligning of indexes with multi-indexes\n\n```py\nIn [126]: cols = pd.MultiIndex.from_tuples([('A', 'B')])\n ...: df = pd.DataFrame([[i] for i in range(3)], columns=cols)\n ...: s = df['A'].rolling(2).mean()\n ...: df['C'] = s\n ...: print(df)\n ...: df['C'] = s\n ...: print(df)\n A C\n B\n0 0 NaN\n1 1 0.5\n2 2 1.5\n A C\n B\n0 0 NaN\n1 1 NaN\n2 2 NaN\n```", "Thanks @asishm! Making your last line `df[('C', '')] = s` gives the expected result. I would think `df['C'] = ...` and `df[('C', '')] = ...` should be treated the same. This looks like a bug to me.\n\nHave not yet checked if this is a duplicate report, but wouldn't be surprised if it was." ]
3,225,401,662
61,840
DOC: Add unified code guideline document (#33851)
closed
2025-07-12T14:07:15
2025-07-12T18:33:52
2025-07-12T18:33:51
https://github.com/pandas-dev/pandas/pull/61840
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61840
https://github.com/pandas-dev/pandas/pull/61840
abujabarmubarak
3
### Summary This PR addresses Issue #33851 by adding a new consolidated documentation file named `code_guidelines.md` under `doc/source/development/`. The goal is to unify coding standards and make it easier for new contributors to find all code style rules in one place. ### Key Highlights - Combines content from: - `code_style.html` - `contributing.html#code-standards` - Covers formatting tools (`black`, `flake8`, `isort`) - Includes naming conventions, testing rules, and docstring format - Improves onboarding for new contributors - Adds references and examples Let me know if you'd like me to update links in `index.rst` or make any changes. Happy to collaborate on refinements.
[]
0
0
0
0
0
0
0
0
[ "Hi team 👋,\r\nThis PR addresses issue #33851 by consolidating code style guidelines into one document (`code_guidelines.md`) under `doc/source/development/`. \r\n\r\nAll checks have passed ✅. Please let me know if anything needs to be revised — happy to make changes! Thanks for reviewing 🙏\r\n", "Hi team 👋,\r\nThis PR addresses issue #33851 by consolidating code style guidelines into one document (`code_guidelines.md`) under `doc/source/development/`. \r\n\r\nAll checks have passed ✅. Please let me know if anything needs to be revised — happy to make changes! Thanks for reviewing 🙏\r\n", "Thanks for the PR but this seems to incorporate little to no information in the current contribution documentation. And as a reminder, we discourage the use of AI code contributions. Closing" ]
3,225,093,081
61,839
DOC: rm excessive backtick
closed
2025-07-12T09:16:13
2025-07-12T20:51:49
2025-07-12T18:36:28
https://github.com/pandas-dev/pandas/pull/61839
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61839
https://github.com/pandas-dev/pandas/pull/61839
mattwang44
1
- [ ] ~~closes #xxxx (Replace xxxx with the GitHub issue number)~~ only fix sphinx syntax - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. --- I’m developing a new sphinx-lint rule to detect excessive backticks ([PR](https://github.com/sphinx-contrib/sphinx-lint/pull/139)), and it flagged some in the current document ([this GitHub search link](https://github.com/search?q=repo%3Apandas-dev%2Fpandas+%2F%5B%5E%60%5D%3A%28class%7Cmeth%29%3A%5C%60%5B%5E%5C%60%5Cs%5D*%5C%60%5C%60%2F&type=code) lists these occurences too). This PR fixes detected cases.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "Thanks @mattwang44 " ]
3,224,754,917
61,838
ENH: Include line number and number of fields when read_csv() callable raises ParserWarning
open
2025-07-12T03:26:27
2025-07-24T13:58:57
null
https://github.com/pandas-dev/pandas/issues/61838
true
null
null
matthewgottlieb
2
### Feature Type - [ ] Adding new functionality to pandas - [x] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description I wish I could use pandas to detect and repair issues in a CSV file, but raise an informative warning when an unrepairable issue is encountered. I have written a function which identifies common issues (e.g. the field delimiter being improperly used within a field) and checks surrounding fields to estimate the original intent of the data, but when the issue cannot be identified with this logic, the function would return the original line and the user should be directed to the problematic line. ### Feature Description Given a CSV with bad lines (e.g. line 3 having an extra "E"): ``` id,field_1,field_2 101,A,B 102,C,D,E 103,F,G ``` read_csv() will, with all defaults (`on_bad_lines='error'`), raise a ParserError: ``` pandas.errors.ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4 ``` With `on_bad_lines='warn'`, it will raise a ParserWarning, with the same helpful information: ``` <stdin>:1: ParserWarning: Skipping line 3: expected 3 fields, saw 4 ``` However, when using a callable (e.g. `on_bad_lines=line_fixer`), the ParserWarning message is very generic, not indicating the line number, expected fields, nor seen fields: ``` >>> import pandas as pd >>> def line_fixer(line): ... return [1, 2, 3, 4, 5] ... >>> df = pd.read_csv('test.csv', engine='python', on_bad_lines=line_fixer) <stdin>:1: ParserWarning: Length of header or names does not match length of data. This leads to a loss of data with index_col=False. ``` Including these details would allow the user to find and fix the input CSV manually. ### Alternative Solutions - Pre-process the CSV file separately from the read_csv() function. - Pass line number and expected field count to the callable function, which can raise its own descriptive warning. ### Additional Context _No response_
[ "Enhancement", "Error Reporting", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "Hi @matthewgottlieb ,\nIt seems a method accepts the expected col num and the actual col num when `engine='pyarrow'`, so I think maybe we can do the same thing for `engine='python'` as well.\n\n```python\nimport pandas as pd\nimport warnings\n\ndef on_bad_lines_pyarrow(arg):\n\twarnings.warn(\n\tf'Expected {arg[0]} columns, got {arg[1]}. Skip this row',\n\t\tpd.errors.ParserWarning\n\t)\n\treturn \"skip\"\n\nfile = pd.read_csv('input.csv', on_bad_lines=on_bad_lines_pyarrow, engine='pyarrow')\n\n# ParserWarning : Expected 3 columns, got 4. Skip this row\n \n```\n\nCould anyone kindly help confirm if this would be acceptable and I can work on this?\n\nExpecting to be like :\n\n```python\nimport pandas as pd\nimport warnings\n\ndef on_bad_lines_python(line, expected_columns, actual_columns, row):\n\twarnings.warn(\n\t\tf\"Expected {expected_columns}, got {actual_columns} at L{row} : {line}\",\n\t\tpd.errors.ParserWarning\n\t)\n\treturn [i for i in range(len(line))]\n\nfile = pd.read_csv('input.csv', on_bad_lines=on_bad_lines_python, engine='python')\n\n# Expected 3, got 4 at L3 : ['102', 'C', 'D', 'E']\n\n```", "take" ]
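As context for the discussion above, a hedged sketch of a repairing callable for the python engine; the `line_fixer` name comes from the issue, and the merge-extra-fields-into-the-last-column strategy is an assumption for illustration:

```python
import io
import pandas as pd

csv_data = "id,field_1,field_2\n101,A,B\n102,C,D,E\n103,F,G\n"

def line_fixer(line):
    # The python engine passes the bad row to the callable as a list of
    # strings. Fold the surplus fields back into the last expected column
    # so the returned row matches the header width.
    return line[:2] + [",".join(line[2:])]

df = pd.read_csv(io.StringIO(csv_data), engine="python", on_bad_lines=line_fixer)
```

Because the callable itself has no access to the line number or expected field count, it cannot emit the informative warning the issue asks for; that is the gap the proposed extra callable arguments would close.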
3,224,729,529
61,837
BUG: read_csv() on_bad_lines callable does not raise ParserWarning when index_col is set
open
2025-07-12T03:01:11
2025-07-19T16:40:31
null
https://github.com/pandas-dev/pandas/issues/61837
true
null
null
matthewgottlieb
1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python >>> import pandas as pd >>> def line_fixer(line): ... return [1, 2, 3, 4, 5] ... >>> df = pd.read_csv('test.csv', engine='python', on_bad_lines=line_fixer) <stdin>:1: ParserWarning: Length of header or names does not match length of data. This leads to a loss of data with index_col=False. >>> df = pd.read_csv('test.csv', engine='python', on_bad_lines=line_fixer, index_col=0) >>> ``` ### Issue Description ### test.csv, with extra column ("E") in row 3 ``` id,field_1,field_2 101,A,B 102,C,D,E 103,F,G ``` Callable `line_fixer` returns a list with 5 elements, which is more elements than expected. Documentation for the read_csv() on_bad_lines callable states: > If the function returns a new list of strings with more elements than expected, a ParserWarning will be emitted while dropping extra elements. This behavior is correctly seen when index_col=None (the default), but not when index_col is set. ### Expected Behavior A ParserWarning should be raised regardless of the index_col parameter. In either case, data (elements 4 and 5, in this example) are being lost, but this is done silently when index_col is set. 
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6 python : 3.10.4 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.26100 machine : AMD64 processor : Intel64 Family 6 Model 140 Stepping 1, GenuineIntel byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : English_United States.1252 pandas : 2.3.1 numpy : 2.2.5 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : 3.2.3 zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "IO CSV" ]
0
0
0
0
0
0
0
0
[ "I looked into this and found that the ParserWarning for extra fields returned by a callable in on_bad_lines was only triggered when index_col=None. When index_col was set, the warning was silently skipped, causing silent data loss.\n\nI am not very confident but I tried to fixed the code so the warning is always raised regardless of the index_col setting. I also added a test covering both cases (index_col=None and index_col=0) to prevent regressions. its not passing the checks yet!" ]
3,224,725,538
61,836
DOC: Update README.md to reference issues related to 'good first issue' and 'Docs' properly
closed
2025-07-12T02:55:59
2025-07-12T18:49:43
2025-07-12T18:49:35
https://github.com/pandas-dev/pandas/pull/61836
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61836
https://github.com/pandas-dev/pandas/pull/61836
sivasweatha
1
- [x] closes #61835 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "Thanks @sivasweatha " ]
3,224,725,304
61,835
DOC: README.md link for issues specified for Docs and good first issue doesn't reference properly
closed
2025-07-12T02:55:35
2025-07-12T18:49:36
2025-07-12T18:49:36
https://github.com/pandas-dev/pandas/issues/61835
true
null
null
sivasweatha
1
### Pandas version checks - [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://github.com/pandas-dev/pandas?tab=readme-ov-file#contributing-to-pandas ### Documentation problem In the README.md, the links for 'Docs' and 'good first issue' doesn't reference to the appropriate labels. ### Suggested fix for documentation Change the links so they reference the proper labels.
[ "Docs", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "Fixed this issue with #61836." ]
3,224,532,712
61,834
ENH: error message context.
open
2025-07-12T00:11:06
2025-07-30T16:41:50
null
https://github.com/pandas-dev/pandas/issues/61834
true
null
null
hunterhogan
2
### Feature Type - [ ] Adding new functionality to pandas - [x] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description I wish I had more information when troubleshooting exceptions. ```python Traceback (most recent call last): File "c:\apps\astToolFactory\astToolFactory\_datacenterDataframe.py", line 413, in <module> updateDataframe() ~~~~~~~~~~~~~~~^^ File "c:\apps\astToolFactory\astToolFactory\_datacenterDataframe.py", line 377, in updateDataframe dataframe = _getDataFromStubFile(dataframe) File "c:\apps\astToolFactory\astToolFactory\_datacenterDataframe.py", line 144, in _getDataFromStubFile dataframe = dictionary2UpdateDataframe(getDictionary_match_args(), dataframe) File "c:\apps\astToolFactory\astToolFactory\_datacenterDataframe.py", line 342, in dictionary2UpdateDataframe dataframe.loc[getMaskByColumnValue(dataframe, columnValueMask), assign.column] = assign.value ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\apps\astToolFactory\.venv\Lib\site-packages\pandas\core\indexing.py", line 911, in __setitem__ iloc._setitem_with_indexer(indexer, value, self.name) ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\apps\astToolFactory\.venv\Lib\site-packages\pandas\core\indexing.py", line 1942, in _setitem_with_indexer self._setitem_with_indexer_split_path(indexer, value, name) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^ File "C:\apps\astToolFactory\.venv\Lib\site-packages\pandas\core\indexing.py", line 1998, in _setitem_with_indexer_split_path raise ValueError( ...<2 lines>... ) ValueError: Must have equal len keys and value when setting with an iterable ``` ### Feature Description # Concrete information Please print any available concrete information. 
For example, pseudocode: ```python message = f"Must have equal {len(keys)=} and {len(value)=} when setting with an iterable" raise ValueError(message) ``` # Error message for a user Please write the error message as an English sentence. ``` (what?) Must have equal len (length) keys and (length) value when setting (what?) with an iterable (.) ``` ### Alternative Solutions Grammar checker. ### Additional Context _No response_
[ "Enhancement", "Error Reporting", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "Created a PR that will, hopefully, close this issue. Please review and suggest", "I greatly appreciate your work on this!" ]
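The `{expr=}` self-documenting f-string syntax suggested in the report (available since Python 3.8) makes the concrete lengths visible with no extra formatting work. A minimal sketch with hypothetical `keys` and `value` placeholders:

```python
keys = ["a", "b", "c"]   # hypothetical indexer keys
value = [1, 2]           # hypothetical values being assigned

# f"{len(keys)=}" expands to the literal text "len(keys)=3",
# so the message names both the expression and its value.
message = (
    f"Must have equal length keys and values when setting with an "
    f"iterable: {len(keys)=}, {len(value)=}"
)
```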
3,223,459,674
61,833
DOC: Clarify str.cat output for Index object (GH35556)
closed
2025-07-11T16:10:36
2025-07-28T17:23:17
2025-07-28T17:23:17
https://github.com/pandas-dev/pandas/pull/61833
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61833
https://github.com/pandas-dev/pandas/pull/61833
anvivats
1
- [x] closes #35556 - [x] Tests added and passed — *N/A (doc-only change)* - [x] All code checks passed — *pre-commit and CI should pass* - [x] Added type annotations — *N/A (no new code added)* - [x] Added an entry in the latest whatsnew — *N/A (doc-only update)* ### Summary of Changes This PR improves the docstring for `str.cat()` to clarify what happens when the caller is an `Index` and `others` is `None`. Specifically, the doc now explains that in this case, the output is also an `Index` containing a single string, rather than a plain `str` as it is for `Series`. ### Example added: ```python >>> idx = pd.Index(["a", "b", np.nan]) >>> idx.str.cat(sep="-") Index(['a-b'], dtype='object')
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen." ]
3,223,192,067
61,832
REF: separate out helpers in libparser
closed
2025-07-11T14:51:21
2025-07-11T22:34:06
2025-07-11T16:32:39
https://github.com/pandas-dev/pandas/pull/61832
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61832
https://github.com/pandas-dev/pandas/pull/61832
jbrockmendel
1
Besides general code hygiene, I'm trying to isolate parts of the code that could be parallelized in a free-threading world xref #61825
[ "IO CSV" ]
0
0
0
0
0
0
0
0
[ "Thanks @jbrockmendel " ]
3,222,315,548
61,831
BUG: Intersection of Pandas Index Object is not working properly
closed
2025-07-11T09:57:14
2025-07-11T15:41:22
2025-07-11T15:41:22
https://github.com/pandas-dev/pandas/issues/61831
true
null
null
Abhinav2615
1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd indA=pd.Index([1,3,5,7,9]) indB=pd.Index([2,3,5,7,11]) indA & indB ``` ### Issue Description the output i am getting is : Index([0, 3, 5, 7, 9], dtype='int64') ### Expected Behavior but after intersection, i should get the output: Index([3, 5, 7], dtype='int64') ### Installed Versions <details> Replace this line with the output of pd.show_versions() </details>
[ "Bug", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "`&` stopped being an intersection a long time ago. It is the `__and__` operator, the same as for Series/DataFrame. Try `indA.intersection(indB)`." ]
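The closing comment's distinction can be shown side by side: `&` is the element-wise bitwise `__and__` applied positionally, while set intersection is spelled explicitly:

```python
import pandas as pd

indA = pd.Index([1, 3, 5, 7, 9])
indB = pd.Index([2, 3, 5, 7, 11])

# Element-wise bitwise AND: 1 & 2 == 0, 3 & 3 == 3, ..., 9 & 11 == 9.
bitwise = indA & indB

# Set intersection of the two indexes.
common = indA.intersection(indB)
```

The `[0, 3, 5, 7, 9]` the reporter saw is exactly the positional bitwise result, not a bug.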
3,221,370,251
61,830
TST: Fix `test_mask_stringdtype`
closed
2025-07-11T03:33:52
2025-07-11T16:35:53
2025-07-11T16:35:45
https://github.com/pandas-dev/pandas/pull/61830
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61830
https://github.com/pandas-dev/pandas/pull/61830
arthurlw
1
- [x] closes #61824 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~ - [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
[ "Testing" ]
0
0
0
0
0
0
0
0
[ "Thanks again @arthurlw " ]
3,221,165,163
61,829
ENH: Add a function like PYQT signal
closed
2025-07-11T01:25:12
2025-08-05T16:19:10
2025-08-05T16:19:10
https://github.com/pandas-dev/pandas/issues/61829
true
null
null
an-unimportant-person
4
### Feature Type - [x] Adding new functionality to pandas - [ ] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description I hope this function can use to keep two or more dataframe same like PyQt View and Model (if I revise model view will change ) ### Feature Description from typing import Callable class Index(): def __init__(self,column = -1,row = -1): self.column = column self.row = row def check(self,reviseRange): """if self.column,self.index in range return True""" return True class dataframe: def __init__(self): self.handlers = {Index:Callable}#index,function def _trigger(self,reviseRange): """use @ to adapt iloc loc __setitem__ """ for i,f in self.handlers.items(): if i.check(): f(reviseRange) ### Alternative Solutions pyqt singal ### Additional Context _No response_
[ "Enhancement", "Needs Triage", "Closing Candidate" ]
0
0
0
0
0
0
0
0
[ "Quick explanation for those of us who don’t know what pyqt signal is?", "it is a characteristic that content in widget will changed when model changed . that was powered by pyqtsignal ,when model changed ,model will emit \"DataChanged\" signal to update the data in widget(view).", "I don't think that belongs in pandas, but can keep this open to get other opinions.", "Yes, this probably suitable for another library to implement. Thanks but closing" ]
3,220,921,101
61,828
BUG: DataFrame arithmetic operators don't work with Series using fill_value
open
2025-07-10T22:54:01
2025-08-23T17:29:40
null
https://github.com/pandas-dev/pandas/pull/61828
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61828
https://github.com/pandas-dev/pandas/pull/61828
eicchen
10
- [x] closes #61581 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Removed a test which checked for expected error to be raised and a corner case. Added a test case to test multiple operators with Dataframe x Series operations while using fill_value
[]
0
0
0
0
0
0
0
0
[ "Im closing the PR for now until the additional fixes for EA are deployed", "Reopened to talk about fixes for this specific issue before I get sidetracked by 1D operations again (ignore all the failed checks for now)", "The appropriate fix is going to be in _maybe_align_series_as_frame", "> The appropriate fix is going to be in _maybe_align_series_as_frame\r\n\r\nSo this was what I was working on locally, and had questions about. I was able to reshape EAs in _maybe_align_series_as_frame and am still working on various places to get the operation smoothed out. But I feel like this issue deviates from the original issue, which is only related to fill_value. As far as I can tell this is not related to that issue so we should probably file it under another and mark the original closed for bookkeeping. \r\n\r\nI can add another test case which wouldn't require 2D EA operations for the dtype test. \r\n\r\n(There was original a bunch of brain spew about issues I was currently having, but I'll organize it before reposting if needed)\r\n", "Just making sure, do you agree with splitting the 1D part off?", "It looks like the change might have inadvertently changed some behavior that I don't know if I should keep or not.\r\n\r\nIt reverts the error message that is expected in the test_period_add_timestamp_raises test back to what it was pre-resolution-inference according to your comment from a year ago.\r\n\r\nAnd it makes the test_add_strings in test_string.py return a success, rather than the xfail that it was supposed to be. test_add_frame unfortunately still fails though so I don't know if I should purposefully break it to keep the actions in line with each other. I read the linked issue but don't think there was a consensus (#28527 )\r\n\r\n", "whats the updated exception messsage for the period one?\r\n\r\nFixing xfailed tests is a good thing.", "it is now \"cannot add PeriodArray and DatetimeArray\", which is inline with what it is for everything else. 
\r\n\r\nhere's the code snippet. I modified.\r\n<img width=\"799\" height=\"165\" alt=\"image\" src=\"https://github.com/user-attachments/assets/7bcc7b0e-9f90-41d6-a677-cbdc5da56c90\" />\r\n\r\n\r\nHowever, it looks like contrary to my earlier statement, add_to_frame doesn’t consistently pass as xfail on the pipeline, some jobs fail while others don’t. It works as expected locally, so I’m not sure how best to debug this properly. Do you have any advice? ", "Can you remove the xfail and let’s see how the CI does", "> Can you remove the xfail and let’s see how the CI does\r\n\r\nSo interestingly, it seems to pass the tests it failed previously while failing the ones it previously succeeded. Do you know if there is a significant difference between the subset of unit tests that are different than the others? (Freethreading, Numpy Dev, Linux-32-bit. Linux-Musl, Pyodide, and Without PyArrow). Alternatively, I can carve out StringArray for now and investigate it as a separate issue" ]
3,220,718,791
61,827
DOC: Correct error message in AbstractMethodError for methodtype argument
closed
2025-07-10T21:10:11
2025-07-12T07:33:27
2025-07-11T22:50:19
https://github.com/pandas-dev/pandas/pull/61827
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61827
https://github.com/pandas-dev/pandas/pull/61827
Maaz-319
3
Fixing an error message in the AbstractMethodError class found in pandas/errors/__init__.py. Currently: raise ValueError( f"methodtype must be one of {methodtype}, got {types} instead." ) Here, {methodtype} and {types} are swapped. This means if you called this error with methodtype="foo", the message would read: methodtype must be one of foo, got {'method', 'classmethod', 'staticmethod', 'property'} instead. That’s confusing, because the set of valid types should be listed after “must be one of”, and the invalid value you passed should be listed after “got”. Corrected: ======= raise ValueError( f"methodtype must be one of {types}, got {methodtype} instead." ) Now, if you called this error with methodtype="foo", the message would read: methodtype must be one of {'method', 'classmethod', 'staticmethod', 'property'}, got foo instead. This is clearer and follows standard error message conventions.
[ "Error Reporting" ]
0
0
0
0
0
0
0
0
[ "pre-commit.ci autofix", "Thanks @Maaz-319 ", "> Thanks @Maaz-319 \n\nGlad to contribute @mroeschke " ]
3,220,114,056
61,826
TST: enable 2D tests for MaskedArrays, fix+test shift
closed
2025-07-10T17:21:01
2025-07-11T22:33:53
2025-07-11T16:41:58
https://github.com/pandas-dev/pandas/pull/61826
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61826
https://github.com/pandas-dev/pandas/pull/61826
jbrockmendel
1
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Testing", "NA - MaskedArrays" ]
0
0
0
0
0
0
0
0
[ "Thanks @jbrockmendel " ]
3,219,990,761
61,825
PERF: Brainstorming read_csv perf improvements
open
2025-07-10T16:32:15
2025-07-11T14:11:21
null
https://github.com/pandas-dev/pandas/issues/61825
true
null
null
jbrockmendel
3
- [ ] With free-threading, could _convert_column_data be called in parallel for each column? - [ ] (free-threading) For large files, split into chunks and parse in parallel, then concat? - [ ] In a pyarrow-always-available world, could `_string_box_utf8` allocate a buffer+mask rather than ndarray[object]? - [ ] #17743 Anyone else have more ideas?
[ "Performance", "IO CSV", "Needs Discussion" ]
0
0
0
0
0
0
0
0
[ "Is it possible to prefilter? @jbrockmendel ", "Can you describe what you have in mind?", "@jbrockmendel I mean do a filter on the rows while reading the csv, instead of returning the entire CSV. I'm thinking of what happens with parquet, where a user can return a subset of the data, filtered based on partitioning. CSV doesn't have that; maybe there is a different way to make that happen? Polars has something similar with scan_csv; I believe duckdb has similar capabilities with their CSV reader" ]
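The split-into-chunks idea from the checklist above already exists in sequential form via `chunksize`; the open question is whether the per-chunk parsing could run on separate threads under free-threading. A minimal sketch of the sequential version:

```python
import io
import pandas as pd

csv_data = "a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(10))

# Sequential chunked read today; the brainstorm asks whether each chunk
# could be parsed in parallel and concatenated once the GIL is optional.
reader = pd.read_csv(io.StringIO(csv_data), chunksize=4)
df = pd.concat(reader, ignore_index=True)
```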
3,219,391,518
61,824
BUG: `mask` in `test_mask_stringdtype` would always return the same result regardless of `cond`
closed
2025-07-10T13:32:51
2025-07-11T16:35:47
2025-07-11T16:35:47
https://github.com/pandas-dev/pandas/issues/61824
true
null
null
sanggon6107
1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd # test_mask_stringdtype obj = pd.DataFrame( {"A": ["foo", "bar", "baz", pd.NA]}, index=["id1", "id2", "id3", "id4"], dtype=pd.StringDtype(), ) filtered_obj = pd.DataFrame( {"A": ["this", "that"]}, index=["id2", "id3"], dtype=pd.StringDtype() ) expected = pd.DataFrame( {"A": [pd.NA, "this", "that", pd.NA]}, index=["id1", "id2", "id3", "id4"], dtype=pd.StringDtype(), ) filter_ser = pd.Series([False, True, True, False]) obj.mask(filter_ser, filtered_obj) # A # id1 <NA> # id2 this # id3 that # id4 <NA> filter_ser = pd.Series([True, False, False, True]) obj.mask(filter_ser, filtered_obj) # A # id1 <NA> # id2 this # id3 that # id4 <NA> filter_ser = pd.Series([False, False, False, False]) obj.mask(filter_ser, filtered_obj) # A # id1 <NA> # id2 this # id3 that # id4 <NA> filter_ser = pd.Series([True, True, True, True]) obj.mask(filter_ser, filtered_obj) # A # id1 <NA> # id2 this # id3 that # id4 <NA> ``` ### Issue Description Found during #60772 . I suppose the purpose of this test is to check if `mask` works as expected with `pd.StringDtype()` (See #40824 ), but the test seems to return the same result regardless of `cond` since it fails to align in `_where`. If we want to check if `mask` replaces with `other` only where `cond` is `True` and let `cond` propagate where `cond` is `False`, I think `filter_ser` should have `index` so that `mask` can recognize the corresponding `other` value. 
### Expected Behavior ```python filter_ser = pd.Series([False, True, True, False], index=["id1", "id2", "id3", "id4"]) obj.mask(filter_ser, filtered_obj) # A # id1 foo # id2 this # id3 that # id4 <NA> ``` ### Installed Versions <details> commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6 python : 3.12.7 python-bits : 64 OS : Windows OS-release : 11 Version : 10.0.26100 machine : AMD64 processor : AMD64 Family 25 Model 80 Stepping 0, AuthenticAMD byteorder : little LC_ALL : None LANG : None LOCALE : Korean_Korea.949 pandas : 2.3.1 numpy : 2.3.1 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 24.2 Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Testing" ]
0
0
0
0
0
0
0
0
[ "Confirmed on main. PRs and investigations are welcome\n\nThanks for raising this!" ]
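The fix proposed in the report is to give `filter_ser` an index so that `mask` can align the condition, the caller, and `other`. A sketch of the expected behavior from the issue:

```python
import pandas as pd

obj = pd.DataFrame(
    {"A": ["foo", "bar", "baz", pd.NA]},
    index=["id1", "id2", "id3", "id4"],
    dtype=pd.StringDtype(),
)
filtered_obj = pd.DataFrame(
    {"A": ["this", "that"]}, index=["id2", "id3"], dtype=pd.StringDtype()
)

# With a labeled condition, mask() only replaces rows where cond is True
# and keeps the caller's values elsewhere.
filter_ser = pd.Series(
    [False, True, True, False], index=["id1", "id2", "id3", "id4"]
)
result = obj.mask(filter_ser, filtered_obj)
```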
3,218,613,276
61,823
BUG: drop doesn't recognise MultiIndexes
closed
2025-07-10T09:39:27
2025-07-18T02:18:17
2025-07-18T02:18:17
https://github.com/pandas-dev/pandas/issues/61823
true
null
null
pratt-fds
7
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd foo = pd.DataFrame({'a': [1, 2, 3], 'b': ['foo', 'foo', 'bar']}) foo = pd.concat([foo], keys=['foo'], axis=1) foo.drop(index='b', level=1, axis=1) ``` ### Issue Description When drop is called, an AssertionError is raised `AssertionError: axis must be a MultiIndex` On inspection of the dataframe, the columns are a MultiIndex ### Expected Behavior Drop should not raise an incorrect AssertionError ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6 python : 3.10.8 python-bits : 64 OS : Linux OS-release : 6.6.87.2-microsoft-standard-WSL2 Version : #1 SMP PREEMPT_DYNAMIC Thu Jun 5 18:30:46 UTC 2025 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : C.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.1 numpy : 2.2.6 pytz : 2025.2 dateutil : 2.9.0.post0 pip : None Cython : None sphinx : None IPython : 8.37.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : 6.129.3 gcsfs : None jinja2 : 3.1.6 lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : 8.1.1 python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details> Also 
tested in a clean 3.13.5 environment: <details> INSTALLED VERSIONS ------------------ commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6 python : 3.13.5 python-bits : 64 OS : Linux OS-release : 6.6.87.2-microsoft-standard-WSL2 Version : #1 SMP PREEMPT_DYNAMIC Thu Jun 5 18:30:46 UTC 2025 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : C.UTF-8 LOCALE : C.UTF-8 pandas : 2.3.1 numpy : 2.3.1 pytz : 2025.2 dateutil : 2.9.0.post0 pip : None Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug" ]
0
0
0
0
0
0
0
0
[ "Your function call specification is incorrect here, the arguments `index` and `columns` are supposed to be mutually exclusive from the use of `labels` & `axis` (confusing I know).\n\n```python\nfoo.drop(index='b', level=1, axis=1)\n```\n\nBy specifying `index='b'` pandas is only examining the row index for a MultiIndex (as hinted by `level=1`), and ultimately ignores the `axis=1` parameter.\n\nSo instead, you should use\n\n```python\n>>> foo.drop(columns='b', level=1)\n foo\n a\n0 1\n1 2\n2 3\n```\n\nor \n\n```python\n>>> foo.drop('b', level=1, axis=1)\n foo\n a\n0 1\n1 2\n2 3\n```\n", "Would it make sense to raise an exception related to invalid options selected, when passing index/columns and axis?\n\nTo me, index is a bit of a weird name for the index on the rows, as there can also be an index on the columns - It's easy to forget that index in this regard specifically means the row index", "+1 on raising when both `index` and `axis` or both `columns` and `axis` are specified. PRs to fix are welcome!", "take\n@rhshadrach can I work on this?", "> take [@rhshadrach](https://github.com/rhshadrach) can I work on this?\n\nAny chance this can be fixed for all functions/methods that allow both `func(index=…, columns=…)` and `func(arg, axis=…)` as mutually exclusive groups?\n\nA few off the top of my head\n- DataFrame.drop\n- DataFrame.reindex\n- DataFrame.rename_axis\n- DataFrame.rename\n\n\n(This last one may be scope creep)\n- DataFrame.set_axis only supports the `func(arg, axis=…)` signature. Could a `func(index=…, columns=…)` be added here?", "Hi @camriddell I added a basic check for this scenario. If you think I’m in the right direction, I’ll make the changes in the other functions as well. 
please let me know.", "This check is already there in \n[DataFrame.reindex](https://github.com/pandas-dev/pandas/blob/bc6ad140daf230c470fef92bec598831d4f94a16/pandas/core/generic.py#L5368)\n[DataFrame.rename](https://github.com/pandas-dev/pandas/blob/bc6ad140daf230c470fef92bec598831d4f94a16/pandas/core/generic.py#L1019C13-L1022C18)" ]
3,218,091,232
61,822
TST: Adding tests for validating DataFrame.__setitem__ and .loc behavior
closed
2025-07-10T06:33:10
2025-08-01T05:32:39
2025-08-01T05:32:38
https://github.com/pandas-dev/pandas/pull/61822
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61822
https://github.com/pandas-dev/pandas/pull/61822
niruta25
3
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Following up from #61804, adding tests to test_api.py to validate the DataFrame.__setitem__ and .loc assignment from Series
[ "Testing" ]
0
0
0
0
0
0
0
0
[ "@WillAyd Since I have you, last follow up from the https://github.com/pandas-dev/pandas/pull/61804, Should we keep these tests? \r\n\r\nLast convo on this issue regarding tests was here: https://github.com/pandas-dev/pandas/pull/61804#discussion_r2193379353", "I appreciate adding tests but I think this PR is superfluous - these cases are definitely covered in the existing code", "> I appreciate adding tests but I think this PR is superfluous - these cases are definitely covered in the existing code\r\n\r\nGotcha! Thanks for the insight. Closing this ticket." ]
3,217,521,002
61,821
DOC: Update link to pytz documentation
closed
2025-07-10T00:46:51
2025-07-11T16:21:36
2025-07-11T16:21:30
https://github.com/pandas-dev/pandas/pull/61821
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61821
https://github.com/pandas-dev/pandas/pull/61821
star1327p
2
Pytz documentation: https://pypi.org/project/pytz/ The original link does not work: http://pytz.sourceforge.net/index.html - [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "@star1327p Can I take this ?", "Thanks @star1327p " ]
3,216,771,450
61,820
DOC: Clarify broadcasting behavior when using lists in DataFrame arithmetic (GH18857)
open
2025-07-09T18:21:25
2025-08-14T22:41:09
null
https://github.com/pandas-dev/pandas/pull/61820
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61820
https://github.com/pandas-dev/pandas/pull/61820
Shashwat1001
1
- Clarifies the behavior when using Python lists in arithmetic operations with DataFrames. - Adds an example in `dsintro.rst` to show how adding a list returns a Series of arrays. - Adds a note in `basics.rst` to explain that lists are not broadcasted element-wise, unlike NumPy arrays or Series. - Fixes #18857.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "@jbrockmendel Can you please help me merge this PR. I have made the changes." ]
3,216,509,873
61,819
BUG: Series created from pre-2.1 legacy pickles lose their names during .copy operations
open
2025-07-09T16:28:53
2025-07-13T23:57:01
null
https://github.com/pandas-dev/pandas/issues/61819
true
null
null
Liam3851
2
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example On pandas < 2.1 (e.g. 1.5.3, 2.0.3): ```python3 import pandas as pd pd.Series(['a'], name='hi').to_pickle('G:/temp/test.pkl') ``` On pandas 2.3.0 and main: ```python3 import pandas as pd ser = pd.read_pickle('G:/temp/test.pkl') # appears to work ser2 = pd.Series(['a'], name='hi') # works pd.testing.assert_series_equal(ser, ser2) # works pd.testing.assert_series_equal(ser, ser.copy()) # Attribute "name" are different ``` ### Issue Description In doing a migration from 1.5.3 to the 2.x series we hit an issue where copying an unpickled Series drops its name (the actual operation was a `.reindex_like`, which called `.copy` under the hood). The bug begins with the pandas 2.1 series; I believe this may have been introduced in #51784 when the Series metadata was changed from name to _name. ### Expected Behavior It seems like an unpickled Series and its copy should be equal in all attributes, since that's what .copy does. However anything which does a copy (including implicit copies, such as calling `.reindex()`) currently causes the name to be dropped inadvertently. Now I'm not sure to what extent `read_pickle` guarantees that all actions on an unpickled legacy object work the same way on a newly-created object. That said, one reason this may be worth fixing is that the problem seems to persist in new versions, i.e. 
rewriting the pickle with the new version directly doesn't mitigate the problem: ```python3 # using version 2.3.0 # read legacy pickle ser = pd.read_pickle('G:/temp/test.pkl') # write out new pickle of the object ser.to_pickle('G:/temp/ser_copy.pkl') # read in new pickle ser_copy = pd.read_pickle('G:/temp/ser_copy.pkl') pd.testing.assert_series_equal(ser, ser_copy) # works pd.testing.assert_series_equal(ser_copy, ser_copy.copy()) # fails, even though ser_copy is read in from a pickle created in 2.3.0) ``` And of course obviously calling ser.copy() to get a new pandas 2.3 object also does not work. Thus it seems the only workaround to: 1) Read in the legacy pickle 2) Serialize the legacy pickle to some other format 3) Deserialize the other format 4) Serialize the newly-created object as a replacement pickle ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.11.12 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.19045 machine : AMD64 processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel byteorder : little LC_ALL : None LANG : None LOCALE : English_United States.1252 pandas : 2.3.0 numpy : 2.2.6 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : None IPython : 9.3.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 blosc : None bottleneck : 1.5.0 dataframe-api-compat : None fastparquet : None fsspec : 2025.5.1 html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.6 lxml.etree : 5.4.0 matplotlib : 3.10.3 numba : 0.61.2+0.g1e70d8ceb.dirty numexpr : 2.10.2 odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 19.0.1 pyreadstat : None pytest : 8.4.1 python-calamine : None pyxlsb : None s3fs : 2025.5.1 scipy : 1.15.2 sqlalchemy : 2.0.41 tables : None tabulate : 0.9.0 xarray : 2025.6.1 xlrd : None xlsxwriter : 3.2.5 zstandard : 0.23.0 tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Needs Discussion", "IO Pickle" ]
0
0
0
0
0
0
0
0
[ "Thanks for the report. From https://pandas.pydata.org/pandas-docs/dev/user_guide/io.html#pickling:\n\n> [read_pickle()](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.read_pickle.html#pandas.read_pickle) is only guaranteed backwards compatible back to a few minor release.\n\nSo this indeed is not a supported case.\n\nI would go even further and think about dropping the promise of \"a few minor releases\". pickles are really not meant for transferring data across environments, and trying to do so is going to be a constant source of edge cases. We should instead encourage users to use proper data formats like parquet that handles the vast majority of cases (just not general Python objects).\n\ncc @pandas-dev/pandas-core ", "> We should instead encourage users to use proper data formats\n\n+1" ]
3,215,487,867
61,818
Improved installation instruction in docs for clarity
closed
2025-07-09T10:51:03
2025-07-09T14:37:35
2025-07-09T14:37:34
https://github.com/pandas-dev/pandas/pull/61818
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61818
https://github.com/pandas-dev/pandas/pull/61818
lalitchoudhary81097
1
Made a small improvement in the installation section to improve clarity for new users. First open-source contribution
[]
0
0
0
0
0
0
0
0
[ "@lalitchoudhary81097 I'm glad you're interested in open source. Fixes to spelling or grammar are welcome, but please keep in mind that a volunteer needs to take time to review your suggestions." ]
3,214,392,590
61,817
To develop a machine learning model that accurately predicts house prices based on various features such as location, size, number of bedrooms, and other relevant factors.
closed
2025-07-09T03:51:38
2025-07-09T16:03:02
2025-07-09T16:03:02
https://github.com/pandas-dev/pandas/issues/61817
true
null
null
Kavinsanjai57
1
null
[]
0
0
0
0
0
0
0
0
[ "Hi, please use the issue template when creating an issue. From the title, I don't think this is something the pandas team can help with unless you have something more concrete.\n\nIf you do, feel free to open another issue!" ]
3,214,346,286
61,816
BUG: DataFrame.aggregate to preserve extension dtypes with callable functions
open
2025-07-09T03:15:42
2025-08-22T00:08:11
null
https://github.com/pandas-dev/pandas/pull/61816
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61816
https://github.com/pandas-dev/pandas/pull/61816
AdrianoCLeao
3
- [x] closes #61812 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Bug", "Stale", "Arrow", "pyarrow dtype retention" ]
0
0
0
0
0
0
0
0
[ "Should I add something to the doc/source/whatsnew/vX.X.X.rst file?", "I'll handle the failed build checks tomorrow", "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this." ]
3,213,602,334
61,815
DOC: Add Raises section to pd.to_numeric docstring
closed
2025-07-08T19:44:19
2025-07-16T16:30:30
2025-07-16T16:30:29
https://github.com/pandas-dev/pandas/pull/61815
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61815
https://github.com/pandas-dev/pandas/pull/61815
renegade620
1
- [x] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[]
0
0
0
0
0
0
0
0
[ "Thanks for the pull request, but this issue has been addressed by https://github.com/pandas-dev/pandas/pull/61868 so closing. Happy to have contributions towards any other issue labeled `good first issue`" ]
3,213,566,302
61,814
CI: Remove PyPy references in CI testing
closed
2025-07-08T19:27:53
2025-07-09T22:43:51
2025-07-09T21:47:42
https://github.com/pandas-dev/pandas/pull/61814
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61814
https://github.com/pandas-dev/pandas/pull/61814
mroeschke
1
We haven't had reliable PyPy testing in many years now and no one to champion supporting this platform. We only have 1 job that builds pandas with PyPy on Python 3.9 (already dropped). It's made more difficult in that conda-forge no longer supports PyPy either, https://conda-forge.org/news/2024/08/14/sunsetting-pypy/ I don't think it's worth using resources for this job anymore. pandas can still have code for PyPy compatibility for those wanting to support PyPy independently.
[ "CI" ]
1
0
0
0
0
0
0
0
[ "Thanks @mroeschke " ]
3,212,782,838
61,813
ENH: Add Polars engine to read_csv
open
2025-07-08T14:31:02
2025-07-24T22:47:43
null
https://github.com/pandas-dev/pandas/issues/61813
true
null
null
datapythonista
11
Since we won't get it for free via #61642, it would be good to add a Polars engine manually, so pandas users can benefit from state-of-the-art speed while reading CSVs. @pandas-dev/pandas-core any objection?
[ "IO CSV", "good first issue" ]
0
0
0
0
0
0
0
0
[ "1. How does it compare performance wise to the PyArrow csv parser?\n2. Compared to the PyArrow csv reader, I'm less eager to add a Polars engine since it already has a `to_pandas` method and pandas `read_csv` doesn't have a use for the intermediate Polars data structures unlike PyArrow (i.e. `ArrowExtensionArray` using `pyarrow.ChunkedArray`s when `dtype_backend=\"pyarrow\"`) ", "1. Last time I checked it took one third of the time compared to pandas with PyArrow\n2. Not sure I understand what's the problem. Polars will return a Polars dataframe that will be converted to a pandas dataframe backed by ArrowExtensionArray and PyArrow, no? Do you mind expanding on what's the issue?", "No objection in principle.\n\nI am curious if we can learn from what they've done to improve our engine.\n\nWould the implementation be roughly `return pl.read_csv(...).to_pandas()` or would the kwargs/outputs need some massaging like with the pyarrow engine?\n\nWill the tests Just Work with this engine or will they need a bunch of `if engine == \"polars\": ...` tweaks?", "I didn't check the mapping of all parameters in detail, but I'd use the lazy version with at least a `.select()` and a `.filter()` to support column pruning and predicate pushdown. So, not a single liner, but my expectation is that it's a simple wrapper.\n\nI'm hoping tests will pass. I guess not all kwargs may be supported as with pyarrow, so maybe something custom is needed.", "> I am curious if we can learn from what they've done to improve our engine.\n\nI checked some time ago and wrote about some of my findings in this blog post. It also contains benchmarks of different CSV readers: https://datapythonista.me/blog/how-fast-can-we-process-a-csv-file\n\nI can tell you that Ritchie spent a huge amount of time optimizing the Polars reader. 
But if you have time and interest, improving our C engine sounds great.", "> Do you mind expanding on what's the issue?\n\nMore just that, as @jbrockmendel mentioned, would `pd.read_csv(..., engine=\"polars\")` just be syntatic sugar for `pl.read_csv(...).to_pandas()` correct?\n\nWhile at least with `pd.read_csv(..., engine=\"pyarrow\", dtype_backend=\"pyarrow\")`, it's not just syntatic sugar as we're still holding/using PyArrow objects after the reading of the CSV. i.e. there is more \"use\" for PyArrow here.\n\nEDIT: I see that you mentioned in https://github.com/pandas-dev/pandas/issues/61813#issuecomment-3049520253 it might not just a be a 1 liner but fitting the right lazy APIs to our `read_csv` signature, so I would be a bit more positive including this now as there's more \"art\" than just being a `pl.read_csv(...).to_pandas()` passthrough", "I read the blog post and got curious about when engine=\"python\" is necessary. Patching read_csv to change `engine in {\"python\", \"python-fwf\"}` to \"c\" breaks <s>26</s> <b>I applied the patch incorrectly. Will update with correct number</b> tests. <s>10 of those are about on_bad_line being callable. The rest need a closer look but tentatively look like they are about string inference. It may be feasible to just get rid of the python engine.</s> 112 tests. on_bad_lines being callable, regex separators, skipfooter support are the main ones.\n\nNext up, patching to always use the pyarrow engine and see if it breaks the world. 3137 failures. Looks like mostly about unsupported keywords like low_memory, thousands.", "Nice blog post. Those are some impressive benchmarks on the polars side.\n\nDo you think it matters at all that polars uses string views for storage whereas we are going to default to large strings? 
I think that gets doubly confusing when you try to mix the pyarrow backend with the polars engine, as I'm unsure what data type a user would expect in that case (probably string_view?)", "Polars read_csv is very fast. Won't it be easier for the user to do a `pl.read_csv` or `pl.scan_csv` followed by `to_pandas`? There is also the maintenance aspect of it. Or maybe mention in the user guide that there are faster CSV readers that the user may access instead. Just an opinion ", "> Won't it be easier for the user to do a `pl.read_csv` or `pl.scan_csv` followed by `to_pandas`?\n\nEasier for us, but not easier for users in my opinion. I don't disagree with you, but in many parts of the pandas API we provide syntactic sugar to make user code look very simple and compact. For example allowing urls or compressed files when reading files. My preferred option would be the PR referenced in the description, but since there is no consensus for that, I think providing polars in the same way as we provide pyarrow is fair. Otherwise we are encouraging users to use a much slower reader.", "Most people won't be reading large amounts of data that would require using Polars' engine, and the current approach will be sufficient.\nIf you want to use Polars, just read it with pl.read_csv and then use to_pandas.\nI also tried reading 2,000 CSV files (total of 100 million records) and creating a pandas DataFrame, and pyarrow was faster than Polars.\n" ]
3,212,422,515
61,812
BUG: Dataframe.aggregate drops pyarrow backend for lambda aggregation functions
open
2025-07-08T12:47:25
2025-07-29T16:01:30
null
https://github.com/pandas-dev/pandas/issues/61812
true
null
null
flori-ko
4
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd import numpy as np df = pd.DataFrame(data={"A": [np.nan, 1]}, dtype="double[pyarrow]") df.aggregate(lambda x: x.mean()).dtypes ``` ### Issue Description The input is a dataframe with a pyarrow backend but the output uses numpy float64. This behaviour is very inconsistent especially considering that, if called like this: `df.aggregate("mean").dtypes` the output is a "double[pyarrow]". ### Expected Behavior I expect the returned dtypes to be double[pyarrow], since pyarrow type was given as input (based on https://github.com/pandas-dev/pandas/issues/53831). 
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6 python : 3.11.9 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.22631 machine : AMD64 processor : Intel64 Family 6 Model 143 Stepping 8, GenuineIntel byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : English_United States.1252 pandas : 2.3.1 numpy : 2.2.5 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : 8.2.3 IPython : 9.2.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.6 lxml.etree : 5.4.0 matplotlib : None numba : None numexpr : None odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : 8.3.5 python-calamine : None pyxlsb : None s3fs : None scipy : 1.15.3 sqlalchemy : 2.0.40 tables : None tabulate : 0.9.0 xarray : 2025.4.0 xlrd : 2.0.1 xlsxwriter : 3.2.5 zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Arrow", "pyarrow dtype retention" ]
0
0
0
0
0
0
0
0
[ "I've taken some time to verify it locally on my setup (pandas 2.3.1, pyarrow 20.0.0, Python 3.10.12). \nI was able to test the return in some ways:\n\n- df.aggregate(lambda x: x.mean()) → `float64` (loses pyarrow dtype)\n- df.aggregate(\"mean\") → `double[pyarrow]` (preserves pyarrow dtype)\n- df.aggregate(np.mean) → `double[pyarrow]` (preserves pyarrow dtype)\n- df.mean() → `double[pyarrow]` (preserves pyarrow dtype)\n\nSo:\n\n - Any aggregation using a lambda function does lose the extension dtype.\n - Native string methods and numpy functions both preserve it.\n\nI get the same fallback for df.apply(lambda x: x.mean())—so this isn't just limited to .aggregate() but is about how callables are dispatched internally.", "Confirmed on main. PRs are welcome!\n\n> df.aggregate(np.mean) → double[pyarrow] (preserves pyarrow dtype)\n\nIn my testing, this also returns a `float64` dtype, so investigations for this are welcome as well.\n\nThanks for raising this!", "@AdrianoCLeao Thank you for taking a look at this. Since nothing happened for a while, are you still working on this?", "Yes, I’ve been working on it, but I haven’t had much time to fix some CI issues" ]
3,211,617,510
61,811
DOC: Lacking information on error type raised by pd.to_numeric
closed
2025-07-08T08:46:42
2025-07-16T16:29:48
2025-07-16T16:29:48
https://github.com/pandas-dev/pandas/issues/61811
true
null
null
ericludvigs
1
### Pandas version checks - [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/docs/reference/api/pandas.to_numeric.html ### Documentation problem There is no "Raises" section that describes *which* errors are raised when setting the argument "errors" to "raise". It is not immediately clear if a conversion error will cause a TypeError or ValueError, or both depending on how conversion failed. This would be useful when doing as recommended to "Catch exceptions explicitly instead.", and writing a `try: except:` with specific errors caught to avoid an overly generic error-catch which is bad practice etc. etc. ### Suggested fix for documentation Add a "Raises:" section or include specific error names instead of the generic "Raises an exception". See for example: https://numpy.org/doc/2.1/reference/generated/numpy.array.html > For False it raises a ValueError if a copy cannot be avoided. Default: True.
[ "Docs", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "Hi, I'd like to pick this up as my first open source contribution. " ]
3,211,492,270
61,810
DOC: update release process maintainer guide
closed
2025-07-08T08:06:45
2025-07-22T09:35:05
2025-07-22T09:34:20
https://github.com/pandas-dev/pandas/pull/61810
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61810
https://github.com/pandas-dev/pandas/pull/61810
jorisvandenbossche
2
A small things I noticed that could use some update or clarification while releasing 2.3.1. cc @mroeschke can you check that this confirms with your experience when releasing 2.3.0?
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "@jorisvandenbossche one thing that I used to do during pre-release is ensure that the release notes between the backport and the main branch were in sync.\r\n\r\nDiscrepancies can arise if backports are missed or merge conflicts on the backport were perhaps not resolved correctly, so this used to be a beneficial check.\r\n\r\nIIRC we had some issues with the 2.3.0 release that would have been picked up if this was done. As well as the release process instructions, I used to use a release checklist on the release issue.\r\n\r\nMaybe, while updating the process, you could add something to \"Update and clean release notes for the version to be released, including:\" section?", "Thanks @jorisvandenbossche " ]
3,210,844,817
61,809
BUG: Pandas Series with Xarray slow print time.
open
2025-07-08T02:57:42
2025-07-23T23:37:47
null
https://github.com/pandas-dev/pandas/issues/61809
true
null
null
chaoyupeng
1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd import xarray as xr import numpy as np import time # Minimal reproducible example: Pandas Series print performance with large xarray DataArrays print("Pandas Series Print Performance Issue") print("=" * 40) # Create minimal test data np.random.seed(42) time_coords = pd.date_range('2023-01-01', periods=2, freq='D') x_coords = np.linspace(0, 10, 256) y_coords = np.linspace(0, 10, 256) # Create one large xarray DataArray data = np.random.randn(2, 256, 256) large_dataarray = xr.DataArray( data, coords={'time': time_coords, 'y': y_coords, 'x': x_coords}, dims=['time', 'y', 'x'] ) # Create minimal pandas Series with large DataArray series = pd.Series({ 'id': 1, 'data': large_dataarray, 'name': 'test_series' }) print(f"DataArray size: {large_dataarray.nbytes / 1024 / 1024:.1f} MB") print(f"DataArray shape: {large_dataarray.shape}") # Method 1: Print Series directly print("\nMethod 1: Print Series") start_time = time.time() print(series) method1_time = time.time() - start_time # Method 2: Extract DataArray first, then print it print("\nMethod 2: Extract DataArray first, then print") start_time = time.time() extracted_da = series['data'] print(extracted_da) method2_time = time.time() - start_time # Results print(f"\nTiming Results:") print(f"Method 1 (print Series): {method1_time:.4f} seconds") print(f"Method 2 (extract + print DataArray): {method2_time:.4f} seconds") print(f"Difference: {abs(method1_time - method2_time):.4f} seconds") if method1_time > method2_time: ratio = method1_time / method2_time print(f"Method 1 is 
{ratio:.1f}x slower than Method 2") else: ratio = method2_time / method1_time print(f"Method 2 is {ratio:.1f}x slower than Method 1") # Environment info print(f"\nEnvironment:") print(f"Pandas: {pd.__version__}") print(f"XArray: {xr.__version__}") print(f"NumPy: {np.__version__}") ``` ### Issue Description Hi Pandas team, so I was working with pandas series and was trying to put an xarray into a cell. So when I was trying to print out the pandas series with the xarray, I found that it is extremely slow, directly printing out the pandas series is 1000X slower than getting the xarray and then print out the xarray. The above script is an example with an xarray inside a pandas series, and the time comparison between printing the pandas series directly and get the xarray first and them print the values. Possible issue: String formatting with xarray. <img width="595" height="106" alt="Image" src="https://github.com/user-attachments/assets/c069e779-bf6e-4781-baa3-fc7ccf0e4ec6" /> ### Expected Behavior Similar time consumption for directly printing the pandas series and get the xarray and print the content. 
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6 python : 3.11.7 python-bits : 64 OS : Linux OS-release : 6.11.0-29-generic Version : #29~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jun 26 14:16:59 UTC 2 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : en_AU.UTF-8 LOCALE : en_AU.UTF-8 pandas : 2.3.1 numpy : 2.3.1 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 23.2.1 Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : 3.10.3 numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : 1.16.0 sqlalchemy : None tables : None tabulate : None xarray : 2025.7.0 xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None None </details>
[ "Bug", "Performance", "Output-Formatting", "Needs Discussion", "Nested Data" ]
0
0
0
0
0
0
0
0
[ "From a surface-level investigation, it seems the difference is caused because when printing a Series, pandas forces `repr` to be called on each element, whereas I assume extracting the data first and printing the result requires calling `repr` only once. In this case, the difference in performance is substantial because `repr` on `DataArray` is a costly function (especially in this case where the data it contains is large).\n\nWhether this is something that should be changed needs more discussion.\n\nThanks for opening this issue!" ]
3,210,425,973
61,808
CLN: remove doctest-ignores
closed
2025-07-07T22:39:49
2025-07-08T15:01:53
2025-07-08T15:01:48
https://github.com/pandas-dev/pandas/pull/61808
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61808
https://github.com/pandas-dev/pandas/pull/61808
jbrockmendel
0
Found this old branch, no idea if past-me was right about these being removable.
[]
0
0
0
0
0
0
0
0
[]
3,210,192,989
61,807
BUG: union of MultiIndex throws exception for datetime and pd.Timestamp with identical values
open
2025-07-07T20:36:01
2025-07-28T07:56:03
null
https://github.com/pandas-dev/pandas/issues/61807
true
null
null
jwg4
4
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python from datetime import date import pandas as pd mi_a = pd.MultiIndex.from_tuples([(date(2001, 1, 1), "foo")], names=["first", "second"]) mi_b = pd.MultiIndex.from_tuples([(pd.Timestamp(date(2001, 1, 1)), "asdf")], names=["first", "second"]) mi_a.union(mi_b) ``` ### Issue Description The following exception is thrown: ``` InvalidIndexError Traceback (most recent call last) ... InvalidIndexError: Reindexing only valid with uniquely valued Index objects ``` ### Expected Behavior I would have expected the two values `date(2001, 1, 1)` and `pd.Timestamp(date(2001, 1, 1))` to be treated as different values, which is how I believe `pd.DataFrame.drop_duplicates` acts. However treating the two values as identical could also be valid, but I don't think that the exception is. 
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6 python : 3.13.5 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.19045 machine : AMD64 processor : Intel64 Family 6 Model 158 Stepping 13, GenuineIntel byteorder : little LC_ALL : None LANG : None LOCALE : English_United Kingdom.1252 pandas : 2.3.1 numpy : 2.3.1 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.0 Cython : None sphinx : None IPython : 8.37.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.6 lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Datetime", "Dtype Conversions", "Index" ]
0
0
0
0
0
0
0
0
[ "This apparent bug also affects another basic operation which could be expected to succeed, `DataFrame.combine_first` when using frames with MultiIndex as described above:\n\n```\nfrom datetime import date\nimport pandas as pd\n\ndf_a = pd.DataFrame(\n [\n (date(2001, 1, 1), \"foo\", 11),\n ],\n columns=[\"a\", \"b\", \"c\"]\n)\ndf_a = df_a.set_index([\"a\", \"b\"])\n\ndf_b = pd.DataFrame(\n [\n (pd.Timestamp(date(2001, 1, 1)), \"bar\", 33),\n ],\n columns=[\"a\", \"b\", \"c\"]\n)\ndf_b = df_b.set_index([\"a\", \"b\"])\n\ndf_a.combine_first(df_b)\n```", "Thanks for the report. pandas is converting the `Index` to a `DatetimeIndex` here:\n\nhttps://github.com/pandas-dev/pandas/blob/7c2796d134e74f613cbfd85137d6809f5abf39a4/pandas/core/indexes/base.py#L6217-L6221\n\nFurther investigations are welcome!", "> Further investigations are welcome!\n\nProbably a regression if not a design choice.\n\nThe code sample worked in pandas 1.5 but gave\n\n> FutureWarning: Comparison of Timestamp with datetime.date is deprecated in order to match the standard library behavior. In a future version these will be considered non-comparable. Use 'ts == pd.Timestamp(date)' or 'ts.date() == date' instead.", "I investigated the issue. 
The exception is raised from here\n\nhttps://github.com/pandas-dev/pandas/blob/e4a03b6e47a8ef9cd045902916289cbc976d3d33/pandas/core/indexes/base.py#L3679-L3680\n\nthis was added as part of fixing de-duplication get_indexer methods [#38372](https://github.com/pandas-dev/pandas/pull/38372) \n\nI added a condition to skip index unique check for dtype.kind == \"M\" the issue is resolved\n\n``` \nif self.dtype.kind != \"M\" and not self._index_as_unique:\n raise InvalidIndexError(self._requires_unique_msg)\n```\n@rhshadrach @simonjayhawkins kindly let me know your comments about this approach\n\n```\nimport pandas as pd\nmi_a = pd.MultiIndex.from_tuples([(date(2001, 1, 1), \"foo\")], names=[\"first\", \"second\"])\nmi_b = pd.MultiIndex.from_tuples([(pd.Timestamp(date(2001, 1, 1)), \"asdf\")], names=[\"first\", \"second\"])\nmi_a.union(mi_b)\n<ipython-input-2-90716c1e6b7f>:5: RuntimeWarning: The values in the array are unorderable. Pass `sort=False` to suppress this warning.\n mi_a.union(mi_b)\n\n\nOut[2]: \nMultiIndex([( 2001-01-01, 'foo'),\n (2001-01-01 00:00:00, 'asdf')],\n names=['first', 'second'])\n```\n\n```\nfrom datetime import date\nimport pandas as pd\ndf_a = pd.DataFrame(\n [\n (date(2001, 1, 1), \"foo\", 11),\n ],\n columns=[\"a\", \"b\", \"c\"]\n)\ndf_a = df_a.set_index([\"a\", \"b\"])\ndf_b = pd.DataFrame(\n [\n (pd.Timestamp(date(2001, 1, 1)), \"bar\", 33),\n ],\n columns=[\"a\", \"b\", \"c\"]\n)\ndf_b = df_b.set_index([\"a\", \"b\"])\ndf_a.combine_first(df_b)\n<ipython-input-3-566360167aed>:17: RuntimeWarning: The values in the array are unorderable. Pass `sort=False` to suppress this warning.\n df_a.combine_first(df_b)\n\n\nOut[3]: \n c\na b \n2001-01-01 foo 11\n2001-01-01 00:00:00 bar 33\n```" ]
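The coercion path described in the comments can be illustrated directly: once the level values are funneled into a `DatetimeIndex`, the `date` and the equal-valued `Timestamp` collapse into duplicates, which is what the non-unique check then trips over (a sketch, assuming the coercion noted in `pandas/core/indexes/base.py`):

```python
from datetime import date

import pandas as pd

# A datetime.date and a pd.Timestamp with the same value become
# duplicate entries once coerced to a DatetimeIndex; the union code
# path then raises InvalidIndexError on the non-unique index.
idx = pd.DatetimeIndex([date(2001, 1, 1), pd.Timestamp(date(2001, 1, 1))])
assert not idx.is_unique
assert idx[0] == idx[1]
```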
3,210,131,899
61,806
DEPS: Bump NumPy and tzdata
closed
2025-07-07T20:11:43
2025-07-08T15:44:48
2025-07-08T15:44:42
https://github.com/pandas-dev/pandas/pull/61806
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61806
https://github.com/pandas-dev/pandas/pull/61806
mroeschke
0
These dependencies should have been released ~2 years ago by the time we release pandas 3.0 closes https://github.com/pandas-dev/pandas/issues/61588
[ "Dependencies" ]
0
0
0
0
0
0
0
0
[]
3,209,849,864
61,805
DOC: Improve clarity of GroupBy introduction sentence
closed
2025-07-07T18:03:11
2025-07-15T22:25:13
2025-07-15T22:25:12
https://github.com/pandas-dev/pandas/pull/61805
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61805
https://github.com/pandas-dev/pandas/pull/61805
Vedant-Kadam-Noobie
1
This small change clarifies the introductory sentence of the GroupBy user guide, as recommended for documentation improvements. It makes the definition of the "group by" process more direct and easier for new users to understand. - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
[]
0
0
0
0
0
0
0
0
[ "@Vedant-Kadam-Noobie thanks for the PR but simon and I prefer the existing phrasing." ]
3,209,770,110
61,804
DOC: Improve documentation for DataFrame.__setitem__ and .loc assignment from Series
closed
2025-07-07T17:27:16
2025-08-01T15:31:10
2025-08-01T15:31:04
https://github.com/pandas-dev/pandas/pull/61804
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61804
https://github.com/pandas-dev/pandas/pull/61804
niruta25
4
- [x] closes #61662 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. The core problem is that when assigning a Series, pandas aligns on index and values in the Series that don't match an index label will result in NaN [DOC: Improve documentation for DataFrame.__setitem__ and .loc assignment from Series · Issue #61662 · pandas-dev/pandas](https://github.com/pandas-dev/pandas/issues/61662), but this behavior is poorly documented. My proposed solution addresses the issue comprehensively by: - Adding a complete docstring for DataFrame.__setitem__ with clear explanations and examples - Enhancing the .loc documentation with specific notes about Series alignment - Expanding the user guide with a dedicated section on Series assignment and index alignment - Including comprehensive test cases to ensure the behavior is well-tested The fix emphasizes that pandas performs index-based alignment rather than positional assignment, which is the source of confusion for many users. The documentation will now clearly explain that when you assign a Series to a DataFrame column, pandas matches values by index labels, not by position, and missing labels result in NaN values. This solution follows pandas' documentation conventions and provides both reference documentation and practical examples that will help users understand and correctly use this important feature.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "@WillAyd Any thought on this resolution? ", "Thanks @niruta25 for the PR\r\n\r\n> * Expanding the user guide with a dedicated section on Series assignment and index alignment\r\n\r\nI see that \"align\" is found 16 times when searching \"Intro to data structures\" section of the docs. This chapter is only preceded by \"10 minutes to pandas\" so i'm not sure that the linked issue which states \"The current documentation is incomplete and vague about how Series alignment works in assignments.\" is correct that this fundamental paradigm of pandas is not covered in the documentation.\r\n\r\nI'm not a member of the documentation team so others may be more positive to these changes, but if I was to review this PR, I would prefer to see more discussion on the issue itself before proceeding to the PR stage.", "I do not have access to merge this PR. Can you please help. ", "Thanks @niruta25 " ]
3,209,648,127
61,803
Backport PR #61794 on branch 2.3.x (DOC: prepare 2.3.1 whatsnew notes for release)
closed
2025-07-07T16:36:39
2025-07-07T17:09:22
2025-07-07T17:09:22
https://github.com/pandas-dev/pandas/pull/61803
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61803
https://github.com/pandas-dev/pandas/pull/61803
meeseeksmachine
0
Backport PR #61794: DOC: prepare 2.3.1 whatsnew notes for release
[ "Docs" ]
0
0
0
0
0
0
0
0
[]
3,209,627,593
61,802
[pre-commit.ci] pre-commit autoupdate
closed
2025-07-07T16:30:08
2025-07-07T18:09:18
2025-07-07T18:09:15
https://github.com/pandas-dev/pandas/pull/61802
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61802
https://github.com/pandas-dev/pandas/pull/61802
pre-commit-ci[bot]
0
<!--pre-commit.ci start--> updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.12 → v0.12.2](https://github.com/astral-sh/ruff-pre-commit/compare/v0.11.12...v0.12.2) - [github.com/MarcoGorelli/cython-lint: v0.16.6 → v0.16.7](https://github.com/MarcoGorelli/cython-lint/compare/v0.16.6...v0.16.7) - [github.com/pre-commit/mirrors-clang-format: v20.1.5 → v20.1.7](https://github.com/pre-commit/mirrors-clang-format/compare/v20.1.5...v20.1.7) - [github.com/trim21/pre-commit-mirror-meson: v1.8.1 → v1.8.2](https://github.com/trim21/pre-commit-mirror-meson/compare/v1.8.1...v1.8.2) <!--pre-commit.ci end-->
[ "Code Style" ]
0
0
0
0
0
0
0
0
[]
3,209,113,779
61,801
[backport 2.3.x] TST: update expected dtype for sum of decimals with pyarrow 21+ (#61799)
closed
2025-07-07T13:46:21
2025-07-07T14:56:26
2025-07-07T14:56:23
https://github.com/pandas-dev/pandas/pull/61801
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61801
https://github.com/pandas-dev/pandas/pull/61801
jorisvandenbossche
0
Backport of #61799
[]
0
0
0
0
0
0
0
0
[]
3,209,017,025
61,800
[backport 2.3.x] BUG[string]: incorrect index downcast in DataFrame.join (#61771)
closed
2025-07-07T13:19:28
2025-07-07T16:35:52
2025-07-07T15:40:37
https://github.com/pandas-dev/pandas/pull/61800
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61800
https://github.com/pandas-dev/pandas/pull/61800
jorisvandenbossche
0
Backport of #61771
[]
0
0
0
0
0
0
0
0
[]
3,208,910,996
61,799
TST: update expected dtype for sum of decimals with pyarrow 21+
closed
2025-07-07T12:50:39
2025-07-07T13:46:41
2025-07-07T13:41:24
https://github.com/pandas-dev/pandas/pull/61799
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61799
https://github.com/pandas-dev/pandas/pull/61799
jorisvandenbossche
3
This should fix the failure we started having for the pyarrow nightly build (behaviour change in https://github.com/apache/arrow/pull/44184 to increase the precision of the resulting decimal for sum)
[ "Testing", "Arrow" ]
0
0
0
0
0
0
0
0
[ "Apologies for going quickly here, but going to merge this to have green CI on 2.3.x for releasing", "Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 ebca3c56c1f9454ef5d2de5bb19ce138e1619504\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61799: TST: update expected dtype for sum of decimals with pyarrow 21+'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61799-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61799 on branch 2.3.x (TST: update expected dtype for sum of decimals with pyarrow 21+)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ", "Manual backport -> https://github.com/pandas-dev/pandas/pull/61801" ]
3,208,792,361
61,798
Backport PR #61795 on branch 2.3.x (DOC: add section about upcoming pandas 3.0 changes (string dtype, CoW) to 2.3 whatsnew notes)
closed
2025-07-07T12:16:00
2025-07-07T13:25:28
2025-07-07T13:25:28
https://github.com/pandas-dev/pandas/pull/61798
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61798
https://github.com/pandas-dev/pandas/pull/61798
meeseeksmachine
0
Backport PR #61795: DOC: add section about upcoming pandas 3.0 changes (string dtype, CoW) to 2.3 whatsnew notes
[ "Docs" ]
0
0
0
0
0
0
0
0
[]
3,208,572,203
61,797
Backport PR #61705 on branch 2.3.x (DOC: add pandas 3.0 migration guide for the string dtype)
closed
2025-07-07T11:09:03
2025-07-07T11:37:31
2025-07-07T11:37:30
https://github.com/pandas-dev/pandas/pull/61797
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61797
https://github.com/pandas-dev/pandas/pull/61797
meeseeksmachine
0
Backport PR #61705: DOC: add pandas 3.0 migration guide for the string dtype
[ "Docs", "Strings" ]
0
0
0
0
0
0
0
0
[]
3,208,491,765
61,796
Bump pypa/cibuildwheel from 2.23.3 to 3.0.1
closed
2025-07-07T10:46:14
2025-07-28T12:59:31
2025-07-28T12:59:29
https://github.com/pandas-dev/pandas/pull/61796
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61796
https://github.com/pandas-dev/pandas/pull/61796
dependabot[bot]
1
Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 2.23.3 to 3.0.1. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/releases">pypa/cibuildwheel's releases</a>.</em></p> <blockquote> <h2>v3.0.1</h2> <ul> <li>🛠 Updates CPython 3.14 prerelease to 3.14.0b3 (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2471">#2471</a>)</li> <li>✨ Adds a CPython 3.14 prerelease iOS build (only when prerelease builds are <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enabled</a>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2475">#2475</a>)</li> </ul> <h2>v3.0.0</h2> <p>See <a href="https://github.com/henryiii"><code>@​henryiii</code></a>'s <a href="https://iscinumpy.dev/post/cibuildwheel-3-0-0/">release post</a> for more info on new features!</p> <ul> <li> <p>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#ios">build wheels for iOS</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>ios</code> on a Mac with the iOS toolchain to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2286">#2286</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2363">#2363</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2432">#2432</a>)</p> </li> <li> <p>🌟 Adds support for the GraalPy interpreter! Enable for your project using the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a>. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1538">#1538</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2411">#2411</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2414">#2414</a>)</p> </li> <li> <p>✨ Adds CPython 3.14 support, under the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> <code>cpython-prerelease</code>. This version of cibuildwheel uses 3.14.0b2. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p> <p><em>While CPython is in beta, the ABI can change, so your wheels might not be compatible with the final release. For this reason, we don't recommend distributing wheels until RC1, at which point 3.14 will be available in cibuildwheel without the flag.</em> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-sources">test-sources option</a>, and changes the working directory for tests. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2062">#2062</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2284">#2284</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2437">#2437</a>)</p> <ul> <li>If this option is set, cibuildwheel will copy the files and folders specified in <code>test-sources</code> into the temporary directory we run from. This is required for iOS builds, but also useful for other platforms, as it allows you to avoid placeholders.</li> <li>If this option is not set, behaviour matches v2.x - cibuildwheel will run the tests from a temporary directory, and you can use the <code>{project}</code> placeholder in the <code>test-command</code> to refer to the project directory. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2420">#2420</a>)</li> </ul> </li> <li> <p>✨ Adds <a href="https://cibuildwheel.pypa.io/en/stable/options/#dependency-versions"><code>dependency-versions</code></a> inline syntax (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2122">#2122</a>)</p> </li> <li> <p>✨ Improves support for Pyodide builds and adds the experimental <a href="https://cibuildwheel.pypa.io/en/stable/options/#pyodide-version"><code>pyodide-version</code></a> option, which allows you to specify the version of Pyodide to use for builds. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2002">#2002</a>)</p> </li> <li> <p>✨ Add <code>pyodide-prerelease</code> <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enable</a> option, with an early build of 0.28 (Python 3.13). (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2431">#2431</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-environment"><code>test-environment</code></a> option, which allows you to set environment variables for the test command. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2388">#2388</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#xbuild-tools"><code>xbuild-tools</code></a> option, which allows you to specify tools safe for cross-compilation. Currently only used on iOS; will be useful for Android in the future. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2317">#2317</a>)</p> </li> <li> <p>🛠 The default <a href="https://cibuildwheel.pypa.io/en/stable/options/#linux-image">manylinux image</a> has changed from <code>manylinux2014</code> to <code>manylinux_2_28</code>. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2330">#2330</a>)</p> </li> <li> <p>🛠 EOL images <code>manylinux1</code>, <code>manylinux2010</code>, <code>manylinux_2_24</code> and <code>musllinux_1_1</code> can no longer be specified by their shortname. The full OCI name can still be used for these images, if you wish. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2316">#2316</a>)</p> </li> <li> <p>🛠 Invokes <code>build</code> rather than <code>pip wheel</code> to build wheels by default. You can control this via the <a href="https://cibuildwheel.pypa.io/en/stable/options/#build-frontend"><code>build-frontend</code></a> option. You might notice that you can see your build log output now! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2321">#2321</a>)</p> </li> <li> <p>🛠 Build verbosity settings have been reworked to have consistent meanings between build backends when non-zero. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2339">#2339</a>)</p> </li> <li> <p>🛠 Removed the <code>CIBW_PRERELEASE_PYTHONS</code> and <code>CIBW_FREE_THREADED_SUPPORT</code> options - these have been folded into the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code></a> option instead. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p> </li> <li> <p>🛠 Build environments no longer have setuptools and wheel preinstalled. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2329">#2329</a>)</p> </li> <li> <p>🛠 Use the standard Schema line for the integrated JSONSchema. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2433">#2433</a>)</p> </li> <li> <p>⚠️ Dropped support for building Python 3.6 and 3.7 wheels. If you need to build wheels for these versions, use cibuildwheel v2.23.3 or earlier. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2282">#2282</a>)</p> </li> <li> <p>⚠️ The minimum Python version required to run cibuildwheel is now Python 3.11. You can still build wheels for Python 3.8 and newer. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1912">#1912</a>)</p> </li> <li> <p>⚠️ 32-bit Linux wheels no longer built by default - the <a href="https://cibuildwheel.pypa.io/en/stable/options/#archs">arch</a> was removed from <code>&quot;auto&quot;</code>. It now requires explicit <code>&quot;auto32&quot;</code>. Note that modern manylinux images (like the new default, <code>manylinux_2_28</code>) do not have 32-bit versions. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2458">#2458</a>)</p> </li> <li> <p>⚠️ PyPy wheels no longer built by default, due to a change to our options system. To continue building PyPy wheels, you'll now need to set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> to <code>pypy</code> or <code>pypy-eol</code>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p> </li> <li> <p>⚠️ Dropped official support for Appveyor. If it was working for you before, it will probably continue to do so, but we can't be sure, because our CI doesn't run there anymore. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2386">#2386</a>)</p> </li> <li> <p>📚 A reorganisation of the docs, and numerous updates. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2280">#2280</a>)</p> </li> <li> <p>📚 Use Python 3.14 color output in docs CLI output. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2407">#2407</a>)</p> </li> <li> <p>📚 Docs now primarily use the pyproject.toml name of options, rather than the environment variable name. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2389">#2389</a>)</p> </li> <li> <p>📚 README table now matches docs and auto-updates. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2427">#2427</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2428">#2428</a>)</p> </li> </ul> <h2>v3.0.0rc3</h2> <p>Not yet released, but available for testing.</p> <p>Note - when using a beta version, be sure to check the <a href="https://cibuildwheel.pypa.io/en/latest/">latest docs</a>, rather than the stable version, which is still on v2.X.</p> <!-- raw HTML omitted --> <p>If you've used previous versions of the beta:</p> <ul> <li>⚠️ Previous betas of v3.0 changed the working directory for tests. This has been rolled back to the v2.x behaviour, so you might need to change configs if you adapted to the beta 1 or 2 behaviour. See [issue <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2406">#2406</a>](<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2406">pypa/cibuildwheel#2406</a>) for more information.</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/blob/main/docs/changelog.md">pypa/cibuildwheel's changelog</a>.</em></p> <blockquote> <h3>v3.0.1</h3> <p><em>5 July 2025</em></p> <ul> <li>🛠 Updates CPython 3.14 prerelease to 3.14.0b3 (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2471">#2471</a>)</li> <li>✨ Adds a CPython 3.14 prerelease iOS build (only when prerelease builds are <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enabled</a>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2475">#2475</a>)</li> </ul> <h3>v3.0.0</h3> <p><em>11 June 2025</em></p> <p>See <a href="https://github.com/henryiii"><code>@​henryiii</code></a>'s <a href="https://iscinumpy.dev/post/cibuildwheel-3-0-0/">release post</a> for more info on new features!</p> <ul> <li> <p>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#ios">build wheels for iOS</a>! 
Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>ios</code> on a Mac with the iOS toolchain to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2286">#2286</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2363">#2363</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2432">#2432</a>)</p> </li> <li> <p>🌟 Adds support for the GraalPy interpreter! Enable for your project using the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1538">#1538</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2411">#2411</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2414">#2414</a>)</p> </li> <li> <p>✨ Adds CPython 3.14 support, under the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> <code>cpython-prerelease</code>. This version of cibuildwheel uses 3.14.0b2. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p> <p><em>While CPython is in beta, the ABI can change, so your wheels might not be compatible with the final release. For this reason, we don't recommend distributing wheels until RC1, at which point 3.14 will be available in cibuildwheel without the flag.</em> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-sources">test-sources option</a>, and changes the working directory for tests. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2062">#2062</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2284">#2284</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2437">#2437</a>)</p> <ul> <li>If this option is set, cibuildwheel will copy the files and folders specified in <code>test-sources</code> into the temporary directory we run from. This is required for iOS builds, but also useful for other platforms, as it allows you to avoid placeholders.</li> <li>If this option is not set, behaviour matches v2.x - cibuildwheel will run the tests from a temporary directory, and you can use the <code>{project}</code> placeholder in the <code>test-command</code> to refer to the project directory. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2420">#2420</a>)</li> </ul> </li> <li> <p>✨ Adds <a href="https://cibuildwheel.pypa.io/en/stable/options/#dependency-versions"><code>dependency-versions</code></a> inline syntax (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2122">#2122</a>)</p> </li> <li> <p>✨ Improves support for Pyodide builds and adds the experimental <a href="https://cibuildwheel.pypa.io/en/stable/options/#pyodide-version"><code>pyodide-version</code></a> option, which allows you to specify the version of Pyodide to use for builds. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2002">#2002</a>)</p> </li> <li> <p>✨ Add <code>pyodide-prerelease</code> <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enable</a> option, with an early build of 0.28 (Python 3.13). (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2431">#2431</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-environment"><code>test-environment</code></a> option, which allows you to set environment variables for the test command. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2388">#2388</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#xbuild-tools"><code>xbuild-tools</code></a> option, which allows you to specify tools safe for cross-compilation. Currently only used on iOS; will be useful for Android in the future. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2317">#2317</a>)</p> </li> <li> <p>🛠 The default <a href="https://cibuildwheel.pypa.io/en/stable/options/#linux-image">manylinux image</a> has changed from <code>manylinux2014</code> to <code>manylinux_2_28</code>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2330">#2330</a>)</p> </li> <li> <p>🛠 EOL images <code>manylinux1</code>, <code>manylinux2010</code>, <code>manylinux_2_24</code> and <code>musllinux_1_1</code> can no longer be specified by their shortname. The full OCI name can still be used for these images, if you wish. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2316">#2316</a>)</p> </li> <li> <p>🛠 Invokes <code>build</code> rather than <code>pip wheel</code> to build wheels by default. You can control this via the <a href="https://cibuildwheel.pypa.io/en/stable/options/#build-frontend"><code>build-frontend</code></a> option. You might notice that you can see your build log output now! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2321">#2321</a>)</p> </li> <li> <p>🛠 Build verbosity settings have been reworked to have consistent meanings between build backends when non-zero. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2339">#2339</a>)</p> </li> <li> <p>🛠 Removed the <code>CIBW_PRERELEASE_PYTHONS</code> and <code>CIBW_FREE_THREADED_SUPPORT</code> options - these have been folded into the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code></a> option instead. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p> </li> <li> <p>🛠 Build environments no longer have setuptools and wheel preinstalled. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2329">#2329</a>)</p> </li> <li> <p>🛠 Use the standard Schema line for the integrated JSONSchema. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2433">#2433</a>)</p> </li> <li> <p>⚠️ Dropped support for building Python 3.6 and 3.7 wheels. If you need to build wheels for these versions, use cibuildwheel v2.23.3 or earlier. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2282">#2282</a>)</p> </li> <li> <p>⚠️ The minimum Python version required to run cibuildwheel is now Python 3.11. You can still build wheels for Python 3.8 and newer. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1912">#1912</a>)</p> </li> <li> <p>⚠️ 32-bit Linux wheels no longer built by default - the <a href="https://cibuildwheel.pypa.io/en/stable/options/#archs">arch</a> was removed from <code>&quot;auto&quot;</code>. It now requires explicit <code>&quot;auto32&quot;</code>. Note that modern manylinux images (like the new default, <code>manylinux_2_28</code>) do not have 32-bit versions. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2458">#2458</a>)</p> </li> <li> <p>⚠️ PyPy wheels no longer built by default, due to a change to our options system. To continue building PyPy wheels, you'll now need to set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> to <code>pypy</code> or <code>pypy-eol</code>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p> </li> <li> <p>⚠️ Dropped official support for Appveyor. If it was working for you before, it will probably continue to do so, but we can't be sure, because our CI doesn't run there anymore. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2386">#2386</a>)</p> </li> <li> <p>📚 A reorganisation of the docs, and numerous updates. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2280">#2280</a>)</p> </li> <li> <p>📚 Use Python 3.14 color output in docs CLI output. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2407">#2407</a>)</p> </li> <li> <p>📚 Docs now primarily use the pyproject.toml name of options, rather than the environment variable name. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2389">#2389</a>)</p> </li> <li> <p>📚 README table now matches docs and auto-updates. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2427">#2427</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2428">#2428</a>)</p> </li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/pypa/cibuildwheel/commit/95d2f3a92fbf80abe066b09418bbf128a8923df2"><code>95d2f3a</code></a> Bump version: v3.0.1</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/40de3fe51083fa91bc8804c5a8de5c496f61ed52"><code>40de3fe</code></a> [pre-commit.ci] pre-commit autoupdate (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2483">#2483</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/920081014e8b136b4565824771dee1d489e046ef"><code>9200810</code></a> feat: added Python 3.14 preview for iOS (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2475">#2475</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/94fe0a212fa9c1bfabd54ee45473c28f30227c03"><code>94fe0a2</code></a> [Bot] Update dependencies (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2482">#2482</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/405ddd5315d4df8a9ffc569790a05e19f4344d7d"><code>405ddd5</code></a> fix: pyodide missing some logging (<a 
href="https://redirect.github.com/pypa/cibuildwheel/issues/2477">#2477</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/34b4f1e86e47792c683de9ef813ed4d614159846"><code>34b4f1e</code></a> [pre-commit.ci] pre-commit autoupdate (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2474">#2474</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/e69b5532ab01c9d7c73e8e376a4e1219307cd4bd"><code>e69b553</code></a> [Bot] Update dependencies (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2473">#2473</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/3e86452449f29075a6e1fa3a165a532effacfdab"><code>3e86452</code></a> [Bot] Update dependencies (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2471">#2471</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/e73749579b3347d39c4793da6f01e22ef6e4363e"><code>e737495</code></a> chore(deps): bump actions/attest-build-provenance from 2.3.0 to 2.4.0 in the ...</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/588dee0e0c7780ab3264dfd3fab3a197f50306d3"><code>588dee0</code></a> docs: include Windows ARM in examples (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2468">#2468</a>)</li> <li>Additional commits viewable in <a href="https://github.com/pypa/cibuildwheel/compare/v2.23.3...v3.0.1">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pypa/cibuildwheel&package-manager=github_actions&previous-version=2.23.3&new-version=3.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details>
[ "Build", "CI", "Dependencies" ]
0
0
0
0
0
0
0
0
[ "Superseded by #61981." ]
3,208,273,381
61,795
DOC: add section about upcoming pandas 3.0 changes (string dtype, CoW) to 2.3 whatsnew notes
closed
2025-07-07T09:43:16
2025-07-07T12:15:57
2025-07-07T12:15:54
https://github.com/pandas-dev/pandas/pull/61795
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61795
https://github.com/pandas-dev/pandas/pull/61795
jorisvandenbossche
7
This is largely copied from the equivalent notes in the 2.2 release notes at https://pandas.pydata.org/pandas-docs/stable/whatsnew/v2.2.0.html#upcoming-changes-in-pandas-3-0, with some updates (and some new content copied from WIP 3.0 release notes in https://github.com/pandas-dev/pandas/pull/61724)
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "/preview", "LGTM @jorisvandenbossche is the \"TODO add link to migration guide\" going to be done here or a follow up?", "There is a mix of `code-block:: python` and `code-block:: ipython`. Intentional?", "> LGTM @jorisvandenbossche is the \"TODO add link to migration guide\" going to be done here or a follow up?\r\n\r\nPlanning to merge that migration guide now first and then update here with a link", "> There is a mix of `code-block:: python` and `code-block:: ipython`. Intentional?\r\n\r\nFixed (although it was for code blocks where no code prompt was included, and at that point there is not really a difference, I think)", "/preview", "Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61795/" ]
3,208,152,643
61,794
DOC: prepare 2.3.1 whatsnew notes for release
closed
2025-07-07T09:03:14
2025-07-07T16:36:06
2025-07-07T16:36:05
https://github.com/pandas-dev/pandas/pull/61794
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61794
https://github.com/pandas-dev/pandas/pull/61794
jorisvandenbossche
9
Prepping for doing a 2.3.1 release today, xref https://github.com/pandas-dev/pandas/issues/61590
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "@jorisvandenbossche for the \"Comparisons between different string dtypes\" is there an issue ref?\r\n\r\nI don't get why:\r\n\r\n> When ``pd.StringDtype(\"pyarrow\", na_value=pd.NA)`` is compared against any other string dtype, the result will always be ``boolean[pyarrow]``.\r\n\r\nsince when did we start mixing the pandas nullable dtypes with the ArrowDtypes?\r\n\r\nIf this is now policy, when do the ArrowDtypes return the ArrowDtype version of the string array and not the new default string array (pd.NA variant)?", "in \"Index set operations ignore empty RangeIndex and object dtype Index\"\r\n\r\nthe code block uses \r\n\r\n```python\r\npd.options.mode.infer_string = True\r\n```\r\n\r\nthis should be\r\n```python\r\npd.options.future.infer_string = True\r\n```", "> for the \"Comparisons between different string dtypes\" is there an issue ref?\r\n\r\nhttps://github.com/pandas-dev/pandas/pull/61138 is the PR, https://github.com/pandas-dev/pandas/issues/60639 the issue. Will add a link\r\n\r\n> since when did we start mixing the pandas nullable dtypes with the ArrowDtypes?\r\n\r\nUnfortunately for some time .. (and it is also a change that I don't really agree with). I was also again confused about it when finalizing that PR (see https://github.com/pandas-dev/pandas/pull/61138#discussion_r2089434158). But, this has been like this now for some releases, so not something to change here in pandas 2.3 (if we want to change it, it's something for 3.0 I think). \r\nI know there was some discussion about this in the past, looking it up.\r\n", "Well I need to do some more research to be sure, but I'm not happy on two fronts: the change itself and the fact that you were \"required\" to do PDEP-14 and maintain backwards compat with the \"experimental\" StringDtype because it had been available for so long. So it appears others seem to have changed the API without any deprecation or warning. 
Hopefully this will be clarified in the roadmap discussion.", "> > for the \"Comparisons between different string dtypes\" is there an issue ref?\r\n> \r\n> #61138 is the PR, #60639 the issue. Will add a link\r\n\r\ngreat.\r\n\r\nLet's just do that for now. No need to block on the rest of my comment.", "> and the fact that you were \"required\" to do PDEP-14 \r\n\r\nJust to clarify here: this behaviour stems from before PDEP-14, and it is _only_ for the NA-variant of the dtype, not for the future-default NaN-variant (so that's another reason that resolving this specific item is not a priority for 2.3)", "Putting it here now just because I looked it up (but further not related to the content of this PR): the change for returning `bool[pyarrow]` instead of BooleanDtype was done in 2.0 in https://github.com/pandas-dev/pandas/pull/51643, triggered by doing something similar for `value_counts` returning `int64[pyarrow]` instead of Int64 (https://github.com/pandas-dev/pandas/pull/51542). This came up again at https://github.com/pandas-dev/pandas/pull/59330#discussion_r1693973328 and then Will created https://github.com/pandas-dev/pandas/issues/59346 to discuss (but we haven't actually further discussed it)", "/preview", "Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61794/" ]
3,207,901,022
61,793
Backport PR #61770 on branch 2.3.x (BUG: Fix unpickling of string dtypes of legacy pandas versions)
closed
2025-07-07T07:41:47
2025-07-07T13:14:39
2025-07-07T13:14:39
https://github.com/pandas-dev/pandas/pull/61793
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61793
https://github.com/pandas-dev/pandas/pull/61793
meeseeksmachine
1
Backport PR #61770: BUG: Fix unpickling of string dtypes of legacy pandas versions
[ "Bug", "Strings", "IO Pickle" ]
0
0
0
0
0
0
0
0
[ "I am adding an ignore for a new numpy deprecation warning. Although it also happens to occur in a pickle test, it is not actually related to the changes in this PR. It just started happening now because the linked numpy PR was merged a few days ago (the same failures can be seen on the 2.3.x branch). \r\nAnd it is also only for a test that is already removed in the main branch (testing py2 compat in pickle), so just ignoring here seems fine" ]
3,207,827,049
61,792
TST: assert reading of legacy pickles against current data
open
2025-07-07T07:15:47
2025-08-10T00:10:01
null
https://github.com/pandas-dev/pandas/pull/61792
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61792
https://github.com/pandas-dev/pandas/pull/61792
jorisvandenbossche
2
While reviewing https://github.com/pandas-dev/pandas/pull/61770, I noticed that we didn't actually compare the read pickle data to some ground truth expected value, but just to itself (we were essentially doing `assert_equal(result, result)` ..), due to some accidental change in a clean-up many years ago in https://github.com/pandas-dev/pandas/commit/f2246cfa215d01b68aebd2da4afb836d912d248d. Fixing that here by again creating the expected unpickled data with `create_pickle_data()` during the test run, to compare with the data from the older pickled files.
[ "Testing", "IO Pickle", "Stale" ]
0
0
0
0
0
0
0
0
[ "can you merge main and see if the pyarrow decimal issue resolves itself?", "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this." ]
3,207,503,620
61,791
DOC: Improve text color in dark mode for tutorial navigation buttons
closed
2025-07-07T04:28:05
2025-07-07T20:56:41
2025-07-07T13:15:53
https://github.com/pandas-dev/pandas/issues/61791
true
null
null
yuting1008
3
### Pandas version checks - [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/docs/getting_started/index.html#intro-to-pandas ### Documentation problem In dark mode, the text within the tutorial navigation boxes under "Intro to pandas" has low contrast against the background, making it difficult to read. <img width="741" height="578" alt="Image" src="https://github.com/user-attachments/assets/2d941542-05b3-40f9-b235-769bfda89c31" /> ### Suggested fix for documentation Lighten the font color to `#CED6DD` which matches the color of other texts in dark mode.
[ "Docs", "Duplicate Report", "Web", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "If possible, I would like to take this issue myself. I'm new to open source, please let me know if there is any suggestion or guidance!", "@yuting1008 Thanks for raising the issue. This appears to be a duplicate of https://github.com/pandas-dev/pandas/issues/60041 and https://github.com/pandas-dev/pandas/issues/60024. The issue addressed by https://github.com/pandas-dev/pandas/pull/61379, but since that PR was merged into main and not backported to 2.3.x, it’s still not working in dark mode on 2.3.0.\n\nYou can confirm the fix works by switching the documentation version to `dev` instead of `2.3.0`:\n![Image](https://github.com/user-attachments/assets/cdce95ed-a1c1-4ecb-ab9a-313f23d4af16)", "@chilin0525 Noted! \nThank you for your reminder. I didn't notice the difference between versions. I will close this issue then. Thanks again!" ]
3,207,074,435
61,790
DOC: Add link to WebGL in pandas ecosystem
closed
2025-07-06T21:57:00
2025-07-07T16:21:20
2025-07-07T16:21:14
https://github.com/pandas-dev/pandas/pull/61790
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61790
https://github.com/pandas-dev/pandas/pull/61790
star1327p
1
Add link to WebGL in pandas ecosystem. https://www.khronos.org/webgl/ - [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "Thanks @star1327p " ]
3,206,317,537
61,789
CLN: remove and update for outdated _item_cache
closed
2025-07-06T09:01:28
2025-07-07T16:29:24
2025-07-07T16:29:16
https://github.com/pandas-dev/pandas/pull/61789
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61789
https://github.com/pandas-dev/pandas/pull/61789
chilin0525
1
- [x] closes #61746 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Testing" ]
0
0
0
0
0
0
0
0
[ "Thanks @chilin0525 " ]
3,205,751,577
61,788
BUG: read_excel() converts the string "None" in an Excel file to "NaN"
closed
2025-07-05T22:56:52
2025-07-08T22:06:33
2025-07-08T22:06:33
https://github.com/pandas-dev/pandas/issues/61788
true
null
null
jmcnamara
6
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. (I am working on compiling and testing this.) ### Reproducible Example ```python import pandas as pd excel_file = "string_list.xlsx" df_openpyxl = pd.read_excel(excel_file, engine="openpyxl") df_calamine = pd.read_excel(excel_file, engine="calamine") print("openpyxl engine") print("===============") print(df_openpyxl) print("calamine engine") print("===============") print(df_calamine) ``` ### Issue Description The attached excel file `string_list.xlsx` contains the following data: ``` Header 0 Alone 1 Bone 2 None 3 Cone 4 Done ``` It looks like this: <img width="612" height="452" alt="Image" src="https://github.com/user-attachments/assets/f1633bf0-e09a-469b-beb4-784d77a9d5cc" /> When read with `read_excel()` using either the `openpyxl` or `calamine` engine it converts the string cell "None" to a `NaN`. The output from the above program is: ``` openpyxl engine =============== Header 0 Alone 1 Bone 2 NaN 3 Cone 4 Done calamine engine =============== Header 0 Alone 1 Bone 2 NaN 3 Cone 4 Done ``` Note that "None" has changed to `NaN`. Sample file: [string_list.xlsx](https://github.com/user-attachments/files/21082151/string_list.xlsx) I checked `openpyxl`, `calamine` and `python-calamine` outside of Pandas and they each print the expected string "None". ### Expected Behavior The string "None" from an Excel file shouldn't be interpreted as Python `None` and/or converted to `NaN`. 
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.11.1 python-bits : 64 OS : Darwin OS-release : 24.5.0 Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:26 PDT 2025; root:xnu-11417.121.6~2/RELEASE_X86_64 machine : x86_64 processor : i386 byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.0 numpy : 2.1.3 pytz : 2024.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.4 lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : 3.2.5 zstandard : None tzdata : 2024.2 qtpy : None pyqt5 : None </details>
[ "Bug", "IO Excel", "Closing Candidate" ]
0
0
0
0
0
0
0
0
[ "I can look into this", "Haven't investigated, but it's likely due to the `na_values` / `keep_default_na` parameters of `read_excel`. I'd try with setting `keep_default_na=False` ", "Looks like that is the issue:\n\n\n```\nkeep_default_na : bool, default True\n Whether or not to include the default NaN values when parsing the data.\n Depending on whether ``na_values`` is passed in, the behavior is as follows:\n\n * If ``keep_default_na`` is True, and ``na_values`` are specified,\n ``na_values`` is appended to the default NaN values used for parsing.\n * If ``keep_default_na`` is True, and ``na_values`` are not specified, only\n the default NaN values are used for parsing.\n * If ``keep_default_na`` is False, and ``na_values`` are specified, only\n the NaN values specified ``na_values`` are used for parsing.\n * If ``keep_default_na`` is False, and ``na_values`` are not specified, no\n strings will be parsed as NaN.\n```\nYup, None is in there\n<img width=\"553\" height=\"429\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/fb3efdc7-74a5-409c-8539-75ddae204cf6\" />\nTurning that to `False` should resolve the issue.", "Thanks, I can confirm that add `keep_default_na=False` make it work as expected.\n\n```python\ndf_openpyxl = pd.read_excel(excel_file, engine=\"openpyxl\", keep_default_na=False)\ndf_calamine = pd.read_excel(excel_file, engine=\"calamine\", keep_default_na=False)\n```\n\n```\nopenpyxl engine\n===============\n Header\n0 Alone\n1 Bone\n2 None\n3 Cone\n4 Done\ncalamine engine\n===============\n Header\n0 Alone\n1 Bone\n2 None\n3 Cone\n4 Done\n```\n\nSo, to be clear, are you saying that this is expected behaviour and not a bug?", "Yes, this is expected behavior.", "> Yes, this is expected behavior.\n\nThanks. Then I will close." ]
3,205,656,770
61,787
DOCS: Add detailed Windows build instructions
closed
2025-07-05T21:33:33
2025-07-05T21:33:47
2025-07-05T21:33:47
https://github.com/pandas-dev/pandas/pull/61787
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61787
https://github.com/pandas-dev/pandas/pull/61787
TheGuruCo
0
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[]
0
0
0
0
0
0
0
0
[]
3,205,636,284
61,786
PERF: avoid object-dtype path in ArrowEA._explode
closed
2025-07-05T21:19:41
2025-07-07T16:47:04
2025-07-07T16:39:58
https://github.com/pandas-dev/pandas/pull/61786
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61786
https://github.com/pandas-dev/pandas/pull/61786
jbrockmendel
1
Identified in #61732
[ "Arrow" ]
0
0
0
0
0
0
0
0
[ "Thanks @jbrockmendel " ]
3,205,623,246
61,785
REF: remove unreachable, stronger typing in parsers.pyx
closed
2025-07-05T21:08:44
2025-07-07T17:31:36
2025-07-07T17:09:59
https://github.com/pandas-dev/pandas/pull/61785
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61785
https://github.com/pandas-dev/pandas/pull/61785
jbrockmendel
1
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "IO CSV" ]
0
0
0
0
0
0
0
0
[ "Thanks @jbrockmendel " ]
3,205,501,535
61,784
ENH: Add Coefficient of Variation to DataFrame.describe()
closed
2025-07-05T19:51:00
2025-07-07T21:26:48
2025-07-07T21:26:48
https://github.com/pandas-dev/pandas/issues/61784
true
null
null
ffaa1234
4
### Feature Type - [x] Adding new functionality to pandas - [ ] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description The `DataFrame.describe()` method includes standard deviation (`std`), but its significance is hard to interpret without context, as it depends on the data’s scale. The coefficient of variation (CV = `std / mean * 100`) provides a relative measure of variability, making it easier to assess if `std` is "big." ### Feature Description Add CV as a row in `DataFrame.describe()` output for numeric columns, optionally enabled via `df.describe(include_cv=True)`. ## Example ```python import pandas as pd data = {'A': [10, 12, 14, 15, 13], 'B': [1000, 1100, 900, 950, 1050]} df = pd.DataFrame(data) desc = df.describe() desc.loc['CV (%)'] = (df.std() / df.mean() * 100) print(desc) ``` **Output**: ``` A B count 5.000000 5.000000 mean 12.800000 1000.000000 std 1.923538 79.056942 min 10.000000 900.000000 25% 12.000000 950.000000 50% 13.000000 1000.000000 75% 14.000000 1050.000000 max 15.000000 1100.000000 CV (%) 15.027641 7.905694 ``` ## Benefits - **Interpretability**: CV shows relative variability, aiding comparison across columns. - **Usability**: Simplifies exploratory data analysis. - **Relevance**: Widely used in fields like finance and biology. ### Alternative Solutions Users can compute CV manually, but this is less convenient. ### Additional Context _No response_
[ "Enhancement", "Closing Candidate" ]
0
1
0
0
0
0
0
0
[ "Thanks for the request. This is similar to https://github.com/pandas-dev/pandas/issues/59897, we receive various requests to rows to `DataFrame.describe`. If we were to add them, `describe` would become overloaded and noisy. I'm opposed here. \n\nAs the OP demonstrates, pandas makes it simple to add such a row already.", "@rhshadrach I understand your concerns about describe() becoming overloaded. However, standard deviation alone often lacks context; its significance depends entirely on the data's scale.\n\nIn this example, both features have an STD of 5, but they tell completely different stories about the data. The first STD is very low, and the second is very high, because their value ranges differ significantly.\n<img width=\"530\" height=\"211\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/a6e16314-7660-45fa-8fa8-b905553b80e2\" />", "@ffaa1234 - I understand; my position remains unchanged.", "Agreed with @rhshadrach. Unfortunately convenience of an operation and importance of a metric are too subjective to make changes for every request in pandas.\n\nThanks for the suggestion but closing." ]
3,205,193,985
61,783
PERF: Unnecessary string interning in read_csv?
closed
2025-07-05T16:26:49
2025-07-08T14:52:42
2025-07-08T14:52:42
https://github.com/pandas-dev/pandas/issues/61783
true
null
null
jbrockmendel
2
Going through parsers.pyx, particularly _string_box_utf8, I'm trying to figure out what the point of the hashtable checks are: ``` k = kh_get_strbox(table, word) # in the hash table if k != table.n_buckets: # this increments the refcount, but need to test pyval = <object>table.vals[k] else: # box it. new ref? pyval = PyUnicode_Decode(word, strlen(word), "utf-8", encoding_errors) k = kh_put_strbox(table, word, &ret) table.vals[k] = <PyObject *>pyval result[i] = pyval ``` This was introduced in 2012 a9db003. I don't see a clear reason why this isn't just ``` result[i] = PyUnicode_Decode(word, strlen(word), "utf-8", encoding_errors) ``` <s>My best guess is that it involves string interning. Prior to py37, only small strings were interned. Now most strings up to 4096 I think are interned. Under the old system, the hashtable could prevent a ton of memory allocation, but that may no longer be the case.</s> No, that doesn't apply to runtime-created strings. So that may be the reason why, but if so it is still a valid one. Does anyone have a longer memory than me on this?
[ "Performance", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "Maybe for the case where you have a string column with repeated values? In that case the above, keeping a hashtable of all encountered values, might make it faster(?) or at least reduce memory.\n\nThe arrow->python conversion in pyarrow has a similar option, `deduplicate_objects` (https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_pandas), which was introduced in https://github.com/apache/arrow/pull/3257, and that seems to have some context / benchmarks.", "That's my guess too. I suspect that if we ever get to all-pyarrow-strings we can replace this, and a ndarray[object] allocation, with something a lot more efficient." ]
3,204,915,032
61,782
BUG: Errors using pyarrow datetime types on windows
open
2025-07-05T13:02:02
2025-07-11T06:31:26
null
https://github.com/pandas-dev/pandas/issues/61782
true
null
null
Liam3851
1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example This example in the documentation fails the doc build on Windows: ```python3 import pyarrow as pa from datetime import datetime pa_type = pd.ArrowDtype(pa.timestamp("ns")) ser_dt = pd.Series([datetime(2022, 1, 1), None], dtype=pa_type) ser_dt.dt.strftime("%Y-%m") ``` Above raises ```python3 ArrowInvalid: Cannot locate timezone 'UTC': Timezone database not found at "C:\Users\krychd\Downloads\tzdata" ``` ### Issue Description Pyarrow upstream appears not to properly support datetime on windows. See above behavior and open issue https://github.com/apache/arrow/issues/30186. I would usually not raise concerns about pyarrow's lack in this regard (I don't use pyarrow timestamps, just pandas ones, and have considered it experimental) but given the fact it fails the doc build and discussion in #61618 I thought I would open the issue. 
### Expected Behavior ```python3 Series(['2022-01', NA], dtype=pyarrow[string]) ``` ### Installed Versions <details> In [5]: pd.show_versions() INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.11.12 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.19045 machine : AMD64 processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel byteorder : little LC_ALL : None LANG : None LOCALE : English_United States.1252 pandas : 2.3.0 numpy : 2.2.6 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : None IPython : 9.3.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 blosc : None bottleneck : 1.5.0 dataframe-api-compat : None fastparquet : None fsspec : 2025.5.1 html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.6 lxml.etree : 5.4.0 matplotlib : 3.10.3 numba : 0.61.2+0.g1e70d8ceb.dirty numexpr : 2.10.2 odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 19.0.1 pyreadstat : None pytest : 8.4.1 python-calamine : None pyxlsb : None s3fs : 2025.5.1 scipy : 1.15.2 sqlalchemy : 2.0.41 tables : None tabulate : 0.9.0 xarray : 2025.6.1 xlrd : None xlsxwriter : 3.2.5 zstandard : 0.23.0 tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Docs", "Datetime", "Windows", "Upstream issue", "Arrow" ]
0
0
0
0
0
0
0
0
[ "take\n" ]
3,203,207,683
61,781
DOC: Typo within `Series.mask()` docs - alignment is done between self and cond, not cond and other
open
2025-07-04T16:06:29
2025-07-14T13:33:45
null
https://github.com/pandas-dev/pandas/issues/61781
true
null
null
nickodell
2
### Pandas version checks - [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/docs/reference/api/pandas.Series.mask.html#pandas.Series.mask Similar issues exist for DataFrame.mask, Series.where, and DataFrame.where; they appear to use the same docstring with replacements. ### Documentation problem In this passage: >The mask method is an application of the if-then idiom. For each element in the calling DataFrame, if `cond` is `False` the element is used; otherwise the corresponding element from the DataFrame `other` is used. **If the axis of `other` does not align with axis of `cond` Series/DataFrame, the misaligned index positions will be filled with True.** The bolded sentence is not correct. Here is an example where the `other` value is not aligned to `cond`, because the d value in `cond` has no match in `other`. However, `cond` is still not filled with True. ``` import pandas as pd a = pd.Series(['apple', 'banana', 'cherry', 'dango'], index=['a', 'b', 'c', 'd']) b = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd']) other = pd.Series(['asparagus', 'broccoli', 'carrot', 'dill'], index=['a', 'b', 'c', 'D']) cond = b.lt(3) print("Cond matches other?", cond.index == other.index) print("Cond matches self?", cond.index == a.index) a.mask(cond, other) ``` Output: ``` Cond matches other? [ True True True False] Cond matches self? [ True True True True] a asparagus b broccoli c cherry d dango ``` In this example, you can see that even though cond's d index has no corresponding aligned element in other, it still does not make a replacement for item d - not even to replace it with an NA value. Rather, the alignment is done between `self` and `cond`, not `other` and `cond`. ### Suggested fix for documentation Proposed fix: >The mask method is an application of the if-then idiom. 
For each element in the calling DataFrame, if `cond` is `False` the element is used; otherwise the corresponding element from the DataFrame `other` is used. If the axis of **`self`** does not align with axis of `cond` Series/DataFrame, the misaligned index positions will be filled with True. Here is an example which shows that this is correct. In the following code, `cond` and `self` are not aligned. The unaligned value in `cond` is treated as True. ``` import pandas as pd a = pd.Series(['apple', 'banana', 'cherry', 'dango'], index=['a', 'b', 'c', 'd']) b = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'D']) other = pd.Series(['asparagus', 'broccoli', 'carrot', 'dill'], index=['a', 'b', 'c', 'd']) cond = b.lt(3) print("Cond matches other?", cond.index == other.index) print("Cond matches self?", cond.index == a.index) a.mask(cond, other) ``` Output: ``` Cond matches other? [ True True True False] Cond matches self? [ True True True False] a asparagus b broccoli c cherry d dill dtype: object ```
[ "Docs", "Needs Triage" ]
2
0
0
0
0
0
0
0
[ "Thanks @nickodell I believe a PR suffices for this. Thoughts @rhshadrach @jbrockmendel @mroeschke ? If accepted, maybe the PR could be extended to `case_when`?", "Hi @nickodell ,\n\nI think `self` tries to align with `other` in `_where` as below.\n\nhttps://github.com/pandas-dev/pandas/blob/a2315af1df30ec3648786502457eb544d002c71d/pandas/core/generic.py#L9760-L9771\n\nThe reason why you see \"dango\" from your first example is because `cond` is `False` for `d`, and then `_where` let \"dango\" propagate.\n\nInterstingly, when I changed `cond` to `b.ge(2)`, I can see the value is filled with `NaN`:\n\n```python\n\nimport pandas as pd\na = pd.Series(['apple', 'banana', 'cherry', 'dango'], index=['a', 'b', 'c', 'd'])\nb = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])\n\nother = pd.Series(['asparagus', 'broccoli', 'carrot', 'dill'], index=['a', 'b', 'c', 'D'])\ncond = b.gt(2) # [False, False, True, True]\n\na.mask(cond, other)\n# a apple\n# b banana\n# c carrot\n# d NaN\n# dtype: object\n\n```\n\nSo I think we might be able to consider changing the sentence like :\n>If the axis of other does not align with axis of cond Series/DataFrame <b>*and `cond` is `True`*</b>, the misaligned index positions will be filled with <b>*`NaN`*</b>.\n\n\nFurthermore, I think the behavior of `mask` in your second example could be inconsistent depending on `inplace`.\n\n```python\nimport pandas as pd\n\na = pd.Series(['apple', 'banana', 'cherry', 'dango'], index=['a', 'b', 'c', 'd'])\nb = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'D'])\n\nother = pd.Series(['asparagus', 'broccoli', 'carrot', 'dill'], index=['a', 'b', 'c', 'd'])\ncond = b.lt(3)\n\na.mask(cond, other)\n\n# a asparagus\n# b broccoli\n# c cherry\n# d dill # filled with True\n# dtype: object\n\na.mask(cond, other, inplace=True)\na\n# a asparagus\n# b broccoli\n# c cherry\n# d dango # filled with False\n# dtype: object\n```\n\n\nThis is because `cond` would be filled with `inplace` in `_where` . 
(`fillna` comes before `align`. FYI, see https://github.com/pandas-dev/pandas/issues/52955#issuecomment-1537254678)\n\nhttps://github.com/pandas-dev/pandas/blob/a2315af1df30ec3648786502457eb544d002c71d/pandas/core/generic.py#L9731-L9734\n\nPlease note that I'm currently working on a PR regarding this. (#60772)" ]
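The reporter's observation can be reduced to a minimal sketch: for non-inplace `mask`, it is `cond` aligned against `self` (not against `other`) whose missing positions are treated as True. This assumes pandas' current (documented) fill behavior for misaligned `cond`:

```python
import pandas as pd

a = pd.Series([1, 2, 3], index=["a", "b", "c"])
# cond has no entry for "c"; after aligning cond to `a`,
# position "c" is missing and mask() fills it with True
cond = pd.Series([True, False], index=["a", "b"])
res = a.mask(cond, 0)
# res: "a" -> 0 (cond True), "b" -> 2 (cond False), "c" -> 0 (filled True)
```

This mirrors the `d -> dill` result in the reporter's second example: the position missing from `cond` still gets replaced from `other`, even though `other` itself was fully aligned with `self`.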
3,203,070,509
61,780
BUG: tz_localize(None) with Arrow timestamp
closed
2025-07-04T15:08:39
2025-08-11T16:29:29
2025-08-11T16:29:29
https://github.com/pandas-dev/pandas/issues/61780
true
null
null
jbrockmendel
1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python # based on test_dt_tz_localize_none import pandas as pd import pyarrow as pa ts = pd.Timestamp("2023-01-02 3:00:00") ser = pd.Series( [ts, None], dtype=pd.ArrowDtype(pa.timestamp("ns", tz="US/Pacific")), ) res = ser.dt.tz_localize(None) assert res[0] == ser[0].tz_localize(None) # <- nope! ``` ### Issue Description The pyarrow tz_localize AFAICT is equivalent to `.tz_convert("UTC").tz_localize(None)` ### Expected Behavior Equivalent to pointwise operation, matching the non-pyarrow tz_localize ### Installed Versions <details> Replace this line with the output of pd.show_versions() </details>
[ "Bug", "Datetime", "Timezones", "Arrow" ]
0
0
0
0
0
0
0
0
[ "Hi @jbrockmendel \n I’d like to work on this issue and submit a fix." ]