Schema (one row per GitHub issue/PR; 24 columns):

| column | type |
|---|---|
| id | int64 |
| number | int64 |
| title | string |
| state | string |
| created_at | timestamp[s] |
| updated_at | timestamp[s] |
| closed_at | timestamp[s] |
| html_url | string |
| is_pull_request | bool |
| pull_request_url | string |
| pull_request_html_url | string |
| user_login | string |
| comments_count | int64 |
| body | string |
| labels | list |
| reactions_plus1 | int64 |
| reactions_minus1 | int64 |
| reactions_laugh | int64 |
| reactions_hooray | int64 |
| reactions_confused | int64 |
| reactions_heart | int64 |
| reactions_rocket | int64 |
| reactions_eyes | int64 |
| comments | list |

Records follow, one block per row.
---
id: 3203022783 | number: 61779 | state: closed | is_pull_request: true
title: TST: option_context bug on Mac GH#58055
created_at: 2025-07-04T14:50:58 | updated_at: 2025-07-07T16:46:15 | closed_at: 2025-07-07T16:42:39
html_url: https://github.com/pandas-dev/pandas/pull/61779
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61779
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61779
user_login: jbrockmendel | comments_count: 1
body:
- [x] closes #58055 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
labels: ["Bug", "Testing"]
reactions: all 0
comments:
[
"Thanks @jbrockmendel "
]

---
id: 3202114847 | number: 61778 | state: open | is_pull_request: true
title: BUG?: creating Categorical from pandas Index/Series with "object" dtype infers string
created_at: 2025-07-04T10:15:04 | updated_at: 2025-07-15T13:50:11 | closed_at: null
html_url: https://github.com/pandas-dev/pandas/issues/61778
pull_request_url: null | pull_request_html_url: null
user_login: jorisvandenbossche | comments_count: 4
body:
When creating a pandas Series/Index/DataFrame, I think we generally differentiate between passing a pandas object with `object` dtype and a numpy array with `object` dtype:
```
>>> pd.options.future.infer_string = True
>>> pd.Index(pd.Series(["foo", "bar", "baz"], dtype="object"))
Index(['foo', 'bar', 'baz'], dtype='object')
>>> pd.Index(np.array(["foo", "bar", "baz"], dtype="object"))
Index(['foo', 'bar', 'baz'], dtype='str')
```
So for pandas objects we preserve the dtype; for numpy arrays of object dtype, we essentially treat that as a sequence of python objects where we infer the dtype (@jbrockmendel that's also your understanding?)
But for categorical that doesn't seem to happen:
```
>>> pd.options.future.infer_string = True
>>> pd.Categorical(pd.Series(["foo", "bar", "baz"], dtype="object"))
['foo', 'bar', 'baz']
Categories (3, str): [bar, baz, foo] # <--- categories inferred as str
```
So do we want to preserve the dtype for the categories here as well?
labels: ["Dtype Conversions", "Categorical"]
reactions: all 0
comments:
[
"> (@jbrockmendel that's also your understanding?)\n\nYes.\n\n> So we want to preserver the dtype for the categories here as well?\n\nMakes sense.",
"How about we modify the Categorical constructor to distinguish between:\n\n* Pandas objects (Index/Series) with object dtype → preserve object dtype\n* Numpy arrays with object dtype → allow normal inference (existing behavior)\n* Raw Python sequences → allow normal inference (existing behavior)\n\nWe can implement the change where dtype validation occurs.\nThis change will preserve existing behavior for numpy arrays and raw sequences while fixing the inconsistency for pandas objects.\n\nIf you all agree with the solution, I can take it up.",
"That's the right idea, give it a try.",
"take"
]
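A compact sketch of the inconsistency reported in #61778 above, assuming a pandas build where `pd.options.future.infer_string` is available; the commented outputs follow the issue report rather than a fresh run:

```python
import numpy as np
import pandas as pd

pd.options.future.infer_string = True
obj_ser = pd.Series(["foo", "bar", "baz"], dtype="object")

# pandas objects with object dtype keep their dtype on re-wrapping ...
print(pd.Index(obj_ser).dtype)                                   # object
# ... while numpy object arrays go through inference:
print(pd.Index(np.array(["foo", "bar"], dtype="object")).dtype)  # str

# Categorical infers "str" categories even from a pandas object,
# which is the inconsistency the issue asks about:
print(pd.Categorical(obj_ser).categories.dtype)                  # str
```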
---
id: 3201065687 | number: 61776 | state: closed | is_pull_request: true
title: Request For Help: unexplained ArrowInvalid overflow
created_at: 2025-07-04T02:04:25 | updated_at: 2025-07-04T14:10:35 | closed_at: 2025-07-04T14:10:31
html_url: https://github.com/pandas-dev/pandas/pull/61776
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61776
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61776
user_login: jbrockmendel | comments_count: 5
body:
Because of #61775 and to address failures in #61732, I'm trying out calling pd.to_datetime in ArrowEA._box_pa_array when we have a timestamp type. AFAICT this isn't breaking anything at construction time (see the assertion this adds, which isn't failing in any tests). What is breaking is subsequent subtraction operations, which raise `pyarrow.lib.ArrowInvalid: overflow`.
```
pytest "pandas/tests/extension/test_arrow.py::TestArrowArray::test_arith_series_with_scalar[__sub__-timestamp[s, tz=US/Eastern]]"
[...]
E pyarrow.lib.ArrowInvalid: overflow
```
It is happening on both sub and rsub ops. When I try operating with a subset of the array, it looks like the exception only happens when I use a slice that contains a null.
To examine the buffers, I added a breakpoint after the assertion in the diff. In the relevant case, `alt[8]` is null:
```
left = alt[8:10]
right = pa_array[8:10]
lb = left.buffers()[1]
rb = right.buffers()[1]
(Pdb) np.asarray(lb[64:72]).view("M8[ns]")
array(['NaT'], dtype='datetime64[ns]')
(Pdb) np.asarray(rb[64:72]).view("M8[ns]")
array(['1970-01-01T00:00:00.000000000'], dtype='datetime64[ns]')
```
So my current hypothesis is that when we get to the pc.subtract_checked call, it isn't skipping the iNaT entry despite the null bit, and the subtraction for that entry is overflowing. This seems unintentional and may be an upstream bug (cc @jorisvandenbossche).
Regardless of whether it is an upstream bug, I could use guidance on how to make the construction with to_datetime work. Filtering out Decimal(NaN) manually would be pretty inefficient.
labels: []
reactions: all 0
comments:
[
"> So my current hypothesis is that when we get to the pc.subtract_checked call, it isn't skipping the iNaT entry despite the null bit, and the subtraction for that entry is overflowing.\r\n\r\nI assume that is indeed what is happening here, because there is in any case an (unfortunately long-standing) bug for exactly this case: https://github.com/apache/arrow/issues/35088 (rereading the issue and based on Weston's comment, it seems the fix should actually be quite easy). \r\n\r\nA workaround might be to cast the duration to int64 (which should be zero-copy), and the the substract_checked kernel should work correctly.\r\n\r\n",
"> > So my current hypothesis is that when we get to the pc.subtract_checked call, it isn't skipping the iNaT entry despite the null bit, and the subtraction for that entry is overflowing.\r\n> \r\n> I assume that is indeed what is happening here, because there is in any case an (unfortunately long-standing) bug for exactly this case: [apache/arrow#35088](https://github.com/apache/arrow/issues/35088) (rereading the issue and based on Weston's comment, it seems the fix should actually be quite easy).\r\n> \r\n> A workaround might be to cast the duration to int64 (which should be zero-copy), and the the substract_checked kernel should work correctly.\r\n\r\n```\r\n>>> arr = pa.array(pd.Series([pd.Timestamp(\"2020-01-01\"), None]))\r\n>>> other = pa.scalar(pd.Timestamp(\"2019-12-31T20:01:01\"), type=arr.type)\r\n>>> \r\n>>> pc.subtract_checked(arr, other)\r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\nCell In[35], line 1\r\n----> 1 pc.subtract_checked(arr, other)\r\n\r\nFile ~/conda/envs/dev/lib/python3.11/site-packages/pyarrow/compute.py:252, in _make_generic_wrapper.<locals>.wrapper(memory_pool, *args)\r\n 250 if args and isinstance(args[0], Expression):\r\n 251 return Expression._call(func_name, list(args))\r\n--> 252 return func.call(args, None, memory_pool)\r\n\r\nFile ~/conda/envs/dev/lib/python3.11/site-packages/pyarrow/_compute.pyx:407, in pyarrow._compute.Function.call()\r\n\r\nFile ~/conda/envs/dev/lib/python3.11/site-packages/pyarrow/error.pxi:155, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nFile ~/conda/envs/dev/lib/python3.11/site-packages/pyarrow/error.pxi:92, in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: overflow\r\n>>> pc.subtract_checked(arr.cast(\"int64\"), other.cast(\"int64\")).cast(pa.duration(arr.type.unit)).to_pandas()\r\n0 0 days 03:58:59\r\n1 NaT\r\ndtype: timedelta64[s]\r\n```\r\n\r\n---\r\n\r\nAnd so you can indeed see that the underlying values would overflow if the value masked by the null is not ignored:\r\n\r\n```\r\n>>> np_arr = np.frombuffer(arr.buffers()[1], dtype=\"int64\")\r\n>>> np_arr\r\narray([ 1577836800, -9223372036854775808])\r\n>>> other.value\r\n1577822461\r\n>>> np_arr - other.value\r\narray([ 14339, 9223372035276953347])\r\n```",
"> Regardless of if it is an upstream bug, I could use guidance on how to make the construction with to_datetime work. Filtering out Decimal(NaN) manually would be pretty inefficient.\r\n\r\nWhat do you want to change here exactly? The issue is that pyarrow allows `Decimal(NaN)` as a null value when constructing from a list of scalars, and pandas does not? (or the other way around, so creating an inconsistency in behaviour?)",
"Seeing https://github.com/pandas-dev/pandas/pull/61773, I understand the issue now (it's also related to the fact that we specify `pa.array(..., from_pandas=true)` to allow NaN, since we support that in pandas for this creation, so we cannot turn that off. But then pyarrow does not seem to distinguish numpy vs decimal NaN ..).\r\n\r\nIn the end, the reason that this overflow comes up in the tests because of this change is because in `pd.to_datetime`, we create a numpy datetime64 array using NaT, and numpy uses the smallest integer for NaT. When converting that numpy array to pyarrow, the data is converted zero-copy (only as bitmask is added) and so the masked value is this smallest integer. \r\nWhen `pa.array(...)` creates the array from the python scalars, it defaults to fill masked values with 0, so you don't run (or not that easily) into overflows.\r\n\r\nSo one workaround would be to also fill the created pyarrow array with zeros. One potential way of doing this:\r\n\r\n```\r\n>>> pa_type = pa.timestamp(\"us\")\r\n>>> \r\n>>> np_arr = pd.to_datetime(scalars).as_unit(pa_type.unit).values\r\n>>> np_arr\r\narray(['2020-01-01T00:00:00.000000', 'NaT'],\r\n dtype='datetime64[us]')\r\n>>> mask = np.isnat(arr)\r\n>>> np_arr2 = np_arr.astype(\"int64\")\r\n>>> np_arr2\r\narray([ 1577836800000000, -9223372036854775808])\r\n>>> np_arr2[mask] = 0\r\n>>> pa_arr = pa.array(np_arr2, mask=mask, type=pa_type)\r\n>>> pa_arr\r\n<pyarrow.lib.TimestampArray object at 0x7f1ad0ef86a0>\r\n[\r\n 2020-01-01 00:00:00.000000,\r\n null\r\n]\r\n>>> np.frombuffer(pa_arr.buffers()[1], dtype=\"int64\")\r\narray([1577836800000000, 0])\r\n```",
"> So one workaround would be to also fill the created pyarrow array with zeros.\r\n\r\nI eventually stumbled on that idea long after posting. Will give it a go in #61773. Thank you."
]

---
id: 3201008776 | number: 61775 | state: closed | is_pull_request: true
title: API/BUG: different constructor behavior for numpy vs pyarrow dt64tzs
created_at: 2025-07-04T01:16:30 | updated_at: 2025-07-07T16:54:31 | closed_at: 2025-07-07T16:54:31
html_url: https://github.com/pandas-dev/pandas/issues/61775
pull_request_url: null | pull_request_html_url: null
user_login: jbrockmendel | comments_count: 1
body:
```python
>>> import pandas as pd
>>> dtype1 = "datetime64[ns, US/Eastern]"
>>> dtype2 = "timestamp[ns, US/Eastern][pyarrow]"
>>> ts = pd.Timestamp("2025-07-03 18:10")
>>> pd.Series([ts], dtype=dtype1)[0]
Timestamp('2025-07-03 18:10:00-0400', tz='US/Eastern')
>>> pd.Series([ts], dtype=dtype2)[0]
Timestamp('2025-07-03 14:10:00-0400', tz='US/Eastern')
```
Long ago we decided that when passing tznaive datetimes and specifying a tzaware dtype, we treat the input as a wall-time. It looks like the pyarrow path (which I'm pretty sure just ends up calling `pa.array([ts], type=...)`) treats it as a UTC time.
cc @jorisvandenbossche
labels: ["Bug", "API - Consistency", "Arrow"]
reactions: all 0
comments:
[
"I see it is not documented very well (the [array constructor docstring](https://arrow.apache.org/docs/python/generated/pyarrow.array.html#pyarrow.array) does mention something about timezones, but that is only for the case of inferring, not when a type is specified), but AFAIK he behaviour of pyarrow is indeed to assume naive data to be UTC (so choosing to interpret it as the underlying storage, not as wall clock time).\n\nI assume for converting object to a timestamp type, we might need to use our own `to_datetime` first (which is what you were trying to do, I think?)"
]
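A sketch contrasting the two interpretations described in #61775 above (assumes pyarrow is installed; the commented outputs follow the issue report):

```python
import pandas as pd
import pyarrow as pa

ts = pd.Timestamp("2025-07-03 18:10")  # tz-naive

# pandas: naive input is a wall time in the target zone
print(pd.Series([ts], dtype="datetime64[ns, US/Eastern]")[0])
# Timestamp('2025-07-03 18:10:00-0400', tz='US/Eastern')

# pyarrow: naive input is UTC storage, so the wall clock shifts by the offset
print(pa.array([ts], type=pa.timestamp("ns", tz="US/Eastern"))[0])
# 2025-07-03 14:10:00 local wall time
```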
---
id: 3200840173 | number: 61774 | state: closed | is_pull_request: true
title: CI: Add NumPy 1.26 test job
created_at: 2025-07-03T22:53:51 | updated_at: 2025-07-08T10:19:38 | closed_at: 2025-07-08T10:19:37
html_url: https://github.com/pandas-dev/pandas/pull/61774
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61774
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61774
user_login: Anantanand005 | comments_count: 1
body:
This PR adds a CI job to test Pandas with NumPy 1.26 to ensure compatibility with the latest version.
- Related to issue: [#61588](https://github.com/pandas-dev/pandas/issues/61588)
- Installs NumPy 1.26.0 explicitly and runs the full test suite
- Helps identify future compatibility issues with NumPy releases
### Checklist
- [x] Closes #61588
- [x] Tests added and passed (via CI job)
- [x] All code checks passed (linting, CI)
- [ ] No new type hints were added
- [ ] No doc entry needed (not a new feature or bugfix)
labels: ["CI", "Dependencies"]
reactions: all 0
comments:
[
"Thanks @Anantanand005 for the PR\r\n\r\nclosing as superseded by #61806"
]

---
id: 3200833112 | number: 61773 | state: closed | is_pull_request: true
title: BUG: Decimal(NaN) incorrectly allowed in ArrowEA constructor with tim…
created_at: 2025-07-03T22:48:01 | updated_at: 2025-07-07T17:32:11 | closed_at: 2025-07-07T16:54:30
html_url: https://github.com/pandas-dev/pandas/pull/61773
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61773
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61773
user_login: jbrockmendel | comments_count: 2
body:
…estamp type
- [x] closes #61775
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Surfaced by #61732
labels: ["Arrow"]
reactions: all 0
comments:
[
"Huh. the \"ArrowInvalid: overflow\" here is weird. will look into it",
"Thanks @jbrockmendel "
]

---
id: 3200716258 | number: 61772 | state: open | is_pull_request: true
title: BUG: Calling dict(df.groupby(...)) raises TypeError: 'str' object is not callable despite valid inputs
created_at: 2025-07-03T21:39:11 | updated_at: 2025-07-07T20:55:48 | closed_at: null
html_url: https://github.com/pandas-dev/pandas/issues/61772
pull_request_url: null | pull_request_html_url: null
user_login: kay-ou | comments_count: 2
body:
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({
'security': ['A', 'B'],
'price': [1, 2]
})
print(dict(df.groupby('security')))  # ❌ Raises TypeError

# Using a comprehension works fine:
res = {k: v for k, v in df.groupby('security')}  # ✅ Succeeds

# Verifying the iteration:
for k, v in df.groupby('security'):
    print(type(k), type(v))  # <class 'str'>, <class 'DataFrame'>
```
### Issue Description
When using the built-in dict() constructor on a DataFrameGroupBy object returned by pandas.DataFrame.groupby(...), I get:
TypeError: 'str' object is not callable
This occurs even though the iterable yields valid (str, DataFrame) pairs, and built-in dict is not shadowed.
Environment Info
Item | Value
-- | --
pandas version | 2.3.0
Python version | 3.12.3
Install method | poetry
OS | Ubuntu 22.04
Reproducible in venv | ✅ Yes
Reproducible in clean script | ✅ Yes
Additional Notes
- dict is <class 'type'> and matches builtins.dict
- Removing __pycache__ and .pyc files does not help
- The error only occurs when using dict(df.groupby(...)), not in other contexts
- inspect.getmodule(dict) returns the expected built-in location
This could potentially be a pandas bug, interpreter-edge case, or a low-level compatibility glitch with Python 3.12+. Please let me know if you'd like a deeper trace or full traceback logs!
### Expected Behavior
{
'A': pd.DataFrame(...),
'B': pd.DataFrame(...)
}
### Installed Versions
<details>
Item | Value
-- | --
Python version | 3.12.3
OS | Ubuntu 22.04
pandas version | 2.3.0
Install method | poetry
Reproduced in venv | Yes
Reproduced in CLI | Yes
</details>
labels: ["Bug", "Groupby", "Needs Discussion"]
reactions: all 0
comments:
[
"Thanks @kay-ou for the report.\n\nhttps://docs.python.org/3/library/stdtypes.html#dict:~:text=If%20no%20positional,the%20new%20dictionary.\n\nYes, the python builtin `dict` will first look for and presumably call a `keys` method.\n\n```python\ndf.groupby(\"security\").keys()\n# TypeError: 'str' object is not callable\n```\n\nand indeed gives the same error. So it appears that the `GroupBy` object has a `keys` attribute instead of a keys method. \n\n```python\ndf.groupby(\"security\").keys\n# 'security'\n```\n\nAs you noted, the `GroupBy` object is iterable and maybe a more compact workaround is therefore\n\n```python\ndict(iter(df.groupby(\"security\")))\n# {'A': security price\n# 0 A 1,\n# 'B': security price\n# 1 B 2}\n```\n\ninstead of using the comprehension.\n\nIt appears from the documentation https://pandas.pydata.org/docs/reference/groupby.html that the `keys` attribute is not defined in the public api, so to fix this issue, it maybe as simple as renaming that attribute.",
"Though not explicitly public, if we are going to make a change to `keys` I think we should deprecate. Also, any `groupby` on a DataFrame with a `keys` column will still suffer a similar issue because `df.groupby(...).keys` will be an attribute."
]
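A condensed sketch of the mechanics discussed in #61772 above: `dict()` treats any object exposing `keys` as a mapping, and `DataFrameGroupBy.keys` is the grouping key (a string attribute), not a method:

```python
import pandas as pd

df = pd.DataFrame({"security": ["A", "B"], "price": [1, 2]})
gb = df.groupby("security")

print(gb.keys)  # 'security' -- a plain attribute holding the grouping key
# dict(gb) probes gb.keys, tries to call 'security'(), and raises TypeError.

# Wrapping in iter() hides the attribute, so dict() consumes (key, frame) pairs:
result = dict(iter(gb))
print(sorted(result))  # ['A', 'B']
```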
---
id: 3200714976 | number: 61771 | state: closed | is_pull_request: true
title: BUG[string]: incorrect index downcast in DataFrame.join
created_at: 2025-07-03T21:38:28 | updated_at: 2025-07-07T14:47:01 | closed_at: 2025-07-07T13:15:03
html_url: https://github.com/pandas-dev/pandas/pull/61771
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61771
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61771
user_login: jbrockmendel | comments_count: 3
body:
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
labels: ["Strings"]
reactions: all 0
comments:
[
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 cf1a11c1b49d040f7827f30a1a16154c80c552a7\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61771: BUG[string]: incorrect index downcast in DataFrame.join'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61771-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61771 on branch 2.3.x (BUG[string]: incorrect index downcast in DataFrame.join)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Thanks @jbrockmendel!",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/61800"
]

---
id: 3200572306 | number: 61770 | state: closed | is_pull_request: true
title: BUG: Fix unpickling of string dtypes of legacy pandas versions
created_at: 2025-07-03T20:35:31 | updated_at: 2025-07-07T08:50:36 | closed_at: 2025-07-07T07:41:22
html_url: https://github.com/pandas-dev/pandas/pull/61770
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61770
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61770
user_login: Liam3851 | comments_count: 1
body:
- [x] closes #61763
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.3.1.rst` file if fixing a bug or adding a new feature.
labels: ["Bug", "Strings", "IO Pickle"]
reactions: all 0
comments:
[
"Thanks very much for the review @jorisvandenbossche, I've added pickles for 2.0-2.2 as extra checks and a whatsnew entry."
]

---
id: 3200479757 | number: 61769 | state: open | is_pull_request: true
title: Improve MultiIndex label rename checks
created_at: 2025-07-03T19:50:23 | updated_at: 2025-08-22T00:08:13 | closed_at: null
html_url: https://github.com/pandas-dev/pandas/pull/61769
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61769
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61769
user_login: TabLand | comments_count: 1
body:
- [x] closes #55169
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
labels: ["Bug", "MultiIndex", "Stale"]
reactions: all 0
comments:
[
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
]

---
id: 3200138340 | number: 61768 | state: closed | is_pull_request: true
title: BUG: NA.__and__, __or__, __xor__ with np.bool_ objects
created_at: 2025-07-03T17:38:42 | updated_at: 2025-07-03T22:57:17 | closed_at: 2025-07-03T22:49:57
html_url: https://github.com/pandas-dev/pandas/pull/61768
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61768
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61768
user_login: jbrockmendel | comments_count: 1
body:
- [x] closes #58427 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
I expected this to break some other tests, but nope.
labels: ["Missing-data"]
reactions: all 0
comments:
[
"Thanks @jbrockmendel "
]
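For reference, a sketch of the Kleene-logic results that logical ops between `pd.NA` and numpy bool scalars should yield per this PR (GH#58427); the commented outputs assume the fixed behavior:

```python
import numpy as np
import pandas as pd

print(pd.NA & np.False_)  # False: anything AND False is False
print(pd.NA | np.True_)   # True:  anything OR True is True
print(pd.NA & np.True_)   # <NA>:  result depends on the unknown operand
print(pd.NA ^ np.True_)   # <NA>:  XOR with an unknown stays unknown
```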
---
id: 3200054572 | number: 61767 | state: closed | is_pull_request: true
title: Revert "ENH: Allow third-party packages to register IO engines"
created_at: 2025-07-03T17:07:31 | updated_at: 2025-07-03T17:07:42 | closed_at: 2025-07-03T17:07:39
html_url: https://github.com/pandas-dev/pandas/pull/61767
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61767
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61767
user_login: jbrockmendel | comments_count: 0
body:
Reverts pandas-dev/pandas#61642
labels: []
reactions: all 0
comments: []
---
id: 3198875537 | number: 61766 | state: closed | is_pull_request: true
title: BUG: ensure to_numeric down-casts to uint64 for large unsigned integers
created_at: 2025-07-03T10:24:24 | updated_at: 2025-07-28T17:21:37 | closed_at: 2025-07-28T17:21:36
html_url: https://github.com/pandas-dev/pandas/pull/61766
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61766
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61766
user_login: mohiuddin-khan-shiam | comments_count: 1
body:
`to_numeric(..., downcast="unsigned")` failed to honour the requested `uint64` dtype when values exceeded `np.iinfo(np.int64).max`, returning `float64` instead and losing integer precision (GH #14422 / `test_downcast_uint64`).
Added a fallback that detects integral, non-negative float results and safely casts them to `np.uint64`. All existing logic remains unchanged for other code paths; the previously xfailed test now passes.
labels: ["Dtype Conversions"]
reactions: all 0
comments:
[
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
]
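A minimal sketch of the case the PR describes (GH #14422 / `test_downcast_uint64`); the pre-fix dtype is as reported, not re-verified:

```python
import numpy as np
import pandas as pd

big = np.iinfo(np.int64).max + 1  # 9223372036854775808, too large for int64
result = pd.to_numeric(pd.Series([0, big]), downcast="unsigned")
# Reported: float64 before the fix (losing integer precision); uint64 expected.
print(result.dtype)
```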
---
id: 3198775065 | number: 61765 | state: closed | is_pull_request: true
title: chore: testing
created_at: 2025-07-03T09:50:20 | updated_at: 2025-07-03T09:52:02 | closed_at: 2025-07-03T09:52:02
html_url: https://github.com/pandas-dev/pandas/pull/61765
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61765
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61765
user_login: gherulloa | comments_count: 0
body:
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
labels: []
reactions: all 0
comments: []
---
id: 3198494657 | number: 61764 | state: open | is_pull_request: true
title: ENH: speed up wide DataFrame.line plots by using a single LineCollection
created_at: 2025-07-03T08:16:19 | updated_at: 2025-08-22T00:08:15 | closed_at: null
html_url: https://github.com/pandas-dev/pandas/pull/61764
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61764
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61764
user_login: EvMossan | comments_count: 2
body:
### What does this PR change?
* **Speeds up `DataFrame.plot(kind="line")` when the frame is “wide”.**
* If the DataFrame has **> 200 columns**, a **numeric index** (e.g. `RangeIndex`
or integer/float values), is **not** a time-series plot, has **no stacking**
and **no error bars**, we now draw everything with a single
`matplotlib.collections.LineCollection` instead of one `Line2D` per column.
* No API changes; behaviour is identical for smaller plots or the excluded
cases above.
### Performance numbers
| 500 rows × 2000 cols (RangeIndex) | master | this PR | speed-up |
|-----------------------------------|--------|---------|----------|
| `df.plot(legend=False)` | 0.342 s| 0.069 s | **5×** |
*Benchmarked on pandas **3.0.0.dev0+2183.g94ff63adb2**, matplotlib **3.10.3**, NumPy **2.2.6***
### Notes
* This PR does _not_ change anything for `DatetimeIndex` plots—those remain on the original per-column path. A follow-up could combine `LineCollection` with the `x_compat=True` workaround (see [#61398](https://github.com/pandas-dev/pandas/issues/61398)) to similarly speed up time-series plots.
* Threshold (`> 200` columns) is a heuristic and can be tuned in review.
* The fast path activates only for numeric indices. Datetime/period/timedelta
indices still use the original per-column draw, so behaviour there is
unchanged.
---
- [x] closes **#61532**
- [x] tests added / passed (`pytest pandas/tests/plotting -q`)
- [x] code checks passed (`pre-commit run --all-files`)
- [x] added entry in `doc/source/whatsnew/v3.0.0.rst`
cc @shadnikn @arthurlw – happy to take any feedback 🙂
labels: ["Visualization", "Performance", "Stale"]
reactions: all 0
comments:
[
"@jbrockmendel Done in the latest commit, thanks!",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
]
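The core of the technique, sketched standalone with matplotlib rather than the pandas internals this PR touches: many columns become one `LineCollection` artist instead of thousands of `Line2D` artists:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

x = np.arange(500)
ys = np.random.default_rng(0).standard_normal((2000, 500)).cumsum(axis=1)

fig, ax = plt.subplots()
# one (N, 2) polyline per column, all drawn as a single artist
ax.add_collection(
    LineCollection([np.column_stack([x, y]) for y in ys], linewidths=0.5)
)
ax.autoscale()  # collections do not trigger autoscaling on their own
plt.show()
```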
---
id: 3197252119 | number: 61763 | state: closed | is_pull_request: true
title: BUG: StringDtype objects from pandas <2.3.0 cannot be reliably unpickled in 2.3.0.
created_at: 2025-07-02T21:35:45 | updated_at: 2025-07-07T07:41:23 | closed_at: 2025-07-07T07:41:23
html_url: https://github.com/pandas-dev/pandas/issues/61763
pull_request_url: null | pull_request_html_url: null
user_login: Liam3851 | comments_count: 1
body:
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
### Using pandas 2.2.3
import pandas as pd
pd.DataFrame([['a', 'b'], ['c', 'd']]).astype('string').to_pickle('G:/temp/test2.pkl')
```
```python
### Using pandas 2.3.0
import pandas as pd
df = pd.read_pickle('G:/temp/test2.pkl') # looks ok
df.dtypes # raises AttributeError: 'StringDtype' object has no attribute '_na_value'
df[0] + df[1] # also raises AttributeError
```
### Issue Description
The StringDtype code in 2.3 refers to an internal _na_value attribute that appears not to have existed prior to 2.3.0. Objects with StringDtype columns pickled in earlier versions, including 2.2.3, may initially appear to unpickle successfully. However, listing the dtypes, or implicitly checking them by doing an operation, raises an AttributeError.
### Expected Behavior
The documentation at read_pickle indicates backward compatibility to version 0.20.3, so a pickle from 2.2.3 should be readable and usable in 2.3.0.
A current workaround is something like this, to wrap the object in a freshly created 2.3.0-compatible dtype:
```
def unpickle_wrap(fn):
df = pd.read_pickle(fn)
for col, dtype in df.dtypes.items():
if pd.api.types.is_string_dtype(dtype):
df[col] = df[col].astype(object).astype('string')
return df
```
### Installed Versions
<details>
In [55]: pd.show_versions()
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.11.12
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.3.0
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : 9.3.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : 1.5.0
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.4.0
matplotlib : 3.10.3
numba : 0.61.2+0.g1e70d8ceb.dirty
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : 2025.5.1
scipy : 1.15.2
sqlalchemy : 2.0.41
tables : None
tabulate : 0.9.0
xarray : 2025.6.1
xlrd : None
xlsxwriter : 3.2.5
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
(Edit: fixed example to make copy-pastable, and confirmed on main)
labels: ["Bug", "Strings", "IO Pickle"]
reactions: all 0
comments:
[
"take"
]

---
id: 3196664652 | number: 61762 | state: closed | is_pull_request: true
title: Update __init__.py
created_at: 2025-07-02T17:28:32 | updated_at: 2025-07-02T21:20:11 | closed_at: 2025-07-02T21:20:11
html_url: https://github.com/pandas-dev/pandas/pull/61762
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61762
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61762
user_login: phanipaladugula | comments_count: 1
body:
Added a quick explanation of Extension Arrays to make them easier for newcomers to understand.
DOC: Wrap long comment lines to fix E501 error
labels: []
reactions: all 0
comments:
[
"Thanks for the PR, but I don't think this adds much to this file so closing.\r\n\r\nIf interested, you're welcome to tackle issues label `good first issue`"
]

---
id: 3196611258 | number: 61761 | state: closed | is_pull_request: true
title: Update __init__.py
created_at: 2025-07-02T17:09:50 | updated_at: 2025-07-02T17:17:06 | closed_at: 2025-07-02T17:17:06
html_url: https://github.com/pandas-dev/pandas/pull/61761
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61761
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61761
user_login: phanipaladugula | comments_count: 0
body:
Added a quick explanation of Extension Arrays to make them easier for new users to understand.
labels: []
reactions: all 0
comments: []
---
id: 3195611750 | number: 61760 | state: closed | is_pull_request: true
title: BUG: .describe() doesn't work for EAs #61707
created_at: 2025-07-02T11:41:36 | updated_at: 2025-07-20T05:48:46 | closed_at: 2025-07-20T05:48:33
html_url: https://github.com/pandas-dev/pandas/pull/61760
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61760
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61760
user_login: kernelism | comments_count: 0
body:
This PR fixes a bug where Series.describe() fails on certain `ExtensionArray` dtypes such as `pint[kg]`, due to attempting to cast the result to `Float64Dtype`. This is because some of the produced statistics are not castable to float, which raises errors like DimensionalityError.
We now avoid forcing a Float64Dtype return dtype when the EA's scalar values cannot be safely cast. Instead, if the EA produces outputs with mixed dtypes, the result is returned with `dtype=None`.
- [x] closes #61707
- [x] Adds a regression test.
- [x] pre-commit checks passed
- [x] Adds type annotations
- [x] Adds a whatsnew entry
labels: []
reactions: all 0
comments: []
---
id: 3194982752 | number: 61759 | state: closed | is_pull_request: true
title: chore: remove redundant words in comment
created_at: 2025-07-02T08:02:05 | updated_at: 2025-07-02T16:48:24 | closed_at: 2025-07-02T16:48:18
html_url: https://github.com/pandas-dev/pandas/pull/61759
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61759
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61759
user_login: ianlv | comments_count: 1
body:
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
remove redundant words in comment
labels: ["Docs"]
reactions: all 0
comments:
[
"Thanks @ianlv "
]

---
id: 3194484704 | number: 61758 | state: open | is_pull_request: true
title: BUG: user expected pd.isna to be False for NaNs with Float64Dtype
created_at: 2025-07-02T04:19:28 | updated_at: 2025-07-16T02:39:48 | closed_at: null
html_url: https://github.com/pandas-dev/pandas/issues/61758
pull_request_url: null | pull_request_html_url: null
user_login: abhmul | comments_count: 2
body:
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"a": [-1., 2., 3., 4., 5.],
"b": [1., 2., 3., 4., 5.],
}, dtype=pd.Float64Dtype()
)
df = np.sqrt(df)
# Returns False
print(df.isna().any().any())
# Returns True
print(pd.isna(df.loc[0, "a"]))
```
### Issue Description
Apply a NumPy operation that yields NaN for some value of the dataframe of type Float64Dtype. Then pandas null checking functions (isna, isnull, notna) will not detect the NaN value. However, it is detected if we index the NaN value.
### Expected Behavior
Both the above print statements should be True.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.13.5
python-bits : 64
OS : Linux
OS-release : 6.15.4-arch2-1
Version : #1 SMP PREEMPT_DYNAMIC Fri, 27 Jun 2025 16:35:07 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 2.3.0
numpy : 2.1.2
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : 9.3.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.6.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : 3.10.3
numba : 0.61.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : 2.0.41
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
labels: ["Bug", "PDEP missing values"]
reactions: plus1: 1, all others 0
comments:
[
"I confirmed on **pandas 2.3.0** + **NumPy 2.1.2** that after applying a NumPy ufunc the nullable Float64 dtype mask isn’t catching NaNs:\n\n```\ndf_plain = df.astype(float)\nprint(df_plain.isna().any().any()) # True\n```\n\nThis suggests ```Float64Dtype.isna()``` isn’t recognizing the NaN created by ```np.sqrt```.\n\nCan someone confirm this is a bug in the nullable array mask logic? Thanks.\n",
"> Can someone confirm this is a bug in the nullable array mask logic? Thanks.\n\nIt is not a bug but it is a design choice that frequently causes confusion (#60106, #59891, #56451, #53887). The original discussion for how to handle this is in #32265 and more recently in #61618."
]
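A sketch of the mask-vs-value distinction behind the report in #61758 above; per the last comment this is a long-standing design choice, not a defect in the mask logic:

```python
import numpy as np
import pandas as pd

s = pd.Series([-1.0, 4.0], dtype=pd.Float64Dtype())
r = np.sqrt(s)  # element 0 becomes NaN, but the NA mask is never set

print(r.isna())                    # element 0: False (isna reads the mask)
print(pd.isna(r[0]))               # True (the extracted scalar is a plain NaN)
print(r.astype("float64").isna())  # element 0: True once outside the masked dtype
```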
---
id: 3193689367 | number: 61757 | state: closed | is_pull_request: true
title: TST (string dtype): resolve skip in misc test_memory_usage
created_at: 2025-07-01T20:16:37 | updated_at: 2025-07-02T09:02:49 | closed_at: 2025-07-02T09:02:48
html_url: https://github.com/pandas-dev/pandas/pull/61757
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61757
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61757
user_login: jorisvandenbossche | comments_count: 0
body:
Addressing one of the remaining xfail/skips for the string dtype, see https://github.com/pandas-dev/pandas/pull/61727#issuecomment-3020456375 for context
labels: ["Strings"]
reactions: all 0
comments: []
---
id: 3193400503 | number: 61756 | state: closed | is_pull_request: true
title: DOC: Pass docstring validation for Index.infer_objects
created_at: 2025-07-01T18:12:29 | updated_at: 2025-07-01T19:14:47 | closed_at: 2025-07-01T19:14:44
html_url: https://github.com/pandas-dev/pandas/pull/61756
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61756
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61756
user_login: mroeschke | comments_count: 1
body:
Currently failing on main.
Missed in https://github.com/pandas-dev/pandas/pull/61736
labels: ["Docs", "Index"]
reactions: all 0
comments:
[
"Merging to get CI to green"
]

---
id: 3193365754 | number: 61755 | state: closed | is_pull_request: true
title: Revert "[2.3.x] DEPS: Drop Python 3.9 (#60792)"
created_at: 2025-07-01T18:00:36 | updated_at: 2025-07-03T15:55:55 | closed_at: 2025-07-03T07:21:26
html_url: https://github.com/pandas-dev/pandas/pull/61755
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61755
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61755
user_login: mroeschke | comments_count: 0
body:
This reverts commit 2e617d36af3592a371fe09a1aec8282f9db550da.
Re-enables 3.9 wheels and testing for the 2.3.x branch
xref https://github.com/pandas-dev/pandas/issues/61590
- [ ] closes #61579 (Replace xxxx with the GitHub issue number)
labels: ["Build", "CI"]
reactions: all 0
comments: []
3,193,186,405
| 61,754
|
[backport 2.3.x] TST/CI: temporary upper pin for scipy in downstream tests for compat with statsmodels (#61750)
|
closed
| 2025-07-01T16:56:26
| 2025-07-01T19:48:58
| 2025-07-01T19:48:53
|
https://github.com/pandas-dev/pandas/pull/61754
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61754
|
https://github.com/pandas-dev/pandas/pull/61754
|
jorisvandenbossche
| 0
|
Backport of https://github.com/pandas-dev/pandas/pull/61750
labels: []
reactions: all 0
comments: []
3,192,220,348
| 61,753
|
BUG: Segmentation fault when misusing `VariableWindowIndexer.get_window_bounds`
|
closed
| 2025-07-01T12:09:36
| 2025-07-02T16:46:47
| 2025-07-02T16:46:47
|
https://github.com/pandas-dev/pandas/issues/61753
| true
| null | null |
BergLucas
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
from pandas.core.indexers.objects import VariableWindowIndexer
variable_window_indexer = VariableWindowIndexer()
variable_window_indexer.get_window_bounds(1)
```
### Issue Description
Hi,
For a research paper, we carried out a large-scale benchmark of [Pynguin](https://www.pynguin.eu/), an Automatic Unit Test Generation Tool for Python, to test its new feature that can find Python interpreter crashes. In this benchmark, we found a potential bug in pandas, and we are making this issue to report it.
### Expected Behavior
In our opinion, pandas should not produce a segmentation fault when calling a public function. However, we don't know whether this function is part of pandas' public API so we just wanted to at least warn you that this behaviour exists, so that you can take the action that suits you best.
### Installed Versions
commit : 0ab10aa1417f19ecf265ff9383b1aa851b02736b
python : 3.10.16
python-bits : 64
OS : Linux
OS-release : 6.14.11-300.fc42.x86_64
Version : #1 SMP PREEMPT_DYNAMIC Tue Jun 10 16:24:16 UTC 2025
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2192.g0ab10aa141
numpy : 2.2.6
dateutil : 2.9.0.post0
pip : 23.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : None
pyiceberg : None
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pytz : 2025.2
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : N/A
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
labels: ["Bug", "Window", "Segfault", "Needs Triage"]
reactions: all 0
comments:
[
"@mroeschke i suspect this is a non-issue, but maybe an underscore is appropriate somewhere?",
"Yeah this is a private API (`.core`) that we call correctly internally. `get_window_bounds` is probably rightfully public since users have access to its subclass `BaseIndexer`.\n\nThanks for the issue but going to close since we call this without segfaulting internally"
]

---
id: 3191300670 | number: 61752 | state: closed | is_pull_request: true
title: [backport 2.3.x] CI: clean up wheel build workarounds now that Cython 3.1.0 is out (#61446)
created_at: 2025-07-01T08:27:22 | updated_at: 2025-07-04T08:51:29 | closed_at: 2025-07-03T15:56:45
html_url: https://github.com/pandas-dev/pandas/pull/61752
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61752
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61752
user_login: jorisvandenbossche | comments_count: 1
body:
Backport of https://github.com/pandas-dev/pandas/pull/61446
labels: ["Build"]
reactions: all 0
comments:
[
"Thanks @jorisvandenbossche "
]

---
id: 3191232389 | number: 61751 | state: closed | is_pull_request: true
title: [backport 2.3.x] DOC: move relevant whatsnew changes from 2.3.0 to 2.3.1 file (#61698)
created_at: 2025-07-01T08:16:13 | updated_at: 2025-07-01T12:08:14 | closed_at: 2025-07-01T12:08:10
html_url: https://github.com/pandas-dev/pandas/pull/61751
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61751
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61751
user_login: jorisvandenbossche | comments_count: 0
body:
Backport of https://github.com/pandas-dev/pandas/pull/61698
labels: []
reactions: all 0
comments: []
3,191,232,389
| 61,750
|
TST/CI: temporary upper pin for scipy in downstream tests for compat with statsmodels
|
closed
| 2025-07-01T07:59:53
| 2025-07-01T16:57:03
| 2025-07-01T16:41:55
|
https://github.com/pandas-dev/pandas/pull/61750
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61750
|
https://github.com/pandas-dev/pandas/pull/61750
|
jorisvandenbossche
| 3
|
See https://github.com/statsmodels/statsmodels/issues/9542 / https://github.com/statsmodels/statsmodels/issues/9584
labels: ["CI"]
reactions: all 0
comments:
[
"Thanks @jorisvandenbossche ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 7f783db6dcc4ef200643fee845e9565f0a97f37f\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61750: TST/CI: temporary upper pin for scipy in downstream tests for compat with statsmodels'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61750-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61750 on branch 2.3.x (TST/CI: temporary upper pin for scipy in downstream tests for compat with statsmodels)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Backport -> https://github.com/pandas-dev/pandas/pull/61750"
]

---
id: 3190159121 | number: 61749 | state: closed | is_pull_request: true
title: TST: fix decimal cast error message for pyarrow nightly tests
created_at: 2025-07-01T07:55:04 | updated_at: 2025-07-01T16:46:29 | closed_at: 2025-07-01T16:07:11
html_url: https://github.com/pandas-dev/pandas/pull/61749
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61749
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61749
user_login: jorisvandenbossche | comments_count: 2
body:
PyArrow seems to have updated the error message, this updates our assert to catch both "Decimal" and "Decimal128"
labels: ["CI"]
reactions: all 0
comments:
[
"thanks @jorisvandenbossche ",
"(we don't yet have this test on x, so the backport label was not necessary)"
]

---
id: 3189405433 | number: 61748 | state: open | is_pull_request: true
title: BUG: Fix assert_frame_equal with check_dtype=False for pd.NA dtype differences (GH#61473)
created_at: 2025-06-30T23:26:04 | updated_at: 2025-08-22T00:08:16 | closed_at: null
html_url: https://github.com/pandas-dev/pandas/pull/61748
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61748
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61748
user_login: gamzeozgul | comments_count: 2
body:
- [x] closes #61473
- [x] tests added / passed
- [x] Ensure all linting tests pass
- [x] whatsnew entry
## Problem
When comparing two DataFrames containing `pd.NA` values with `check_dtype=False`, `assert_frame_equal` fails when the DataFrames only differ in dtype (object vs Int32). This happens because `pd.NA` and `np.nan` are treated as different values even though they represent the same missing value.
## Solution
Modified `assert_frame_equal` in `pandas/_testing/asserters.py` to normalize `pd.NA` and `np.nan` values when `check_dtype=False` is specified. This ensures that DataFrames with equivalent missing values but different dtypes can be compared successfully.
## Changes Made
- Added normalization logic in `assert_frame_equal` function to handle `pd.NA` and `np.nan` equivalence when `check_dtype=False`
- Added comprehensive unit test in `pandas/tests/util/test_assert_frame_equal.py` to verify the fix
## Testing
- Added unit test `test_assert_frame_equal_pd_na_dtype_difference` that reproduces the original issue and verifies the fix
- Test passes successfully with the implemented solution
labels: ["Bug", "Testing", "Stale"]
reactions: all 0
comments:
[
"Hello,\r\n\r\nIt seems that the \"Downstream Compat\" CI job is failing with an `ImportError` related to `statsmodels` and `scipy`. This appears to be an issue with the CI environment's dependencies (e.g., version mismatch between `statsmodels` and `scipy`), and not directly related to the changes introduced in this PR.\r\n\r\nThe error message is:\r\n```\r\n_______________________________ test_statsmodels _______________________________\r\nE pytest.PytestDeprecationWarning: \r\nE Module 'statsmodels.formula.api' was found, but when imported by pytest it raised:\r\nE ImportError(\"cannot import name '_lazywhere' from 'scipy._lib._util' (/home/runner/micromamba/envs/test/lib/python3.11/site-packages/scipy/_lib/_util.py)\")\r\n```\r\n\r\nI believe this is an infrastructure/dependency issue on the CI side, rather than a bug in my proposed changes for GH#61473.\r\n\r\nCould you please confirm this and let me know if there's anything I can do to help resolve this CI issue, or if this is something the core `pandas` team needs to address? ",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
]
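A minimal sketch of the comparison this PR targets (GH#61473); the pre-fix failure is as described in the PR text, not re-verified here:

```python
import pandas as pd
from pandas.testing import assert_frame_equal

left = pd.DataFrame({"a": pd.array([1, pd.NA], dtype="Int32")})
right = pd.DataFrame({"a": pd.Series([1, pd.NA], dtype="object")})

# Same values, dtypes differ (Int32 vs object). Reported to raise before the
# fix even with dtype checking off, because the missing values end up being
# compared as pd.NA vs np.nan internally:
assert_frame_equal(left, right, check_dtype=False)
```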
3,189,405,433
| 61,747
|
PERF: Arrow dtypes are much slower than Numpy for DataFrame.apply
|
open
| 2025-06-30T18:04:33
| 2025-07-13T13:35:37
| null |
https://github.com/pandas-dev/pandas/issues/61747
| true
| null | null |
ehsantn
| 8
|
The same code with `DataFrame.apply` is >4x slower when the data is in Arrow dtypes versus Numpy.
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this issue exists on the main branch of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
import pyarrow as pa
import time
NUM_ROWS = 500_000
df = pd.DataFrame({"A": np.arange(NUM_ROWS) % 30, "B": np.arange(NUM_ROWS)+1.0})
print(df.dtypes)
df2 = df.astype({"A": pd.ArrowDtype(pa.int64()), "B": pd.ArrowDtype(pa.float64())})
print(df2.dtypes)
t0 = time.time()
df.apply(lambda r: 0 if r.A == 0 else (r.B // r.A), axis=1)
print(f"Non-Arrow time: {time.time() - t0:.2f} seconds")
t0 = time.time()
df2.apply(lambda r: 0 if r.A == 0 else (r.B // r.A), axis=1)
print(f"Arrow time: {time.time() - t0:.2f} seconds")
```
Output with Pandas 2.3 on a local M1 Mac (tested on main branch too).
```
A int64
B float64
dtype: object
A int64[pyarrow]
B double[pyarrow]
dtype: object
Non-Arrow time: 3.21 seconds
Arrow time: 16.66 seconds
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.13.5
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.3.0
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : 3.1.2
sphinx : None
IPython : 9.3.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : 2025.5.1
jinja2 : None
lxml.etree : None
matplotlib : 3.10.3
numba : 0.61.2
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : 1.4.6
pyarrow : 19.0.0
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : 2025.5.1
scipy : 1.15.2
sqlalchemy : 2.0.41
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
xlsxwriter : 3.2.5
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
### Prior Performance
_No response_
labels: ["Performance", "Apply", "Arrow"]
reactions: all 0
comments:
[
"Any hotspots show up in profiling?",
"Profiler output is a bit hard to read as usual. Here are some snakeviz screenshots. The [fast_xs](https://github.com/pandas-dev/pandas/blob/dc1e367598a6b0b2c0ee700b3805f72aaccbda86/pandas/core/internals/managers.py#L1095) function has different code paths for `ExtensionDtype` that look suspicious to me. `find_common_type` and `_from_sequence` stand out looks like.\n\n\n\n\n",
"That's a tough one. In core.apply.FrameColumnApply.series_generator we have a fastpath that only works with numpy dtypes.\n\nWe might be able to get some mileage for EA Dtypes by changing \n\n```\n for i in range(len(obj)):\n yield obj._ixs(i, axis=0)\n```\n\nto something like\n\n```\ndtype = ser.dtype\nfor i in range(len(obj)):\n new_vals = the_part_of_fast_xs_after_interleaved_dtype_is_called()\n new_arr = type(ser.array)._from_sequence(new_vals, dtype=dtype)\n yield obj._constructor(new_arr, name=name)\n```\n\nThat would save 20% of the runtime by avoiding the interleaved_dtype calls.",
"This difference makes sense but what's confusing is that the performance issue goes away if one of the column is changed to string:\n```python\nimport pandas as pd\nimport numpy as np\nimport pyarrow as pa\nimport time\n\nNUM_ROWS = 500_000\ndf = pd.DataFrame({\"A\": np.arange(NUM_ROWS) % 30, \"B\": np.arange(NUM_ROWS).astype(str)})\nprint(df.dtypes)\ndf2 = df.astype({\"A\": pd.ArrowDtype(pa.int64()), \"B\": pd.ArrowDtype(pa.large_string())})\nprint(df2.dtypes)\n\nt0 = time.time()\ndf.apply(lambda r: 0 if r.A == 0 else (int(r.B) // r.A), axis=1)\nprint(f\"Non-Arrow time: {time.time() - t0:.2f} seconds\")\n\nt0 = time.time()\ndf2.apply(lambda r: 0 if r.A == 0 else (int(r.B) // r.A), axis=1)\nprint(f\"Arrow time: {time.time() - t0:.2f} seconds\")\n```\n```\nA int64\nB object\ndtype: object\nA int64[pyarrow]\nB large_string[pyarrow]\ndtype: object\nNon-Arrow time: 3.35 seconds\nArrow time: 3.21 seconds\n```",
"id have to look at the profiling output but my prior puts a lot of weight on \"object dtype is just that bad\"",
"But in this case object dtype is basically the same as Numpy numeric dtype in the no-arrow cases (`3.35` vs `3.21`, see first numbers in the two outputs). The difference is that `pa.large_string()` is a lot better than `pa.float64()` in the Arrow cases.",
"In this case df.iloc[0] has an object dtype even when you have pyarrow dtypes, so it the iteration in series_generator goes through the numpy fastpath\n\n(Looking at profiling results, I think we can trim a bunch by changing is_object_dtype check in _can_hold_identifiers_and_holds_name to just `self.dtype == object`)\n",
"Ok, makes sense. Thanks for the explanation."
]
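Not a fix, just a common mitigation sketch for the pattern in #61747: route row-wise `apply` through numpy-backed dtypes (or vectorize) to sidestep the per-row `ExtensionDtype` interleaving the profiles point at:

```python
import pandas as pd
import pyarrow as pa

df2 = pd.DataFrame(
    {"A": pd.array([0, 2, 3], dtype=pd.ArrowDtype(pa.int64())),
     "B": pd.array([1.0, 2.0, 3.0], dtype=pd.ArrowDtype(pa.float64()))}
)

# Same lambda as the report, but applied to a numpy-backed view of the data:
out = df2.astype({"A": "int64", "B": "float64"}).apply(
    lambda r: 0 if r.A == 0 else (r.B // r.A), axis=1
)
print(out.tolist())  # [0.0, 1.0, 1.0]
```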
3,189,351,137
| 61,746
|
CLN: references/tests for item_cache
|
closed
| 2025-06-30T17:44:51
| 2025-07-07T16:29:17
| 2025-07-07T16:29:17
|
https://github.com/pandas-dev/pandas/issues/61746
| true
| null | null |
jbrockmendel
| 1
|
test_to_dict_of_blocks_item_cache is about _item_cache invalidation, but IIRC we got rid of that cache a while back. Grepping for "item_cache", I see a bunch of comments that are no longer accurate and tests that are no longer testing anything. These can be updated/removed.
labels: ["Clean"]
reactions: all 0
comments:
[
"take"
]

---
id: 3186831817 | number: 61745 | state: closed | is_pull_request: true
title: Backport PR #61744 on branch 2.3.x (CI: if no docstring, create error GL08 and don't validate - fix for numpydoc 1.9)
created_at: 2025-06-30T17:41:40 | updated_at: 2025-07-01T08:09:41 | closed_at: 2025-07-01T08:09:41
html_url: https://github.com/pandas-dev/pandas/pull/61745
pull_request_url: https://api.github.com/repos/pandas-dev/pandas/pulls/61745
pull_request_html_url: https://github.com/pandas-dev/pandas/pull/61745
user_login: Dr-Irv | comments_count: 1
body:
Backporting the change related to the `numpydoc` upgrade.
Backport of #61744
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks!"
] |
3,186,831,817
| 61,744
|
CI: if no docstring, create error GL08 and don't validate - fix for numpydoc 1.9
|
closed
| 2025-06-30T01:34:01
| 2025-07-01T08:08:39
| 2025-06-30T16:52:17
|
https://github.com/pandas-dev/pandas/pull/61744
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61744
|
https://github.com/pandas-dev/pandas/pull/61744
|
Dr-Irv
| 5
|
`validate_docstrings` was failing with `numpydoc` 1.9 on Cython methods that have no docstrings. When there is no docstring, there is nothing to validate.
Partially addresses the CI issue mentioned in #61740
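A minimal sketch of the idea (hypothetical names, not the actual `validate_docstrings.py` diff):

```python
def run_full_checks(doc: str) -> list:
    # Stub standing in for the real numpydoc validation pipeline.
    return []


def validate_object(obj) -> dict:
    doc = getattr(obj, "__doc__", None)
    if not doc or not doc.strip():
        # No docstring at all: report GL08 and skip the rest of validation,
        # so numpydoc never sees the descriptor objects that trip it up.
        return {"errors": [("GL08", "The object does not have a docstring")]}
    return {"errors": run_full_checks(doc)}
```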
|
[
"CI",
"Code Style"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"is this an alternative to #61725?",
"> is this an alternative to #61725?\r\n\r\nYes, and it allows us to use `numpydoc` 1.9",
"Thanks @Dr-Irv ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 30a2e7fb239e63beab44c7c459517eb4d2908a0d\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am \"Backport PR #61744: CI: if no docstring, create error GL08 and don't validate - fix for numpydoc 1.9\"\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61744-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61744 on branch 2.3.x (CI: if no docstring, create error GL08 and don't validate - fix for numpydoc 1.9)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Backport -> https://github.com/pandas-dev/pandas/pull/61745"
] |
3,186,530,460
| 61,743
|
BUG: Assigning boolean series with boolean indexer
|
closed
| 2025-06-29T20:15:55
| 2025-07-01T19:04:04
| 2025-07-01T17:47:12
|
https://github.com/pandas-dev/pandas/pull/61743
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61743
|
https://github.com/pandas-dev/pandas/pull/61743
|
yuanx749
| 1
|
Supersedes #60127
- [x] closes #57338 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Bug",
"Indexing",
"PDEP6-related"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @yuanx749 "
] |
3,186,492,364
| 61,742
|
BUG: fillna with DataFrame input should preserve dtype when possible
|
open
| 2025-06-29T19:26:42
| 2025-07-31T00:09:06
| null |
https://github.com/pandas-dev/pandas/pull/61742
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61742
|
https://github.com/pandas-dev/pandas/pull/61742
|
iabhi4
| 2
|
When filling a DataFrame with another DataFrame using `fillna`, columns with matching dtypes were being unnecessarily cast to `object` due to the use of `np.where`.
This PR updates the logic to use pandas' `Series.where`, which is dtype-safe and respects extension and datetime types.
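A minimal illustration of the dtype difference (not the PR's actual diff):

```python
import numpy as np
import pandas as pd

ser = pd.Series([1, None, 3], dtype="Int64")
fill = pd.Series([9, 9, 9], dtype="Int64")

# np.where materializes numpy arrays, so the extension dtype is lost:
pd.Series(np.where(ser.isna(), fill, ser)).dtype  # object

# Series.where stays inside pandas and preserves Int64:
ser.where(ser.notna(), fill).dtype                # Int64
```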
- [x] Closes #61568
- [x] Adds a regression test
- [x] Adds a whatsnew entry
- [x] pre-commit checks passed
|
[
"Bug",
"Dtype Conversions",
"Stale"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Since we now operate column-wise and use `Series.where `instead of `np.where`, so it keeps dtype safety as suggested by @jbrockmendel\r\n\r\nThis also preserves extension dtypes like `string[pyarrow]`, which used to get cast to object. Because of that, `test_fillna_dataframe_preserves_dtypes_mixed_columns` is failing since it expects the downgraded dtype.\r\n\r\nLet me know if this behavior change is fine, happy to update the test or tweak the logic based on what’s preferred!",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |
3,186,475,467
| 61,741
|
TST: Test coverage for Excel Formatter.py
|
closed
| 2025-06-29T19:07:46
| 2025-07-01T02:34:34
| 2025-07-01T02:34:34
|
https://github.com/pandas-dev/pandas/pull/61741
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61741
|
https://github.com/pandas-dev/pandas/pull/61741
|
lsgordon
| 0
|
- ✅ [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- ✅ All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Added tests for the ExcelFormatter class, bringing the file up to 93% test coverage in total. The only exception is the `.write` function (otherwise writing to Excel in the first place would not be tested), which is already largely covered but should be tested further.
- Leo
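For context, the kind of round-trip these tests exercise; a sketch (`ExcelFormatter` and `get_formatted_cells` are pandas-internal, not public API):

```python
import pandas as pd
from pandas.io.formats.excel import ExcelFormatter

# Iterate the formatted cells without ever touching a real .xlsx file.
df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
cells = list(ExcelFormatter(df).get_formatted_cells())
print([(cell.row, cell.col, cell.val) for cell in cells])
```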
|
[
"Testing",
"IO Excel"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,186,432,260
| 61,740
|
CI Failures due to new scipy and new numpydoc
|
closed
| 2025-06-29T18:17:31
| 2025-08-13T18:09:34
| 2025-08-13T18:09:34
|
https://github.com/pandas-dev/pandas/issues/61740
| true
| null | null |
Dr-Irv
| 3
|
The job `Downstream Compat` is failing in CI because `statsmodels` 0.14.4 is incompatible with `scipy` 1.16.0. The latter was released on June 22, so that's why we have recent failures.
Should we lock down the `scipy` version to 1.15.3 in `ci/deps/actions-311-downstream_compat.yaml` ?
The job `Docstring validation, typing, and other manual pre-commit hooks` is failing because `numpydoc` 1.9 was released on June 24. Should we pin `numpydoc` to 1.8?
|
[
"CI"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Regarding `statsmodels` and `scipy` - https://github.com/statsmodels/statsmodels/issues/9584\n",
"Opened https://github.com/pandas-dev/pandas/pull/61750 as a temporary fix for the statsmodels/scipy issue",
"closed via #61933 "
] |
3,186,142,198
| 61,739
|
DOC: Fix grammar in AUTHORS.md
|
closed
| 2025-06-29T12:56:53
| 2025-06-30T17:18:42
| 2025-06-30T17:18:36
|
https://github.com/pandas-dev/pandas/pull/61739
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61739
|
https://github.com/pandas-dev/pandas/pull/61739
|
sangampaudel530
| 8
|
This pull request fixes minor grammatical errors and improves clarity in the AUTHORS.md file to enhance the overall documentation quality. Thank you for considering this contribution!
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hello, Happy to Contribute first time in this repository ! ",
"Hi all, just a quick note , the current failing checks are due to unrelated dependency and docstring validation issues.\r\nThis PR only contains grammar fixes in author.md and does not affect the code or dependencies.\r\nI kindly suggest these external issues be addressed separately so this documentation improvement can be merged smoothly.\r\nThank you very much for your time and understanding!\r\n\r\n",
"Can you revert the whitespace changes so it is easier to focus on the grammar",
"Thank you for the feedback!\r\nI’ll revert the whitespace changes right away so the grammar fixes are clearer.",
"I've reverted the unintended whitespace changes and applied only the requested grammatical fixes to AUTHORS.md. Please let me know if any further changes are needed. Thank you for your guidance!",
"I'm now seeing nothing _but_ whitespace changes.",
"Sorry for inconvenience, I think I have reverted the whitespace and made all the required grammar changes.",
"Thanks @sangampaudel530 "
] |
3,186,136,473
| 61,738
|
ENH: Added features of issue 61691
|
closed
| 2025-06-29T12:48:13
| 2025-06-30T17:21:32
| 2025-06-30T17:19:19
|
https://github.com/pandas-dev/pandas/pull/61738
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61738
|
https://github.com/pandas-dev/pandas/pull/61738
|
ishaan1234
| 4
|
- [x] closes #61691
- [ ] Tests added
- [ ] All code checks passed
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@ishaan1234 generally its best to wait to implement a requested feature until there is a consensus in the issue that it is something we want to do. Requests for new top-level APIs usually get politely rejected.",
"Thanks @ishaan1234 but as mentioned, this feature needs discussion before a pull request will be considered so closing this PR for now",
"> @ishaan1234 generally its best to wait to implement a requested feature until there is a consensus in the issue that it is something we want to do. Requests for new top-level APIs usually get politely rejected.\r\n\r\nAlright! Thanks!",
"> Thanks @ishaan1234 but as mentioned, this feature needs discussion before a pull request will be considered so closing this PR for now\r\n\r\nOkay"
] |
3,185,975,178
| 61,737
|
ENH: Parallelization support for pairwise correlation
|
closed
| 2025-06-29T09:44:11
| 2025-07-21T17:10:01
| 2025-07-21T17:10:00
|
https://github.com/pandas-dev/pandas/pull/61737
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61737
|
https://github.com/pandas-dev/pandas/pull/61737
|
gangula-karthik
| 3
|
- [X] closes #40956
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Used cython.parallel to parallelize the nancorr function.
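For illustration, a pure-Python-mode Cython sketch of the pattern (not the actual `nancorr` diff; interpreted, `prange` degrades to a serial `range`, and the loop only runs in parallel when compiled with OpenMP):

```python
from cython.parallel import prange

def pairwise_dots(mat):
    # Toy stand-in for a pairwise kernel: dot product of every row pair.
    n = len(mat)
    out = [[0.0] * n for _ in range(n)]
    for i in prange(n, nogil=False):  # parallel outer loop when compiled
        for j in range(i, n):
            s = 0.0
            for a, b in zip(mat[i], mat[j]):
                s += a * b
            out[i][j] = out[j][i] = s
    return out
```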
|
[
"Enhancement",
"Multithreading",
"cov/corr"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Looks like webAssembly/emscripten environments don't have OpenMP support. To get around this Im thinking of modifying the menson.build and setup.py files to conditionally include OpenMP compiler and linker flags only when not building for WebAssembly/Emscripten. (WIP)",
"I'm positive on adding parallelization as is being done here, but negative without the proper framework for use across pandas. See https://github.com/pandas-dev/pandas/issues/43313.",
"Thanks for the PR, but as mentioned the project needs to coordinate & decide a common pattern for parallelism before implementing it in any specific method yet, so closing as that discussion needs resolution first"
] |
3,185,740,721
| 61,736
|
DOC: Add missing Index.infer_objects link to API reference
|
closed
| 2025-06-29T04:30:29
| 2025-06-30T17:20:16
| 2025-06-30T17:20:11
|
https://github.com/pandas-dev/pandas/pull/61736
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61736
|
https://github.com/pandas-dev/pandas/pull/61736
|
PreethamYerragudi
| 2
|
- [x] closes #61733
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - NA
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions - NA.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"pre-commit.ci autofix",
"Thanks @PreethamYerragudi "
] |
3,185,706,722
| 61,735
|
adding pandas.api.typing.aliases and docs
|
open
| 2025-06-29T03:38:17
| 2025-08-17T21:22:45
| null |
https://github.com/pandas-dev/pandas/pull/61735
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61735
|
https://github.com/pandas-dev/pandas/pull/61735
|
Dr-Irv
| 24
|
- [x] closes #55231
- [x] Tests added and passed: `pandas/tests/test_api.py:TestApi.test_api_typing_aliases()`
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This is my first proposal for adding the typing aliases that are "public" so that people do not import from `pandas._typing`.
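Hypothetical usage once this lands, with the module path proposed in this PR (`AggFuncType` is one of the existing `pandas._typing` aliases discussed below):

```python
import pandas as pd
from pandas.api.typing.aliases import AggFuncType  # proposed public location

def summarize(df: pd.DataFrame, how: AggFuncType):
    # `how` may be a callable, a string like "mean", a list, or a dict of these.
    return df.agg(how)
```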
|
[
"Typing"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"> If we are to make these public, what is the process of making changes to them?\r\n\r\nMy suggestion would be that if someone adds an alias to `pandas._typing.py` that is used as an argument or return type of a documented pandas method, then they should update the `pandas/api/typing/aliases.py` file and `doc/source/reference/aliases.rst` . Should I add something to the contributors guide about that?",
"@Dr-Irv - my question is about how do we go about changing the definition of aliases that we have already made public, not about adding new aliases.",
"> @Dr-Irv - my question is about how do we go about changing the definition of aliases that we have already made public, not about adding new aliases.\r\n\r\nWe just edit `pandas._typing.py` and we don't have to make changes elsewhere. Am I still misunderstanding your question?\r\n\r\n",
"> We just edit `pandas._typing.py` and we don't have to make changes elsewhere. Am I still misunderstanding your question?\r\n\r\nAnd break user code without warning? Can we introduce such breakages in minor or patch releases? While most breakages I would expect to be of a type-checking nature and therefore an annoyance, type-hints can be enforced in runtime and changes in this regard can introduce runtime breakages as well.",
"> > We just edit `pandas._typing.py` and we don't have to make changes elsewhere. Am I still misunderstanding your question?\r\n> \r\n> And break user code without warning? Can we introduce such breakages in minor or patch releases? While most breakages I would expect to be of a type-checking nature and therefore an annoyance, type-hints can be enforced in runtime and changes in this regard can introduce runtime breakages as well.\r\n\r\nI am pretty sure we can change the definition of an alias without breaking user code, unless people do introspection on those aliases, which is not a supported usage of aliases anyway. For example, let's say we implement a new sorting algorithm and change `SortKind` to include the new sorting method, user code won't break.\r\n\r\nIf we deleted or renamed an alias, then user code could potentially break. But at least my observation has been (by getting alerts to when anyone makes PRs that change `pandas._typing.py`) that we don't make such changes to `pandas._typing.py` (which would then propagate to `pandas.api.typing.aliases`).\r\n\r\nThe renaming issue probably exists for everything in `pandas.api.typing` - have we committed to those names as well?\r\n",
"> For example, let's say we implement a new sorting algorithm... user code won't break.\r\n\r\nOr remove or rename an existing sorting algorithm?\r\n\r\n> unless people do introspection on those aliases, which is not a supported usage of aliases anyway\r\n\r\nI think you're saying we don't support the enforcement of pandas type-aliases at runtime (e.g. use with Pydantic), is that right? Is this documented?\r\n\r\n> But at least my observation has been... that we don't [delete or rename type aliases]\r\n\r\nThat's fine, but I'm -1 here until we have a plan that is documented about how we would do so if such a case were to come up. I'm very flexible on what that plan could be, but there needs to be a plan.\r\n\r\n> The renaming issue probably exists for everything in `pandas.api.typing` - have we committed to those names as well?\r\n\r\nThese are public classes and need to go through the usual deprecation cycle if we were to remove or rename.",
"> > For example, let's say we implement a new sorting algorithm... user code won't break.\r\n> \r\n> Or remove or rename an existing sorting algorithm?\r\n\r\nSo if we were to change the runtime allowable string for a sorting algorithm, e.g., `\"quicksort\"` becomes `\"Quicksort\"` or we were to remove `\"heapsort\"` from `SortKind`, and someone was using either `\"quicksort\"` or `\"heapsort\"` in their code, the code would fail at runtime. But that is independent of the alias changing its definition. In fact, if we updated the alias to do the renaming and/or removal, the type checker would pick up the change. My point here is that if we change the definition of the alias, if a user is not using the alias, their runtime code would break. If they were using the alias, which presumably would be for type checking, the type checker would pick it up for them.\r\n> \r\n> > unless people do introspection on those aliases, which is not a supported usage of aliases anyway\r\n> \r\n> I think you're saying we don't support the enforcement of pandas type-aliases at runtime (e.g. use with Pydantic), is that right? Is this documented?\r\n\r\nThe code is inconsistent. Sometimes we check that the arguments are of the right possible values, sometimes we don't. But it is not related to the aliases themselves. My sense is that we shouldn't document this at all. We say that the aliases are for type checking.\r\n\r\n> \r\n> > But at least my observation has been... that we don't [delete or rename type aliases]\r\n> \r\n> That's fine, but I'm -1 here until we have a plan that is documented about how we would do so if such a case were to come up. I'm very flexible on what that plan could be, but there needs to be a plan.\r\n\r\nI think we have to treat them like we do other code changes. Not sure where to document that.\r\n> \r\n> > The renaming issue probably exists for everything in `pandas.api.typing` - have we committed to those names as well?\r\n> \r\n> These are public classes and need to go through the usual deprecation cycle if we were to remove or rename.\r\n\r\nSo we can do that if we decide to rename or delete an alias, right?\r\n\r\n",
"Also worth mentioning that @simonjayhawkins suggested making this \"experimental\" in https://github.com/pandas-dev/pandas/issues/55231#issuecomment-2802276493 although I'm not sure that's the right word here. I think the warning you suggested cover this, and I have added that in the most recent commit.",
"> I think we have to treat [changes to type aliases] like we do other code changes. \r\n\r\nI do not think this is possible. To my knowledge we have no process to warn users of the upcoming change to a type alias. This is unlike other parts of the pandas code where we can emit deprecation warnings, put behaviors behind flags, and the like. Happy to be wrong here; to make this explicit could you detail how we'd go about adding or removing a case to `ArrayLike`?\r\n\r\n> My sense is that we shouldn't document this at all. We say that the aliases are for type checking.\r\n\r\nA large part of the community is also enforcing type-hints at runtime, e.g. via Pydantic. It seems to me if we are going to make these public, we should not handcuff users by disallowing this kind of usage.\r\n",
"> > I think we have to treat [changes to type aliases] like we do other code changes.\r\n> \r\n> I do not think this is possible. To my knowledge we have no process to warn users of the upcoming change to a type alias. This is unlike other parts of the pandas code where we can emit deprecation warnings, put behaviors behind flags, and the like. Happy to be wrong here; to make this explicit could you detail how we'd go about adding or removing a case to `ArrayLike`?\r\n\r\nI don't think we have to notify in this case. `TypeAlias` is only used for type checking. There is nothing about the definition that affects runtime behavior.\r\n\r\n> \r\n> A large part of the community is also enforcing type-hints at runtime, e.g. via Pydantic. It seems to me if we are going to make these public, we should not handcuff users by disallowing this kind of usage.\r\n\r\nYes, but I don't think you can enforce `TypeAlias` type-hints at runtime. You can enforce it on classes and basic python types, but not aliases.\r\n\r\n\r\n",
"> Yes, but I don't think you can enforce `TypeAlias` type-hints at runtime. You can enforce it on classes and basic python types, but not aliases.\r\n\r\nFor example - you can't call `isinstance()` on a `TypeAlias`:\r\n```python\r\n>>> from pandas._typing import ArrayLike\r\n>>> ArrayLike\r\ntyping.Union[ForwardRef('ExtensionArray'), numpy.ndarray]\r\n>>> import numpy as np\r\n>>> arr=np.array([1,2,3])\r\n>>> isinstance(arr, np.ndarray)\r\nTrue\r\n>>> isinstance(arr, ArrayLike)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Condadirs\\envs\\pandasstubs\\lib\\typing.py\", line 1260, in __instancecheck__\r\n return self.__subclasscheck__(type(obj))\r\n File \"C:\\Condadirs\\envs\\pandasstubs\\lib\\typing.py\", line 1264, in __subclasscheck__\r\n if issubclass(cls, arg):\r\nTypeError: issubclass() arg 2 must be a class, a tuple of classes, or a union\r\n```\r\n\r\nSo these only have value in type declarations.\r\n",
"```python\r\nfrom pydantic_settings import BaseSettings\r\nfrom pandas._typing import ArrayLike\r\n\r\nclass Foo(BaseSettings):\r\n x: ArrayLike\r\n\r\nFoo(x=np.ndarray([1, 2])) # Succeeds\r\nFoo(x=1) # ValidationError\r\n```",
"> ```python\r\n> from pydantic_settings import BaseSettings\r\n> from pandas._typing import ArrayLike\r\n> \r\n> class Foo(BaseSettings):\r\n> x: ArrayLike\r\n> \r\n> Foo(x=np.ndarray([1, 2])) # Succeeds\r\n> Foo(x=1) # ValidationError\r\n> ```\r\n\r\nI’m without laptop for 2 weeks and on a plane about to take off but I’m pretty sure the type checkers would also flag this as an error. \r\n\r\nI wouldn’t expect people to use the aliases without type checking turned on. So the error above would be caught before runtime, I.e. by the type checkers. So if we assume people importing an alias would type check their code before executing it, then we should be fine. \r\n\r\nI’m fine to put in the docs something that explains that if you think that helps. \r\n\r\n",
"This came up on today's dev call, where the closest I came to an opinion was \"I will offer moral support to both Irv and Richard\".\r\n\r\nThe idea came up of applying special backwards-compatibility rules to this file to the effect of \"Warning: may change without warning\" which I think is reasonable given the difficulty of doing deprecations here.\r\n\r\nAlso AFAICT most of these are lists of string literals which I'm just not going to lose sleep over libraries not having aliases for. That said, I'm happy with my default of \"defer to Irv on anything stubs-adjacent\".\r\n",
"@jorenham would be interested to have your thoughts on our approach of exposing typing aliases and Numpy's approach too",
"> @jorenham would be interested to have your thoughts on our approach of exposing typing aliases and Numpy's approach too\r\n\r\nThanks for the ping :)\r\n\r\n---\r\n\r\nI see that there are *a lot* of type aliases. What will happen if at some later point you want to remove one of them? As far as I know, there's no good way to have them throw a warning at runtime when they're used, and on the static side there's also nothing like `@deprecated` that can be used for it. It's way easier to add type-aliases than to remove them. So my advice here would be is to limit the public types to the most commonly used ones that are battle-tested (and therefore likely to work as intended).\r\n\r\n---\r\n\r\nThe first type I took a closer look at, `AggFuncType`, is one such example of a type that might not work as intended. This is how it is [defined](https://github.com/pandas-dev/pandas/blob/faf3bbb1d7831f7db8fc72b36f3e83e7179bb3f9/pandas/_typing.py#L239-L243):\r\n\r\n```py\r\nAggFuncTypeBase: TypeAlias = Callable | str\r\nAggFuncTypeDict: TypeAlias = MutableMapping[\r\n Hashable, AggFuncTypeBase | list[AggFuncTypeBase]\r\n]\r\nAggFuncType: TypeAlias = AggFuncTypeBase | list[AggFuncTypeBase] | AggFuncTypeDict\r\n```\r\n\r\nFirst thing to note is that `Callable` is missing its required type arguments. Pyright, for example, will consequently fill in the missing type args as `Unknown`. Because of this, users that have pyright configured in strict mode will see a pyright error when they try to use `AggFuncType`.\r\nThe obvious way to avoid this category of problems is by (also) configuring your static type-checkers to run in strict mode, as can be seen on [mypy-play](https://mypy-play.net/?mypy=latest&python=3.13&flags=strict&gist=79bd55e34b7da34b356ca869578601a7) and [pyright-play](https://pyright-play.net/?strict=true&code=GYJw9gtgBAxmA28CmMAuBLMA7AzgOgEMAjGKdCABzBFSgGEDFjkAoUSKVATwvSwHMylarQAqPJAEF46AjhYsAYgFcsMAFxRxFKTLlQAvPUbxmSFkA).\r\n\r\nThe `AggFuncTypeDict` alias uses the `list` and `MutableMapping` types. Both have invariant type parameters. That means that `list[AggFuncTypeBase]`, for example, will **only** accept things whose type is exactly `list[AggFuncTypeBase]`, i.e. `list[Callable | str]`. So it will reject `list[str]`, and it will reject `list[Callable]`.\r\n`MutableMapping` is also invariant in both its key- and value-type parameters. So `dict[str, Any]` will be rejected, because `str` is not equivalent to `Hashable`.\r\n\r\nSince this was the first type-alias I looked at, I'm assuming that there are more types like this might not work as intended.\r\n\r\n---\r\n\r\nIf I were in your shoes, I'd write a whole bunch of *type-tests* to verify that these types accept what you want them to accept, and that they reject what you want them to reject. For the types that you use a lot already (i.e. the battle tested ones), there's a smaller chance that they're not working as intended. So when it comes to making them public, and don't feel like writing type-tests, that's the ones I'd start with those.\r\n\r\nOh and in case you're wondering what I mean with those \"type-tests\", it's probably easiest to just look at some examples of those, e.g. in [`scipy-stubs`](https://github.com/scipy/scipy-stubs/tree/master/tests) or in [`numtype`](https://github.com/numpy/numtype/tree/main/src/numpy-stubs/%40test) (a thorough rework of numpy's typing stubs with a focus on correctness).",
"> If I were in your shoes, I'd write a whole bunch of _type-tests_ to verify that these types accept what you want them to accept, and that they reject what you want them to reject. For the types that you use a lot already (i.e. the battle tested ones), there's a smaller chance that they're not working as intended. So when it comes to making them public, and don't feel like writing type-tests, that's the ones I'd start with those.\r\n\r\nWe do typing tests in `pandas-stubs`. We don't support strict type checking yet with `pyright` because the stubs go beyond what is in `pandas`. E.g., in the stubs, you can have `Series[int]` but also `Series[Unknown]` and `pyright` strict doesn't like the latter. \r\n\r\nThe goal of this PR was to expose some of the internal types used in the stubs (currently in `pandas/_typing.py`) into a public module.\r\n\r\nSo the question we really had for you @jorenham is not about the aliases themselves and how they are defined, but whether we should worry or not about deleting (or changing the definition) of the aliases in the future. There's not a way we can deprecate an alias in the context of type checking. Are you doing anything special in `numpy` to worry about how the aliases might be deleted or changed in the future?\r\n",
"@jorenham \r\n\r\n> As far as I know, there's no good way to have them throw a warning at runtime when they're used\r\n\r\nCan deprecate with a module level `__getattr__`. I don't like doing it, but it's possible, and seems like an okay solution in this case.",
"> E.g., in the stubs, you can have `Series[int]` but also `Series[Unknown]` and `pyright` strict doesn't like the latter.\r\n\r\nPEP 696 type parameter defaults could help with that :)\r\n\r\n> The goal of this PR was to expose some of the internal types used in the stubs (currently in `pandas/_typing.py`) into a public module.\r\n\r\nYea I get that, and I think it's a good idea, and that many users will be very happy about it. However, if those type aliases are not working as intended, then it might cause more problems than it solves. \r\n\r\nIt might help a bit if you explicitly state that pandas does not support strict mode. But that still leaves the issues with invariance, which are unrelated to type-checker configuration.",
"> There's not a way we can deprecate an alias in the context of type checking. Are you doing anything special in `numpy` to worry about how the aliases might be deleted or changed in the future?\r\n\r\nThat's indeed a very tricky thing. Especially if you consider that no type-checker would understand statements like `if pd.__version__ < ...`. For libraries that support multiple pandas versions (e.g. because they follow [SPEC 0](https://scientific-python.org/specs/spec-0000/)), then they'd be in trouble if you change or rename a type alias. \r\n\r\nIn NumPy we recently deprecated `numpy.typing.NBitBase`, but I don't expect that we'll be able to remove that for a couple of years. FWIW; I noticed that even tiny backwards-incompatible typing changes can leads to a lot of frustrated users. I'm guessing that's because no one likes it if CI breaks after you update one of your libraries, even if the motivation behind it makes a lot of sense.",
"> Can deprecate with a module level `__getattr__`\r\n\r\nDiscussed on today's dev call, sounded like this might not work bc type checkers don't actually execute imports.",
"> In NumPy we recently deprecated `numpy.typing.NBitBase`, but I don't expect that we'll be able to remove that for a couple of years.\r\n\r\n@jorenham Did you instrument anything that provides some type of warning to users if someone was using `numpy.typing.NBitBase` in a typing context? If so, what did you do?\r\n\r\n",
"> > In NumPy we recently deprecated `numpy.typing.NBitBase`, but I don't expect that we'll be able to remove that for a couple of years.\r\n> \r\n> @jorenham Did you instrument anything that provides some type of warning to users if someone was using `numpy.typing.NBitBase` in a typing context? If so, what did you do?\r\n\r\nWell, `NBitBase` is secretly not a type alias but a class that's pretending to be one. So we kinda got lucky in that sense. But that also means that it's probably not the best example of how to deprecate a type *alias* 😅.\r\n\r\nAnyway, by exploiting the fact that it's a class, I was able to simply slap a `@typing_extensions.deprecated` onto it. That way, *static* type-checkers will report it as deprecated (although that's not enabled by default in mypy for some reason).\r\n\r\nOn the runtime side of things, I used the same `__getattr__` approach that @rhshadrach mentioned, so that it'll report a `DeprecationWarning` when imported at runtime.\r\n\r\nSee https://github.com/numpy/numpy/pull/28884 for details.",
"@rhshadrach Just pushed a couple of changes. In the dev meeting, you mentioned a concern about naming, and looking at the comments here, it seems that the only issue is `CompressionOptions` so I created a `ParquetCompressionOptions` to clear that up.\r\n\r\nLet me know if there are others you think I should change.\r\n"
] |
3,185,620,996
| 61,734
|
Removal of members from pandas-triage team
|
open
| 2025-06-29T00:42:58
| 2025-07-15T22:23:22
| null |
https://github.com/pandas-dev/pandas/issues/61734
| true
| null | null |
Dr-Irv
| 7
|
If your Github handle is in the list below, we intend to remove you from the `pandas-triage` team due to lack of activity in the `pandas` repository since the beginning of 2024.
If you have any objection to such removal, please make a note in this issue by July 31, 2025. Otherwise, there is no need to respond.
@paulreece
@ParfaitG
@ssche
@AlexKirko
@Moisan
@debnathshoham
@CloseChoice
@DriesSchaumont
@realead
@erfannariman
@alexhlim
@ivanovmg
@afeld
@ShaharNaveh
@charlesdong1991
@dsaxton
@arw2019
@AnnaDaglis
@moink
@smithto1
@jnothman
@martindurant
@fujiaxiang
@hasB4K
@fjetter
@cdknox
|
[
"Admin"
] | 2
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@Dr-Irv although inactive, I would like to extend my triage role, since I plan to make some time here and there.",
"@Dr-Irv No objection here; thanks for reaching out!",
"Thank you for this @Dr-Irv, I would have loved to participate more on pandas, but truth to be told, I lack time to do so. If that's ever change, I will simply contribute by doing some PRs etc. In the meantime, I don't need to be in the triage team at all 😉 \n",
"@DriesSchaumont would it be possible to keep me on the team? As things are easing up at my company, I think I'll be able to contribute to triage once in a while.",
"@Dr-Irv I would like to extend my triage role as well if possible.",
"@Dr-Irv I would also also like to remain a triage team member, if that's an option. I'm trying to contribute as much as I can, juggling other (life) commitments.\n\nLike others perhaps, I may not have contributed much to the repository recently, but I'm reporting and commenting on issues and provide help and support wherever I can.",
"@Dr-Irv I would like to stay a triage member as well. Even though I haven't made any contributions to pandas till 1.5 years, I will try to find some time to contribute."
] |
3,185,343,479
| 61,733
|
DOC: Index.infer_objects is missing from docs
|
closed
| 2025-06-28T18:12:20
| 2025-06-30T17:20:12
| 2025-06-30T17:20:12
|
https://github.com/pandas-dev/pandas/issues/61733
| true
| null | null |
Dr-Irv
| 0
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.Index.html
### Documentation problem
While `infer_objects()` is listed as a method for `pandas.Index`, the link to the actual method documentation is missing.
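The method itself works at runtime; only the generated docs page is missing (illustrative):

```python
import pandas as pd

idx = pd.Index([1, 2, 3], dtype="object")
idx.infer_objects()  # -> Index([1, 2, 3], dtype='int64'); no docs page links here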
### Suggested fix for documentation
Probably have to add `Index.infer_objects` into the conversion section of `pandas/doc/source/reference/indexing.rst`
|
[
"Docs",
"good first issue",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,185,280,323
| 61,732
|
API: consistent NaN treatment for pyarrow dtypes
|
open
| 2025-06-28T17:23:26
| 2025-08-11T15:04:31
| null |
https://github.com/pandas-dev/pandas/pull/61732
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61732
|
https://github.com/pandas-dev/pandas/pull/61732
|
jbrockmendel
| 6
|
This is the third of several POCs stemming from the discussion in https://github.com/pandas-dev/pandas/issues/61618 (see #61708, #61716). The main goal is to see how invasive it would be.
Specifically, this changes the behavior of pyarrow floating dtypes to treat NaN as distinct from NA in the constructors and `__setitem__` (xref #32265), and also in `to_numpy` and `.values`.
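A minimal sketch of the distinction (hypothetical behavior; exact semantics follow the discussion in the comments below):

```python
import numpy as np
import pandas as pd

ser = pd.Series([1.0, np.nan, None], dtype="float64[pyarrow]")
# Today both np.nan and None land as NA; under this change, position 1
# would hold a real NaN while position 2 stays NA, so ser.isna() would
# give [False, False, True] instead of [False, True, True].
```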
Notes:
- [x] <s>This makes the decision to treat NaNs as close-enough to NA when a user explicitly asks for a pyarrow integer dtype. I think this is the right API, but won't check the box until there's a consensus.</s> Changed this following Matt's opinion.
- [x] I still have <s>113</s> <s>89</s> <s>9</s> 0 failing tests locally. <s>Most of these are in json, sql, or test_EA_types (which is about csv round-tripping).</s>
- [x] Finding the mask to pass to pa.array needs optimization.
- [x] <s>The kludge in NDFrame.where is ugly and fragile.</s> Fixed.
- [ ] Need to double-check the new expected in the rank test. Maybe re-write the test with NA instead of NaN?
- [x] Do we change to_numpy() behavior to _not_ convert NAs to NaNs? this would be needed to make test_setitem_frame_2d_values tests pass
|
[
"PDEP missing values"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@mroeschke when convenient id like to get your thoughts before getting this working. it looks pretty feasible.",
"Generally +1 in this direction. Glad to see the changes to make this work are fairly minimal",
"Not able to judge the implementation, but I'm +1 on the concept.",
"While I am personally in favor of distinguishing NaN and NA, I think most of the changes here involve distinguishing NaN when constructing the arrays? (so eg constructing the pyarro-based EA from user input like numpy arrays?) \r\n\r\nPersonally, I think that is a change we should only make _after_ making those dtypes the default, and probably even years after that after a very long deprecation process. \r\n(currently _everyone_ who is creating pandas DataFrames from numpy data assumes that the NaNs in the numpy data is considered as missing. IMO that is a behaviour that we will have to keep (for a long time) even if we distinguish NaN and NA)\r\n",
"> I think most of the changes here involve distinguishing NaN when constructing the arrays?\r\n\r\nYes. Constructors (which affect read_csv) and `__setitem__` are most of this.\r\n\r\n> I think that is a change we should only make after making those dtypes the default, and probably even years after that after a very long deprecation process.\r\n\r\nMy current thought (will bring up on today's dev call) is that we should add a global flag to enable both never-distinguish (see #61708) as the default and always distinguish (this) as opt-in.",
"Based on last week's dev call, I am adapting this and #61708 from POCs to real PRs. This implements a global flag `\"mode.nan_is_na\"` (default `True`) to choose which behavior we want.\r\n\r\nThis PR only implements this for ArrowEA. #61708 will do the same for the numpy-nullables. (I have a branch trying to do it all at once and it is getting ungainly). A third PR will add tests for the various issues this closes.\r\n"
] |
3,185,018,405
| 61,731
|
ENH: Type support for variables in `DataFrame.query()`
|
closed
| 2025-06-28T14:29:23
| 2025-06-29T05:00:09
| 2025-06-29T05:00:09
|
https://github.com/pandas-dev/pandas/issues/61731
| true
| null | null |
malekzada
| 0
|
### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Currently, using variables inside `df.query("col > @my_var")` doesn't produce strong typing:
IDEs don't catch type mismatches (e.g. `my_var` is a string but `col` is numeric).
### Feature Description
Add type support in `pandas-stubs` so that functions like `query()`:
- Accept variables bound via `@`
- Validate that their types align with the DataFrame column dtype
- Offer **autocomplete** in IDEs
Example:
```python
from typing import TypedDict
class Record(TypedDict):
a: int
b: str
df: DataFrame[Record] = ...
my_var: int = 5
filtered = df.query("a > @my_var")
other_var: str = "foo"
df.query("a > @other_var")
# Should flag type mismatch in IDE/type-checker
```
### Alternative Solutions
```python
# Validate variable type before calling query
from typing import assert_type
my_var = 5
assert_type(my_var, int) # Mypy will enforce this
df.query("a > @my_var")
```
OR
```python
# Type-safe alternative using boolean indexing
df[df["a"] > my_var] # Fully type-checkable, no strings
### Additional Context
_No response_
|
[
"Enhancement",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,184,817,013
| 61,730
|
BUG: `read_csv()` : inconsistent dtype and content parsing.
|
closed
| 2025-06-28T10:24:36
| 2025-07-19T15:50:22
| 2025-07-19T15:50:19
|
https://github.com/pandas-dev/pandas/issues/61730
| true
| null | null |
945fc41467
| 3
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
Contents of `exemple.csv`:

```
"field1" ,"field2" ,"field3" ,"field4" ,"field5" ,"field6" ,"field7"
"1" , 14 , 6 , 21 ,"euia" , 0.54 , 1
"2" , 30 , 5 , 26 ,"euia" , 0.82 , 1
"2" , 1 , 0 , 0 ,"eua" , 0 , 0
"3" , 27 , 7 , 17 ,"euia" , 1 , 1
"4" , 14 , 0 , 9 ,"euia" , 0.64 , 0.92
"4" , 10 , 0 , 0 ,"eua" , 0 , 0
"9" , 17 , 1 , 6 ,"euia" , 0.65 , 0.58
"10" , 27 , 4 , 13 ,"eu" , 1 ,
"10" , , 0 , 0 ,"euia" , 0 ,
"12" , 14 , 1 , 13 ,"uia" , 1 , 0.75
"12" , 5 , 1 , 4 ,"ui eiuaea" , 1 , 1
"13" , 22 , 3 , 7 ," euia" , 0.89 , 1
"6" , 22 , 3 , 5 ,"euia" , 0.84 , 0.79
"7" , 23 , 5 , 4 ,"uia" , 0.78 , 1
"8" , 26 , 11 , 2 ,"euia" , 1.12 , 1.30
"5" , 28 , 3 , 3 ,"euia" , 0.72 , 0.68
```

```python
import pandas as pd
pd.set_option('display.max_columns', 1000)
pd.set_option('display.max_rows', 1000)
pd.set_option('display.width', 1000)
pd.set_option("display.max_colwidth", None)
df = pd.read_csv("exemple.csv")
# df = pd.read_csv("exemple.csv", quoting=1) # change nothing
list(df.columns)
df.dtypes
list(df["field5 "])
df = pd.read_csv("exemple.csv", sep=r"\s*,\s*", engine="python")
list(df.columns)
df.dtypes
list(df["field5"])
df = pd.read_csv("exemple.csv", quoting=2)
list(df.columns)
df.dtypes
list(df["field5 "])
df = pd.read_csv("exemple.csv", quoting=3)
list(df.columns)
df.dtypes
list(df['"field5" '])
df = pd.read_csv("exemple.csv", quoting=2, dtype={"field1 ": "object",
"field2 ": "Int32", # fail
"field3 ": "int",
"field4 ": "int",
"field5 ": "object",
"field6 ": "float",
"field7": "float" # fail
})
```
### Issue Description
Hello,
I tried to parse a file like the example given, and I spent an afternoon just on this. Nothing looks logical to me. So I am sorry, I will make one ticket for everything, because it would be too long to make one for each problem. Feel free to divide it into several tasks.
Expected column dtypes look quite easy to guess to me: the user used quotemarks on `field1` to force a string type. Fields 2-4 are expected to be integers. It would be almost understandable if `field2` were converted to a float, because the np.int dtype doesn't handle NA values, but pandas has an integer type which does, so there is no reason. `field5` should be a string containing the text between quotemarks. Fields 6 and 7 are expected to be floats. Let's see what happens.
First try: `df = pd.read_csv("exemple.csv")`
* In column names, quotemarks are removed, but trailing spaces are kept. That's quite surprising, as there is no logic to it: either you consider quotemarks text delimiters that should be removed, but in that case, why keep characters outside the delimiters? Or you consider everything part of the string, in which case you must keep everything.
* dtypes are problematic:
- `field1` has been implicitly converted to `int64`. The user explicitly asked for a `str`. The convention "what is between quotemarks is a string" is common to R, C++ and Python and widespread. Why not respect it?
- `field2` is converted to a string. Missing values are a common case to handle. I would understand a conversion to float, or an error being raised, but why a conversion to a string?
- `field5` has the same problem as the column names.
- `field7` is converted to a string. Here it is not understandable at all, as np.float handles NA values.
- The other fields are correct, which is also a little surprising. So leading and trailing spaces cause problems in string fields and empty fields, but not in number fields?
Case: `df = pd.read_csv("exemple.csv", sep=r"\s*,\s*", engine="python")`
Here leading and trailing spaces are removed, but not quotemarks. This ticket is probably already open somewhere. Field types are OK, except for `field2`, which should be `Int32`.
Case: `df = pd.read_csv("exemple.csv", quoting=2)`
Here I tried to explicitly tell the method that quotemarks mean string. Nonetheless, it doesn't work, and the integer fields are now floats, except for `field2` and `field7`, which are… strings!
Case: `df = pd.read_csv("exemple.csv", quoting=3)`
Here, the parsing of column names and string fields is wrong, but at least logical. It just keeps everything.
Fields containing NA values are still converted to strings.
Case:
```python
df = pd.read_csv("exemple.csv", quoting=2, dtype={"field1 ": "object",
                                                  "field2 ": "Int32",  # fail
                                                  "field3 ": "int",
                                                  "field4 ": "int",
                                                  "field5 ": "object",
                                                  "field6 ": "float",
                                                  "field7": "float"  # fail
                                                  })
```
This raises errors and doesn't handle field names correctly.
### Expected Behavior
No implicit conversion. Never.
For string fields: I understand I may have to tweak the `quoting` and `quotechar` parameters, but once that is done, everything between quotemarks should be a string, not an int or float, and whitespace outside should be ignored.
For float fields containing NA values: these should be float fields with NA values.
For int fields containing NA values: ideally these should be parsed as pandas `IntXX`, which handles NA values; at minimum as an np.float, but never a string.
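For what it's worth, part of this can already be opted into today; a sketch (the trailing spaces in this file's headers still need stripping, hence the last line):

```python
import pandas as pd

df = pd.read_csv(
    "exemple.csv",
    skipinitialspace=True,       # drop the spaces that follow each delimiter
    dtype={"field2 ": "Int32"},  # nullable integer keeps NA without a str/float fallback
)
df.columns = df.columns.str.strip()
```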
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.13.3
python-bits : 64
OS : Linux
OS-release : 6.12.34-1-MANJARO
Version : #1 SMP PREEMPT_DYNAMIC Thu, 19 Jun 2025 15:49:06 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : fr_FR.UTF-8
LOCALE : fr_FR.UTF-8
pandas : 2.3.0
numpy : 2.3.1
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 9.3.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.4.0
matplotlib : 3.10.3
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"IO CSV"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi, I'd like to work on this issue. I've reproduced it locally and observed the problems as described. Before starting on a fix, I wanted to propose a few possible solution directions and ask for guidance on which approach aligns best with pandas' design philosophy.\n\n# ✅Observed Problems\n## 1. Quoted values being cast to numeric types\n\n#### Example CSV:\n```\n\"field1\",\"field2\"\n\"1\",2\n\"2\",3\n```\n```python\ndf = pd.read_csv(\"example.csv\")\nprint(df.dtypes)\n```\n##### Actual:\n`field1` → `int64`\n##### Expected:\nSince values are quoted, `field1` should be inferred as `object` (string)\n\n### Suggested Fix Options:\n\n#### Option 1: ***[Proposed]*** Add a flag `infer_quoted_strings=True`\n```python\ndf = pd.read_csv(\"example.csv\", infer_quoted_strings=True)\nprint(df.dtypes) # field1 → object\n``` \n#### Option 2: Auto-infer quoted numeric as strings\nInternally if value was quoted, skip numeric coercion\n```python\n# Current: \"1\" → 1 → int \n# Proposed: \"1\" → \"1\" → str\n```\n## 2. Columns with missing values default to `object`\n### Example CSV row:\n```\n\"10\" , , 0 , 0 ,\"euia\", 0 ,\n```\n```python\ndf = pd.read_csv(\"example.csv\")\nprint(df.dtypes)\n```\n#### Actual:\n- `field2` → `object`\n- `field7` → `object`\n#### Expected:\n- `field2` → `Int32` (nullable int)\n- `field7` → `float64`\n\n### Suggested Fix Options:\nAdd a flag `dtype_backend=\"nullable\"` such that if a column has numeric-looking values + NA, fallback should prefer:\n- `Int32` for integers with NA\n- `float64` for float with NA\n```python\ndf = pd.read_csv(\"example.csv\", dtype_backend=\"nullable\") \nprint(df.dtypes) # field2 → Int32, field7 → float64\n```\n## 3. Column names keep trailing spaces\n\n### CSV header:\n\n```\n\"field1 \",\"field2 \",\"field5 \" \n```\n```python\n\ndf = pd.read_csv(\"example.csv\")\nprint(df.columns) # ['field1 ', 'field2 ', 'field5 ']] \n```\nAccess like `df[\"field5\"]` fails unless user matches exact spacing.\n\n### Suggested Fix Options:\n#### Option 1: ***[Proposed]*** Add a flag `strip_column_names=True`\n```python\ndf = pd.read_csv(\"example.csv\", strip_column_names=True)\nprint(df.columns) # ['field1', 'field2', 'field5']\n```\n#### Option 2: Just document a helper:\n```python\ndf.columns = df.columns.str.strip()\n```\n---\n# 🙋♂️ Request for Direction\nWould love to hear your thoughts on:\n- Which of the above ideas (if any) would be acceptable to implement?\n- Should these be separated into multiple issues/PRs or handled together?\n- Is it fine to add optional flags to control these behaviors?\n\nHappy to start by writing tests first, or submitting a patch once the preferred approach is clear. Thanks!",
"import pandas as pd\nimport csv\n\ndf = pd.read_csv(\n \"exemple.csv\",\n sep=r\"\\s*,\\s*\", # remove spacing around commas\n engine=\"python\",\n quoting=csv.QUOTE_MINIMAL,\n na_values=[\"\", \"NA\"], # treat blanks as NA\n dtype={\n \"field1\": \"string\",\n \"field2\": \"Int32\",\n \"field3\": \"Int32\",\n \"field4\": \"Int32\",\n \"field5\": \"string\",\n \"field6\": \"float\",\n \"field7\": \"float\"\n }\n)\n\n# Clean up column names\ndf.columns = df.columns.str.strip().str.replace('\"', '')\n\nprint(df.dtypes)\nprint(df.head())\n",
"> I tried to parse a file like the exemple given, and I spent an afternoon just on this. Nothing looks logical to me. So I am sorry, I will make one ticket for everything, cause it would be to long to make one for each problem. Fill free to divide it in several task.\n\nI understand this is extra effort, but it will go a long way on making your report actionable. As maintainers, we cannot allow issues to be reported as such. It will lead to long and hard to navigate discussions, with a lot of extra time devoted just to seeing if there is anything left in the issue.\n\nPlease open up separate issues. Closing."
] |
3,184,743,016
| 61,729
|
BUG: AttributeError in pandas.core.algorithms.diff when passing non-numeric types
|
closed
| 2025-06-28T08:45:26
| 2025-08-12T17:29:46
| 2025-06-30T17:32:52
|
https://github.com/pandas-dev/pandas/pull/61729
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61729
|
https://github.com/pandas-dev/pandas/pull/61729
|
akshat62
| 1
|
…g non-numeric types
- [ ] closes #61728
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for this PR but closing per https://github.com/pandas-dev/pandas/issues/61728#issuecomment-3020121533"
] |
3,184,738,140
| 61,728
|
BUG: AttributeError in pandas.core.algorithms.diff when passing non-numeric types
|
closed
| 2025-06-28T08:40:10
| 2025-06-30T17:32:41
| 2025-06-30T17:32:41
|
https://github.com/pandas-dev/pandas/issues/61728
| true
| null | null |
akshat62
| 1
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
pd.Series([1, 2, 3]).diff("hello")
```
Raises:
```
AttributeError: 'str' object has no attribute 'is_integer'
```
### Issue Description
When passing non-numeric types (like strings, None, or other objects) to the `diff` function in `pandas/core/algorithms.py`, it raises an `AttributeError` instead of the expected `ValueError`. This affects any code that uses `Series.diff()`, `DataFrame.diff()`, or calls the `diff` function directly.
The issue occurs because the validation logic tries to call `n.is_integer()` on non-float objects that don't have this method, resulting in an `AttributeError`.
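A sketch of the guard being requested (mirroring the check that `Series.diff` on main already performs, per the comment below):

```python
def validate_periods(periods) -> None:
    # Accept true ints and integer-valued floats; reject everything else with
    # ValueError instead of letting `.is_integer()` raise AttributeError.
    if isinstance(periods, int):
        return
    if isinstance(periods, float) and periods.is_integer():
        return
    raise ValueError("periods must be an integer")

validate_periods(2)    # ok
validate_periods(2.0)  # ok
try:
    validate_periods("hello")
except ValueError as err:
    print(err)  # periods must be an integer
```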
### Expected Behavior
```python
import pandas as pd
pd.Series([1, 2, 3]).diff("hello")
```
Should raise:
```
ValueError: periods must be an integer
```
### Installed Versions
<details>
python : 3.12.10
OS : Linux
OS-release : 4.18.0-553.56.1.el8_10.x86_64
</details>
|
[
"Bug",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks I don't see this error on main\n\n```python\nIn [1]: import pandas as pd\n ...: pd.Series([1, 2, 3]).diff(\"hello\")\n\nFile ~/pandas/core/series.py:2837, in Series.diff(self, periods)\n 2835 if not lib.is_integer(periods):\n 2836 if not (is_float(periods) and periods.is_integer()):\n-> 2837 raise ValueError(\"periods must be an integer\")\n 2838 result = algorithms.diff(self._values, periods)\n 2839 return self._constructor(result, index=self.index, copy=False).__finalize__(\n 2840 self, method=\"diff\"\n 2841 )\n\nValueError: periods must be an integer\n```\n\nso closing this PR"
] |
3,184,264,155
| 61,727
|
TST[string]: update expecteds for using_string_dtype to fix xfails
|
closed
| 2025-06-27T22:59:38
| 2025-07-23T08:30:50
| 2025-07-10T16:47:38
|
https://github.com/pandas-dev/pandas/pull/61727
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61727
|
https://github.com/pandas-dev/pandas/pull/61727
|
jbrockmendel
| 7
|
It isn't 100% obvious that the new repr for Categoricals is an improvement, but it's non-crazy. One of the remaining xfails is for `eval(repr(categorical_index))` round-tripping, which won't be fixable unless we revert to the old repr behavior.
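For reference, the round-trip in question; a sketch (`eval(repr(...))` only succeeds when the repr is a valid constructor expression):

```python
import pandas as pd

ci = pd.CategoricalIndex(["a", "b", "a"])
# The xfail mentioned above: this only works if repr(ci) is valid Python,
# which the new StringDtype-era repr no longer guarantees.
ci2 = eval(repr(ci), {"CategoricalIndex": pd.CategoricalIndex})
```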
I'm pretty sure that the fix in test_astype_dt64_to_string is correct and the test is just wrong, but merits a close look.
That leaves 12 xfails, including the unfixable round-trip one that we'll just remove. Of those...
- [x] test_join_on_key i think is surfacing an unrelated bug that I'll take a look at (xref #61771)
- [x] test_to_dict_of_blocks_item_cache is failing because we don't make series.values read-only for ArrowStringArray. I think @mroeschke can comment on how viable/important that is.
- [ ] test_string_categorical_index_repr is about CategoricalIndex reprs that span multiple lines; with the StringDtype the padding is changed.
- [x] 4 in pandas/tests/io/json/test_pandas.py that im hoping @WillAyd can take point on
- [ ] test_to_string_index_with_nan: there's a MultiIndex level that reprs with `nan` instead of `NaN`. Not a huge deal, but having mixed-and-matched nans/NaNs in the repr is weird.
- [ ] test_from_records_sequencelike: I don't have a good read on this one.
- [x] tests.base.test_misc::test_memory_usage is skipped instead of xfailed, but the reason says that it "doesn't work properly" for arrow strings which seems xfail-adjacent. Instead of skipping can we update the expected behavior cc @jorisvandenbossche ?
(Update: looks like I missed one in test_http_headers and another in test_fsspec)
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"The JSON issues stem back to the fact that:\r\n\r\n```python\r\n>>> pd.Series([None, '', 'c']).astype(object)\r\n```\r\n\r\nyields different behavior with/without the future string dtype. In the \"old\" world, this would preserve the value of `None` but in the new world, `None` gets cast to a missing value indicator when contained within a series of string values.\r\n\r\nIn theory we could try and work around those semantics by natively supporting an object type in the JSON reader, but that's a ton of effort and I don't think worth it, given JSON does not natively support object storage",
"thanks, will update those tests' `expected`s",
"> we don't make series.values read-only for ArrowStringArray\r\n\r\nCan't speak for the viability but I think this _should_ be read-only per CoW. Is this related to the underlying `ArrowExtensionArray.__setitem__` immutably issue?",
"> Can't speak for the viability but I think this should be read-only per CoW. Is this related to the underlying ArrowExtensionArray.__setitem__ immutably issue?\r\n\r\nI suspect we would need a mechanism like ndarray.flags for EAs to declare an object as read-only. Definitely out of scope for this PR.\r\n\r\nLooking at the test, the pertinent behavior can be tested by making the column specifically object dtype.",
"> * tests.base.test_misc::test_memory_usage is skipped instead of xfailed, but the reason says that it \"doesn't work properly\" for arrow strings which seems xfail-adjacent. Instead of skipping can we update the expected behavior cc @jorisvandenbossche ?\r\n\r\nI quickly checked that one, and I am not entirely sure why we skipped it based on \"doesn't work properly for arrow strings\". The issue with the test seems to be that it is assuming too much about the input data, and its `is_object` definition is no longer correct. For example, a MultiIndex gets considered as object dtype, but now if all levels of the MultiIndex are using `str` dtype, then the test should not actually see it as `is_object`. \r\nSame for `is_categorical` where the test code assumes that a categorical is using object dtype categories, I think.\r\n\r\n(can do a separate PR to fix this one, now that I already dived into it)",
"Looks like #61757 wasn't backported, so i dont think this one should be either",
"> > > we don't make series.values read-only for ArrowStringArray\r\n> > \r\n> > Can't speak for the viability but I think this _should_ be read-only per CoW. Is this related to the underlying `ArrowExtensionArray.__setitem__` immutably issue?\r\n>\r\n> I suspect we would need a mechanism like ndarray.flags for EAs to declare an object as read-only. Definitely out of scope for this PR.\r\n\r\nThat was actually one of the remaining TODO items for the CoW implementation, ot indeed add a similar readonly flag mechanism to EAs to declare them as read-only. \r\n\r\nExperimenting with that in https://github.com/pandas-dev/pandas/pull/61925\r\n"
] |
3,184,170,974
| 61,726
|
CI: temporarily pin numpydoc<1.7 to unblock docstring validation (GH#61720)
|
closed
| 2025-06-27T21:50:17
| 2025-06-27T23:37:37
| 2025-06-27T23:34:50
|
https://github.com/pandas-dev/pandas/pull/61726
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61726
|
https://github.com/pandas-dev/pandas/pull/61726
|
EvMossan
| 1
|
Temporarily pin **numpydoc<1.7** to unblock the *docstring-validation* job.
`numpydoc` 1.7.0 raises
`AttributeError: 'getset_descriptor' object has no attribute '__module__'`
inside `numpydoc/validate.py`, causing pandas’ *Code Checks / Docstring validation*
step to fail before any pandas code is run (see GH #61720).
This PR
* **pins** `numpydoc<1.7` in `environment.yml`
(propagated to `requirements-dev.txt`);
* **fixes** a duplicate **Returns / Yields** section in
`pandas/_config/config.py::option_context`;
* **marks** an expected warning in `doc/user_guide/timeseries.rst`
with `:okwarning:` so Sphinx no longer treats it as an error.
Together these changes restore a green CI across all jobs.
### Notes
* All changes are limited to **dev/CI and docs**—no impact on end users.
* Once an upstream fix lands in numpydoc, we’ll remove the version pin.
* No new tests are required; a successful CI run itself demonstrates the fix.
---
- [x] closes #61720
- [x] CI green locally (`pre-commit run --all-files` & `doc/make.py --warnings-are-errors`)
- [ ] added to `doc/source/whatsnew/vX.X.X.rst` → **not needed** (CI-only)
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Superseded by #61725"
] |
3,183,578,485
| 61,725
|
DOC: Pin numpydoc=1.8.0
|
closed
| 2025-06-27T17:24:25
| 2025-06-30T17:22:51
| 2025-06-30T16:53:34
|
https://github.com/pandas-dev/pandas/pull/61725
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61725
|
https://github.com/pandas-dev/pandas/pull/61725
|
fangchenli
| 3
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs",
"Dependencies"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Is there a way for us to replace our `numpydoc` usage so we could use a `numpydoc>=1.9.0` instead?",
"> Is there a way for us to replace our `numpydoc` usage so we could use a `numpydoc>=1.9.0` instead?\r\n\r\nIt probably needs to be fixed from the numpydoc side. We could pin it for now to make the CI green.",
"Thanks @fangchenli, but I just merged https://github.com/pandas-dev/pandas/pull/61744 which allow us to get around the error (skipping validation for objects that don't have docstrings which is probably the right thing to do).\r\n\r\nGoing to close since we don't need to pin numpydoc anymore"
] |
3,183,262,545
| 61,724
|
DOC: add sections about big new features (CoW, string dtype) to 3.0.0 whatsnew notes
|
closed
| 2025-06-27T15:44:00
| 2025-08-15T11:42:04
| 2025-08-15T11:41:49
|
https://github.com/pandas-dev/pandas/pull/61724
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61724
|
https://github.com/pandas-dev/pandas/pull/61724
|
jorisvandenbossche
| 2
|
We don't actually yet list the bigger features (string dtype, CoW, no silent downcasting) in the 3.0.0 whatsnew page, so starting to do that here.
Already pushed a section about string dtype, will further add a section about CoW and the downcasting.
|
[
"Docs",
"Release"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Going to merge this to have something in the docs, we can always further refine it so more feedback is certainly welcome!"
] |
3,183,208,045
| 61,723
|
DEPS: bump pyarrow minimum version from 10.0 to 12.0
|
closed
| 2025-06-27T15:25:10
| 2025-07-03T08:18:47
| 2025-07-03T08:18:42
|
https://github.com/pandas-dev/pandas/pull/61723
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61723
|
https://github.com/pandas-dev/pandas/pull/61723
|
jorisvandenbossche
| 3
|
For our support window of 2 years, we can bump the minimum pyarrow version to 12.0.1 (see list of release dates here: https://arrow.apache.org/release/, we could also directly bump to 13 assuming the final 3.0 release will happen in 1-2 months).
|
[
"Dependencies",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@mroeschke this should be ready now (all tests are green)",
"> Could you also update the table under the `Increased minimum versions for dependencies` section in the `v3.0.0.rst` whatsnew?\r\n\r\nDone!",
"Going to merge this so I can update https://github.com/pandas-dev/pandas/pull/61722"
] |
3,182,892,849
| 61,722
|
String dtype: turn on by default
|
closed
| 2025-06-27T13:43:03
| 2025-07-16T22:29:00
| 2025-07-16T17:07:20
|
https://github.com/pandas-dev/pandas/pull/61722
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61722
|
https://github.com/pandas-dev/pandas/pull/61722
|
jorisvandenbossche
| 9
|
Now that 2.3.0 is released, time to switch the default on main to prepare for a 3.0 release (and also such that people using nightlies get this)
I assume this might also still uncover some failing tests. And the docstrings will still fail for sure.
|
[
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
|
[
"cc @rhshadrach I remember you gave the feedback when CoW option was removed (actually removed, without ability to turn CoW off), that it might be good to keep the option working for some time? (so that if you run into issues, you can still turn it off temporarily) \r\n\r\nShall we do that here?\r\n\r\nAnd if we do that:\r\n- do we want to keep some CI testing that? (for example I could keep the build currently testing turning it on, but then with turning it off)\r\n- add a warning for people setting the option to False that this is only kept working temporarily?",
"Aside docstrings, the only tests that were failing are the ones with the minimum pyarrow dependency of 10.0.1. For our support window of 2 years, we can bump that to 12.0.1. If that helps, will move it to a separate PR.",
"> Shall we do that here?\r\n\r\nI would like to keep the option for users to switch back to the old string type if needed ",
"Any idea how many xfailed tests are specific to this? I saw some this morning, will see if I can address those.",
"looks like 20 tests (76 with parametrization) are xfailed. At least some of these will be easy-but-tedious of updating `expected`s that I'll get out of the way now. I don't see any reason why these would be a blocker for this PR.",
"To be able to move forward with enabling this on main, I added commits to temporarily disable failing CI for doc build or doctest failures. That will require more fixes, but also can only be done when this is enabled, and I would prefer doing that in separate (potentially multiple) PRs instead of doing it all at once. \r\n\r\nAnother alternative would be to just keep the option disabled specifically in those builds (by setting `PANDAS_FUTURE_INFER_STRINGS=0` in those builds), so they can keep running and testing for now. But for the docstests, that still gives the same problem that then fixing them needs to be done in one go, so I think for doctests I prefer disabling errors temporarily, and then fixing all failures in separate PRs.",
"For the main doc build part, I have opened https://github.com/pandas-dev/pandas/pull/61864 to start fixing the build issues with the string dtype enabled",
"I'm fine with the two-step approach here + #61864. @rhshadrach @mroeschke happy here?",
"+1"
] |
3,182,668,533
| 61,721
|
DOC: https://pandas.pydata.org/pandas-docs/version/2.3 does not work
|
closed
| 2025-06-27T12:29:39
| 2025-07-08T17:52:36
| 2025-07-08T07:47:11
|
https://github.com/pandas-dev/pandas/issues/61721
| true
| null | null |
gaborbernat
| 4
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/pandas-docs/version/2.3
### Documentation problem
https://pandas.pydata.org/pandas-docs/version/2.3 does not work, though https://pandas.pydata.org/pandas-docs/version/2.2 does; only https://pandas.pydata.org/pandas-docs/version/2.3.0 is available.
### Suggested fix for documentation
Make https://pandas.pydata.org/pandas-docs/version/2.3 a floating alias for the latest 2.3.x release.
|
[
"Docs",
"Web"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"cc @mroeschke ",
"Hello maintainers! 👋\n\nI understand that these floating version URLs usually redirect to the latest patch version (like `2.3.0`, `2.3.1`, etc.). I'd love to help fix this by setting up the redirect or symlink, if possible.\n\nCould you please guide me on where the floating version redirects are managed? Is it something I can contribute to via a PR (e.g., in a deployment script, config file, or docs repo)? Happy to follow your instructions!\n\nThanks again!\n",
"I released 2.3.1 yesterday evening, and while doing that fixed our symlinks (when releasing 2.3.0, we forgot to set up the symlink from /2.3/ to /2.3.0/), so that should have fixed this issue.\n\nThis works now: https://pandas.pydata.org/pandas-docs/version/2.3/index.html\n\n(this is managed directly on the web server, so with the current set up only something that can be done by some of the core maintainers)",
"Cool! Thanks for the response!!"
] |
3,182,208,803
| 61,720
|
BUG: CI docstring-validation fails with AttributeError in numpydoc validate.py
|
closed
| 2025-06-27T10:00:20
| 2025-06-27T23:40:55
| 2025-06-27T23:40:55
|
https://github.com/pandas-dev/pandas/issues/61720
| true
| null | null |
EvMossan
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
The failure is visible in GitHub Actions.
**Steps**
1. Push any branch that is up-to-date with `pandas-dev/main`, **or** open a fresh PR.
2. Observe workflow **Code Checks / Docstring validation, typing, and other manual pre-commit hooks**.
3. The job stops in step *Run docstring validation* with the traceback below.
**Example failing runs (public logs)**
- https://github.com/pandas-dev/pandas/actions/runs/15921431436 ← PR #61718
- https://github.com/pandas-dev/pandas/actions/runs/15886481522 ← another PR on latest main
```text
File ".../site-packages/numpydoc/validate.py", line 234, in name
return ".".join([self.obj.__module__, self.obj.__name__])
AttributeError: 'getset_descriptor' object has no attribute '__module__'. Did you mean: '__reduce__'?
```
### Issue Description
* The *docstring-validation* job crashes before any pandas code is executed, so all current PRs fail.
* The stack trace originates inside **numpydoc/validate.py**; no pandas files are involved.
### Expected Behavior
The *docstring-validation* step should complete without errors, allowing the entire CI workflow to finish green.
### Installed Versions
<details>
* python : 3.11.13 (conda-forge)
* pandas : source checkout of current `main` (not installed / build failed locally)
* numpydoc: 1.8.0
* os : Ubuntu-22.04 (GitHub Actions runner)
</details>
|
[
"Bug",
"CI"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"If maintainers confirm this as a valid issue that needs fixing, I'm happy to submit a PR with the fix.",
"Superseded by PR #61725, which pins numpydoc==1.8.0 and fixes the CI failure.\nClosing in favour of that solution.\n"
] |
3,182,128,649
| 61,719
|
BUG: mapping categorical with single category to boolean returns category instead of bool dtype
|
open
| 2025-06-27T09:37:26
| 2025-07-01T05:08:49
| null |
https://github.com/pandas-dev/pandas/issues/61719
| true
| null | null |
kdebrab
| 4
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
pd.Series(["a", "a"]).astype("category").map(lambda x: x == "c")
```
### Issue Description
The above snippet erroneously returns category dtype:
```
0 False
1 False
dtype: category
Categories (1, bool): [False]
```
### Expected Behavior
As soon as there are at least two categories, one gets the expected bool dtype:
```python
pd.Series(["a", "b"]).astype("category").map(lambda x: x == "c")
```
returns:
```
0 False
1 False
dtype: bool
```
I would expect the same result if there is only one category involved.
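As a side note, comparing the categorical directly avoids the inference issue entirely (a workaround sketch; the comments below suggest the same):
```python
import pandas as pd

# A comparison returns bool dtype regardless of how many categories there
# are, unlike map() with a boolean-returning function:
pd.Series(["a", "a"]).astype("category") == "c"
```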
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.12.9
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 140 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_Belgium.1252
pandas : 2.3.0
numpy : 2.3.1
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : 1.5.0
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : 5.4.0
matplotlib : 3.10.3
numba : None
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : 2025.3.1
xlrd : None
xlsxwriter : 3.2.5
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Categorical",
"Apply"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take",
"@kdebrab \nThe issue is happening here:\n\nhttps://github.com/pandas-dev/pandas/blob/35b0d1dcadf9d60722c055ee37442dc76a29e64c/pandas/core/arrays/categorical.py#L1583-L1585\n\nIn the first case, `new_categories` would be `Index([False], dtype='bool')` and since its unique, it ends up returning a `CategoricalDtype`. Should note that issue depends on unique categories after the condition is applied. For example in this code snippet:\n```\npd.Series([\"a\", \"a\", \"a\", \"b\"]).astype(\"category\").map(lambda x: x == \"b\")\n```\neven though there are at least 2 categories, the result is still:\n```\n0 False\n1 False\n2 False\n3 True\ndtype: category\nCategories (2, bool): [False, True]\n```\nThis is because the mapping condition does not return duplicate categories. I think this specific code block was added for efficiency purposes by checking a 1:1 mapping.\n\nA simple fix to this would be to instead use:\n\n```\npd.Series([\"a\", \"a\"]).astype(\"category\") == \"c\"\n```\nor\n```\npd.Series([\"a\", \"a\"]).astype(\"category\").eq(\"c\")\n```\nwhich correctly returns:\n```\n0 False\n1 False\ndtype: bool\n```",
"dtype inference at the end of `map` calls is a really tricky problem that has come up before. Maybe someone will find an elegant solution, but this is a \"don't get your hopes up\" situation",
"Yes I agree."
] |
3,181,825,834
| 61,718
|
CI: add PyPI Trusted-Publishing “publish” job to wheels workflow (#61669)
|
open
| 2025-06-27T08:03:45
| 2025-07-29T00:55:31
| null |
https://github.com/pandas-dev/pandas/pull/61718
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61718
|
https://github.com/pandas-dev/pandas/pull/61718
|
EvMossan
| 5
|
- [x] closes #61669
- [x] all code checks passed (`pre-commit` & CI)
- [x] added an entry in `doc/source/whatsnew/v3.0.0.rst`
### Summary
This PR enables **Trusted Publishing (OIDC)** uploads to PyPI when a GitHub release is published.
#### What’s changed
* **.github/workflows/wheels.yml**
* adds a new `publish` job that
1. downloads all wheel / sdist artifacts from upstream jobs;
2. excludes Pyodide wheels (`*pyodide*.whl`);
3. runs `pypa/gh-action-pypi-publish@v1` in the `pypi` environment.
* **doc/source/whatsnew/v3.0.0.rst**
* adds a single *Build / CI* line announcing the switch to Trusted Publishing
* **doc/source/development/maintaining.rst**
* drop manual twine step and note Trusted Publishing
No other files or CI matrix settings were changed.
## Release prerequisites
* GitHub repo must have an environment named **`pypi`** (OIDC token permission enabled).
* The pandas project on PyPI must list **`pandas-dev/pandas` → “pypi”** as a Trusted Publisher (see <https://docs.pypi.org/trusted-publishers/>).
|
[
"Enhancement",
"Build",
"CI"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"CI failed in the docstring-validation step with \r\n ```\r\nAttributeError: 'getset_descriptor' object has no attribute '__module__'. Did you mean: '__reduce__'?\r\n ``` \r\nThis occurs before my code runs, so it isn’t caused by the changes in this PR. \r\nIt looks related to the latest numpydoc release. \r\nI’ll re-run CI once the upstream fix lands.\r\n",
"This workflow runs every day, on all pushes, and on all pull requests, but you said \"uploads to PyPI when a release tag is pushed\" in the description. Perhaps you want to put this in a new `publish.yml` workflow file, with [`on: { release: { types: [published] } }`](https://docs.github.com/en/actions/reference/events-that-trigger-workflows#release).\r\n\r\nBecause this makes the `download-artifacts` step run in a different workflow, you'll need to figure out the `run-id` argument.\r\n\r\nYou could alternatively add `on: { release: { types: [published] } }` to the `wheels.yml` workflow.\r\n\r\n---\r\n\r\nYou need to update [the release process documentation](https://github.com/pandas-dev/pandas/blob/main/doc/source/development/maintaining.rst#release-process) to change the new method for publishing (remove step 5: \"Upload wheels to PyPI\").\r\n\r\n---\r\n\r\nYou likely want to document (in this pull request's description, and in [the release process documentation](https://github.com/pandas-dev/pandas/blob/main/doc/source/development/maintaining.rst#release-process)) the new `pypi` GitHub environment needs to exist, and the corresponding publisher needs to be added to the project in PyPI.",
"@EpicWink All requested changes are in. Let me know if anything else’s needed - thanks!",
"All comments addressed and conversations resolved - ready for another look. Thanks!",
"> I suppose we can only test this really once doing a next pre-release?\r\n\r\nYes, fortunately PyPI (production) supports uploading pre-releases.\r\n\r\n> Could we already test the first part of the added job by temporarily commenting out the last `pypa/gh-action-pypi-publish` step, but so at least we can check the downloading and filtering of the artifacts is working?\r\n\r\nThat could work, as long as it's in a protected tag, but you would be creating a tag just to test this, unless you change the workflow definition further to support a non-version-tag ref."
] |
3,181,520,505
| 61,717
|
BUG: Raise OutOfBoundsDatetime in DataFrame.replace when value exceeds datetime64[ns] bounds (GH#61671)
|
open
| 2025-06-27T06:19:20
| 2025-08-14T00:08:30
| null |
https://github.com/pandas-dev/pandas/pull/61717
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61717
|
https://github.com/pandas-dev/pandas/pull/61717
|
iabhi4
| 5
|
Fixes a bug where `DataFrame.replace` would raise a generic `AssertionError` when trying to replace `np.nan` in a `datetime64[ns]` column with an out-of-bounds `datetime.datetime` object (e.g., `datetime(3000, 1, 1)`).
This PR fixes that by explicitly raising `OutOfBoundsDatetime` when the replacement datetime can't safely fit into the `datetime64[ns]` dtype.
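A minimal reproduction of the failing case (adapted from the linked issue; the exact frame construction in #61671 may differ):
```python
import numpy as np
import pandas as pd
from datetime import datetime

df = pd.DataFrame({"ts": pd.to_datetime(["2020-01-01", None])})
# datetime(3000, 1, 1) does not fit in datetime64[ns] (whose range ends in
# 2262); previously this surfaced as a bare AssertionError, with this PR it
# raises OutOfBoundsDatetime instead.
df.replace({np.nan: datetime(3000, 1, 1)})
```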
- [x] closes #61671
- [x] Added a test that reproduces the issue
- [x] Pre-commit hooks passed
- [x] Added a changelog entry under `Datetimelike` for 3.0.0
Let me know if you'd like to test other edge cases or if there's a more idiomatic way to handle this!
|
[
"Bug",
"Datetime",
"Stale"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"looking into the CI failures",
"Regarding CI failures —\r\n\r\nSo after the changes in `find_result_type`, we're now catching cases like `Timestamp('1677-09-21 00:12:43.145224193')` early and raising `OutOfBoundsDatetime` during coercion itself, which makes sense and is in line with what #56410 was aiming for (no silent truncation between datetime units).\r\n\r\nBecause of that, `test_clip_with_timestamps_and_oob_datetimes_non_nano` is now failing since it hits the error earlier with, Just wanted to confirm, should I go ahead and update the test to reflect this message? or is the earlier failure point problematic?\r\n\r\nHappy to revert or gate the check if needed.\r\n",
"Thanks for the review and suggestions @simonjayhawkins @jbrockmendel!\r\nI did some testing to better understand how different assignment and replacement operations behave with out-of-bounds datetimes, both tz-naive and tz-aware. Here's what I found:\r\n\r\n---\r\n\r\n### Observed Behavior (Confirmed via Logs)\r\n\r\n| Operation | Value Type | Outcome | Notes |\r\n|------------------------------|--------------------------------|----------------------------------|-------|\r\n| `replace(np.nan, ts)` | `Timestamp(\"3000-01-01\")` | Raises `OutOfBoundsDatetime` | Expected behavior, works as intended |\r\n| `replace(np.nan, ts)` | `Timestamp(\"3000-01-01\", tz)` | Succeeds silently | Column upcasts to `object` dtype silently |\r\n| `df.iloc[0, 0] = ts` | `Timestamp(\"3000-01-01\")` | Raises `OutOfBoundsDatetime` | Same as above, correct behavior |\r\n| `df.iloc[0, 0] = ts` | `Timestamp(\"3000-01-01\", tz)` | Raises `TypeError` | Due to tz-naive column (`datetime64[ns]`) being incompatible with tz-aware value |\r\n\r\n---\r\n\r\n### Additional Context\r\n\r\n- For **tz-naive** out-of-bounds values:\r\n - Both `replace()` and `iloc` correctly raise `OutOfBoundsDatetime`.\r\n- For **tz-aware** values:\r\n - `replace()` allows insertion silently by upcasting the column to `object` dtype (confirmed via debug logs).\r\n - `iloc` correctly raises a `TypeError` because of tz-awareness mismatch (`datetime64[ns]` vs tz-aware).\r\n\r\n---\r\n\r\n### Next Steps from My Side\r\n\r\nBefore I add tests for these cases, just wanted to check:\r\n- Should we treat tz-aware out-of-bounds timestamps as valid (fallback to object)?\r\n- Or do we want to enforce stricter checks across the board?\r\n\r\nHappy to add the tests once I get a bit of guidance on how we want to handle these edge cases consistently.",
"> Column upcasts to object dtype silently\r\n\r\nI would expect this to raise rather than silently upcast.",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |
3,180,645,077
| 61,716
|
POC: PDEP16 default to masked nullable dtypes
|
closed
| 2025-06-26T22:30:58
| 2025-07-29T16:09:42
| 2025-07-29T16:09:42
|
https://github.com/pandas-dev/pandas/pull/61716
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61716
|
https://github.com/pandas-dev/pandas/pull/61716
|
jbrockmendel
| 1
|
This is the second of several POCs stemming from the discussion in #61618 (see also #61708). The main goal is to see how invasive it would be.
Specifically, this implements the part of PDEP16 #58988 that changes the default numeric/bool dtypes to use numpy-nullable dtypes. So `pd.Series(foo)` will behave roughly like `pd.Series(pd.array(foo))` does in main.
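To make the intended equivalence concrete (an illustration of the target behavior, not what main does today):
```python
import pandas as pd

# On main today, the plain constructor infers NaN-based numpy dtypes:
pd.Series([1, 2, None]).dtype            # float64
# while pd.array already infers the masked, NA-based dtype:
pd.Series(pd.array([1, 2, None])).dtype  # Int64

# Under this POC, pd.Series([1, 2, None]) would also give Int64.
```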
Notes:
- For POC purposes this takes the stance that we _never_ give numpy numeric/bool dtypes and always map `dtype=np.int64` to the masked dtype.
- The get_option checks will need to be updated to use a more performant check like for `using_string_dtype`
- The simplification in core.internals.construction will eventually be reverted as the MaskedArrays are updated to support 2D.
- This does *not* incorporate #61708.
- Currently 16773 tests failing locally (with `-m "not slow and not db"`). 705 in window, 2036 in io (almost all of pytables is failing), 1997 (plus a ton more I already xfailed bc they get RecursionError) in computation. tests.groupby.test_raises has 1110 that look to be mostly about getting the wrong class of exception or exception message. Many in sparse too, though I don't have a number ATM. Some of these merit issues:
- [ ] #61709
- [ ] #30188
- [ ] RangeIndex.equals issue, see comments in asserters.py diff.
|
[
"PDEP missing values"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This served its purpose of getting an idea how difficult getting this working would be. Closing."
] |
3,180,637,092
| 61,715
|
BUG/API: floordiv by zero in Int64Dtype
|
closed
| 2025-06-26T22:26:46
| 2025-06-27T01:50:41
| 2025-06-27T01:50:40
|
https://github.com/pandas-dev/pandas/issues/61715
| true
| null | null |
jbrockmendel
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
ser = pd.Series([0, 1])
ser2 = ser.astype("Int64")
>>> ser // 0
0 NaN
1 inf
dtype: float64
>>> ser2 // 0
0 0
1 0
dtype: Int64
# with int64[pyarrow] this just raises pyarrow.lib.ArrowInvalid: divide by zero
```
### Issue Description
We patch the results of floordiv in dispatch_fill_zeros, but don't do this for the masked dtypes, and the pyarrow one raises.
### Expected Behavior
Ideally these would be consistent across backends.
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
|
[
"Bug",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take",
"Closing as duplicate of #30188"
] |
3,179,457,537
| 61,714
|
BUG: doing df.to_parquet and then reading it with pd.read_parquet causes KeyError
|
open
| 2025-06-26T15:09:33
| 2025-07-19T16:52:16
| null |
https://github.com/pandas-dev/pandas/issues/61714
| true
| null | null |
elbg
| 3
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame(
{
"model": ["model1", "model2"],
"second_index": [(1, 2), (3, 4)],
"first_index": [0, 1],
}
)
df = df.set_index(["first_index", "second_index"], append=True)
df.to_parquet("temp.parquet")
pd.read_parquet("temp.parquet") # >> KeyError
import polars as pl
pl.read_parquet('temp.parquet') #--> OK
```
### Issue Description
I am writing a dataframe with a multiindex; one level of the multiindex contains tuples.
I can save it to parquet, and the obtained parquet seems to be valid since `polars` reads it correctly.
I can't load it back into `pandas`; it produces a KeyError.
### Expected Behavior
I expected `pd.read_parquet` to give back the `df` written by `df.to_parquet`. This produces the correct result:
```python
df = pl.read_parquet("temp.parquet").to_pandas()
df["second_index"] = df["second_index"].apply(lambda x: tuple(x))
df = df.set_index(["first_index", "second_index"])
```
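Echoing the workaround suggested in the comments, flattening the tuple level into a regular column for the round trip also works (a sketch; restoring the tuples uses the same conversion as above):
```python
import pandas as pd

df = pd.DataFrame({
    "model": ["model1", "model2"],
    "second_index": [(1, 2), (3, 4)],
    "first_index": [0, 1],
})
df.to_parquet("temp.parquet")             # tuples stay as a plain column
out = pd.read_parquet("temp.parquet")
out["second_index"] = out["second_index"].map(tuple)  # lists -> tuples
out = out.set_index(["first_index", "second_index"], append=True)
```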
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.12.10
python-bits : 64
OS : Linux
OS-release : 5.15.0-139-generic
Version : #149-Ubuntu SMP Fri Apr 11 22:06:13 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.3.0
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : 3.1.2
sphinx : None
IPython : 9.3.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : 2024.11.0
fsspec : 2024.9.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.4.0
matplotlib : 3.10.3
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : 7.4.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.16.0
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"MultiIndex",
"IO Parquet",
"Nested Data"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"replace \n```py \ndf.to_parquet(\"temp.parquet\")\n``` \nwith \n```py \ndf.reset_index().to_parquet(\"temp.parquet\")\n```\nit worked for me, maybe it works for you too.",
"Yup that's another workaround. However, I would rather have pandas handling the index, rather than resetting it when saving to parquet and setting it back again when loading the parquet\n",
"Thanks for the report. Further investigations welcome. This may also be an upstream issue in PyArrow."
] |
3,179,319,690
| 61,713
|
BUG: Inconsistent behaviour for different backends due to nullable bool values
|
closed
| 2025-06-26T14:25:31
| 2025-07-19T17:00:10
| 2025-07-19T17:00:06
|
https://github.com/pandas-dev/pandas/issues/61713
| true
| null | null |
SSchleehauf
| 1
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
all(pd.Series([None, 3,5], dtype=float) > 3)
all(pd.Series([None, 3,5], dtype='float[pyarrow]') > 3)
```
### Issue Description
Due to the pyarrow nullable bool type, there is a TypeError and the behaviour is inconsistent:
```python
all(pd.Series([None, 3,5], dtype=float) > 3)
```
Out[10]: False
```python
all(pd.Series([None, 3,5], dtype='float[pyarrow]') > 3)
```
Traceback (most recent call last):
File "C:\Users\Schleehauf\PycharmProjects\viodata\viotools\.venv\Lib\site-packages\IPython\core\interactiveshell.py", line 3672, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-9-43ea68ea33b1>", line 1, in <module>
all(pd.Series([None, 3,5], dtype='float[pyarrow]') > 3)
File "pandas/_libs/missing.pyx", line 392, in pandas._libs.missing.NAType.__bool__
TypeError: boolean value of NA is ambiguous
### Expected Behavior
Be consistent: fill the bool NA value with False for the next xxx releases and maybe add a DeprecationWarning.
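In the meantime, a workaround sketch (my addition, assuming NumPy-like semantics are what's wanted) is to resolve the NAs before reducing:
```python
import pandas as pd

s = pd.Series([None, 3, 5], dtype="float[pyarrow]")
# fillna(False) on the nullable boolean result means all() never sees pd.NA,
# matching the NumPy-backed outcome (NaN > 3 is False):
all((s > 3).fillna(False))  # False
```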
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.12.10
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 186 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : de_DE.cp1252
pandas : 2.3.0
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : 9.3.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : 1.5.0
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.4.0
matplotlib : 3.10.3
numba : 0.61.2
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : 1.0.10
s3fs : 2025.5.1
scipy : None
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"PDEP missing values"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report!\n\n> Be consitant, fill the bool NA value with False\n\nI think you're asking for pandas to somehow override the behavior of just `all(...)` and not impact `bool(pd.NA)`. I do not think this is possible.\n\nOn the other hand, there are many issues and discussions about `bool(pd.NA)`, I do not think it is useful to have another one. Closing."
] |
3,178,844,240
| 61,712
|
BUG: .round causes TypeError / NaN-
|
open
| 2025-06-26T11:46:35
| 2025-08-08T06:57:21
| null |
https://github.com/pandas-dev/pandas/issues/61712
| true
| null | null |
SSchleehauf
| 6
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
df = pd.DataFrame([{'start': pd.Timestamp('2025-01-01 10:00:00'), 'end':pd.Timestamp('2025-01-01 10:00:15.12345678')},
{'start': pd.Timestamp('2025-01-01 10:00:30.999999'), 'end':pd.Timestamp('2025-01-01 10:00:45')}])
df['pause_duration'] = (df['start'].shift(-1) - df['end']).apply(lambda x: pd.NA if pd.isna(x) else x.total_seconds())
df['pause_duration'].round(1)
```
### Issue Description
In pandas 2.2.3, rounding of NaN values just silently failed (the values did not get rounded), while the same code raises a TypeError in 2.3.0.
Sample data preparation:
```python
import pandas as pd
df = pd.DataFrame([{'start': pd.Timestamp('2025-01-01 10:00:00'), 'end':pd.Timestamp('2025-01-01 10:00:15.12345678')},
{'start': pd.Timestamp('2025-01-01 10:00:30.999999'), 'end':pd.Timestamp('2025-01-01 10:00:45')}])
df['pause_duration'] = (df['start'].shift(-1) - df['end']).apply(lambda x: pd.NA if pd.isna(x) else x.total_seconds())
df['pause_duration']
Version: 2.2.3
Out[4]:
0 15.876542
1 <NA>
Name: pause_duration, dtype: object
```
Round silently fails (values are left unrounded) in 2.2.3:
```python
df['pause_duration'].round(1)
Out[5]:
0 15.876542
1 <NA>
```
In 2.3.0 this causes a TypeError instead:
```python
import pandas as pd
print('Version:', pd.__version__)
df = pd.DataFrame([{'start': pd.Timestamp('2025-01-01 10:00:00'), 'end':pd.Timestamp('2025-01-01 10:00:15.12345678')},
{'start': pd.Timestamp('2025-01-01 10:00:30.999999'), 'end':pd.Timestamp('2025-01-01 10:00:45')}])
df['pause_duration'] = (df['start'].shift(-1) - df['end']).apply(lambda x: pd.NA if pd.isna(x) else x.total_seconds())
df['pause_duration']
Version: 2.3.0
Out[14]:
0 15.876542
1 <NA>
Name: pause_duration, dtype: object
```
Round causes a TypeError:
```python
>>> df['pause_duration'].round(1)
Traceback (most recent call last):
File "C:\Users\Schleehauf\PycharmProjects\viodata\viotools\.venv\Lib\site-packages\IPython\core\interactiveshell.py", line 3672, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-15-0e359609e34a>", line 1, in <module>
df['pause_duration'].round(1)
File "C:\Users\Schleehauf\PycharmProjects\viodata\viotools\.venv\Lib\site-packages\pandas\core\series.py", line 2818, in round
TypeError: Expected numeric dtype, got object instead.
```
For both versions, type conversion (and rounding) **only works with pyarrow:**
```python
df['pause_duration'].astype('float[pyarrow]').round(1)
Out[20]:
0 15.9
1 <NA>
Name: pause_duration, dtype: float[pyarrow]
```
And fails with TypeError:
```python
df['pause_duration'].astype(float).round(1)
Traceback (most recent call last):
...
TypeError: float() argument must be a string or a real number, not 'NAType'
```
### Expected Behavior
1. Do not throw an exception but warn instead
2. When subtracting Timestamps the datatype should be _timedelta_ and not _object_, even when there are NaT values
3. a timedelta-NaN that has a _total_seconds()_ method returning _float-nan_, such that
```python
df['pause_duration'].apply(lambda x: x.total_seconds())
Traceback (most recent call last):
...
AttributeError: 'float' object has no attribute 'total_seconds'
```
will just work in the future and yield the same result as ```df['pause_duration'].astype('float[pyarrow]').round(1)```
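For what it's worth, a workaround on current releases (my sketch, not from the report) is to stay on the timedelta path, which keeps float64 with NaN:
```python
import pandas as pd

df = pd.DataFrame([
    {'start': pd.Timestamp('2025-01-01 10:00:00'), 'end': pd.Timestamp('2025-01-01 10:00:15.12345678')},
    {'start': pd.Timestamp('2025-01-01 10:00:30.999999'), 'end': pd.Timestamp('2025-01-01 10:00:45')},
])
# .dt.total_seconds() returns float64 with NaN for the missing value,
# so .round() works without the object-dtype detour:
(df['start'].shift(-1) - df['end']).dt.total_seconds().round(1)
```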
### Installed Versions
<details>
pd.show_versions()
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.12.10
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 186 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : de_DE.cp1252
pandas : 2.3.0
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : 9.3.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : 1.5.0
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.4.0
matplotlib : 3.10.3
numba : 0.61.2
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : 1.0.10
s3fs : 2025.5.1
scipy : None
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Needs Discussion"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@SSchleehauf anything you can do to trim down the example to focus on the relevant issue would be helpful (https://matthewrocklin.com/minimal-bug-reports.html)",
"@jorisvandenbossche i think this would be addressed by implementing `NA.__round__()` to return NA. Thoughts?",
"I added a minimal example below. After breaking it down, I think the real problem is **dtype: object** instead of **dtype: float64** . This is caused by the use of **pd.NA.**. \n\nProbably the title should be changed to : \"_The use of pd.NA in apply prevents automatic casting and results in dtype: object of the resulting Series_\"\n\n\n\n```python\nimport pandas as pd\npd.__version__\n```\n\nRounding of NaT is working properly:\n\n\n```python\nNaT_time_delta = pd.Timestamp('2025-01-01 3:10:33.1234567') - pd.NaT\nNaT_time_delta, type(NaT_time_delta)\n```\n\n\n\n\n (NaT, pandas._libs.tslibs.nattype.NaTType)\n\n\n\n\n```python\nNaT_time_delta.round('1min')\n```\n\n\n\n\n NaT\n\n\n\nThis works for Series as well:\n\n\n```python\nseries = pd.Series([pd.Timestamp('2025-01-01 3:10:33.1234567') , pd.Timestamp('2025-01-01')])\nseries\n```\n\n\n\n\n 0 2025-01-01 03:10:33.123456700\n 1 2025-01-01 00:00:00.000000000\n dtype: datetime64[ns]\n\n\n\n\n```python\nseries - series.shift(-1)\n```\n\n\n\n\n 0 0 days 03:10:33.123456700\n 1 NaT\n dtype: timedelta64[ns]\n\n\n\n\n```python\n(series - series.shift(-1)).apply(lambda x: x.round('min'))\n```\n\n\n\n\n 0 0 days 03:11:00\n 1 NaT\n dtype: timedelta64[ns]\n\n\n\nWorking with seconds and automatomatic casting to float\n\n\n```python\n(series - series.shift(-1)).apply(lambda x: x.total_seconds())\n```\n\n\n\n\n 0 11433.123456\n 1 NaN\n dtype: float64\n\n\n\n\n```python\n(series - series.shift(-1)).apply(lambda x: x.total_seconds()).round(1)\n```\n\n\n\n\n 0 11433.1\n 1 NaN\n dtype: float64\n\n\n\n**Probably the acctual cause of the problem is the use of ```pd.NA``` in the if-else statement resulting in _dtype: object_**\n\n\n```python\n(series - series.shift(-1)).apply(lambda x: pd.NA if pd.isna(x) else x.total_seconds())\n```\n\n\n\n\n 0 11433.123456\n 1 <NA>\n dtype: object\n\n\n\n\n```python\n(series - series.shift(-1)).apply(lambda x: pd.NA if pd.isna(x) else x.total_seconds()).round(1)\n```\n\n\n ---------------------------------------------------------------------------\n\n TypeError Traceback (most recent call last)\n\n Cell In[19], line 1\n ----> 1 (series - series.shift(-1)).apply(lambda x: pd.NA if pd.isna(x) else x.total_seconds()).round(1)\n\n\n File ~/uvenv/.venv/lib/python3.12/site-packages/pandas/core/series.py:2818, in Series.round(self, decimals, *args, **kwargs)\n 2816 nv.validate_round(args, kwargs)\n 2817 if self.dtype == \"object\":\n -> 2818 raise TypeError(\"Expected numeric dtype, got object instead.\")\n 2819 new_mgr = self._mgr.round(decimals=decimals, using_cow=using_copy_on_write())\n 2820 return self._constructor_from_mgr(new_mgr, axes=new_mgr.axes).__finalize__(\n 2821 self, method=\"round\"\n 2822 )\n\n\n TypeError: Expected numeric dtype, got object instead.\n\n\n**It looks like the wrong type conversion is due to pd.NA, for float-nan and numpy-nan (might be the same) things work fine:**\n\n\n```python\n(series - series.shift(-1)).apply(lambda x: 42 if pd.isna(x) else x.total_seconds())\n```\n\n\n\n\n 0 11433.123456\n 1 42.000000\n dtype: float64\n\n\n\n\n```python\n(series - series.shift(-1)).apply(lambda x: float('nan') if pd.isna(x) else x.total_seconds())\n```\n\n\n\n\n 0 11433.123456\n 1 NaN\n dtype: float64\n\n\n\n\n```python\nimport numpy as np\n\n(series - series.shift(-1)).apply(lambda x: np.nan if pd.isna(x) else x.total_seconds())\n```\n\n\n\n\n 0 11433.123456\n 1 NaN\n dtype: float64\n\n\n\n### For pyarrow I am not sure if I am Using the correct na value\n\n\n```python\nimport pyarrow as pa\n\n(series - 
series.shift(-1)).apply(lambda x: pa.NA if pd.isna(x) else x.total_seconds())\n```\n\n\n\n\n 0 11433.123456\n 1 None\n dtype: object\n\n\n\n\n```python\n\n```",
"@SSchleehauf - in crafting a minimal bug report, always take your last operation and produce it directly.\n\n```python\nser = pd.Series([0.5, pd.NA])\nprint(ser.round(0))\n\n# 0 0.5\n# 1 <NA>\n# dtype: object <--- pandas 2.2.x\n\n# TypeError: Expected numeric dtype, got object instead. <-- main\n```\n\nOnly if this does not reproduce the issue should you then add prior operations.",
"Hi, I’ve read through the discussion here and I’d like to try tackling this issue if it’s still available.\n\nI see that the error seems to stem from how np.round is being used on extension types like Int64 with NaN, particularly since np.round returns a float instead of preserving the dtype. My plan is to first reproduce the error as described, then explore a fix that either avoids calling np.round directly on these extension arrays or ensures the type and NaN handling are preserved properly.\n\nPlease let me know if I can be assigned to this. Thanks!",
"take"
] |
3,178,587,943
| 61,711
|
Writing data to mysql database using df.to_sql method gives exception
|
open
| 2025-06-26T10:10:16
| 2025-07-15T21:40:06
| null |
https://github.com/pandas-dev/pandas/issues/61711
| true
| null | null |
MohammadHilal1
| 1
|
In my airflow project I am trying to load data into a MySQL database using the `df.to_sql` method, but it gives me this exception:
```
AttributeError: 'Connection' object has no attribute 'cursor'
```
The code I am trying to execute is:
```python
import json

import pandas as pd
from sqlalchemy import create_engine


def load_market_data(flattened_df_json):
    records = json.loads(flattened_df_json)
    df = pd.DataFrame(records)
    df['from'] = pd.to_datetime(df['from']).dt.date
    df['volume'] = df['volume'].astype('Int64')
    engine = create_engine("mysql+pymysql://root:1111@localhost:3307/etl_project")
    with engine.connect() as conn:
        print("MySQL connection test:", conn.execute("SELECT 1").scalar())
    try:
        with engine.begin() as connection:
            df.to_sql(name="market_data", con=connection, if_exists="append", index=False)
        print("✅ Data loaded successfully")
    except Exception as e:
        print("Exception while inserting to db:", str(e))
        raise
```
pandas version is 2.3.0
sqlalchemy version is 1.4.54
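Given those versions, the likely cause is SQLAlchemy 1.4.x: pandas 2.x only supports SQLAlchemy >= 2.0 for `to_sql` (see the comment below). A quick check:
```python
import sqlalchemy

# pandas 2.3 requires SQLAlchemy >= 2.0; upgrade with:
#   pip install --upgrade "sqlalchemy>=2.0"
print(sqlalchemy.__version__)
```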
|
[
"IO SQL"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hey @MohammadHilal1 \n\nThis is not a pandas issue. Pandas is only compatible with `SQLAlchemy` versions greater than 2.0.0."
] |
3,178,378,120
| 61,710
|
ENH: Enabled prefix, suffix, and sep to DataFrame.shift
|
closed
| 2025-06-26T08:59:20
| 2025-06-30T17:51:25
| 2025-06-30T17:51:24
|
https://github.com/pandas-dev/pandas/pull/61710
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61710
|
https://github.com/pandas-dev/pandas/pull/61710
|
RUTUPARNk
| 1
|
…e periods (#61696)
- [X] closes #61696
(Replace 61696 with the GitHub issue number)
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
- Enabled `prefix`, `suffix`, and `sep` arguments to `DataFrame.shift` for iterable periods
- Now it's cleaner and customizable column renaming
- Introduced new test in `test_shift_with_iterable_check_other_arguments`
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks you for the PR, but the original issue needs to be triaged by the team and agreed upon to pursue before opening a PR so going to close until those items are done"
] |
3,177,539,488
| 61,709
|
BUG: Index[Float64].insert(1, False) casts False to 0
|
open
| 2025-06-26T02:47:49
| 2025-07-19T16:39:57
| null |
https://github.com/pandas-dev/pandas/issues/61709
| true
| null | null |
jbrockmendel
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
idx = pd.Index(pd.array([1., 2., 3., 4]))
>>> idx.insert(1, False)
Index([1.0, 0.0, 2.0, 3.0, 4.0], dtype='Float64')
```
### Issue Description
Discovered while adapting tests.indexing.test_coercion tests to nullable dtypes.
### Expected Behavior
To be consistent with other behavior, this should keep the False as False and cast to object.
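For reference, the NumPy-backed analogue that this should match (behaviour as I understand it; worth re-verifying on main):
```python
import pandas as pd

idx = pd.Index([1.0, 2.0, 3.0, 4.0])  # numpy-backed float64
idx.insert(1, False)
# Index([1.0, False, 2.0, 3.0, 4.0], dtype='object') -- False preserved and
# the dtype cast to object, rather than coercing False to 0.0
```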
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
|
[
"Bug",
"Dtype Conversions",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take",
"On initial inspection, this seems to be because dtype compatibility checks were bypassed when dealing with `ExtensionArray`. This coerces False -> 0.0 without warning. \n\nA workaround is to do something like\n```\npd.Index([1., 2., 3], dtype=\"object\").insert(1, False)\n```\n\nIf we need a fix for this, we can add a dtype compatibility check before inserting into `ExtensionArray`. Should I open a PR? "
] |
3,177,097,263
| 61,708
|
POC: NA-only behavior for numpy-nullable dtypes
|
closed
| 2025-06-25T22:06:31
| 2025-08-04T15:43:31
| 2025-08-04T15:43:23
|
https://github.com/pandas-dev/pandas/pull/61708
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61708
|
https://github.com/pandas-dev/pandas/pull/61708
|
jbrockmendel
| 2
|
This is the first of several POCs stemming from the discussion in #61618. The main goal is to see how invasive it would be.
Specifically, this implements the NaN behavior described in PDEP16 #58988.
Functionally this makes it so that:
1) With a Float64Dtype or Float32Dtype, you will *never* get a NaN, only a NA.
2) Users transitioning from numpy dtypes will be maximally-backwards-compatible
As a result, I expect implementing this would solve most issues labeled as "PDEP missing values". e.g. I just checked and it does address #54876.
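A concrete illustration of point 1 (the commented output is what the POC targets, not current-main behaviour):
```python
import pandas as pd

s = pd.Series([0.0, 1.0], dtype="Float64")
s / s
# element 0 is 0/0: on main a NaN can end up inside the Float64 array;
# under this POC the result would always be <NA> instead.
```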
|
[
"PDEP missing values"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Im thinking the check should be done once at dtype instantiation",
"Closing in favor of #62040"
] |
3,177,087,462
| 61,707
|
BUG: .describe() doesn't work for EAs
|
open
| 2025-06-25T22:01:01
| 2025-06-30T09:32:01
| null |
https://github.com/pandas-dev/pandas/issues/61707
| true
| null | null |
andrewgsavage
| 5
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd, pint_pandas
s = pd.Series([1, 2, 3], dtype='pint[kg]')
s.describe()
DimensionalityError Traceback (most recent call last)
...
```
### Issue Description
https://github.com/hgrecco/pint-pandas/issues/279
`Series.describe` sets the dtype for the results to `Float64Dtype` when the input is an EA. pint's `Quantity`
cannot be cast to `Float64Dtype`. https://github.com/pandas-dev/pandas/blob/35b0d1dcadf9d60722c055ee37442dc76a29e64c/pandas/core/methods/describe.py#L255
### Expected Behavior
`.describe` should return a Series of object dtype, or the dtype of the EA.
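One possible direction (a hedged sketch of the `dtype=None` fallback discussed in the comments below; not a tested patch against describe.py):
```python
import pandas as pd
from pandas.api.types import is_extension_array_dtype


def describe_ea_safe(s: pd.Series) -> pd.Series:
    # Compute the stats, then let pandas infer the result dtype instead of
    # forcing Float64, so EA scalars that can't be cast (e.g. pint Quantity)
    # fall back to object dtype.
    stats = {"count": s.count(), "mean": s.mean(), "std": s.std(),
             "min": s.min(), "max": s.max()}
    dtype = None if is_extension_array_dtype(s.dtype) else "float64"
    return pd.Series(stats, dtype=dtype)


describe_ea_safe(pd.Series([1, 2, 3], dtype="Int64"))
```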
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
|
[
"Bug",
"Needs Discussion",
"ExtensionArray"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take",
"I suggest waiting for a maintainer response before working on this. The\r\nlogic for the dtype is not trivial. For example, taking the standard\r\ndeviation a temperature Series gives a delta temperature unit, while min,\r\nmax etc have a temperature unit. These cannot be stored in the same dtype.\r\n\r\nOn Thu, Jun 26, 2025 at 12:42 PM Arjhun S ***@***.***> wrote:\r\n\r\n> *kernelism* left a comment (pandas-dev/pandas#61707)\r\n> <https://github.com/pandas-dev/pandas/issues/61707#issuecomment-3008190164>\r\n>\r\n> take\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/pandas-dev/pandas/issues/61707#issuecomment-3008190164>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ADEMLEF6FGIVG25HF6SJCBT3FPMBFAVCNFSM6AAAAACAEQEDQKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZTAMBYGE4TAMJWGQ>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"@andrewgsavage I’ve been exploring the codebase (still new here), and my initial thought is to check if the list of calculated statistics contains multiple types. If so, setting `dtype=None` would cause the result to be a Series with object dtype, which should resolve the issue. Does that sound right?",
"I think that is a good solution. I wonder how it would deal with an Series with int dtype. Should that give objects or float dtype?, since mean would give a float while mean would give int",
"> I think that is a good solution. I wonder how it would deal with an Series with int dtype. Should that give objects or float dtype?, since mean would give a float while mean would give int\n\nIn that case, it would never go into the if block checking if the series dtype is EA. It would automatically get a float type based on existing logic. Our fix would only pertain to EAs.\n\nhttps://github.com/pandas-dev/pandas/blob/35b0d1dcadf9d60722c055ee37442dc76a29e64c/pandas/core/methods/describe.py#L256-L258\n\nShould I open a PR?"
] |
3,176,241,270
| 61,706
|
WEB: add note to PDEP-10 about delayed timeline for requiring pyarrow
|
open
| 2025-06-25T16:21:35
| 2025-08-23T00:07:50
| null |
https://github.com/pandas-dev/pandas/pull/61706
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61706
|
https://github.com/pandas-dev/pandas/pull/61706
|
jorisvandenbossche
| 15
|
This is the PR that I _should_ have done a year ago after PDEP-14 got voted on (https://github.com/pandas-dev/pandas/pull/58551, voting issue at https://github.com/pandas-dev/pandas/issues/59160). With that PDEP, and at the time with the intent of releasing pandas 3.0 already somewhere last year, the plan was to add this new default string dtype that would use pyarrow, but only if installed and otherwise still fallback on the object-dtype based implementation of the string dtype.
Essentially that PDEP detailed the plan for a string dtype mentioned in PDEP-10, while also delaying the rest of PDEP-10 until after pandas 3.0 (i.e. not yet making pyarrow a required dependency for pandas 3.0).
I think that if I would have done this PR directly after the vote on PDEP-14, it would have been clear in that context and that such an amendment to a PDEP would not necessarily require a vote or a separate PDEP (since the amendment is reflecting that a subsequent PDEP changed/superseded a part of a previous PDEP).
However, given the current context with the revived discussions about defaulting to pyarrow dtypes and requiring pyarrow and the vote on the PDEP about rejecting PDEP-10 (and with this PR being done long after the PDEP-14 acceptance), I entirely understand that this PR is now a lot more controversial (all the more reason that I should have only opened this PR with proper context .. again apologies about that).
To conclude, with this PR I mostly want to illustrate what I personally would do on the short term for pandas 3.0, as I also mentioned in the PDEP-15 (reject PDEP-10) discussion two days ago at https://github.com/pandas-dev/pandas/pull/58623#issuecomment-2996630716.
(and apologies for not having done this PR much sooner .. I should have handled the public communication after PDEP-14's acceptance better)
|
[
"Web",
"Stale"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@datapythonista sorry for opening this PR without the proper context, I should just have waited with opening it until I had written that up.\r\n\r\n[moved the context I added here to the top post]\r\n\r\nEDIT: and with the above, I hope that I clarified that it is not my intent to apply double standards\r\n",
"Now that we finally could vote on PDEP-15 and know what the team wants, and since it seems like most people is happy to consider PDEP-10 officially rejected, I'm ok with this.\r\n\r\nI still think it's very poor team working to fully ignore the governance procedure when it's convenient, and use it as if we were lawyers in other cases (not that you personally did the latter).",
"Perhaps this should wait until PDEP-14 is actually implemented. The new string dtype is behind a \"future/experimental/options\" whatever we want to call it flag until we are ready to make it the default. None of the code samples in https://github.com/pandas-dev/pandas/pull/61705 raise any warnings to end users that the behavior is changing. Since PDEP-14, the team has approved PDEP-17, Backwards compatibility and deprecation policy, and we need at least 2 releases to allow \"Breaking changes should go through a deprecation cycle before being implemented if possible.\"\r\n\r\nNow, this would be a further delay to 3.0 that probably nobody wants.\r\n\r\nWe perhaps should have a milestone/gated/readiness, whatever you want to call it, review of the status of the PDEP-14 implementation. If not \"flipping the switch\" on the new dtype we could probably get 3.0 out the door tomorrow?",
"> None of the code samples in https://github.com/pandas-dev/pandas/pull/61705 raise any warnings to end users that the behavior is changing.\r\n\r\n@simonjayhawkins That is on purpose, because we are planning to do this as a breaking change (as described in PDEP-14, see its section on \"Backward compatibility\"). If you want to reconsider this, please open a separate discussion issue to bring this up. \r\n\r\n(yes, we have PDEP-17 to describe our _standard_ deprecation policy, and it mentions that breaking changes should go through a deprecation cycle _if possible_. But so there are cases where this is not possible (because it would be very hard to implement or too noisy for users), and PDEPs are especially the place to agree on such \"not possible\" cases). ",
"The first part of the comment was...\r\n\r\n> Perhaps this should wait until PDEP-14 is actually implemented.\r\n\r\nMy reasoning being that updating PDEP-10 with regard to PDEP-14 may be premature as PDEP-14 is not yet fully implemented. However, I am also all for clarifying the current situation to the community.\r\n\r\nSo wording along the lines that none of the reasons for requiring PyArrow for 3.0 are yet applicable and therefore the requirement is deferred. \r\n",
"> If you want to reconsider this, please open a separate discussion issue to bring this up.\r\n\r\nI won't open an issue about the warnings specifically, but maybe a discussion on the readiness of the dtype with a frank discussion on realistic timescales for 3.0rc release. I have made some comments in #61590 towards the goal of establishing a realistic release date.",
"> The first part of the comment was...\r\n\r\nI know, and I haven't yet responded to that. I just wanted to quickly point out a (in my mind) misunderstanding about the plan for the string dtype (that we would do a deprecation cycle for this). I don't really agree with your reasoning for delaying adding this note, but that requires some more time to write a thoughtful response.",
"> I don't really agree with your reasoning for delaying adding this note\r\n\r\nmaybe the note could be as simple as following feedback from the community the requirement has been deferred and that the pandas team still intend to deliver the benefits of PyArrow in a future release?",
"> that the pandas team still intend to deliver the benefits of PyArrow in a future release?\r\n\r\nassuming PDEP-15 is rejected of course.",
"> > that the pandas team still intend to deliver the benefits of PyArrow in a future release?\r\n> \r\n> assuming PDEP-15 is rejected of course.\r\n\r\nso @jorisvandenbossche I guess we are both guilty of getting ahead of ourselves and assuming that PDEP-15 is already rejected:)\r\n\r\n> I don't really agree with your reasoning for delaying adding this note\r\n\r\nlet's then wait for the outcome of the vote? What if PDEP-15 is accepted? Would this PR still be required?\r\n\r\nI think the vote closes on 8th July. By then we should have more clarity.",
"> as PDEP-14 is not yet fully implemented\r\n\r\nPDEP-14 is implemented, I would say, except that we indeed did not yet switch the default on main. We had discussed before to do that after releasing 2.3.0, but so that now has happened.\r\n\r\n> maybe the note could be as simple as following feedback from the community the requirement has been deferred and that the pandas team still intend to deliver the benefits of PyArrow in a future release?\r\n\r\nI am fine with rewording to something more like that. But essentially that community feedback let to the creation of PDEP-14 (which mentions this), so that is very similar in my mind (but I probably also too much with my head into it to judge what is a good wording)\r\n\r\n> I guess we are both guilty of getting ahead of ourselves and assuming that PDEP-15 is already rejected:)\r\n\r\nFWIW, I don't consider myself getting ahead of things. Of course the situation is now a bit different with the ongoing vote, but as I mentioned in the top post above, IMO I _should_ have done this PR directly after PDEP-14's acceptance, so I am actually much behind instead of ahead ;)\r\n\r\nJoking aside, sure let's wait until there is clarity around what the vote on PDEP-15 actually means ..",
"> FWIW, I don't consider myself getting ahead of things. Of course the situation is now a bit different with the ongoing vote, but as I mentioned in the top post above, IMO I _should_ have done this PR directly after PDEP-14's acceptance, so I am actually much behind instead of ahead ;)\r\n> \r\n> Joking aside, sure let's wait until there is clarity around what the vote on PDEP-15 actually means ..\r\n\r\nThat's a fair point since I assume that PDEP-15 would have been withdrawn long ago if this had been done sooner.\r\n\r\nWith our current governance only people who participated in the discussion can give a negative vote on PDEP-15, and I think most have done that. So the outcome is perhaps known even though we need to wait until we can tally the figures.",
"Thanks @jorisvandenbossche for this PR, and for getting pandas ready for 3.0.\r\n\r\nSince PDEP-15 will be rejected and PDEP-10 won't be enforced but theoretically still approved, I think it'd be good to highlight this note more. What I'd do is to add the admonition mardown extension in `web/pandas/config.yml`, and then you can use the next syntax to render as a proper note or warning (it'll probably need adding styles in our css too):\r\n\r\n```\r\n!!! warning\r\n Your message in the note.\r\n```\r\n\r\nAlso, if this note is the main place where we communicate that we changed our plans (we had the warning, social media posts..., so I'd bet many people is still expecting PyArrow in pandas 3.0), I'd provide more information. There is some in PDEP-15, but it'll be rejected, and it's outdated, so probably this note is the main reference.\r\n\r\nIn particular, I think it'll be good for users to understand WHY we changed the plans, not just letting them know that we did. I assume the main reason is the extra 288Mb in disk pandas would necessarily have if installed always with PyArrow (not sure if people had other reasons as they weren't shared in PDEP-15 vote). Also, since my guess is that just a minority of users would really care about the extra disk space, I think it'd also be good to explain why we don't simple add PyArrow to the `pandas` package, and release a second package `pandas-lite` for these users who care.\r\n\r\nI'd also prefer waiting to merge this until the vote in PDEP-15 is over. I don't expect any surprise, and I'm not a big fan of following our governance blindly when it doesn't make sense. But if this doesn't delay the release, which I don't think it does, seems a bit better to not assume the result of votes while they are still happening.",
"@jorisvandenbossche we've had some PDEPs that have their status updated to \"Implemented\". Would we also want to do that here for PDEP-14? or separate PR?",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |
3,175,897,800
| 61,705
|
DOC: add pandas 3.0 migration guide for the string dtype
|
closed
| 2025-06-25T14:32:04
| 2025-07-07T11:09:52
| 2025-07-07T11:08:56
|
https://github.com/pandas-dev/pandas/pull/61705
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61705
|
https://github.com/pandas-dev/pandas/pull/61705
|
jorisvandenbossche
| 4
|
This PR starts adding a migration guide with some typical issues one might run into regarding the new string dtype when upgrading to pandas 3.0 (or when enabling it in pandas 2.3).
(for now I just added it to the user guide, which is already a long list of pages, so we might need to think about better organizing this or putting it elsewhere)
Closes #59328
|
[
"Docs",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@simonjayhawkins thanks a lot for the proofreading!\r\n",
"/preview",
"Added three more sections based on the items listed in https://github.com/pandas-dev/pandas/issues/59328",
"cc @rhshadrach since you also already updated some code to work with the future string dtype, feel free to take a look and some feedback is certainly still welcome"
] |
3,173,322,179
| 61,704
|
DOC: update Slack invite link in community docs
|
closed
| 2025-06-24T21:15:41
| 2025-06-25T11:23:02
| 2025-06-25T11:23:02
|
https://github.com/pandas-dev/pandas/pull/61704
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61704
|
https://github.com/pandas-dev/pandas/pull/61704
|
niruta25
| 0
|
- [x] closes #61690
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
-- tested manually
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
The Slack link was expiring every 14 days; the new link from #61690 is set to never expire.
|
[
"Docs",
"Community"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,173,201,508
| 61,703
|
TST: Refactor S3 tests
|
closed
| 2025-06-24T20:24:49
| 2025-06-30T23:42:51
| 2025-06-30T23:40:45
|
https://github.com/pandas-dev/pandas/pull/61703
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61703
|
https://github.com/pandas-dev/pandas/pull/61703
|
fangchenli
| 1
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Testing",
"IO Network"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @fangchenli "
] |
3,172,829,883
| 61,702
|
DOC: simplify theme footer config (fixes #60647)
|
closed
| 2025-06-24T17:56:34
| 2025-06-25T15:56:38
| 2025-06-25T15:55:40
|
https://github.com/pandas-dev/pandas/pull/61702
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61702
|
https://github.com/pandas-dev/pandas/pull/61702
|
AswathyAZ
| 1
|
This PR removes custom footer settings in `conf.py` to use the new default footer provided by the updated `pydata-sphinx-theme`.
- Commented out: `footer_start`
- Added: Contribution_plan.md file
Fixes #60647
|
[
"Closing Candidate"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This appears similar to https://github.com/pandas-dev/pandas/pull/61685 which is unnecessary"
] |
3,172,752,088
| 61,701
|
Issue #28283, Finalize coverage for DataFrame.merge
|
closed
| 2025-06-24T17:29:09
| 2025-06-24T19:03:05
| 2025-06-24T19:03:05
|
https://github.com/pandas-dev/pandas/pull/61701
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61701
|
https://github.com/pandas-dev/pandas/pull/61701
|
niruta25
| 1
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Issue #28283
# Evaluation and Solution Summary
## Issue Analysis:
The GitHub issue #28283 is about improving the coverage of NDFrame.__finalize__ in pandas. Specifically, many pandas methods (including DataFrame.merge) don't properly call __finalize__ to propagate metadata like attrs and flags from input DataFrames to the result.
## Problem:
When you perform a merge operation on DataFrames that have metadata (stored in .attrs), the resulting DataFrame loses this metadata because the merge methods don't call __finalize__.
## Solution Components:
Core Fix: Modify the merge-related functions in pandas to call __finalize__ after creating the result DataFrame.
Key Files to Modify:
- pandas/core/frame.py - DataFrame.merge method
- pandas/core/reshape/merge.py - merge and merge_asof functions
- pandas/tests/generic/test_finalize.py - Add comprehensive tests
## Implementation Strategy:
- Add result.__finalize__(left, method="merge") calls after merge operations (sketched below)
- Use the left DataFrame as the primary source for metadata propagation
- Ensure all merge variants (inner, outer, left, right, asof) are covered
- Handle both DataFrame-DataFrame and DataFrame-Series merges
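A minimal sketch of that pattern (assuming the assembled merge result is held in `result` and the left input in `left`; the exact call site in `pandas/core/reshape/merge.py` is left open):

```python
# After the joined frame is assembled inside the merge code path,
# propagate metadata (attrs/flags) from the left input:
result = result.__finalize__(left, method="merge")
```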
## Testing Strategy:
- Test all merge types (inner, outer, left, right)
- Test index-based merges
- Test merges with suffixes
- Test merge_asof functionality
- Test DataFrame-Series merges
## Benefits of the Fix:
- Preserves important metadata during merge operations
- Maintains consistency with other pandas operations that already call __finalize__
- Enables better data lineage tracking
- Supports custom metadata propagation workflows
## Implementation Notes:
- The fix follows pandas' existing pattern of calling __finalize__ in similar operations
- Metadata conflicts are resolved by preferring the left DataFrame's attributes
- The solution is backward compatible and doesn't change the existing API
- Performance impact is minimal since __finalize__ is only called once per operation
This solution addresses the specific DataFrame.merge part of the broader issue #28283 and provides a template for fixing other methods mentioned in the issue.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Closing as was opened by mistake."
] |
3,172,420,709
| 61,700
|
ENH: When read_csv reports an error, include the column name and row number
|
open
| 2025-06-24T15:24:31
| 2025-07-20T07:53:11
| null |
https://github.com/pandas-dev/pandas/issues/61700
| true
| null | null |
johann-petrak
| 3
|
### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
When reading a CSV file that has been written with pandas I get the error message:
`TypeError: 'str' object cannot be interpreted as an integer`
This probably happens because some value in some column has an unexpected type. But with a dataframe of hundreds of columns and tens of thousands of rows, how is one supposed to find out where the problem lies?
Pandas obviously knows exactly the column and row, but does not care to tell the user.
Please give this information to the user; it is pretty basic practice for an error message to provide everything needed to figure out how to fix the problem!
### Feature Description
Give the details about which column and row in the data causes the problem
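In the meantime, a user-side workaround is to wrap each converter so that a failure at least names the column and the offending value; a sketch, reusing the `test1.tsv` file from the example in the comments below (`reporting` is a hypothetical helper, and since converters are applied per value, the row number is not directly available this way):

```python
import pandas as pd

def reporting(converter, column):
    # Hypothetical wrapper: re-raise converter failures with the
    # column name and the offending value attached.
    def wrapped(value):
        try:
            return converter(value)
        except Exception as exc:
            raise ValueError(
                f"column {column!r}: could not convert {value!r}"
            ) from exc
    return wrapped

df2 = pd.read_csv("test1.tsv", sep="\t", converters={"c": reporting(int, "c")})
```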
### Alternative Solutions
Fiddling around endlessly.
### Additional Context
_No response_
|
[
"Enhancement",
"Error Reporting",
"IO CSV",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take",
"Hi @johann-petrak , could you provide a reproducible example? I think pandas might not directly raise this error.",
"This happens when data is read and a converter is specified. You are right, in this case the error message was from a converter that did something unusual, but the problem really shows, whenever there is an exception thrown in any converter. Here is a simple example with just \"int\" as a converter:\n\n```\nimport pandas as pd\n\ndata = dict(a=[1,2,3,4,5], b=[\"a\", \"b\", \"c\", \"d\", \"e\"], c=[1,2,3,\"asdf\",5])\ndf = pd.DataFrame.from_dict(data)\ndf.to_csv(\"test1.tsv\", sep=\"\\t\", index=False)\ndf2 = pd.read_csv(\"test1.tsv\", sep=\"\\t\", converters=dict(c=int))\n```\n\nThis produces the following traceback:\n\n```\nTraceback (most recent call last):\n File \"/home/johann/tmp/pandas-issue61700/test1.py\", line 6, in <module>\n df2 = pd.read_csv(\"test1.tsv\", sep=\"\\t\", converters=dict(c=int))\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/johann/software/anaconda/envs/pandas/lib/python3.12/site-packages/pandas/io/parsers/readers.py\", line 1026, in read_csv\n return _read(filepath_or_buffer, kwds)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/johann/software/anaconda/envs/pandas/lib/python3.12/site-packages/pandas/io/parsers/readers.py\", line 626, in _read\n return parser.read(nrows)\n ^^^^^^^^^^^^^^^^^^\n File \"/home/johann/software/anaconda/envs/pandas/lib/python3.12/site-packages/pandas/io/parsers/readers.py\", line 1923, in read\n ) = self._engine.read( # type: ignore[attr-defined]\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/johann/software/anaconda/envs/pandas/lib/python3.12/site-packages/pandas/io/parsers/c_parser_wrapper.py\", line 234, in read\n chunks = self._reader.read_low_memory(nrows)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"pandas/_libs/parsers.pyx\", line 838, in pandas._libs.parsers.TextReader.read_low_memory\n File \"pandas/_libs/parsers.pyx\", line 921, in pandas._libs.parsers.TextReader._read_rows\n File \"pandas/_libs/parsers.pyx\", line 1045, in pandas._libs.parsers.TextReader._convert_column_data\n File \"pandas/_libs/parsers.pyx\", line 2116, in pandas._libs.parsers._apply_converter\nValueError: invalid literal for int() with base 10: 'asdf'\n```\n\nHere again, pandas does not give any indication in which row and for which column the problem was encountered. \nI cannot imagine it being to hard to catch the problem in the pandas code that calls the converter and show an error message that indicates the row and column info and then let the exception bubble up as before? "
] |
3,171,030,077
| 61,699
|
[backport 2.3.x] BUG: DataFrame.explode fails with str dtype (#61623)
|
closed
| 2025-06-24T08:50:45
| 2025-06-24T13:46:52
| 2025-06-24T13:46:47
|
https://github.com/pandas-dev/pandas/pull/61699
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61699
|
https://github.com/pandas-dev/pandas/pull/61699
|
jorisvandenbossche
| 0
|
Backport of https://github.com/pandas-dev/pandas/pull/61623
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,171,013,440
| 61,698
|
DOC: move relevant whatsnew changes from 2.3.0 to 2.3.1 file
|
closed
| 2025-06-24T08:46:27
| 2025-07-01T11:47:46
| 2025-06-30T17:51:54
|
https://github.com/pandas-dev/pandas/pull/61698
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61698
|
https://github.com/pandas-dev/pandas/pull/61698
|
jorisvandenbossche
| 3
|
Follow-up on https://github.com/pandas-dev/pandas/pull/61654, now moving some content from 2.3.0 to the 2.3.1 file for changes that only got backported for 2.3.1
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jorisvandenbossche ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 dc1e367598a6b0b2c0ee700b3805f72aaccbda86\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61698: DOC: move relevant whatsnew changes from 2.3.0 to 2.3.1 file'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61698-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61698 on branch 2.3.x (DOC: move relevant whatsnew changes from 2.3.0 to 2.3.1 file)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Backport -> https://github.com/pandas-dev/pandas/pull/61751"
] |
3,169,779,099
| 61,697
|
TST: Increase test coverage for pandas.io.formats.excel.py
|
closed
| 2025-06-23T23:59:42
| 2025-06-25T15:58:04
| 2025-06-25T15:57:57
|
https://github.com/pandas-dev/pandas/pull/61697
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61697
|
https://github.com/pandas-dev/pandas/pull/61697
|
lsgordon
| 1
|
Sorry for the appalling number of commits. Git was being unkind: I accidentally pushed .venv/ to gitignore, and when I tried to revert, it would re-include the venv in the commit; you get the picture. It won't happen again.
The purpose of this PR is to add some test coverage to the first class of pandas.io.formats.excel.py, in the CSSToExcelConverter class. This class is missing some coverage, and there were a few unused functions and lines of code that were also causing some code coverage problems, so they have been dealt with.
<img width="1345" alt="Screenshot 2025-06-23 at 6 58 27 PM" src="https://github.com/user-attachments/assets/9610b4c6-ed5f-4323-9a9b-76de3ae0dcbd" />
<img width="1338" alt="Screenshot 2025-06-23 at 6 59 14 PM" src="https://github.com/user-attachments/assets/11a159de-3ca4-40a9-ade0-49347cf12157" />
<img width="1303" alt="Screenshot 2025-06-23 at 6 59 24 PM" src="https://github.com/user-attachments/assets/e3f967c1-9ef4-4927-b070-1c8af9ba00d1" />
These screenshots show the areas I am adding coverage to.
|
[
"Testing",
"IO Excel"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @lsgordon "
] |
3,168,649,277
| 61,696
|
ENH: Add prefix, suffix and sep arguments to shift method
|
open
| 2025-06-23T15:42:13
| 2025-06-24T02:56:08
| null |
https://github.com/pandas-dev/pandas/issues/61696
| true
| null | null |
ArturoSbr
| 2
|
### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Hi all!
I'm currently working on a forecasting model and had to create multiple lags of many columns. Doing this made me realize that the `shift` method does not have `prefix`, `prefix_sep`, `suffix` nor `suffix_sep` arguments.
I think adding some (or all) of these arguments would be super useful and could help standardize the method with others such as `pd.get_dummies`. Additionally, this is already implemented to some extent because when `periods` is a list, it adds a suffix to each lagged column.
### Feature Description
Obviously this is redundant because the method calls itself, but I think it conveys the idea well.
Suppose `suffix` and `suffix_sep` are strings (eg `'lag'` and `'_'`) and that `columns` is an iterable.
```
if suffix and suffix_sep:
for column in columns:
for period in periods:
            data[f'{column}{suffix_sep}{suffix}{period}'] = data[column].shift(period)
```
### Alternative Solutions
Here's what I'm currently doing to add `_lagX` as a suffix:
```
lags = [1, 2, 3, 6, 9, 12]
_temp = df[cols_og_feats].shift(periods=lags) # Lag each column by each lag in lags
_temp.columns = [
'_'.join(col.split('_')[:-1]) + '_lag' + col.split('_')[-1] for col in _temp.columns
] # add '_lagX' suffix
```
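For completeness, a self-contained sketch of the proposed behaviour as a helper function (the `add_lags` name and the `suffix`/`suffix_sep` parameters are illustrative, not an existing API):

```python
import pandas as pd

def add_lags(df, columns, periods, suffix="lag", suffix_sep="_"):
    # Build every lagged column at once, named '<col><sep><suffix><n>'.
    lagged = pd.concat(
        {
            f"{col}{suffix_sep}{suffix}{p}": df[col].shift(p)
            for col in columns
            for p in periods
        },
        axis=1,
    )
    return pd.concat([df, lagged], axis=1)
```

Usage would then be, e.g., `df = add_lags(df, cols_og_feats, lags)`.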
### Additional Context
_No response_
|
[
"Enhancement",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"lags = [1, 2, 3]\ncols = ['sales', 'revenue']\n\n# Efficiently create lagged columns with suffix `_lag{n}`\ndf_lagged = pd.concat(\n [df[col].shift(lag).rename(f\"{col}_lag{lag}\") for col in cols for lag in lags],\n axis=1\n)\n\n# Optionally combine with original dataframe\ndf = pd.concat([df, df_lagged], axis=1)\n",
"Thank you!\nI see what you mean @sajansshergill but I wanted to pass an iterable to the method.\nIs list comp more efficient than passing an iterable?"
] |
3,167,964,670
| 61,695
|
Create contribution plan_1
|
closed
| 2025-06-23T12:09:49
| 2025-06-23T19:26:49
| 2025-06-23T19:26:48
|
https://github.com/pandas-dev/pandas/pull/61695
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61695
|
https://github.com/pandas-dev/pandas/pull/61695
|
harinarayananmastech
| 1
|
As part of a use case presentation, the following use case has been completed.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Not sure if this is an assignment, but you contribution plans may be for your instructor, not the pandas repo. Feel free to open a PR here once you have the intended changes to the code."
] |
3,167,776,915
| 61,694
|
Raise MergeError on mismatched signed/unsigned int merge keys
|
open
| 2025-06-23T11:08:47
| 2025-07-26T00:08:57
| null |
https://github.com/pandas-dev/pandas/pull/61694
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61694
|
https://github.com/pandas-dev/pandas/pull/61694
|
RITAMIT2023
| 2
|
This resolves issue - https://github.com/pandas-dev/pandas/issues/61688
|
[
"Bug",
"Reshaping",
"Stale"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"pre-commit.ci autofix",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |
3,167,743,997
| 61,693
|
61636
|
closed
| 2025-06-23T10:56:48
| 2025-06-23T19:24:32
| 2025-06-23T19:24:31
|
https://github.com/pandas-dev/pandas/pull/61693
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61693
|
https://github.com/pandas-dev/pandas/pull/61693
|
Ranjana-babu
| 1
|
This PR addresses [#61636](https://github.com/pandas-dev/pandas/issues/61636), which reports inconsistent dtype coercion during groupby aggregation on PyArrow-backed DataFrames. Specifically, aggregations like 'sum' or 'first' on columns with Arrow dtypes (e.g., int32, uint64) may return outputs with unexpected pandas-native dtypes like float64.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @Ranjana-babu. You should submit to this repo the final changes to fix the bug. Not sure who this contribution plan is addess to, but not us. I'll close this PR, feel free to open a new one when you fixed the bug."
] |
3,167,356,484
| 61,692
|
ENH: Add `excel_sep_hint` parameter to `to_csv` for Excel compatibility
|
open
| 2025-06-23T08:56:08
| 2025-06-23T19:50:54
| null |
https://github.com/pandas-dev/pandas/issues/61692
| true
| null | null |
EwoutH
| 1
|
### Feature Type
- [x] Adding new functionality to pandas
### Problem Description
It would be great if Pandas could generate CSV files that Excel automatically opens with the correct delimiter. When using semicolons or other non-comma separators, Excel often opens CSV files with all data in one column unless a `sep=` hint is present at the beginning of the file. Currently, users must write custom code to add this hint.
### Feature Description
Add a boolean parameter `excel_sep_hint` to `to_csv()` that prepends a delimiter hint for Excel:
```python
def to_csv(self, ..., excel_sep_hint=False):
"""
excel_sep_hint : bool, default False
If True, prepend 'sep=' line to help Excel detect delimiter
"""
```
Usage:
```python
df.to_csv('data.csv', sep=';', excel_sep_hint=True)
```
Output:
```
sep=;
col1;col2;col3
val1;val2;val3
```
### Alternative Solutions
Users currently write wrapper functions or handle files manually:
```python
with open('file.csv', 'w') as f:
f.write('sep=;\n')
df.to_csv('file.csv', sep=';', mode='a')
```
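A reusable version of that workaround (the `to_csv_excel` name and signature are illustrative, not an existing pandas API):

```python
import pandas as pd

def to_csv_excel(df: pd.DataFrame, path: str, sep: str = ";", **kwargs) -> None:
    # Write the Excel 'sep=' hint first, then stream the CSV body
    # through the same handle so the file is written in one pass.
    encoding = kwargs.pop("encoding", "utf-8")
    with open(path, "w", encoding=encoding, newline="") as f:
        f.write(f"sep={sep}\n")
        df.to_csv(f, sep=sep, **kwargs)
```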
### Additional Context
The `sep=` hint is a standard Excel feature for delimiter detection. This would improve pandas-Excel interoperability, especially useful in European locales where semicolons are common CSV separators.
|
[
"Enhancement",
"IO CSV",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"def to_csv(\n self,\n path_or_buf=None,\n sep=',',\n ...,\n excel_sep_hint=False,\n):\n \"\"\"\n Parameters\n ----------\n excel_sep_hint : bool, default False\n If True, prepend a 'sep={sep}\\n' line to help Microsoft Excel detect the delimiter automatically.\n \"\"\"\n\ndf.to_csv(\"data.csv\", sep=\";\", excel_sep_hint=True)\n\n\n\n- Alternative Solutions (Current Workarounds)\nSince pandas doesn’t support this natively yet, users often do something like:\n\nwith open('data.csv', 'w', encoding='utf-8') as f:\n f.write('sep=;\\n')\n df.to_csv(f, sep=';', index=False)\n\n- Or wrap this logic in a helper function:\n\ndef to_csv_with_excel_hint(df, filename, sep=';', **kwargs):\n with open(filename, 'w', encoding=kwargs.get('encoding', 'utf-8')) as f:\n f.write(f'sep={sep}\\n')\n df.to_csv(f, sep=sep, **kwargs)\n"
] |
3,167,153,125
| 61,691
|
Proposal: Add pd.check(df) utility function for quick dataset diagnostics
|
open
| 2025-06-23T07:49:28
| 2025-06-24T21:45:39
| null |
https://github.com/pandas-dev/pandas/issues/61691
| true
| null | null |
CS-Ponkoj
| 1
|
### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
While working with pandas DataFrames during exploratory data analysis (EDA), analysts frequently perform the same manual steps to understand their dataset:
- Count null and non-null values
- Check unique value counts
- Estimate missing percentages
These operations are often repeated multiple times, especially after data cleaning, filtering, or merging. Currently, users rely on combinations like:
```
df.isnull().sum()
df.nunique()
df.notnull().sum()
```
There is no single built-in pandas utility that offers this all-in-one diagnostic view.
### Feature Description
Add a utility function pd.check(df) that returns a concise column-wise summary of a DataFrame’s structure, including:
- Unique values per column
- Non-null value counts
- Missing value counts
- Missing percentages (rounded to 2 decimals by default)
This function is designed to streamline early-stage exploratory data analysis by combining multiple common pandas operations into one, reusable utility.
Suggested API:
```python
def check(df: pd.DataFrame, round_digits: int = 2) -> pd.DataFrame:
    ...
```
- Optional round_digits parameter to control percentage precision
- Returns a pandas DataFrame
- No side effects (no printing)
- Aligns well with other utility functions like pd.describe()
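A minimal sketch of such a function under the proposed signature (an illustration, not the package's actual implementation; it assumes a non-empty frame):

```python
import pandas as pd

def check(df: pd.DataFrame, round_digits: int = 2) -> pd.DataFrame:
    # Combine the usual column-wise EDA lookups into one summary frame.
    summary = pd.DataFrame(
        {
            "unique": df.nunique(),
            "non_null": df.notnull().sum(),
            "missing": df.isnull().sum(),
        }
    )
    summary["missing_pct"] = (summary["missing"] / len(df) * 100).round(round_digits)
    return summary
```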
### Alternative Solutions
There are existing pandas functions like:
- `df.info()` – shows non-null counts and data types
- `df.describe()` – provides statistical summaries (only for numeric data)
- `df.isnull().sum()` – shows missing values per column
- `df.nunique()` – shows unique counts
However, none of these provide a combined summary in a single DataFrame format. Users must manually combine several operations, which can be repetitive and error-prone.
Third-party options:
**pandas-profiling** and **sweetviz** offer full data profiling, but they are heavyweight, generate HTML reports, and are not ideal for lightweight inspection or script-based pipelines.
My package [pandas_eda_check](https://pypi.org/project/pandas-eda-check/) implements this specific summary cleanly and could be a minimal addition to pandas.
### Additional Context
Why in pandas?
- Aligns with pandas’ mission of being a one-stop shop for tabular data operations
- Adds convenience and consistency to common EDA workflows
- Minimal overhead and easy to implement
- Could serve as a precursor to a more comprehensive eda submodule in the future
Reference Implementation
I've implemented this in an open-source utility here:
🔗 https://github.com/CS-Ponkoj/pandas_eda_check
PyPI: https://pypi.org/project/pandas-eda-check/
Open to Feedback
I’d love to hear from the maintainers and community about:
- Whether this function aligns with pandas’ philosophy
- Suggestions to improve API or return format
- If accepted, I’m happy to submit a PR with tests and docs
Thanks for your time and consideration.
Ponkoj Shill
PhD Candidate, ML Engineer
Email: [[email protected]](mailto:[email protected])
|
[
"Enhancement",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take"
] |
3,166,721,615
| 61,690
|
DOC: The Slack invite link to join the Pandas Dev community is broken
|
closed
| 2025-06-23T04:24:46
| 2025-06-25T11:23:03
| 2025-06-25T11:23:03
|
https://github.com/pandas-dev/pandas/issues/61690
| true
| null | null |
ButteryPaws
| 3
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/development/community.html#community-slack
### Documentation problem
This issue was raised before #61298 but I do not have permission to reopen the issue. On clicking the link, one lands on [this page](https://pandas-dev-community.slack.com/join/shared_invite/zt-2blg6u9k3-K6_XvMRDZWeH7Id274UeIg#/shared-invite/error)

### Suggested fix for documentation
Someone from the Slack admin team needs to update the link to the documentation.
|
[
"Docs",
"Needs Triage",
"Community"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for noticing! \nDoes this link work? https://join.slack.com/t/pandas-dev-community/shared_invite/zt-3813u5fme-hmp5izpbeFl9G8~smrkE~A",
"take",
"@jorisvandenbossche This link does work. Can we close the issue once @niruta25's PR is merged?\nThanks and Regards. "
] |
3,166,069,126
| 61,689
|
GÜL
|
closed
| 2025-06-22T16:00:53
| 2025-06-22T16:01:10
| 2025-06-22T16:01:10
|
https://github.com/pandas-dev/pandas/pull/61689
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61689
|
https://github.com/pandas-dev/pandas/pull/61689
|
GulAkkoca
| 0
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,165,985,470
| 61,688
|
BUG: Merge duplicates and validation failure when columns have type int64 and uint64
|
open
| 2025-06-22T14:00:46
| 2025-07-21T21:45:12
| null |
https://github.com/pandas-dev/pandas/issues/61688
| true
| null | null |
mratkin0
| 3
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
da = pd.DataFrame()
db = pd.DataFrame()
da["t"] = np.array([1721088000012322083, 1721088047408560273, 1721088047408560451], dtype=np.int64) # Note different types here
db["t"] = np.array([1721088000012322083, 1721088047408560273, 1721088047408560451], dtype=np.uint64) # Note different types here
da["i"] = 1
db["i"] = 1
da["p"] = [3, 6, 2]
db["q"] = [1, 2, 2]
print(pd.merge(da, db, on=["i", "t"], how="left", validate="1:1"))
print(pd.merge(da, db, on=["t"], how="left", validate="1:1"))
```
### Issue Description
Running the example produces some very strange results:
The first print returns:
```
                     t  i  p  q
0  1721088000012322083  1  3  1
1  1721088047408560273  1  6  2
2  1721088047408560273  1  6  2
3  1721088047408560451  1  2  2
4  1721088047408560451  1  2  2
```
Firstly, I wouldn't expect a collision of join keys from an implicit cast between uint64 and int64. Even allowing for this, the collision doesn't trigger the validate='1:1' check.
Stranger still, it seems that if you drop the first trivial join key, the merge is clean!
```
                     t  i_x  p  i_y  q
0  1721088000012322083    1  3    1  1
1  1721088047408560273    1  6    1  2
2  1721088047408560451    1  2    1  2
```
### Expected Behavior
I would expect the output to be:
```
                     t  i_x  p  i_y  q
0  1721088000012322083    1  3    1  1
1  1721088047408560273    1  6    1  2
2  1721088047408560451    1  2    1  2
```
in both cases or for validate to throw in the first case.
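Until the underlying casting is fixed, one workaround is to align the key dtypes explicitly before merging, so no lossy signed/unsigned promotion happens inside `merge` (a sketch; the cast is safe here only because all values fit in the int64 range):

```python
db["t"] = db["t"].astype("int64")  # assumption: no value exceeds the int64 range
clean = pd.merge(da, db, on=["i", "t"], how="left", validate="1:1")
```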
### Installed Versions
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.12.11
python-bits : 64
OS : Linux
OS-release : 6.1.0-37-amd64
Version : #1 SMP PREEMPT_DYNAMIC Debian 6.1.140-1 (2025-05-22)
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.3.0
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 9.3.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.3
numba : 0.61.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
|
[
"Bug",
"Reshaping",
"Dtype Conversions",
"Needs Discussion"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take",
"I'm wondering if this should be deprecated and pandas should raise when the user tries to merge on unsigned/signed. There are two different strategies I can see users desiring (they are different so count as a miss vs they are the same after converting so count as a hit) and I think we should have them resolve prior to the merge.",
"Even if we do go this route, it might be fruitful to look into why the 1:1 validation is failing."
] |
3,165,826,764
| 61,687
|
BUG: DataFrame.mul() corrupts data by setting values to zero
|
closed
| 2025-06-22T10:03:03
| 2025-07-23T09:12:16
| 2025-07-23T09:12:16
|
https://github.com/pandas-dev/pandas/issues/61687
| true
| null | null |
fheisigx
| 6
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
import sys
# Create DataFrame with datetime index and multiple columns
# This reproduces the bug with ~6 years of hourly data (2033-2038)
np.random.seed(42)
date_range = pd.date_range('2033-01-01', '2038-12-31 23:00:00', freq='H')
n_cols = 40
data = np.random.rand(len(date_range), n_cols) * 0.1 # Values between 0 and 0.1
df = pd.DataFrame(data, index=date_range, columns=range(n_cols))
# Create a Series of ones with the same index
ones_series = pd.Series(1.0, index=df.index)
print(f"DataFrame shape: {df.shape}")
print(f"Memory usage (MB): {df.memory_usage(deep=True).sum() / 1024**2:.2f}")
print(f"Original data sample (should be > 0):")
print(df.iloc[32:37, 23]) # Show some sample values
# Perform the multiplication that causes corruption
print("\nPerforming multiplication...")
result = df.mul(ones_series, axis=0)
# Check for corruption
print(f"After multiplication (should be identical):")
print(result.iloc[32:37, 23])
# Verify corruption
are_equal = df.equals(result)
print(f"\nDataFrames equal: {are_equal}")
if not are_equal:
# Count corrupted values
diff_mask = df.values != result.values
n_corrupted = diff_mask.sum()
print(f"CORRUPTION DETECTED: {n_corrupted} values corrupted!")
# Show corruption details
corrupted_rows, corrupted_cols = np.where(diff_mask)
if len(corrupted_rows) > 0:
print(f"\nCorruption sample:")
for i in range(min(5, len(corrupted_rows))):
row, col = corrupted_rows[i], corrupted_cols[i]
original = df.iloc[row, col]
corrupted = result.iloc[row, col]
date = df.index[row]
print(f" {date}, Col {col}: {original:.4f} -> {corrupted:.4f}")
# Verify that corrupted values are zeros
corrupted_values = result.values[diff_mask]
all_zeros = np.all(corrupted_values == 0.0)
print(f"\nAre all corrupted values zero? {all_zeros}")
# Show which columns are affected
unique_affected_cols = np.unique(corrupted_cols)
print(f"Number of affected columns: {len(unique_affected_cols)}")
print(f"Affected columns: {unique_affected_cols}")
# Demonstrate that numpy approach works correctly
print(f"\nTesting numpy workaround...")
numpy_result = pd.DataFrame(
df.to_numpy() * ones_series.to_numpy()[:, None],
index=df.index,
columns=df.columns
)
numpy_works = df.equals(numpy_result)
print(f"Numpy approach works correctly: {numpy_works}")
```
### Issue Description
The DataFrame.mul() method is corrupting data by setting non-zero values to zero when multiplying a DataFrame with datetime index by a Series of ones. This occurs only under specific conditions related to DataFrame size and affects data integrity.
### Expected Behavior
When multiplying a DataFrame by a Series of ones using df.mul(ones_series, axis=0), all original values should be preserved (multiplied by 1.0).
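Given the later comments pointing at numexpr, which pandas only engages beyond roughly 1M elements, a quick diagnostic is to disable that path and re-run the multiplication; a sketch for narrowing down the cause, not a fix:

```python
import pandas as pd

pd.set_option("compute.use_numexpr", False)  # bypass the numexpr-accelerated path
result = df.mul(ones_series, axis=0)
print(df.equals(result))  # True here would implicate the numexpr integration
```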
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.11
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.14393
machine : AMD64
processor : Intel64 Family 6 Model 85 Stepping 4, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.3.0
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : 8.1.3
IPython : 8.31.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : 5.3.0
matplotlib : 3.10.0
numba : None
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.9
pymysql : None
pyarrow : 18.1.0
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : 1.0.10
s3fs : None
scipy : 1.15.2
sqlalchemy : 2.0.37
tables : 3.10.2
tabulate : None
xarray : 2025.1.1
xlrd : 2.0.1
xlsxwriter : 3.2.0
zstandard : 0.23.0
tzdata : 2025.2
qtpy : 2.4.3
pyqt5 : None
</details>
|
[
"Bug",
"Numeric Operations",
"Needs Info",
"Closing Candidate"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This bug doesn't occurs on recent versions of pandas. Our test runs perfectly fine on the last version and was related with the numexrp that is activated when the elements is more than 1M for optimizations. ",
"I also cannot reproduce on main (though on mac)",
"@MarceloVelludo: what does `recent versions of pandas` mean explicitly? OP is reporting on 2.2.3, I just want to make sure you're reporting not seeing this issue there. Also, what architecture are you on? Posting `pd.show_versions()` would be helpful.",
"> [@MarceloVelludo](https://github.com/MarceloVelludo): what does `recent versions of pandas` mean explicitly? OP is reporting on 2.2.3, I just want to make sure you're reporting not seeing this issue there. Also, what architecture are you on? Posting `pd.show_versions()` would be helpful.\n\n\n<details>\n<summary>INSTALLED VERSIONS</summary>\n\n```\nINSTALLED VERSIONS\n------------------\ncommit : c888af6d0bb674932007623c0867e1fbd4bdc2c6\npython : 3.13.5\npython-bits : 64\nOS : Linux\nOS-release : 6.15.6-1-MANJARO\nVersion : #1 SMP PREEMPT_DYNAMIC Thu, 10 Jul 2025 15:38:04 +0000\nmachine : x86_64\nprocessor : \nbyteorder : little\nLC_ALL : None\n\npandas : 2.3.1\nnumpy : 2.3.1\npytz : 2025.2\ndateutil : 2.9.0.post0\npip : 25.1.1\nCython : None\nsphinx : None\nIPython : 9.4.0\nadbc-driver-postgresql: None\nadbc-driver-sqlite : None\nbs4 : 4.13.4\nblosc : None\nbottleneck : None\ndataframe-api-compat : None\nfastparquet : None\nfsspec : None\nhtml5lib : None\nhypothesis : None\ngcsfs : None\njinja2 : 3.1.6\nlxml.etree : None\nmatplotlib : 3.10.3\nnumba : None\nnumexpr : None\nodfpy : None\nopenpyxl : None\npandas_gbq : None\npsycopg2 : None\npymysql : None\npyarrow : 19.0.1\npyreadstat : None\npytest : None\npython-calamine : None\npyxlsb : None\ns3fs : None\nscipy : 1.16.0\nsqlalchemy : 2.0.41\ntables : None\ntabulate : None\nxarray : None\nxlrd : None\nxlsxwriter : None\nzstandard : 0.23.0\ntzdata : 2025.2\nqtpy : None\npyqt5 : None\n```\n\n</details>",
"Thanks @MarceloVelludo - I also tested this on 2.2.x and that is fine as well.",
"Thanks @fheisigx for the report. It does not seem to have been reproduced. Closing this but feel free to post further evidence if you are still encountering an issue and we can reopen."
] |
3,165,680,751
| 61,686
|
fix for building docs on Windows
|
closed
| 2025-06-22T05:39:02
| 2025-06-22T17:35:02
| 2025-06-22T17:34:50
|
https://github.com/pandas-dev/pandas/pull/61686
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61686
|
https://github.com/pandas-dev/pandas/pull/61686
|
Dr-Irv
| 1
|
- [x] closes #60149
I was having two issues with building the docs on Windows.
1. Building a single page with `--single` was very slow because it was reading in lots of files. So there is a fix in `conf.py` that changes any backslashes to `/` in the paths (a sketch of this normalization follows below). Probably a better fix is to not use `os.path.join` and use `pathlib` instead, but that's a larger change.
2. For #60149, there are 2 issues with building `enhancingperf.rst`:
- As mentioned in https://github.com/pandas-dev/pandas/issues/60149#issuecomment-2600578029 , having double quotes in the ipython strings messes up Windows, so changing them to single quotes makes it work
- The cython functions expect `int64` dtypes, but the defaults coming from `numpy` when building the DF are `int32`, so the `astype(int)` calls fix that
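A sketch of the normalization described in point 1, assuming the offending paths come out of `os.path.join` with backslashes on Windows:

```python
from pathlib import Path

def normalize_for_sphinx(path: str) -> str:
    # On Windows, Path.as_posix() converts backslashes to forward slashes.
    return Path(path).as_posix()
```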
|
[
"Docs",
"Windows"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @Dr-Irv "
] |
3,165,050,340
| 61,685
|
create contribution_plan.md file
|
closed
| 2025-06-21T12:23:44
| 2025-06-21T16:55:11
| 2025-06-21T16:55:11
|
https://github.com/pandas-dev/pandas/pull/61685
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61685
|
https://github.com/pandas-dev/pandas/pull/61685
|
harinarayananmastech
| 1
|
Created contribution_plan.md file
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This is already covered in our documentation, https://pandas.pydata.org/docs/development/index.html. Closing"
] |
3,164,996,398
| 61,684
|
add contribution_plan.md file
|
closed
| 2025-06-21T10:53:28
| 2025-06-21T16:55:03
| 2025-06-21T16:55:02
|
https://github.com/pandas-dev/pandas/pull/61684
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61684
|
https://github.com/pandas-dev/pandas/pull/61684
|
Ranjana-babu
| 1
|
Added add contribution_plan.md file
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This is already covered in our documentation, https://pandas.pydata.org/docs/development/index.html. Closing"
] |
3,164,954,736
| 61,683
|
docs: Add CONTRIBUTION_PLAN.md for GitHub use case
|
closed
| 2025-06-21T09:44:16
| 2025-06-21T16:54:54
| 2025-06-21T16:54:54
|
https://github.com/pandas-dev/pandas/pull/61683
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61683
|
https://github.com/pandas-dev/pandas/pull/61683
|
sabari191
| 1
|
This Pull Request adds a detailed CONTRIBUTION_PLAN.md that documents the process of evaluating and contributing to the pandas-dev/pandas project. The plan includes environment setup, issue identification, and a step-by-step guide to raising a PR.
This is part of a simulated GitHub use case to demonstrate contribution readiness.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This is already covered in our documentation, https://pandas.pydata.org/docs/development/index.html. Closing"
] |
3,164,257,622
| 61,682
|
BUG/TST: added TypeError if object dtypes are detected in dataframe
|
open
| 2025-06-20T19:59:46
| 2025-08-17T19:59:49
| null |
https://github.com/pandas-dev/pandas/pull/61682
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61682
|
https://github.com/pandas-dev/pandas/pull/61682
|
sharkipelago
| 8
|
- [ ] closes #55114
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This PR addresses concern 1 of #55114 - Having consistent behavior with `Series.round` & `DataFrame.round`.
My solution was to raise a `TypeError` in a way similar to #61206.
I changed the following existing tests, but I'm a little worried that might break some things, so any feedback is appreciated.
1. I deleted `tests/frame/methods/test_round.py`'s `test_round_mixed_type`, as I felt that test conflicted with the current intended behavior of `DataFrame.round`
2. I edited `tests/copy_view/test_methods.py`'s `test_round`, as it was using a dataframe with strings and ints in its test
|
[
"Bug",
"DataFrame",
"Stale"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"The second concern of the issue #55114 was `round()` did not work for `Series` or `Dataframe` when using `decimal.Decimal` from python's default decimal module. It seems like this is because both NumPy and pandas give array-like structures of `decimal.Decimal` objects a `dtype` of `object`. When I tested `np.array([decimal.Decimal(\"1.2234242333\")]).round()`, it raised an error.\r\n\r\nIf `Series.round()` and `DataFrame.round()` not raising an error on `decimal.Decimal` objects is still wanted, I thought a clean solution would be to make a new custom dtype for `decimal.Decimal`. However, that seemed like a pretty big change so wanted to check if there was another way I should be thinking about this bugfix.",
"i would expect this to attempt to operate pointwise (which would still raise on e.g. strings)",
"> i would expect this to attempt to operate pointwise (which would still raise on e.g. strings)\r\n\r\nDo you mean I should rewrite the code so that it attempts to round every column individually and then raise if there is a non-numeric column? As opposed to looking at the `self.dtypes.values` ?",
"Ohh because `StringDtype` also exists? and other non-numeric dtypes outside of object? Could I use `pandas.api.types.is_numeric_dtype`?",
"> and other non-numeric dtypes outside of object? \r\n\r\nim specifically thinking of object dtype columns containing numeric entries",
"Ah okay, makes sense. \r\n\r\nI think the current behavior for `series.round()` is to raise when an object dtype column containing numeric entries is called though.\r\n\r\nThe `test_round_dtype_object()` test in `pandas/tests/series/methods/test_round.py` is this i think:\r\n```\r\n def test_round_dtype_object(self):\r\n # GH#61206\r\n ser = Series([0.2], dtype=\"object\")\r\n msg = \"Expected numeric dtype, got object instead.\"\r\n with pytest.raises(TypeError, match=msg):\r\n ser.round()\r\n```\r\n\r\nShould I submit a PR to change this behavior first before implementing your pointwise solution?\r\n",
"@jbrockmendel Hi, no worries if too busy to look into this right now just curious if you had any insight on the above comment for making a different PR first, thanks!",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |
3,163,336,088
| 61,681
|
ENH: the error behaviour in pandas operations should be consistent: rename errors are ignored by default whereas drop errors raise
|
open
| 2025-06-20T13:21:37
| 2025-07-18T07:19:41
| null |
https://github.com/pandas-dev/pandas/issues/61681
| true
| null | null |
Furqan-s
| 3
|
### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description

### Feature Description
Can we make pandas functions consistent in how they raise errors? `drop` could default to `errors='ignore'` so it is less likely to break.
### Alternative Solutions
```python
def drop(labels=None, axis=0, index=None, columns=None, level=None, inplace=False, errors='ignore'):
    ...
```
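To make the inconsistency concrete, a small demo with the current defaults (pandas 2.x):

```python
import pandas as pd

df = pd.DataFrame({"a": [1]})
df.rename(columns={"missing": "x"})            # silently a no-op: errors='ignore' by default
df.drop(columns=["missing"])                   # raises KeyError: errors='raise' by default
df.drop(columns=["missing"], errors="ignore")  # the current opt-in workaround
```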
### Additional Context
_No response_
|
[
"Enhancement",
"API - Consistency",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hello!! Well, I noticed that, this leads to unexpected errors for users when dropping non-existent columns or index labels, even though renaming silently skips missing keys.\n\nWe can either change the default:\n\ni.e. `errors='raise'` to `errors='ignore'` in `DataFrame.drop()`:\n\nOr we can add a warning, if this causes inconsistency. \nI would like to work on this issue! Let me know if it’s okay for me to work on it, and whether changing the default behavior of `errors` in `DataFrame.drop()` is acceptable from a compatibility perspective.",
"I prefer to set rename's default to errors=\"raise\" instead. I was debugging my code for a while with zero idea that the problem was in rename. When I set `errors=\"raise\"` manually, it instantly made it clear that the bug was that I was using rename incorrectly. I made the same mistake as the issue I tagged above, where I didn't specify \"columns\" in the arguments, so it was **silently** trying to rename indexes without telling me. I feel like it's a very easy to make that mistake since it's feels more natural to rename columns instead of indexes.",
"Thanks!! It helped me understand both sides of this.\n\nYou're absolutely right that silently passing in `rename` can definitely hide bugs, especially when someone forgets to set `axis='columns'`. At the same time, I also find `.drop()`'s `errors='raise'` default a bit too aggressive, especially in exploratory or dynamic settings.\n\nSo this feels like a bigger API consistency discussion.\n\nSo, what would make sense now?->\n- Keep current behavior for now (to avoid breaking users)\n- Add a warning when `rename()` is used without matching keys, encouraging users to explicitly set `errors`\n- Possibly align both functions in a future major version (e.g., pandas 3.0)\n\n I would like to assist with creating a warning message or updating the doc regarding this!\n"
] |
3,161,255,732
| 61,680
|
Backport PR #61654 on branch 2.3.x (DOC: Add release notes template for 2.3.1)
|
closed
| 2025-06-19T20:12:55
| 2025-06-20T15:54:28
| 2025-06-20T15:54:28
|
https://github.com/pandas-dev/pandas/pull/61680
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61680
|
https://github.com/pandas-dev/pandas/pull/61680
|
meeseeksmachine
| 0
|
Backport PR #61654: DOC: Add release notes template for 2.3.1
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,160,847,160
| 61,679
|
BUG: Fix lost precision with common type of uint64/int64
|
open
| 2025-06-19T16:21:50
| 2025-08-21T00:07:13
| null |
https://github.com/pandas-dev/pandas/pull/61679
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61679
|
https://github.com/pandas-dev/pandas/pull/61679
|
pbrochart
| 7
|
- [x] closes #61676 (Replace xxxx with the GitHub issue number)
- [x] closes #61688 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Bug",
"Dtype Conversions",
"Stale",
"isin"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"pre-commit.ci autofix",
"I think this patch is more safe and provides better performance",
"I test another fix that can be benefit for other issues (e.g #61688) without any changes\r\nBut it breaks some tests (the fix changes the behavior of np_find_common_type)",
"pre-commit.ci autofix",
"Hi,\r\n\r\nNumpy documentation is not explicit about it but the API give the best compromise for this kind of types as it's a hardware limitation.\r\nIIRC the precision start to lose if the uint64 number is above 2^53.\r\nThe API should not be deprecated in my opinion but rather be aware of this limitation.\r\nI don't see any other solution than using an object.",
"> I don't see any other solution than using an object.\r\n\r\nthere is another PR open to address the issue #61694. The approach there is to raise a ValueError instead. ",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |