Dataset Viewer

repo | instance_id | problem_statement | merge_commit | base_commit
---|---|---|---|---|
python/cpython
|
python__cpython-137655
|
# flow-graph fails to re-initialize removed instructions to 0 resulting in re-used exception handler
# Bug report
### Bug description:
```python
# pyre-ignore-all-errors
def foo():
    try:
        [x for x in abc]
    except OSError:
        pass
    return
import dis
dis.dis(foo)
```
When this code is compiled, a `NOT_TAKEN` instruction is inserted after the exception handler pass has run, so no except handler block is assigned to it. But it re-uses an instruction that was NOP'd out and previously had an exception handler associated with it. The end result is that the `NOT_TAKEN` ends up with an essentially arbitrary exception handler block associated with it.
### CPython versions tested on:
3.14, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-137655
<!-- /gh-linked-prs -->
|
b78e9c05b627f2b205fa43f31d4d3d14ad3eb13b
|
089a324a4258fd5783b7e65acc29231d1042646e
|
python/cpython
|
python__cpython-137669
|
# ord() for bytes and bytearray is not documented
`ord()` is only documented for one-character strings, but it works also for `bytes` and `bytearray` objects of length 1.
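A minimal illustration of the behavior described above (standard Python semantics, not text taken from the docs):
```python
# ord() accepts a one-character string, but also length-1 bytes and
# bytearray objects; all three return the same integer value.
assert ord("a") == 97
assert ord(b"a") == 97
assert ord(bytearray(b"a")) == 97
```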
<!-- gh-linked-prs -->
### Linked PRs
* gh-137669
* gh-137703
* gh-137704
<!-- /gh-linked-prs -->
|
35759fe2faf1443455dfcb15ef7c435e34b492c7
|
639df39bf0b7e1172ebc4df84c1ae097ea7c0c22
|
python/cpython
|
python__cpython-137588
|
# Regression in ssl module between 3.13.5 and 3.13.6: reading from a TLS-encrypted connection blocks
# Bug report
### Bug description:
The script below works with 3.13.5 and fails with 3.13.6.
It's a straightforward socket server and client with TLS enabled. Under 3.13.5, it runs successfully. Under 3.13.6, when the server calls `recv()`, it blocks and never receives what the client sent with `sendall()`.
This is a minimal reproduction version of https://github.com/python-websockets/websockets/issues/1648. I performed the reproduction on macOS while the person reporting the bug was on Linux so I think it's platform-independent.
To trigger the bug, the client must read from the connection in a separate thread. If you remove that thread, the bug doesn't happen. (For context, I do this because websockets is architected around a Sans-I/O layer, so I need a background thread to pump bytes received from the network into the Sans-I/O parser.)
Before you run the script, you must download https://github.com/python-websockets/websockets/blob/main/tests/test_localhost.pem and store it next to the file where you saved the Python script.
```python
import os
import socket
import ssl
import threading
TLS_HANDSHAKE_TIMEOUT = 1
print("If Python locks hard:")
print("kill -TERM", os.getpid())
print()
# Create TLS contexts with a self-signed certificate. Download it here:
# https://github.com/python-websockets/websockets/blob/main/tests/test_localhost.pem
server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_context.load_cert_chain(b"test_localhost.pem")
# Work around https://github.com/openssl/openssl/issues/7967
server_context.num_tickets = 0
client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_context.load_verify_locations(b"test_localhost.pem")
# Start a socket server. Nothing fancy here. In a realistic server, we would
# have `serve_forever` with a `while True:` loop. For a minimal reproduction,
# `serve_one` is enough, as the bug occurs on the first request.
server_sock = socket.create_server(("localhost", 0))
server_port = server_sock.getsockname()[1]
server_sock = server_context.wrap_socket(
    server_sock,
    server_side=True,
    # Delay TLS handshake until after we set a timeout on the socket.
    do_handshake_on_connect=False,
)
def conn_handler(sock, addr) -> None:
print("server accepted connection from", addr)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, True)
sock.settimeout(TLS_HANDSHAKE_TIMEOUT)
assert isinstance(sock, ssl.SSLSocket)
sock.do_handshake()
sock.settimeout(None)
handshake = sock.recv(4096)
print("server rcvd:")
print(handshake.decode())
print()
def serve_one():
    sock, addr = server_sock.accept()
    handler_thread = threading.Thread(target=conn_handler, args=(sock, addr))
    handler_thread.start()
print("server listening on port", server_port)
server_thread = threading.Thread(target=serve_one)
server_thread.start()
# Connect a client to the server. Again, nothing fancy.
client_sock = socket.create_connection(("localhost", server_port))
client_sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, True)
client_sock.settimeout(TLS_HANDSHAKE_TIMEOUT)
client_sock = client_context.wrap_socket(
    client_sock,
    server_hostname="localhost",
)
client_sock.settimeout(None)
### The bug happens only when we're reading from the client socket too! ###
def recv_one_event():
    msg = client_sock.recv(4096)
    print("client rcvd:")
    print(msg.decode())
    print()
client_background_thread = threading.Thread(target=recv_one_event)
client_background_thread.start()
### If you remove client_background_thread.start(), it doesn't happen. ###
handshake = (
b"GET / HTTP/1.1\r\n"
b"Host: 127.0.0.1:51970\r\n"
b"Upgrade: websocket\r\n"
b"Connection: Upgrade\r\n"
b"Sec-WebSocket-Key: jjSVQ7XPjx2GIXKfQ49QDQ==\r\n"
b"Sec-WebSocket-Version: 13\r\n"
b"Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits\r\n"
b"User-Agent: Python/3.13 websockets/15.0.1\r\n"
b"\r\n"
)
print("client send:")
print(handshake.decode())
print()
client_sock.sendall(handshake)
```
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-137588
* gh-137613
<!-- /gh-linked-prs -->
|
55788a90967e82a9ea05b45c06a293b46ec53d72
|
046a4e39b3f8ac5cb13ea292418c9c3767b0074d
|
python/cpython
|
python__cpython-137515
|
# Use a common interface for FT-only mutexes
Throughout the codebase, there are two common patterns for using a mutex only on the free-threaded build:
1.
```c
#ifdef Py_GIL_DISABLED
PyMutex_Lock(/* ... */);
#endif
/* ... */
#ifdef Py_GIL_DISABLED
PyMutex_Unlock(/* ... */);
#endif
```
2.
```c
#ifdef Py_GIL_DISABLED
#define LOCK() PyMutex_Lock(/* ... */)
#define UNLOCK() PyMutex_Unlock(/* ... */)
#else
#define LOCK()
#define UNLOCK()
#endif
static void
something()
{
    LOCK();
    /* ... */
    UNLOCK();
}
```
I think we can eliminate some redundancy by adding a common wrapper similar to the latter, but one that takes the mutex as an argument rather than having it hardcoded into the macro; a possible shape is sketched below.
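A minimal sketch of what such a wrapper could look like (the `FT_MUTEX_*` names are hypothetical, not an existing CPython API):
```c
#include "Python.h"  /* PyMutex, PyMutex_Lock(), PyMutex_Unlock() */

/* Expand to real locking only on the free-threaded build; compile to
 * nothing (besides evaluating the argument) on the default build. */
#ifdef Py_GIL_DISABLED
#  define FT_MUTEX_LOCK(m)   PyMutex_Lock(m)
#  define FT_MUTEX_UNLOCK(m) PyMutex_Unlock(m)
#else
#  define FT_MUTEX_LOCK(m)   ((void)(m))
#  define FT_MUTEX_UNLOCK(m) ((void)(m))
#endif

static PyMutex something_mutex;

static void
something(void)
{
    FT_MUTEX_LOCK(&something_mutex);
    /* ... critical section ... */
    FT_MUTEX_UNLOCK(&something_mutex);
}
```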
<!-- gh-linked-prs -->
### Linked PRs
* gh-137515
<!-- /gh-linked-prs -->
|
082f370cdd0ec1484b033c70ec81b4b7a972ee2c
|
dec624e0afe6d22d38409d2e7dd9636ea0170378
|
python/cpython
|
python__cpython-137500
|
# Dead link in the time library page
# Documentation
https://docs.python.org/3/library/time.html#time.CLOCK_TAI
The description of `time.CLOCK_TAI` includes a link to a "page not found" at NIST
The current link
https://www.nist.gov/pml/time-and-frequency-division/nist-time-frequently-asked-questions-faq#tai
Leads to a page that says "Sorry, we cannot find that page."
Perhaps the link should instead be
https://www.nist.gov/pml/time-and-frequency-division/what-time-it-faqs
<img width="867" height="242" alt="Image" src="https://github.com/user-attachments/assets/71ce0bea-e718-4448-8849-ba73647a9003" />
<img width="1185" height="392" alt="Image" src="https://github.com/user-attachments/assets/09723e17-8c17-40cd-8d66-16c24d82370c" />
<!-- gh-linked-prs -->
### Linked PRs
* gh-137500
* gh-137501
* gh-137502
<!-- /gh-linked-prs -->
|
3c1471d971ea2759d9de76e22230cd71cf4b7a07
|
3000594e929aea768fe0dd2437e0722ecfa2dbdc
|
python/cpython
|
python__cpython-137467
|
# Remove deprecated and undocumented `glob.glob0` and `glob.glob1` functions
# Feature or enhancement
These were deprecated in #117337
<!-- gh-linked-prs -->
### Linked PRs
* gh-137467
<!-- /gh-linked-prs -->
|
f0a3c6ebc9bee22ddb318db1143317dc2cf06de1
|
481d5b54556e97fed4cf1f48a2ccbc7b4f7aaa42
|
python/cpython
|
python__cpython-137567
|
# `importlib.abc.SourceLoader` issues `DeprecationWarning` because it inherits from `ResourceLoader`
# Bug report
### Bug description:
Problem
---
- `importlib.abc.SourceLoader` itself does not seem to be deprecated as far as the docs go, either in 3.14 or 3.15.
- However, since it inherits from the deprecated `importlib.abc.ResourceLoader`, a `DeprecationWarning` is issued at instantiation time since 3.14.
Example
---
```python
import os
import sys
import warnings
from collections.abc import Iterable
from importlib.abc import SourceLoader
from importlib.machinery import ModuleSpec
from typing import Protocol

class FinderLike(Protocol):
    def find_spec(self, fullname: str) -> ModuleSpec | None: ...

class MyLoader(SourceLoader):
    """Bare-bone `SourceLoader` subclass, only providing implementations for the abstract methods."""
    @staticmethod
    def get_data(path: os.PathLike | str) -> bytes:
        with open(path, mode='rb') as fobj:
            return fobj.read()

    @staticmethod
    def get_filename(fullname: str, finders: Iterable[FinderLike] | None = None) -> str:
        if finders is None:
            finders = (
                f for f in sys.meta_path
                if callable(getattr(f, 'find_spec', None))
                if not isinstance(f, MyLoader)  # Avoid circular dependence
            )
        for finder in finders:
            try:
                spec = finder.find_spec(fullname)
                if spec is None: raise TypeError
            except (TypeError, ImportError):
                continue
            if spec.origin is not None and os.path.isfile(spec.origin):
                return str(spec.origin)
        raise ImportError(f'cannot find filename for `{fullname}`')

with warnings.catch_warnings():
    warnings.filterwarnings('error', category=DeprecationWarning)
    loader = MyLoader()  # DeprecationWarning: importlib.abc.ResourceLoader is deprecated in favour of supporting resource loading through importlib.resources.abc.TraversableResources.
```
Questions
---
- Is this to be considered a bug? Or is `SourceLoader` supposed to be deprecated too?
- If the former, how can it be fixed? If the latter, should the docs be updated?
Possibly related issues
---
#89710, #121604
### CPython versions tested on:
3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-137567
* gh-137654
<!-- /gh-linked-prs -->
|
7140b99b0d0952167f7fdd747e6c28a8c8a2d768
|
0c83daaf458389517989bc28625e8ba8cf24e651
|
python/cpython
|
python__cpython-137413
|
# `test_hashlib` has incorrect `default_builtin_hashes` values
# Bug report
### Bug description:
In `test_hashlib`, we have
```py
default_builtin_hashes = {'md5', 'sha1', 'sha256', 'sha512', 'sha3', 'blake2'}
```
which is a relic from when we had 2 distinct modules for SHA-2 and one for SHA-3 only. Now we have 'sha2' only (see configure.ac).
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137413
* gh-137534
* gh-137535
<!-- /gh-linked-prs -->
|
deb385a14337bc3e38442b4cee3aac4a57364adc
|
375f484f976a1ed84c145a6ce4e467cd5b57db75
|
python/cpython
|
python__cpython-137398
|
# test_os_open in SocketEINTRTest hangs indefinitely on NetBSD
# Bug report
### Bug description:
The `SocketEINTRTest.test_os_open` test in `test_eintr` hangs indefinitely on NetBSD 10.0(x86_64). This appears to be a NetBSD-specific issue with FIFO operations under frequent signal interruption, similar to the issue described [here](https://github.com/python/cpython/issues/69309).
### Configuration
```sh
./configure --with-pydebug
```
### Test Output
```python
Warning -- files was modified by test_eintr
Warning -- Before: []
Warning -- After: ['@test_16354_tmpæ']
test test_eintr failed -- Traceback (most recent call last):
File "/home/blue/Desktop/cpython/Lib/test/test_eintr.py", line 17, in test_all
script_helper.run_test_script(script)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
File "/home/blue/Desktop/cpython/Lib/test/support/script_helper.py", line 324, in run_test_script
assert_python_ok("-u", script, "-v")
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/home/blue/Desktop/cpython/Lib/test/support/script_helper.py", line 182, in assert_python_ok
return _assert_python(True, *args, **env_vars)
File "/home/blue/Desktop/cpython/Lib/test/support/script_helper.py", line 167, in _assert_python
res.fail(cmd_line)
~~~~~~~~^^^^^^^^^^
File "/home/blue/Desktop/cpython/Lib/test/support/script_helper.py", line 80, in fail
raise AssertionError(f"Process return code is {exitcode}\n"
...<10 lines>...
f"---")
AssertionError: Process return code is 1
command line: ['/home/blue/Desktop/cpython/python', '-X', 'faulthandler', '-I', '-u', '/home/blue/Desktop/cpython/Lib/test/_test_eintr.py', '-v']
stdout:
---
---
stderr:
---
test_flock (__main__.FCNTLEINTRTest.test_flock) ... ok
test_lockf (__main__.FCNTLEINTRTest.test_lockf) ... ok
test_read (__main__.OSEINTRTest.test_read) ... ok
test_readinto (__main__.OSEINTRTest.test_readinto) ... ok
test_wait (__main__.OSEINTRTest.test_wait) ... ok
test_wait3 (__main__.OSEINTRTest.test_wait3) ... ok
test_wait4 (__main__.OSEINTRTest.test_wait4) ... ok
test_waitpid (__main__.OSEINTRTest.test_waitpid) ... ok
test_write (__main__.OSEINTRTest.test_write) ... ok
test_devpoll (__main__.SelectEINTRTest.test_devpoll) ... skipped 'need select.devpoll'
test_epoll (__main__.SelectEINTRTest.test_epoll) ... skipped 'need select.epoll'
test_kqueue (__main__.SelectEINTRTest.test_kqueue) ... ok
test_poll (__main__.SelectEINTRTest.test_poll) ... ok
test_select (__main__.SelectEINTRTest.test_select) ... ok
test_sigtimedwait (__main__.SignalEINTRTest.test_sigtimedwait) ... ok
test_sigwaitinfo (__main__.SignalEINTRTest.test_sigwaitinfo) ... ERROR
test_accept (__main__.SocketEINTRTest.test_accept) ... ok
test_open (__main__.SocketEINTRTest.test_open) ... ok
test_os_open (__main__.SocketEINTRTest.test_os_open) ... Timeout (0:10:00)!
Thread 0x00007c47bfed2800 (most recent call first):
File "/home/blue/Desktop/cpython/Lib/test/_test_eintr.py", line 378 in os_open
File "/home/blue/Desktop/cpython/Lib/test/_test_eintr.py", line 364 in _test_open
File "/home/blue/Desktop/cpython/Lib/test/_test_eintr.py", line 384 in test_os_open
File "/home/blue/Desktop/cpython/Lib/unittest/case.py", line 613 in _callTestMethod
File "/home/blue/Desktop/cpython/Lib/unittest/case.py", line 667 in run
File "/home/blue/Desktop/cpython/Lib/unittest/case.py", line 723 in __call__
File "/home/blue/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/home/blue/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/home/blue/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/home/blue/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/home/blue/Desktop/cpython/Lib/unittest/runner.py", line 257 in run
File "/home/blue/Desktop/cpython/Lib/unittest/main.py", line 270 in runTests
File "/home/blue/Desktop/cpython/Lib/unittest/main.py", line 104 in __init__
File "/home/blue/Desktop/cpython/Lib/test/_test_eintr.py", line 552 in <module>
---
```
### Reproduction
I created a minimal C program that reproduces the same issue.
```c
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>
volatile sig_atomic_t signal_count = 0;
/**
* Signal handler for SIGALRM.
*/
void handle_signal(int sig) {
    (void)sig;
    signal_count++;
}
/**
* Retry open() if it's interrupted by a signal (EINTR).
*/
int safe_open(const char *path, int flags) {
    int fd;
    while ((fd = open(path, flags)) < 0) {
        if (errno != EINTR) {
            perror("open");
            exit(EXIT_FAILURE);
        }
        write(STDOUT_FILENO, ".", 1);
    }
    return fd;
}
/**
* Retry close() if it's interrupted by a signal (EINTR).
*/
int safe_close(int fd) {
    int ret;
    while ((ret = close(fd)) < 0) {
        if (errno != EINTR) {
            perror("close");
            return ret;
        }
        write(STDOUT_FILENO, "C", 1);
    }
    return ret;
}
/**
* Sleep for a specified number of milliseconds.
*/
void sleep_ms(long ms) {
    struct timespec ts;
    ts.tv_sec = ms / 1000;
    ts.tv_nsec = (ms % 1000) * 1000000L;
    nanosleep(&ts, NULL);
}
/**
* Set up a timer to send SIGALRM every 10 milliseconds.
*/
void setup_timer(void) {
    struct sigaction sa = {0};
    sa.sa_handler = handle_signal;
    sigaction(SIGALRM, &sa, NULL);
    struct itimerval timer = {
        .it_value = {0, 10000},    // Start after 10ms
        .it_interval = {0, 10000}  // Repeat every 10ms
    };
    if (setitimer(ITIMER_REAL, &timer, NULL) < 0) {
        perror("setitimer");
        exit(EXIT_FAILURE);
    }
}
int main() {
printf("EINTR test - Ctrl+C to stop\n");
setup_timer();
for (int i = 1; i <= 50; ++i) {
char fifo[64];
snprintf(fifo, sizeof(fifo), "/tmp/test_fifo_%d", i);
unlink(fifo);
if (mkfifo(fifo, 0666) < 0) {
perror("mkfifo");
exit(EXIT_FAILURE);
}
pid_t pid = fork();
if (pid < 0) {
perror("fork");
exit(EXIT_FAILURE);
}
if (pid == 0) {
// Child opens FIFO for reading
sleep_ms(50); // 50ms delay to let parent open writer
int fd = safe_open(fifo, O_RDONLY);
safe_close(fd);
exit(EXIT_SUCCESS);
}
else {
// Parent opens FIFO for writing
int fd = safe_open(fifo, O_WRONLY);
safe_close(fd);
wait(NULL);
unlink(fifo);
printf("Loop %d OK (signals: %d)\n", i, signal_count);
signal_count = 0;
}
sleep_ms(1); // Small pause before next iteration
}
printf("Test complete.\n");
return EXIT_SUCCESS;
}
```
Output:
```sh
╭─blue@home ~
╰─$ gcc reproducer.c -o reproducer
╭─blue@home ~
╰─$ ./reproducer
EINTR test - Ctrl+C to stop
...................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................^C
```
Similar output as expected:
```
╰─$ ./reproducer
EINTR test - Ctrl+C to stop
.....Loop 1 OK (signals: 5)
.....Loop 2 OK (signals: 5)
.....Loop 3 OK (signals: 5)
.....Loop 4 OK (signals: 5)
.....Loop 5 OK (signals: 5)
.....Loop 6 OK (signals: 5)
.....Loop 7 OK (signals: 6)
.....Loop 8 OK (signals: 5)
.....Loop 9 OK (signals: 5)
.....Loop 10 OK (signals: 5)
.....Loop 11 OK (signals: 5)
.....Loop 12 OK (signals: 5)
.....Loop 13 OK (signals: 5)
.....Loop 14 OK (signals: 6)
.....Loop 15 OK (signals: 5)
.....Loop 16 OK (signals: 5)
.....Loop 17 OK (signals: 5)
.....Loop 18 OK (signals: 5)
.....Loop 19 OK (signals: 5)
.....Loop 20 OK (signals: 5)
.....Loop 21 OK (signals: 6)
.....Loop 22 OK (signals: 5)
.....Loop 23 OK (signals: 5)
.....Loop 24 OK (signals: 5)
.....Loop 25 OK (signals: 5)
.....Loop 26 OK (signals: 5)
.....Loop 27 OK (signals: 5)
.....Loop 28 OK (signals: 6)
.....Loop 29 OK (signals: 5)
.....Loop 30 OK (signals: 5)
.....Loop 31 OK (signals: 5)
.....Loop 32 OK (signals: 5)
.....Loop 33 OK (signals: 5)
.....Loop 34 OK (signals: 5)
.....Loop 35 OK (signals: 6)
.....Loop 36 OK (signals: 5)
.....Loop 37 OK (signals: 5)
.....Loop 38 OK (signals: 5)
.....Loop 39 OK (signals: 5)
.....Loop 40 OK (signals: 5)
.....Loop 41 OK (signals: 5)
.....Loop 42 OK (signals: 6)
.....Loop 43 OK (signals: 5)
.....Loop 44 OK (signals: 5)
.....Loop 45 OK (signals: 5)
.....Loop 46 OK (signals: 5)
.....Loop 47 OK (signals: 5)
.....Loop 48 OK (signals: 5)
.....Loop 49 OK (signals: 6)
.....Loop 50 OK (signals: 5)
Test complete.
```
```sh
╰─$ ktrace -f ktrace.out ./reproducer
EINTR test - Ctrl+C to stop
...............................................................................................................................^C
╰─$ kdump -f ktrace.out | tail -50
17368 17368 reproducer CALL setcontext(0x7f7fff444000)
17368 17368 reproducer RET setcontext JUSTRETURN
17368 17368 reproducer CALL write(1,0x401173,1)
17368 17368 reproducer GIO fd 1 wrote 1 bytes
"."
17368 17368 reproducer RET write 1
17368 17368 reproducer CALL open(0x7f7fff4443d0,1,1)
17368 17368 reproducer NAMI "/tmp/test_fifo_1"
17368 17368 reproducer RET open -1 errno 4 Interrupted system call
17368 17368 reproducer PSIG SIGALRM caught handler=0x400dba mask=(): code=SI_TIMER sent by pid=0, uid=0 with sigval 0x0)
17368 17368 reproducer CALL setcontext(0x7f7fff444000)
17368 17368 reproducer RET setcontext JUSTRETURN
17368 17368 reproducer CALL write(1,0x401173,1)
17368 17368 reproducer GIO fd 1 wrote 1 bytes
"."
17368 17368 reproducer RET write 1
17368 17368 reproducer CALL open(0x7f7fff4443d0,1,1)
17368 17368 reproducer NAMI "/tmp/test_fifo_1"
17368 17368 reproducer RET open -1 errno 4 Interrupted system call
17368 17368 reproducer PSIG SIGALRM caught handler=0x400dba mask=(): code=SI_TIMER sent by pid=0, uid=0 with sigval 0x0)
17368 17368 reproducer CALL setcontext(0x7f7fff444000)
17368 17368 reproducer RET setcontext JUSTRETURN
17368 17368 reproducer CALL write(1,0x401173,1)
17368 17368 reproducer GIO fd 1 wrote 1 bytes
"."
17368 17368 reproducer RET write 1
17368 17368 reproducer CALL open(0x7f7fff4443d0,1,1)
17368 17368 reproducer NAMI "/tmp/test_fifo_1"
17368 17368 reproducer RET open -1 errno 4 Interrupted system call
17368 17368 reproducer PSIG SIGALRM caught handler=0x400dba mask=(): code=SI_TIMER sent by pid=0, uid=0 with sigval 0x0)
17368 17368 reproducer CALL setcontext(0x7f7fff444000)
17368 17368 reproducer RET setcontext JUSTRETURN
17368 17368 reproducer CALL write(1,0x401173,1)
17368 17368 reproducer GIO fd 1 wrote 1 bytes
"."
17368 17368 reproducer RET write 1
17368 17368 reproducer CALL open(0x7f7fff4443d0,1,1)
17368 17368 reproducer NAMI "/tmp/test_fifo_1"
17368 17368 reproducer RET open -1 errno 4 Interrupted system call
17368 17368 reproducer PSIG SIGALRM caught handler=0x400dba mask=(): code=SI_TIMER sent by pid=0, uid=0 with sigval 0x0)
17368 17368 reproducer CALL setcontext(0x7f7fff444000)
17368 17368 reproducer RET setcontext JUSTRETURN
17368 17368 reproducer CALL write(1,0x401173,1)
17368 17368 reproducer GIO fd 1 wrote 1 bytes
"."
17368 17368 reproducer RET write 1
17368 17368 reproducer CALL open(0x7f7fff4443d0,1,1)
17368 17368 reproducer NAMI "/tmp/test_fifo_1"
17368 17368 reproducer RET open RESTART
17368 17368 reproducer PSIG SIGINT SIG_DFL: code=SI_NOINFO
╭─blue@home ~
```
### CPython versions tested on:
CPython main branch, 3.15, 3.14, 3.13
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-137398
* gh-137406
* gh-137407
<!-- /gh-linked-prs -->
|
7f416c867445dd94d11ee9df5f1a2d9d6eb8d883
|
0af7556b94eac47041957f36e98e230650b56bbf
|
python/cpython
|
python__cpython-137342
|
# Duplicated words again
# Bug report
### Bug description:
```python
# Add a code block here, if required
```
In the documentation, under [BaseHandler.http_error_default()](https://docs.python.org/3/library/urllib.request.html#urllib.request.BaseHandler.http_error_default), it said 'as as'.
[BaseHandler Objects](https://docs.python.org/3/library/urllib.request.html#basehandler-objects)
### CPython versions tested on:
3.13
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137342
* gh-137346
* gh-137347
<!-- /gh-linked-prs -->
|
1612dcbafe763014deefd679fe75ac5831a14a43
|
0153d82a5ab0c6ac16c046bdd4438ea11b58d59d
|
python/cpython
|
python__cpython-137328
|
# Raw f-string format spec undocumented behavior change between 3.11 and 3.12
# Bug report
### Bug description:
I found this while working on https://github.com/astral-sh/ruff/pull/19546 .
Between 3.11 and 3.12, the behavior of raw f-string format specifiers changed.
On 3.11 and earlier, the rawness of the f-string is respected in the format specifier:
```powershell
PS ~>uvx python@3.11 -c @'
class UnchangedFormat:
def __format__(self, format):
return format
print("Non-raw output:", repr(f"{UnchangedFormat():\xFF}"))
print("Raw output:", repr(rf"{UnchangedFormat():\xFF}"))
'@
Non-raw output: 'ÿ'
Raw output: '\\xFF'
```
On 3.12 and later, the rawness of the f-string is not respected:
```powershell
PS ~>uvx python@3.12 -c @'
class UnchangedFormat:
def __format__(self, format):
return format
print("Non-raw output:", repr(f"{UnchangedFormat():\xFF}"))
print("Raw output:", repr(rf"{UnchangedFormat():\xFF}"))
'@
Non-raw output: 'ÿ'
Raw output: 'ÿ'
```
This new behavior looks to have been carried to t-strings on 3.14 and later:
```powershell
PS ~>uvx python@3.14 -c @'
class UnchangedFormat:
def __format__(self, format):
return format
print("Non-raw output:", repr(t"{UnchangedFormat():\xFF}"))
print("Raw output:", repr(rt"{UnchangedFormat():\xFF}"))
'@
Non-raw output: Template(strings=('', ''), interpolations=(Interpolation(<__main__.UnchangedFormat object at 0x00000183F9904EC0>, 'UnchangedFormat()', None, 'ÿ'),))
Raw output: Template(strings=('', ''), interpolations=(Interpolation(<__main__.UnchangedFormat object at 0x00000183F9908A50>, 'UnchangedFormat()', None, 'ÿ'),))
```
I could not find any information/documentation about how the rawness of f-strings affects their format specifiers, despite looking in:
- [The input output tutorial f-string section](https://docs.python.org/3/tutorial/inputoutput.html#formatted-string-literals)
- [The lexical analysis section on f-strings](https://docs.python.org/3/reference/lexical_analysis.html#f-strings)
- [PEP 498 – Literal String Interpolation](https://peps.python.org/pep-0498/)
- [PEP 701 – Syntactic formalization of f-strings](https://peps.python.org/pep-0701/)
- [The 3.12 what's new section](https://docs.python.org/3/whatsnew/3.12.html)
- Multiple CPython github issue searches
- Asking in the Python discord
### CPython versions tested on:
3.12, 3.11, 3.14
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-137328
* gh-137344
* gh-137345
<!-- /gh-linked-prs -->
|
0153d82a5ab0c6ac16c046bdd4438ea11b58d59d
|
676748d4da3671205f537ecd61a492861e37b77b
|
python/cpython
|
python__cpython-137318
|
# `compile` fails on 3.14 on a valid expression when `-OO` is set
# Bug report
### Bug description:
Given the following code snippet:
```python
import ast
source = b'class A:\n """\n """\n'
compile(ast.parse(source), "a", "exec")
```
`python3.13 -OO test.py` passes, but `python3.14 -OO test.py` fails, with:
```python
Traceback (most recent call last):
File "/Users/tybug/Desktop/sandbox2.py", line 7, in <module>
compile(ast.parse(source), "a", "exec")
ValueError: empty body on ClassDef
```
Python: 3.14.0rc1
### CPython versions tested on:
3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-137318
* gh-137322
<!-- /gh-linked-prs -->
|
b74f3bed51378896f2c7c720e505e87373e68c79
|
fe0e921817a7f96c62c91085884ab910859328ce
|
python/cpython
|
python__cpython-137292
|
# Support perf profiler with an evaluation hook
# Feature or enhancement
### Proposal:
Currently the perf profiler doesn't support running with an evaluation hook in place. But this is easy to do: it just needs to capture the previous hook and forward to it, roughly as sketched below.
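A rough sketch of the chaining idea, using the PEP 523 eval-hook functions; this is illustrative only (it relies on CPython's internal headers, and the real perf-trampoline code will differ):
```c
#include "Python.h"
#include "pycore_ceval.h"   /* _PyInterpreterState_{Get,Set}EvalFrameFunc() (internal, Py_BUILD_CORE) */
#include "pycore_frame.h"   /* struct _PyInterpreterFrame */

static _PyFrameEvalFunction previous_eval_frame;

static PyObject *
perf_eval_frame(PyThreadState *tstate, struct _PyInterpreterFrame *frame, int throwflag)
{
    /* ... perf bookkeeping for this frame would go here ... */
    if (previous_eval_frame != NULL) {
        /* Forward to whatever hook was installed before us. */
        return previous_eval_frame(tstate, frame, throwflag);
    }
    return _PyEval_EvalFrameDefault(tstate, frame, throwflag);
}

static void
install_perf_eval_hook(PyInterpreterState *interp)
{
    /* Capture the previously installed hook instead of assuming the default evaluator. */
    previous_eval_frame = _PyInterpreterState_GetEvalFrameFunc(interp);
    _PyInterpreterState_SetEvalFrameFunc(interp, perf_eval_frame);
}
```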
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137292
<!-- /gh-linked-prs -->
|
375f484f976a1ed84c145a6ce4e467cd5b57db75
|
e3ad9003c5af314ae82d4e9f40d9c0375a34149f
|
python/cpython
|
python__cpython-137310
|
# Python implicit boolean conversion in logical operations bypasses try/except on 3.14.0rc1
# Bug report
### Bug description:
Throwing an exception during an implicit cast to a boolean as part of a logical operation (`or`, `and`) bypasses surrounding try/except statements.
The following example works on Python 3.12.3 but fails on Python 3.14.0rc1:
Example:
```python
class Foo:
    def __bool__(self):
        raise NotImplementedError()

a = Foo()
b = Foo()

# Works
try:
    c = bool(a)
except:
    print("passed c = bool(a)")

# Fails
try:
    c = a or b
except:
    print("passed c = a or b")
```
Output on Python 3.12.3 (expected behavior):
```
passed c = bool(a)
passed c = a or b
```
Output on Python 3.14.0rc1:
```pytb
passed c = bool(a)
Traceback (most recent call last):
File "/Users/justinfu/code/test_bool.py", line 15, in <module>
c = a or b
^^^^^^
File "/Users/justinfu/code/test_bool.py", line 3, in __bool__
raise NotImplementedError()
NotImplementedError
```
### CPython versions tested on:
3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-137310
* gh-137427
* gh-137664
* gh-137665
* gh-137667
<!-- /gh-linked-prs -->
|
1f2026b8a239b3169c0cad0157eb08358152b4c1
|
525784aa65d35a5609aba53c873a9a3a578f992b
|
python/cpython
|
python__cpython-137214
|
# Tab completion / dir broken on concurrent.futures
# Bug report
### Bug description:
I just noticed that in 3.14rc1, tab completion is broken on `concurrent.futures`. Trying `dir`, I got:
```
>>> import concurrent.futures
>>> dir(concurrent.futures)
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
dir(concurrent.futures)
~~~^^^^^^^^^^^^^^^^^^^^
File "/Users/henryschreiner/.local/share/uv/python/cpython-3.14.0rc1+freethreaded-macos-x86_64-none/lib/python3.14t/concurrent/futures/__init__.py", line 47, in __dir__
return __all__ + ('__author__', '__doc__')
~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
TypeError: can only concatenate list (not "tuple") to list
```
There's a typo in the `__dir__` function; it is trying to concatenate a list and a tuple.
Bug introduced in https://github.com/python/cpython/pull/136381. Fix in #137214.
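One possible shape of the fix, as a sketch (not necessarily the exact patch in the linked PRs): keep both operands of the concatenation the same type.
```python
# Sketch of the module-level __dir__ in concurrent/futures/__init__.py.
__all__ = ['Future', 'ThreadPoolExecutor', 'ProcessPoolExecutor']  # abridged

def __dir__():
    # Concatenating list + list avoids the list + tuple TypeError above.
    return __all__ + ['__author__', '__doc__']
```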
### CPython versions tested on:
3.14
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137214
* gh-137284
<!-- /gh-linked-prs -->
|
2a87af062b79d914ce0120f1f1763213c1ebe8c4
|
d18f73ae1349ed005fa05ea2d852e1ab51dbc087
|
python/cpython
|
python__cpython-137277
|
# Micro-ops that have side-exits are sometimes marked as "escaping", when they should not be.
We tag micro-ops as "escaping", using `HAS_ESCAPES_FLAG`, when they call out to a function that might change the state of the world.
However, if that call only happens on an exit branch, then there is no escape if execution stays on trace.
In those cases we should not mark the micro-op as escaping.
For example: `_GUARD_IS_NONE_POP` is defined as follows:
```C
int is_none = PyStackRef_IsNone(val);
if (!is_none) {
    PyStackRef_CLOSE(val);
    SYNC_SP();
    EXIT_IF(1);
}
DEAD(val);
```
The call to `PyStackRef_CLOSE` only occurs on the exit branch. So, if execution continues past `_GUARD_IS_NONE_POP` then no escaping call has been made. Therefore we shouldn't mark `_GUARD_IS_NONE_POP` as escaping.
<!-- gh-linked-prs -->
### Linked PRs
* gh-137277
<!-- /gh-linked-prs -->
|
801cf3fcdd27d8b6dd0fdd3c39e6c996e2b2f7fa
|
7475887e1e5d7abc0e48c8ea50e4fe123582cdbd
|
python/cpython
|
python__cpython-137300
|
# locale.setlocale() crashes on Windows for long locale name
# Crash report
`locale.setlocale(locale.LC_CTYPE, 'ks_IN.UTF-8@devanagari')` crashes.
`locale.setlocale(locale.LC_CTYPE, 'ks_IN.UTF8@devanagari')` just raises a locale.Error.
It's not just about length. Standard locale names in Windows (like 'English_United States.1252') are pretty long.
Tested and reproduced in 3.12, 3.13, 3.14, and main, on Windows 10.
<!-- gh-linked-prs -->
### Linked PRs
* gh-137300
* gh-137305
* gh-137306
<!-- /gh-linked-prs -->
|
718e0c89ba0610bba048245028ac133bbf2d44c2
|
e99bc7fd44bbbf2464c37d5a57777ac0e1264c37
|
python/cpython
|
python__cpython-137258
|
# Update bundled pip to 25.2
# Feature or enhancement
### Proposal:
`ensurepip`'s bundled version of pip gets updated to the current latest release, 25.2.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137258
* gh-137361
* gh-137362
* gh-137377
<!-- /gh-linked-prs -->
|
506542b5966073203f0da71a487de24e596b7979
|
57eab1b8f78039142d58d207d39ae734d67952cf
|
python/cpython
|
python__cpython-137186
|
# Android CI and binary releases
### Proposal:
At the PyCon US language summit a couple of months ago, we discussed adding the following features to the mobile platforms:
* Continuous integration on GitHub Actions
* Official binary releases on python.org
Details of the discussion are in the link below, but I think it's fair to say that there was a consensus in favor of both features. I've talked about this with @hugovk, and he's open to including Android binaries in the 3.14 release series, as long as the process is well-automated, and there are no objections from the core team.
I've created the following PRs to implement this:
* https://github.com/python/cpython/pull/137186
  * Add a script which builds and tests on Android, and invoke it from the CI workflow.
  * This takes about 20 minutes, which is slightly faster than the existing Windows free-threading job. So the overall workflow is not slowed down.
* https://github.com/python/release-tools/pull/265
  * Invoke the same script from the existing workflow which builds the source and docs releases.
  * Again, there is a slower existing job, so the overall workflow is not slowed down.
  * The Android release artifacts are attached to the workflow, from where the release manager can download them. This will be the only additional manual step required in the release process.
  * Add Android support to run_release.py and add_to_pydotorg.py.
* https://github.com/python/pythondotorg/pull/2762
  * Describes some one-time setup which will need to be done manually through the Django admin interface, and updates the test data to show what that would look like.
  * Adds an explanatory note at the top of the Android downloads page.
* https://github.com/python/peps/pull/4541
  * Updates the release checklist.
### Links to previous discussion of this feature:
* [The Python Language Summit 2025: Python on Mobile - Next Steps](https://pyfound.blogspot.com/2025/06/python-language-summit-2025-python-on-mobile.html)
<!-- gh-linked-prs -->
### Linked PRs
* gh-137186
* gh-137683
* gh-137684
* gh-137768
<!-- /gh-linked-prs -->
|
f660ec37531b5e368a27ba065f73d31ff6fb6680
|
be56464c4b672ada378d3b9cc8076af56d96cf7b
|
python/cpython
|
python__cpython-137241
|
# heapq __all__ not updated for the maxheap methods
Currently we have:
```python
>>> import heapq
>>> heapq.__all__
['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', 'nlargest', 'nsmallest', 'heappushpop']
```
This should be:
```python
['heapify', 'heapify_max', 'heappop', 'heappop_max',
'heappush', 'heappush_max', 'heappushpop',
'heappushpop_max', 'heapreplace', 'heapreplace_max',
'merge', 'nlargest', 'nsmallest']
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-137241
* gh-137256
<!-- /gh-linked-prs -->
|
5f35f9b8fad50670604552062c1df8fbdff835ab
|
dc05d475c149289749f6131e3c0c4c1d2c492c8e
|
python/cpython
|
python__cpython-137227
|
# ForwardRef.evaluate() mishandles type_params
# Bug report
### Bug description:
The implementation of `annotationlib.ForwardRef.evaluate` handles its `type_params` argument in a complex yet incorrect way. Fixing this unfortunately leads to some test failures in `typing.get_type_hints()` because it does additional confusing things to the globals and locals. Still, I think we should fix the behavior of the new public `.evaluate()` method.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137227
* gh-137709
<!-- /gh-linked-prs -->
|
089a324a4258fd5783b7e65acc29231d1042646e
|
70730ad0414e4661d2e94710d865edf1f7f164a1
|
python/cpython
|
python__cpython-137229
|
# Setting the frame's line number causes `SystemError` to be raised if done in callback from `BRANCH_LEFT` or `BRANCH_RIGHT` event
# Bug report
### Bug description:
https://github.com/python/cpython/blob/main/Objects/frameobject.c#L1711
The two new events `BRANCH_LEFT` and `BRANCH_RIGHT` should be added under `case PY_MONITORING_EVENT_BRANCH:`.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137229
* gh-137280
<!-- /gh-linked-prs -->
|
d18f73ae1349ed005fa05ea2d852e1ab51dbc087
|
438cbd857a875efc105b4215b1ae131e67af37e1
|
python/cpython
|
python__cpython-137195
|
# test.support.requires_debug_ranges raises SkipTest instead of returning a decorator when `_testcapi` doesn't exist
# Bug report
### Bug description:
```python
@requires_debug_ranges()
class ...:
```
`requires_debug_ranges` here is expected to skip the decorated test. But when `_testcapi` doesn't exist, it raises `unittest.SkipTest` when the decorator is built, instead of returning a decorator that skips the test; a possible shape of a fix is sketched below.
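A sketch of one way the helper could defer the check so that building the decorator never raises (illustrative only; the real check against `_testcapi` is elided and replaced with a placeholder):
```python
import unittest

def requires_debug_ranges(reason='requires co_positions / debug ranges'):
    # Hypothetical shape of a fix: always return a decorator, never raise here.
    try:
        import _testcapi
    except ImportError:
        # Previously this path raised unittest.SkipTest at decoration time.
        return unittest.skip('_testcapi is not available')
    has_debug_ranges = True  # placeholder for the real _testcapi-based check
    return unittest.skipUnless(has_debug_ranges, reason)
```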
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137195
* gh-137274
* gh-137275
<!-- /gh-linked-prs -->
|
0282eef880c8c8db782a2088b0257250e0f76d48
|
b723c8be071afcf3f865c55a5efb6da54f7695a0
|
python/cpython
|
python__cpython-137281
|
# `TypeError` when omitting a `Protocol` type argument with default
# Bug report
### Bug description:
This is as minimal as I was able to get the repro. I suppose it speaks for itself:
```python
from typing import Generic, Protocol, TypeVar
T1 = TypeVar("T1")
T2 = TypeVar("T2", default=object)
class A(Protocol[T1]): ...
class B1(A[T2], Protocol, Generic[T1, T2]): ... # the workaround
class B2(A[T2], Protocol[T1, T2]): ... # the problem
B1[str] # ok
B2[str] # TypeError
```
on `3.13.5`:
```pytb
Traceback (most recent call last):
File "/home/joren/huh.py", line 11, in <module>
B2[str] # TypeError
~~^^^^^
File "/home/joren/.pyenv/versions/3.13.5/lib/python3.13/typing.py", line 432, in inner
return func(*args, **kwds)
File "/home/joren/.pyenv/versions/3.13.5/lib/python3.13/typing.py", line 1242, in _generic_class_getitem
args = prepare(cls, args)
TypeError: Too few arguments for <class '__main__.B2'>; actual 1, expected at least 2
```
on `3.14.0rc1`:
```pytb
Traceback (most recent call last):
File "/home/joren/huh.py", line 11, in <module>
B2[str] # TypeError
~~^^^^^
File "/home/joren/.pyenv/versions/3.14.0rc1/lib/python3.14/typing.py", line 401, in inner
return func(*args, **kwds)
File "/home/joren/.pyenv/versions/3.14.0rc1/lib/python3.14/typing.py", line 1133, in _generic_class_getitem
args = prepare(cls, args)
TypeError: Too few arguments for <class '__main__.B2'>; actual 1, expected at least 2
```
As the repro shows, the workaround is to parametrize an additional `Generic` instead of `Protocol`.
### CPython versions tested on:
3.13, 3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-137281
<!-- /gh-linked-prs -->
|
158b28dd1906c5d3fac7955f87ba808f1e89fdad
|
801cf3fcdd27d8b6dd0fdd3c39e6c996e2b2f7fa
|
python/cpython
|
python__cpython-137184
|
# The `w` typecode of `array.array` is new in Python 3.13
# Documentation
The `w` typecode was added in Python 3.13, but the docs do not mention it.
Furthermore, the documentation on `u` recommends using `w` as an alternative in a note that applies to Python 3.3+, which gives the impression that `w` has been available at least since Python 3.3. This is misleading.
```
Python 3.11.2 (main, Apr 28 2025, 14:11:48) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import array
>>> array.array('w')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: bad typecode (must be b, B, u, h, H, i, I, l, L, q, Q, f or d)
```
<img width="667" height="220" alt="Image" src="https://github.com/user-attachments/assets/22d00504-2258-4812-b50a-3a82607d64cc" />
<img width="834" height="221" alt="Image" src="https://github.com/user-attachments/assets/43f06eff-d6d3-46cb-8aa6-146abd382467" />
<!-- gh-linked-prs -->
### Linked PRs
* gh-137184
* gh-137208
* gh-137209
<!-- /gh-linked-prs -->
|
0b4e13c2658c5a267fc50ee045ffb7b6408b2e3b
|
11a8652e25341e696b06d8dc7a18e8c3ee8059e4
|
python/cpython
|
python__cpython-137135
|
# Update macOS and Windows installers to SQLite 3.50.4
# Bug report
### Bug description:
We need to update the SQLite version shipped with our binary releases to 3.50.3+ to pick up upstream security updates. https://nvd.nist.gov/vuln/detail/CVE-2025-6965
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137135
* gh-137436
* gh-137455
* gh-137458
* gh-137457
* gh-137459
* gh-137462
<!-- /gh-linked-prs -->
|
532c37695d03f84fc6d12f891d26b901ef402ac4
|
44ff6b545149ea59837fc74122d435572f21e489
|
python/cpython
|
python__cpython-137094
|
# `test_embed.test_bpo20891` is racy under free-threading
# Bug report
### Bug description:
I haven't seen this in CI, but when running regrtest locally, I noticed a crash on `test_embed.test_bpo20891` regarding unlocking a mutex that wasn't locked.
The two offending pieces of code are here:
https://github.com/python/cpython/blob/d5e75c07682864e9d265e11f5e4730147e7d4842/Programs/_testembed.c#L412-L417
https://github.com/python/cpython/blob/d5e75c07682864e9d265e11f5e4730147e7d4842/Programs/_testembed.c#L382-L395
The problem is that in rare cases, the created thread can hit the call to `PyThread_release_lock` before it's held by the main thread, which causes a fatal error. This isn't an issue on the GILful build, because `PyGILState_Ensure` will block until the main thread releases the GIL.
I think the best fix would be to use a `PyEvent` to signal the main thread rather than a lock; a rough sketch is shown below.
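For illustration, a rough sketch of the `PyEvent`-based signalling, using the internal `pycore_lock.h` primitives; the names and structure here are illustrative and may differ from the eventual patch:
```c
#include "Python.h"
#include "pycore_lock.h"   /* PyEvent, _PyEvent_Notify(), PyEvent_Wait() (internal) */

/* Unlike a lock that must be acquired before it is released, an event has no
 * ownership: notifying before the main thread starts waiting is harmless. */
static PyEvent thread_done;

static void
test_thread(void *arg)
{
    (void)arg;
    PyGILState_STATE state = PyGILState_Ensure();
    /* ... body of the test thread ... */
    PyGILState_Release(state);
    _PyEvent_Notify(&thread_done);  /* wake the main thread, in any order */
}

static void
wait_for_thread(void)
{
    /* Main thread: start the thread with PyThread_start_new_thread(test_thread, NULL),
     * then block until it signals completion. */
    PyEvent_Wait(&thread_done);
}
```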
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-137094
<!-- /gh-linked-prs -->
|
9b451fb457a5de9ed535a0e2f41161dfaa9a419a
|
fece15d29f28e89f1231afa80508c80ed28dc37d
|
python/cpython
|
python__cpython-137091
|
# Remove redundant statement
# Documentation
> All that said, interpreters do naturally support certain flavors of
> concurrency, as a powerful side effect of that isolation.
> **There's a powerful side effect of that isolation.** It enables a
> different approach to concurrency than you can take with async or
> threads.
Remove the statement in bold.
<!-- gh-linked-prs -->
### Linked PRs
* gh-137091
* gh-137108
<!-- /gh-linked-prs -->
|
1e69cd1634e4f0f8c375be85d11925bd12deef23
|
e047a35b23c1aa69ab8d5da56f36319cec4d36b8
|
python/cpython
|
python__cpython-137085
|
# [refactoring] Do not call get_gc_state from inside loop in expand_region_transitively_reachable
# Feature or enhancement
### Proposal:
It is a minor refactoring, but I believe it is worth doing in light of the increasing usage of immortal objects.
`get_gc_state` is not so lightweight and may have extra costs when called for a large number of immortal objects.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137085
<!-- /gh-linked-prs -->
|
d7db0ee7ee2af48f666a8b5a9321161b2dbd85ab
|
9b451fb457a5de9ed535a0e2f41161dfaa9a419a
|
python/cpython
|
python__cpython-137060
|
# `url2pathname()` mishandles URL with Windows drive in netloc
# Bug report
### Bug description:
Windows-specific regression in Python 3.14 caused by d783d7b51d31db568de6b3438f4e805acff663da
Some programs generate file URLs by adding a `file://` prefix to a path. On Windows, this can result in URLs like `file://C:/foo`. Though these URLs are malformed, they were correctly handled by `urllib.request.url2pathname()` until 3.14.
```python
>>> from urllib.request import url2pathname
>>> url2pathname('//C:/foo')
'\\\\C:\\foo' # expected 'C:\\foo'
```
### CPython versions tested on:
3.14
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137060
* gh-137144
<!-- /gh-linked-prs -->
|
10a925c86db4cbcb9324c7269f69f813d3e7ed79
|
ae8b7d710020dfd336edd399fa35525dfe8fc049
|
python/cpython
|
python__cpython-137127
|
# `pyport.h`: use `__STDC_VERSION__ >= 202311L` instead of `__STDC_VERSION__ > 201710L`
# Bug report
### Bug description:
This code in `pyport.h` needs to be updated:
```c
// Static inline functions should use _Py_NULL rather than using directly NULL
// to prevent C++ compiler warnings. On C23 and newer and on C++11 and newer,
// _Py_NULL is defined as nullptr.
#if !defined(_MSC_VER) && \
   ((defined (__STDC_VERSION__) && __STDC_VERSION__ > 201710L) \
    || (defined(__cplusplus) && __cplusplus >= 201103))
# define _Py_NULL nullptr
#else
# define _Py_NULL NULL
#endif
```
It is using the wrong version check for C23, which should be `202311L`, as noted in the [Wiki page](https://en.wikipedia.org/wiki/C23_(C_standard_revision)) referenced in this [commit](https://github.com/python/cpython/commit/c965cf6dd1704a0138a4ef0a9c670e297cf66797); a corrected condition is sketched below.
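A corrected sketch of the condition (same structure as the snippet above, not the exact committed diff):
```c
// C23 defines __STDC_VERSION__ as 202311L. The old "> 201710L" test also
// matches the provisional values (e.g. 202000L) that compilers reported for
// partial C2x support, which is why the comparison should be ">= 202311L".
#if !defined(_MSC_VER) && \
   ((defined(__STDC_VERSION__) && __STDC_VERSION__ >= 202311L) \
    || (defined(__cplusplus) && __cplusplus >= 201103))
#  define _Py_NULL nullptr
#else
#  define _Py_NULL NULL
#endif
```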
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-137127
* gh-137409
<!-- /gh-linked-prs -->
|
cfd6da849a3c40904cddd23ae1700605877673fb
|
9cbf46d9920c269fe736ed689236d00223545f73
|
python/cpython
|
python__cpython-137057
|
# DTrace Build Fails on NetBSD
# Bug report
### Bug description:
CPython fails to build with DTrace support on NetBSD.
1. `System library conflicts` - NetBSD dtrace requires `-x nolibs` flag to avoid system library conflicts
2. `Make automatic variable expansion failure` - `$<` automatic variable doesn't expand properly in NetBSD Make
3. `Configure detection failure` - The configure script's DTrace linking test fails on NetBSD, causing `DTRACE_OBJS` to remain empty
### Environment
OS: NetBSD 10.0
Architecture: x86_64
```sh
$ dtrace -V
dtrace: Sun D 1.13
```
### Configuration
```sh
./configure --with-dtrace --with-pydebug
```
### Build
```sh
$ make
```
Output:
```sh
--- check-clean-src ---
--- check-app-store-compliance ---
--- Include/pydtrace_probes.h ---
--- build/scripts-3.15/idle3.15 ---
--- build/scripts-3.15/pydoc3.15 ---
--- python-config ---
--- Programs/_freeze_module.o ---
--- Modules/getpath_noop.o ---
--- Include/pydtrace_probes.h ---
mkdir -p Include
--- Programs/_freeze_module.o ---
gcc -pthread -c -fno-strict-overflow -Wsign-compare -g -Og -Wall -O2 -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE -o Programs/_freeze_module.o Programs/_freeze_module.c
--- python-config ---
sed -e "s,@EXENAME@,/usr/local/bin/python3.15d," < ./Misc/python-config.in >python-config.py
--- build/scripts-3.15/idle3.15 ---
sed -e "s,/usr/bin/env python3,/usr/local/bin/python3.15d," < ./Tools/scripts/idle3 > build/scripts-3.15/idle3.15
--- Include/pydtrace_probes.h ---
CC="gcc -pthread" CFLAGS="-O2" /usr/sbin/dtrace -o Include/pydtrace_probes.h -h -s
--- Modules/getpath_noop.o ---
gcc -pthread -c -fno-strict-overflow -Wsign-compare -g -Og -Wall -O2 -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE -o Modules/getpath_noop.o Modules/getpath_noop.c
--- python-config ---
LC_ALL=C sed -e 's,\$(\([A-Za-z0-9_]*\)),\$\{\1\},g' < Misc/python-config.sh >python-config
--- Include/pydtrace_probes.h ---
dtrace: option requires an argument -- s
Usage: dtrace [-32|-64] [-aACeFGhHlqSvVwZ] [-b bufsz] [-c cmd] [-D name[=def]]
[-I path] [-L path] [-o output] [-p pid] [-s script] [-U name]
[-x opt[=val]] [-X a|c|s|t]
[-P provider [[ predicate ] action ]]
[-m [ provider: ] module [[ predicate ] action ]]
[-f [[ provider: ] module: ] func [[ predicate ] action ]]
[-n [[[ provider: ] module: ] func: ] name [[ predicate ] action ]]
[-i probe-id [[ predicate ] action ]] [ args ... ]
predicate -> '/' D-expression '/'
action -> '{' D-statements '}'
-32 generate 32-bit D programs and ELF files
-64 generate 64-bit D programs and ELF files
-a claim anonymous tracing state
-A generate driver.conf(4) directives for anonymous tracing
-b set trace buffer size
-c run specified command and exit upon its completion
-C run cpp(1) preprocessor on script files
-D define symbol when invoking preprocessor
-e exit after compiling request but prior to enabling probes
-f enable or list probes matching the specified function name
-F coalesce trace output by function
-G generate an ELF file containing embedded dtrace program
-h generate a header file with definitions for static probes
-H print included files when invoking preprocessor
-i enable or list probes matching the specified probe id
-I add include directory to preprocessor search path
-l list probes matching specified criteria
-L add library directory to library search path
-m enable or list probes matching the specified module name
-n enable or list probes matching the specified probe name
-o set output file
-p grab specified process-ID and cache its symbol tables
-P enable or list probes matching the specified provider name
-q set quiet mode (only output explicitly traced data)
-s enable or list probes according to the specified D script
-S print D compiler intermediate code
-U undefine symbol when invoking preprocessor
-v set verbose mode (report stability attributes, arguments)
-V report DTrace API version
-w permit destructive actions
-x enable or modify compiler and tracing options
-X specify ISO C conformance settings for preprocessor
-Z permit probe descriptions that match zero probes
*** [Include/pydtrace_probes.h] Error code 2
make: stopped in /home/blue/Desktop/cpython
--- build/scripts-3.15/pydoc3.15 ---
sed -e "s,/usr/bin/env python3,/usr/local/bin/python3.15d," < ./Tools/scripts/pydoc3 > build/scripts-3.15/pydoc3.15
1 error
make: stopped in /home/blue/Desktop/cpython
```
### Issue 1: System Library Conflicts
NetBSD dtrace requires -x nolibs flag to avoid conflicts with system dtrace libraries. Without this flag, dtrace fails with:
```sh
dtrace: failed to compile script: "/usr/lib/dtrace/psinfo.d", line 46: syntax error near "u_int"
```
### Issue 2: Make Automatic Variable Expansion
NetBSD Make has issues with `$<` automatic variable expansion in complex command lines. When the Makefile.pre.in uses:
```
CC="$(CC)" CFLAGS="$(CFLAGS)" $(DTRACE) $(DFLAGS) -o $@ -h -s $<
```
The `$<` expands to nothing instead of `$(srcdir)/Include/pydtrace.d`, causing:
```
dtrace: option requires an argument -- s
```
### Issue 3: Configure Detection
The configure script's DTrace linking test fails on NetBSD, causing `DTRACE_OBJS` to remain empty.
This results in `DTRACE_OBJS=""` causing linking errors:
```sh
ld: Python/gc.o: in function `_PyGC_Collect':
/home/blue/Desktop/cpython/Python/gc.c:2048: undefined reference to `__dtraceenabled_python___gc__start'
ld: /home/blue/Desktop/cpython/Python/gc.c:2065: undefined reference to `__dtraceenabled_python___gc__done'
ld: /home/blue/Desktop/cpython/Python/gc.c:2066: undefined reference to `__dtrace_python___gc__done'
ld: /home/blue/Desktop/cpython/Python/gc.c:2065: undefined reference to `__dtraceenabled_python___gc__done'
ld: /home/blue/Desktop/cpython/Python/gc.c:2048: undefined reference to `__dtraceenabled_python___gc__start'
ld: /home/blue/Desktop/cpython/Python/gc.c:2049: undefined reference to `__dtrace_python___gc__start'
ld: Python/import.o: in function `import_find_and_load':
/home/blue/Desktop/cpython/Python/import.c:3725: undefined reference to `__dtraceenabled_python___import__find__load__start'
ld: /home/blue/Desktop/cpython/Python/import.c:3731: undefined reference to `__dtraceenabled_python___import__find__load__done'
ld: /home/blue/Desktop/cpython/Python/import.c:3732: undefined reference to `__dtrace_python___import__find__load__done'
ld: /home/blue/Desktop/cpython/Python/import.c:3726: undefined reference to `__dtrace_python___import__find__load__start'
ld: Python/sysmodule.o: in function `sys_audit_tstate':
/home/blue/Desktop/cpython/./Python/sysmodule.c:271: undefined reference to `__dtraceenabled_python___audit'
ld: Python/sysmodule.o: in function `should_audit':
/home/blue/Desktop/cpython/./Python/sysmodule.c:239: undefined reference to `__dtraceenabled_python___audit'
ld: Python/sysmodule.o: in function `sys_audit_tstate':
/home/blue/Desktop/cpython/./Python/sysmodule.c:304: undefined reference to `__dtrace_python___audit'
ld: Python/sysmodule.o: in function `should_audit':
/home/blue/Desktop/cpython/./Python/sysmodule.c:239: undefined reference to `__dtraceenabled_python___audit'
*** [Programs/_freeze_module] Error code 1
make: stopped in /home/blue/Desktop/cpython
1 error
make: stopped in /home/blue/Desktop/cpython
```
### CPython versions tested on:
CPython main branch, 3.13, 3.14, 3.15
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-137057
* gh-137444
* gh-137445
<!-- /gh-linked-prs -->
|
54a5fdffc8e20f111e7a7d2df352e8be057177ff
|
c2428ca9ea0c4eac9c7f2b41aff5f77660f21298
|
python/cpython
|
python__cpython-137055
|
# Remove obsolete counting of objects in young generation of GC
# Feature or enhancement
### Proposal:
There is no template and label for refactoring yet.
I have found dead code in gc.c for the young generation: https://github.com/python/cpython/blob/ec7fad79d24e79961b86e17177a32b32bb340fe5/Python/gc.c#L1334-L1342
It is superseded by the counting of `object_visits` via `OBJECT_STAT_INC` and the adjustment of `gc_stats[gen].object_visits` at the end of the collection:
https://github.com/python/cpython/blob/ec7fad79d24e79961b86e17177a32b32bb340fe5/Python/gc.c#L2073-L2079
I believe it is worth removing the dead code to shrink the code base (yes, by 9 lines).
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-137055
<!-- /gh-linked-prs -->
|
e93c30d4666f925c6a5a2cc4952f69782760d101
|
ec02db5caa546cb4759999453bd6efc1d517b95c
|
python/cpython
|
python__cpython-137042
|
# Free-threading documentation should mention PyList_GET_ITEM
https://docs.python.org/3/howto/free-threading-extensions.html#borrowed-references mentions that PyList_GetItem is thread-unsafe due to borrowed references. PyList_GET_ITEM should be added next to it.
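For illustration, a minimal sketch of the pattern the howto steers extensions toward: prefer an API that returns a strong reference, such as `PyList_GetItemRef()` (added in 3.13), over the borrowed-reference function and macro.
```c
#include "Python.h"

static PyObject *
first_item(PyObject *list)
{
    /* Returns a strong reference (or NULL with an exception set), so the
     * item cannot be freed out from under us by a concurrent mutation,
     * unlike the borrowed reference from PyList_GetItem()/PyList_GET_ITEM(). */
    PyObject *item = PyList_GetItemRef(list, 0);
    if (item == NULL) {
        return NULL;
    }
    /* ... use item ... */
    return item;  /* ownership passes to the caller */
}
```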
<!-- gh-linked-prs -->
### Linked PRs
* gh-137042
* gh-137045
* gh-137046
<!-- /gh-linked-prs -->
|
38b936cc9912fc6847265917f94af53f0bf228e9
|
80a7017d2649ad5d7d1f83758eeeef50e5eba6b1
|
python/cpython
|
python__cpython-137215
|
# Documentation enhancement proposal: explainer for asyncio
# Documentation
I've used Python's asyncio a couple times now, but never really felt confident in my mental model of how it fundamentally works and therefore how I can best leverage it. The official docs provide decent documentation for each specific function in the package, but, in my opinion, lack a cohesive overview of the systems design and architecture. Something that could help the user understand the why and how behind the recommended patterns. And a way to help the user make informed decisions about which tool in the asyncio toolkit they ought to grab, or to recognize when asyncio is the entirely wrong toolkit.
I spent a long time digging into the internals and then decided to take a stab at filling that perceived gap by writing a fairly thorough long-form article: [A conceptual overview of asyncio](https://github.com/anordin95/a-conceptual-overview-of-asyncio/blob/main/readme.md).
I also submitted the article to HackerNews where it got some traction: https://news.ycombinator.com/item?id=44638710
I imagine there's a few ways forward here.
- Linking to the Github article from the asyncio docs
- Integrating the article directly into the Python docs (along with stylistic & content modifications to match)
- Y'all decide the article's mediocre at best and you don't want it merged. (Fair enough!).
Either way, let me know what y'all figure makes the most sense. :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-137215
* gh-137581
* gh-137582
<!-- /gh-linked-prs -->
|
3964f974894eff1653913dda437971e0bbfa8ccc
|
d7dbde895884d58e3da7ed4107fd33171afad7cb
|
python/cpython
|
python__cpython-137040
|
# `http.cookies` should mention that `samesite=None` is valid as per RFC6265bis
# Documentation
The http.cookies.rst mentions this:
> The attribute [:attr:`samesite`](https://github.com/python/cpython/blob/main/Doc/library/http.cookies.rst#id39) specifies that the browser is not allowed to send the cookie along with cross-site requests. This helps to mitigate CSRF attacks. Valid values for this attribute are "Strict" and "Lax".
But the samesite spec now also allows "None" and the code already allows it.
```
>>> import http.cookies
>>> sk = http.cookies.SimpleCookie()
>>> sk['test'] = ''
>>> sk['test']['samesite'] = 'None'
>>> sk.output()
'Set-Cookie: test=""; SameSite=None'
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-137040
* gh-137140
* gh-137141
<!-- /gh-linked-prs -->
|
ae8b7d710020dfd336edd399fa35525dfe8fc049
|
cfd6da849a3c40904cddd23ae1700605877673fb
|
python/cpython
|
python__cpython-136981
|
# Unused C tracing code in bdb
# Feature or enhancement
### Proposal:
Remove unused C tracing code in bdb
The `c_call`, `c_return`, and `c_exception` events have historically (since c69ebe8d50529eae281275c841428eb9b375a442) been dispatched to `c_profilefunc` and never `c_tracefunc`.
Dead code related to `c_tracefunc` dispatching needs to be removed.
Where it was introduced:
https://github.com/python/cpython/commit/c69ebe8d50529eae281275c841428eb9b375a442#diff-c22186367cbe20233e843261998dc027ae5f1f8c0d2e778abfa454ae74cc59deR3426-R3461
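For illustration, here is a small standalone sketch (not taken from bdb) showing that the `c_*` events only ever reach a profile function, never a trace function, which is why those dispatch branches are dead:
```python
import sys

trace_events, profile_events = set(), set()

def tracer(frame, event, arg):
    trace_events.add(event)
    return tracer

def profiler(frame, event, arg):
    profile_events.add(event)

def demo():
    len([1, 2, 3])  # calling a C function triggers c_call/c_return for profilers

sys.settrace(tracer)
sys.setprofile(profiler)
demo()
sys.settrace(None)
sys.setprofile(None)

print(sorted(trace_events))    # no 'c_call' / 'c_return' here
print(sorted(profile_events))  # includes 'c_call' and 'c_return'
```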
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136981
<!-- /gh-linked-prs -->
|
dc27218225fc3cfc452c46764e7cb0c1a52792b3
|
ec7fad79d24e79961b86e17177a32b32bb340fe5
|
python/cpython
|
python__cpython-136999
|
# Typo "algorthm" in 7 zstd module prologue comment files
The top comment in 7 zstd module files has a typo "algorthm" instead of "algorithm" (missing i).
Example:
https://github.com/python/cpython/blob/9a21df7c0a494e2819775eabd522ebec994d96c0/Modules/_zstd/zstddict.h#L1
Introduced 2 months ago in https://github.com/python/cpython/pull/133860
For all instances, see: https://github.com/search?q=repo%3Apython%2Fcpython%20%22algorthm%22&type=code
@AA-Turner would you welcome a PR to fix this?
<!-- gh-linked-prs -->
### Linked PRs
* gh-136999
* gh-137003
<!-- /gh-linked-prs -->
|
b6d324224474c54061a6aaeace50bc5666dc1779
|
c13cc4af793ba4ae27521df1693653920cafbf99
|
python/cpython
|
python__cpython-136973
|
# Fortify usages of macros in cryptographic modules
# Feature or enhancement
### Proposal:
I love macros because I can reduce the amount of duplicated code to write. But at the same time, macros make the code harder to read, especially because of the lack of IntelliSense. Therefore, I suggest converting the macros in hashlib and hmac into regular functions. I will measure the performance impact, but I honestly doubt it will change much; the bottleneck in such calls is the hash computation itself.
Because the code is shared by all cryptographic modules, I really want to have a dedicated folder with utils inside, because it is becoming annoying to have to declare everything as `static inline` (even large functions) or as macros...
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
There is precedent here: PEP 670, but the scope of that project was the entire C API. Here, I really want to target internal functions that were either added recently or that I was really annoyed with these past few days.
<!-- gh-linked-prs -->
### Linked PRs
* gh-136973
* gh-137160
<!-- /gh-linked-prs -->
|
eefd70f0ed51e46fa9ff3d465dcf977bd4af28de
|
4e40f2bea7edfa5ba7e2e0e6159d9da9dfe4aa97
|
python/cpython
|
python__cpython-136933
|
# Ensure that `hashlib.<name>` does not raise `AttributeError`
# Feature or enhancement
### Proposal:
Let's summarize the current behavior of hashlib. We have two interfaces for getting digests: `hashlib.new(digest, ...)` and `hashlib.<digest>()`.
With `hashlib.new()` it depends on the presence of OpenSSL. If OpenSSL is present, and if it's not a BLAKE-2 (this is a special case that I'll talk about later), we check if OpenSSL recognizes the digest *and* the security policy allows it. If this is not the case, we fall back to the built-in implementation, and we don't care about the security policy here. If the built-in doesn't exist, then we raise an *exact* ValueError.
With `hashlib.md5()` (and anything else except "blake2"), this is much more subtle. Named constructors are determined at *import* time and solely depend on the presence of OpenSSL. More precisely, if OpenSSL is present and the security policy allows it, then `hashlib.md5` is set to `_hashlib.openssl_md5`. And this doesn't change for the interpreter's lifetime.
On the other hand, if the security policy *doesn't* allow it, then we *still* set `hashlib.md5` to `_hashlib.openssl_md5`. This means that we will *not* be able to use it unless we explicitly pass `usedforsecurity=False` here. Now, without OpenSSL, we set the named constructors to the corresponding built-in HACL functions.
Now, as I said, the problem is about `import hashlib` when neither OpenSSL nor HACL* is present. Instead of raising an AttributeError when trying to access the function, we should either raise an ImportError, or create mock functions for hash functions that raise ValueError at runtime (which would be ideal IMO). That way, we can ensure that tests using cryptographic hashes are decorated with "@requires_hashdigest" and make the build bots that are matched by "FIPS" successful.
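A rough sketch of the "mock functions" idea (purely illustrative; the helper name is made up):
```python
def _missing_constructor(name):
    """Return a stand-in constructor for an unavailable hash algorithm."""
    def constructor(*args, **kwargs):
        raise ValueError(f"unsupported hash type {name}")
    constructor.__name__ = name
    return constructor

# hypothetical wiring inside hashlib when no backend provides md5:
md5 = _missing_constructor("md5")

try:
    md5(b"data")
except ValueError as exc:
    print(exc)  # unsupported hash type md5
```
Importing hashlib would then always succeed and `hashlib.md5` would always exist, but actually using it would raise ValueError, so decorators like "@requires_hashdigest" keep working.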
----
The case for *blake2* is a bit different because we actually do *not* care about OpenSSL at all! IOW, `hashlib.blake2` is implemented solely with HACL* **except** that we can still access it via `hashlib.new("blake2b512", ...)`.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136933
<!-- /gh-linked-prs -->
|
7ce2f101c4b1c123860c90bf67ccc20a7805ea48
|
ea06ae5b5e7b335efbdff03c087fad9980a53f69
|
python/cpython
|
python__cpython-136930
|
# DocTests for functools.cache()-decorated functions have no line number
# Bug report
### Bug description:
(edited to reflect discussion comments)
Doctests of functions decorated with `functools.cache` (or `lru_cache()`) do not get their line number retrieved, e.g.:
```python
# file /tmp/t.py
import functools
@functools.cache
def f(x):
"""cached
>>> f(1)
-2
"""
return -x
```
```console
$ python -m doctest /tmp/t.py
**********************************************************************
File "/tmp/t.py", line ?, in t.f
Failed example:
f(1)
Expected:
-2
Got:
-1
**********************************************************************
1 item had failures:
1 of 1 in t.f
***Test Failed*** 1 failure.
```
where we can see that the `File "..."` line is missing the line number of the example.
Also:
```python
>>> import doctest
>>> import t
>>> dt, = doctest.DocTestFinder().find(t)
>>> dt
<DocTest t.f from /home/denis/src/cpython/t.py:None (1 example)>
>>> print(dt.lineno)
None
```
This is because `DocTest._find_lineno()` relies on `inspect.isfunction()` to possibly inspect the function's code and get line numbers; but `inspect.isfunction()` returns `False` for a function decorated with `functools.cache`, because only plain `types.FunctionType` is considered.
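A quick illustration of that check (assuming the `/tmp/t.py` module shown above is importable as `t`):
```python
>>> import inspect, t
>>> inspect.isfunction(t.f)               # the cache wrapper is not a plain function
False
>>> inspect.isfunction(t.f.__wrapped__)   # the underlying function is
True
```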
*Original question*: Should such cached functions be considered as well in `inspect.isfunction()`?
I would be happy to work on a fix in case this change is acceptable.
### CPython versions tested on:
CPython main branch, 3.13
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136930
* gh-137615
* gh-137616
<!-- /gh-linked-prs -->
|
fece15d29f28e89f1231afa80508c80ed28dc37d
|
d5e75c07682864e9d265e11f5e4730147e7d4842
|
python/cpython
|
python__cpython-136885
|
# Replace reference to Google Groups
# Documentation
[Google is ending support for](https://support.google.com/groups/answer/11036538?visit_id=638886377438342734-2695709570&p=usenet&rd=1) their Usenet client. There's a reference in the logging documentation which says the newsgroup is available under that Google Groups link. We should remove it or link to an alternative Usenet client.
[Reference](https://github.com/python/cpython/blame/8f59fbb082a4d64619aeededc47b3b45212d2341/Doc/howto/logging.rst#L306)
<img width="818" height="116" alt="Image" src="https://github.com/user-attachments/assets/90114154-b9a3-4fdc-8050-11ee4ba60f47" />
Taken from here https://docs.python.org/3/howto/logging.html#next-steps @ 2025-07-20
<!-- gh-linked-prs -->
### Linked PRs
* gh-136885
* gh-136905
* gh-136906
<!-- /gh-linked-prs -->
|
1e672935b44e084439527507a865b94a4c1315c3
|
5798348a0739ccf46f690f5fa1443080ec5de310
|
python/cpython
|
python__cpython-136875
|
# `url2pathname()` doesn't handle URL query or fragment components
# Bug report
### Bug description:
`urllib.request.url2pathname()` incorrectly treats URL query (`?a=b&c=d`) and fragment (`#anchor`) components as part of the URL path
```python
>>> from urllib.request import url2pathname
>>> url2pathname('file://localhost/etc/hosts?foo=bar#badgers', require_scheme=True)
'/etc/hosts?foo=bar#badgers' # expected '/etc/hosts'
```
I _think_ they should be silently discarded as they have no bearing on the filesystem path (similar to how we discard the netloc if it's a local hostname).
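As a purely illustrative sketch of that idea (not the actual patch), the query and fragment could be split off before converting, e.g. with `urllib.parse.urlsplit`:
```python
from urllib.parse import urlsplit
from urllib.request import url2pathname

def url_to_path(url):
    # Drop the query and fragment; they have no bearing on the filesystem path.
    parts = urlsplit(url)
    stripped = parts._replace(query='', fragment='').geturl()
    return url2pathname(stripped, require_scheme=True)

print(url_to_path('file://localhost/etc/hosts?foo=bar#badgers'))  # '/etc/hosts'
```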
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136875
* gh-136942
<!-- /gh-linked-prs -->
|
80b2d60a51cfd824d025eb8b3ec500acce5c010c
|
4b68289ca6954b8d135e2ee2344e67fae38239fd
|
python/cpython
|
python__cpython-136951
|
# data races in instrumentation when running coverage under TSAN
When running the test suite of [python-isal](https://github.com/pycompression/python-isal), which uses coverage.py, there are multiple data races reported under a TSAN build:
Races:
<details>
```console
#9 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#10 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#11 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#12 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#13 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#14 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#15 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#16 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#17 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#18 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#19 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#20 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#21 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#22 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#23 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#24 _PyObject_Call call.c:361 (python.exe:arm64+0x10007f418)
#25 PyObject_Call call.c:373 (python.exe:arm64+0x10007f4cc)
#26 _PyEval_EvalFrameDefault generated_cases.c.h:2656 (python.exe:arm64+0x10027b6b4)
#27 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#28 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#29 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#30 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#31 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#32 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#33 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#34 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#35 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#36 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#37 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#38 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#39 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#40 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#41 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#42 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#43 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#44 _PyEval_EvalFrameDefault generated_cases.c.h:2968 (python.exe:arm64+0x10027c754)
#45 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#46 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#47 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#48 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#49 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#50 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#51 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#52 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#53 _PyEval_EvalFrameDefault generated_cases.c.h:2968 (python.exe:arm64+0x10027c754)
#54 PyEval_EvalCode ceval.c:868 (python.exe:arm64+0x10026a2ac)
#55 builtin_exec bltinmodule.c.h:568 (python.exe:arm64+0x100263f80)
#56 cfunction_vectorcall_FASTCALL_KEYWORDS methodobject.c:465 (python.exe:arm64+0x10011f43c)
#57 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f1a8)
#58 _PyEval_EvalFrameDefault generated_cases.c.h:1620 (python.exe:arm64+0x1002774ac)
#59 PyEval_EvalCode ceval.c:868 (python.exe:arm64+0x10026a2ac)
#60 run_mod pythonrun.c:1436 (python.exe:arm64+0x1003359cc)
#61 _PyRun_SimpleFileObject pythonrun.c:521 (python.exe:arm64+0x1003311f8)
#62 _PyRun_AnyFileObject pythonrun.c:81 (python.exe:arm64+0x100330950)
#63 pymain_run_file main.c:429 (python.exe:arm64+0x100370710)
#64 Py_RunMain main.c:772 (python.exe:arm64+0x10036fb44)
#65 pymain_main main.c:802 (python.exe:arm64+0x10036ffb0)
#66 Py_BytesMain main.c:826 (python.exe:arm64+0x100370084)
#67 main python.c:15 (python.exe:arm64+0x100000a04)
SUMMARY: ThreadSanitizer: data race generated_cases.c.h:9254 in _PyEval_EvalFrameDefault
==================
==================
tests/test_igzip.py::test_compress_stdin_stdout[1] PARALLEL FAILED
tests/test_igzip.py::test_compress_stdin_stdout[2] PARALLEL FAILED
tests/test_igzip.py::test_compress_stdin_stdout[3] PARALLEL FAILED
tests/test_igzip.py::test_decompress_infile_outfile PARALLEL PASSED
==================
WARNING: ThreadSanitizer: data race (pid=60855)
Read of size 1 at 0x0001181e17ae by thread T26292:
#0 _PyEval_EvalFrameDefault generated_cases.c.h (python.exe:arm64+0x10027bbb8)
#1 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#2 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#3 method_vectorcall classobject.c:73 (python.exe:arm64+0x100083d20)
#4 context_run context.c:728 (python.exe:arm64+0x1002b5200)
#5 _PyEval_EvalFrameDefault generated_cases.c.h:3766 (python.exe:arm64+0x10027fa0c)
#6 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#7 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#8 method_vectorcall classobject.c:73 (python.exe:arm64+0x100083d20)
#9 _PyObject_Call call.c:348 (python.exe:arm64+0x10007f458)
#10 PyObject_Call call.c:373 (python.exe:arm64+0x10007f4cc)
#11 thread_run _threadmodule.c:373 (python.exe:arm64+0x1003ffad8)
#12 pythread_wrapper thread_pthread.h:232 (python.exe:arm64+0x100357734)
Previous atomic write of size 1 at 0x0001181e17ae by thread T26290:
#0 call_instrumentation_vector instrumentation.c:1194 (python.exe:arm64+0x100304348)
#1 _Py_call_instrumentation_jump instrumentation.c:1245 (python.exe:arm64+0x1003047f8)
#2 _PyEval_EvalFrameDefault generated_cases.c.h:7298 (python.exe:arm64+0x100271388)
#3 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#4 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#5 method_vectorcall classobject.c:73 (python.exe:arm64+0x100083d20)
#6 context_run context.c:728 (python.exe:arm64+0x1002b5200)
#7 _PyEval_EvalFrameDefault generated_cases.c.h:3766 (python.exe:arm64+0x10027fa0c)
#8 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#9 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#10 method_vectorcall classobject.c:73 (python.exe:arm64+0x100083d20)
#11 _PyObject_Call call.c:348 (python.exe:arm64+0x10007f458)
#12 PyObject_Call call.c:373 (python.exe:arm64+0x10007f4cc)
#13 thread_run _threadmodule.c:373 (python.exe:arm64+0x1003ffad8)
#14 pythread_wrapper thread_pthread.h:232 (python.exe:arm64+0x100357734)
Thread T26292 (tid=709487, running) created by main thread at:
#0 pthread_create <null> (libclang_rt.tsan_osx_dynamic.dylib:arm64e+0x32b00)
#1 do_start_joinable_thread thread_pthread.h:279 (python.exe:arm64+0x10035698c)
#2 PyThread_start_joinable_thread thread_pthread.h:321 (python.exe:arm64+0x1003567d4)
#3 do_start_new_thread _threadmodule.c:1877 (python.exe:arm64+0x1003ff68c)
#4 thread_PyThread_start_joinable_thread _threadmodule.c:1992 (python.exe:arm64+0x1003fe41c)
#5 cfunction_call methodobject.c:564 (python.exe:arm64+0x10012009c)
#6 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#7 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#8 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#9 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#10 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#11 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#12 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#13 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#14 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#15 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#16 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#17 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#18 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#19 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#20 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#21 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#22 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#23 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#24 _PyObject_Call call.c:361 (python.exe:arm64+0x10007f418)
#25 PyObject_Call call.c:373 (python.exe:arm64+0x10007f4cc)
#26 _PyEval_EvalFrameDefault generated_cases.c.h:2656 (python.exe:arm64+0x10027b6b4)
#27 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#28 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#29 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#30 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#31 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#32 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#33 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#34 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#35 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#36 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#37 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#38 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#39 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#40 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#41 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#42 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#43 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#44 _PyEval_EvalFrameDefault generated_cases.c.h:2968 (python.exe:arm64+0x10027c754)
#45 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#46 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#47 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#48 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#49 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#50 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#51 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#52 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#53 _PyEval_EvalFrameDefault generated_cases.c.h:2968 (python.exe:arm64+0x10027c754)
#54 PyEval_EvalCode ceval.c:868 (python.exe:arm64+0x10026a2ac)
#55 builtin_exec bltinmodule.c.h:568 (python.exe:arm64+0x100263f80)
#56 cfunction_vectorcall_FASTCALL_KEYWORDS methodobject.c:465 (python.exe:arm64+0x10011f43c)
#57 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f1a8)
#58 _PyEval_EvalFrameDefault generated_cases.c.h:1620 (python.exe:arm64+0x1002774ac)
#59 PyEval_EvalCode ceval.c:868 (python.exe:arm64+0x10026a2ac)
#60 run_mod pythonrun.c:1436 (python.exe:arm64+0x1003359cc)
#61 _PyRun_SimpleFileObject pythonrun.c:521 (python.exe:arm64+0x1003311f8)
#62 _PyRun_AnyFileObject pythonrun.c:81 (python.exe:arm64+0x100330950)
#63 pymain_run_file main.c:429 (python.exe:arm64+0x100370710)
#64 Py_RunMain main.c:772 (python.exe:arm64+0x10036fb44)
#65 pymain_main main.c:802 (python.exe:arm64+0x10036ffb0)
#66 Py_BytesMain main.c:826 (python.exe:arm64+0x100370084)
#67 main python.c:15 (python.exe:arm64+0x100000a04)
Thread T26290 (tid=709485, running) created by main thread at:
#0 pthread_create <null> (libclang_rt.tsan_osx_dynamic.dylib:arm64e+0x32b00)
#1 do_start_joinable_thread thread_pthread.h:279 (python.exe:arm64+0x10035698c)
#2 PyThread_start_joinable_thread thread_pthread.h:321 (python.exe:arm64+0x1003567d4)
#3 do_start_new_thread _threadmodule.c:1877 (python.exe:arm64+0x1003ff68c)
#4 thread_PyThread_start_joinable_thread _threadmodule.c:1992 (python.exe:arm64+0x1003fe41c)
#5 cfunction_call methodobject.c:564 (python.exe:arm64+0x10012009c)
#6 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#7 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#8 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#9 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#10 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#11 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#12 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#13 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#14 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#15 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#16 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#17 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#18 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#19 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#20 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#21 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#22 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#23 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#24 _PyObject_Call call.c:361 (python.exe:arm64+0x10007f418)
#25 PyObject_Call call.c:373 (python.exe:arm64+0x10007f4cc)
#26 _PyEval_EvalFrameDefault generated_cases.c.h:2656 (python.exe:arm64+0x10027b6b4)
#27 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#28 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#29 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#30 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#31 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#32 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#33 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#34 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#35 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#36 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#37 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#38 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#39 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#40 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#41 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#42 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#43 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#44 _PyEval_EvalFrameDefault generated_cases.c.h:2968 (python.exe:arm64+0x10027c754)
#45 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#46 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#47 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#48 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#49 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#50 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#51 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#52 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#53 _PyEval_EvalFrameDefault generated_cases.c.h:2968 (python.exe:arm64+0x10027c754)
#54 PyEval_EvalCode ceval.c:868 (python.exe:arm64+0x10026a2ac)
#55 builtin_exec bltinmodule.c.h:568 (python.exe:arm64+0x100263f80)
#56 cfunction_vectorcall_FASTCALL_KEYWORDS methodobject.c:465 (python.exe:arm64+0x10011f43c)
#57 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f1a8)
#58 _PyEval_EvalFrameDefault generated_cases.c.h:1620 (python.exe:arm64+0x1002774ac)
#59 PyEval_EvalCode ceval.c:868 (python.exe:arm64+0x10026a2ac)
#60 run_mod pythonrun.c:1436 (python.exe:arm64+0x1003359cc)
#61 _PyRun_SimpleFileObject pythonrun.c:521 (python.exe:arm64+0x1003311f8)
#62 _PyRun_AnyFileObject pythonrun.c:81 (python.exe:arm64+0x100330950)
#63 pymain_run_file main.c:429 (python.exe:arm64+0x100370710)
#64 Py_RunMain main.c:772 (python.exe:arm64+0x10036fb44)
#65 pymain_main main.c:802 (python.exe:arm64+0x10036ffb0)
#66 Py_BytesMain main.c:826 (python.exe:arm64+0x100370084)
#67 main python.c:15 (python.exe:arm64+0x100000a04)
SUMMARY: ThreadSanitizer: data race generated_cases.c.h in _PyEval_EvalFrameDefault
==================
tests/test_igzip.py::test_compress_infile_outfile PARALLEL PASSED
tests/test_igzip.py::test_decompress_infile_outfile_error PARALLEL PASSED
==================
WARNING: ThreadSanitizer: data race (pid=60855)
Atomic write of size 1 at 0x00011812e114 by thread T26300:
#0 _PyEval_EvalFrameDefault generated_cases.c.h:10411 (python.exe:arm64+0x1002770c4)
#1 gen_send_ex2 genobject.c:259 (python.exe:arm64+0x1000b3854)
#2 gen_iternext genobject.c:634 (python.exe:arm64+0x1000b1068)
#3 list_extend_iter_lock_held listobject.c:1263 (python.exe:arm64+0x1000d8b84)
#4 _list_extend listobject.c:1452 (python.exe:arm64+0x1000d3c48)
#5 _PyList_Extend listobject.c:1480 (python.exe:arm64+0x1000d3540)
#6 PySequence_List abstract.c:2085 (python.exe:arm64+0x100054ad0)
#7 PySequence_Fast abstract.c:2116 (python.exe:arm64+0x100054d2c)
#8 PyUnicode_Join unicodeobject.c:10232 (python.exe:arm64+0x1001c3674)
#9 unicode_join unicodeobject.c:12513 (python.exe:arm64+0x1001e81a4)
#10 _PyEval_EvalFrameDefault generated_cases.c.h:3979 (python.exe:arm64+0x10026dc20)
#11 gen_send_ex2 genobject.c:259 (python.exe:arm64+0x1000b3854)
#12 gen_iternext genobject.c:634 (python.exe:arm64+0x1000b1068)
#13 _PyForIter_VirtualIteratorNext ceval.c:3585 (python.exe:arm64+0x100289c14)
#14 _PyEval_EvalFrameDefault generated_cases.c.h:5751 (python.exe:arm64+0x100273808)
#15 gen_send_ex2 genobject.c:259 (python.exe:arm64+0x1000b3854)
#16 gen_iternext genobject.c:634 (python.exe:arm64+0x1000b1068)
#17 list_extend_iter_lock_held listobject.c:1263 (python.exe:arm64+0x1000d8bc4)
#18 _list_extend listobject.c:1452 (python.exe:arm64+0x1000d3c48)
#19 _PyList_Extend listobject.c:1480 (python.exe:arm64+0x1000d3540)
#20 PySequence_List abstract.c:2085 (python.exe:arm64+0x100054ad0)
#21 PySequence_Fast abstract.c:2116 (python.exe:arm64+0x100054d2c)
#22 PyUnicode_Join unicodeobject.c:10232 (python.exe:arm64+0x1001c3674)
#23 unicode_join unicodeobject.c:12513 (python.exe:arm64+0x1001e81a4)
#24 _PyEval_EvalFrameDefault generated_cases.c.h:3979 (python.exe:arm64+0x10026dc20)
#25 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#26 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#27 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#28 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#29 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#30 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#31 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#32 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#33 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#34 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#35 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#36 method_vectorcall classobject.c:73 (python.exe:arm64+0x100083d20)
#37 context_run context.c:728 (python.exe:arm64+0x1002b5200)
#38 _PyEval_EvalFrameDefault generated_cases.c.h:3766 (python.exe:arm64+0x10027fa0c)
#39 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#40 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#41 method_vectorcall classobject.c:73 (python.exe:arm64+0x100083d20)
#42 _PyObject_Call call.c:348 (python.exe:arm64+0x10007f458)
#43 PyObject_Call call.c:373 (python.exe:arm64+0x10007f4cc)
#44 thread_run _threadmodule.c:373 (python.exe:arm64+0x1003ffad8)
#45 pythread_wrapper thread_pthread.h:232 (python.exe:arm64+0x100357734)
Previous read of size 1 at 0x00011812e114 by thread T26298:
#0 call_instrumentation_vector instrumentation.c:1194 (python.exe:arm64+0x1003042b4)
#1 _Py_call_instrumentation instrumentation.c:1209 (python.exe:arm64+0x100303e94)
#2 _PyEval_EvalFrameDefault generated_cases.c.h:7500 (python.exe:arm64+0x10026f920)
#3 gen_send_ex2 genobject.c:259 (python.exe:arm64+0x1000b3854)
#4 gen_iternext genobject.c:634 (python.exe:arm64+0x1000b1068)
#5 list_extend_iter_lock_held listobject.c:1263 (python.exe:arm64+0x1000d8b84)
#6 _list_extend listobject.c:1452 (python.exe:arm64+0x1000d3c48)
#7 _PyList_Extend listobject.c:1480 (python.exe:arm64+0x1000d3540)
#8 PySequence_List abstract.c:2085 (python.exe:arm64+0x100054ad0)
#9 PySequence_Fast abstract.c:2116 (python.exe:arm64+0x100054d2c)
#10 PyUnicode_Join unicodeobject.c:10232 (python.exe:arm64+0x1001c3674)
#11 unicode_join unicodeobject.c:12513 (python.exe:arm64+0x1001e81a4)
#12 method_vectorcall_O descrobject.c:476 (python.exe:arm64+0x10009693c)
#13 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f1a8)
#14 _PyEval_EvalFrameDefault generated_cases.c.h:1620 (python.exe:arm64+0x1002774ac)
#15 gen_send_ex2 genobject.c:259 (python.exe:arm64+0x1000b3854)
#16 gen_iternext genobject.c:634 (python.exe:arm64+0x1000b1068)
#17 _PyForIter_VirtualIteratorNext ceval.c:3585 (python.exe:arm64+0x100289c14)
#18 _PyEval_EvalFrameDefault generated_cases.c.h:5751 (python.exe:arm64+0x100273808)
#19 gen_send_ex2 genobject.c:259 (python.exe:arm64+0x1000b3854)
#20 gen_iternext genobject.c:634 (python.exe:arm64+0x1000b1068)
#21 list_extend_iter_lock_held listobject.c:1263 (python.exe:arm64+0x1000d8bc4)
#22 _list_extend listobject.c:1452 (python.exe:arm64+0x1000d3c48)
#23 _PyList_Extend listobject.c:1480 (python.exe:arm64+0x1000d3540)
#24 PySequence_List abstract.c:2085 (python.exe:arm64+0x100054ad0)
#25 PySequence_Fast abstract.c:2116 (python.exe:arm64+0x100054d2c)
#26 PyUnicode_Join unicodeobject.c:10232 (python.exe:arm64+0x1001c3674)
#27 unicode_join unicodeobject.c:12513 (python.exe:arm64+0x1001e81a4)
#28 _PyEval_EvalFrameDefault generated_cases.c.h:3979 (python.exe:arm64+0x10026dc20)
#29 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#30 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#31 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#32 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#33 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#34 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#35 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#36 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#37 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#38 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#39 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#40 method_vectorcall classobject.c:73 (python.exe:arm64+0x100083d20)
#41 context_run context.c:728 (python.exe:arm64+0x1002b5200)
#42 _PyEval_EvalFrameDefault generated_cases.c.h:3766 (python.exe:arm64+0x10027fa0c)
#43 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#44 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#45 method_vectorcall classobject.c:73 (python.exe:arm64+0x100083d20)
#46 _PyObject_Call call.c:348 (python.exe:arm64+0x10007f458)
#47 PyObject_Call call.c:373 (python.exe:arm64+0x10007f4cc)
#48 thread_run _threadmodule.c:373 (python.exe:arm64+0x1003ffad8)
#49 pythread_wrapper thread_pthread.h:232 (python.exe:arm64+0x100357734)
Thread T26300 (tid=709520, running) created by main thread at:
#0 pthread_create <null> (libclang_rt.tsan_osx_dynamic.dylib:arm64e+0x32b00)
#1 do_start_joinable_thread thread_pthread.h:279 (python.exe:arm64+0x10035698c)
#2 PyThread_start_joinable_thread thread_pthread.h:321 (python.exe:arm64+0x1003567d4)
#3 do_start_new_thread _threadmodule.c:1877 (python.exe:arm64+0x1003ff68c)
#4 thread_PyThread_start_joinable_thread _threadmodule.c:1992 (python.exe:arm64+0x1003fe41c)
#5 cfunction_call methodobject.c:564 (python.exe:arm64+0x10012009c)
#6 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#7 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#8 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#9 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#10 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#11 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#12 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#13 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#14 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#15 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#16 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#17 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#18 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#19 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#20 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#21 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#22 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#23 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#24 _PyObject_Call call.c:361 (python.exe:arm64+0x10007f418)
#25 PyObject_Call call.c:373 (python.exe:arm64+0x10007f4cc)
#26 _PyEval_EvalFrameDefault generated_cases.c.h:2656 (python.exe:arm64+0x10027b6b4)
#27 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#28 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#29 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#30 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#31 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#32 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#33 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#34 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#35 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#36 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#37 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#38 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#39 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#40 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#41 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#42 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#43 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#44 _PyEval_EvalFrameDefault generated_cases.c.h:2968 (python.exe:arm64+0x10027c754)
#45 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#46 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#47 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#48 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#49 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#50 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#51 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#52 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#53 _PyEval_EvalFrameDefault generated_cases.c.h:2968 (python.exe:arm64+0x10027c754)
#54 PyEval_EvalCode ceval.c:868 (python.exe:arm64+0x10026a2ac)
#55 builtin_exec bltinmodule.c.h:568 (python.exe:arm64+0x100263f80)
#56 cfunction_vectorcall_FASTCALL_KEYWORDS methodobject.c:465 (python.exe:arm64+0x10011f43c)
#57 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f1a8)
#58 _PyEval_EvalFrameDefault generated_cases.c.h:1620 (python.exe:arm64+0x1002774ac)
#59 PyEval_EvalCode ceval.c:868 (python.exe:arm64+0x10026a2ac)
#60 run_mod pythonrun.c:1436 (python.exe:arm64+0x1003359cc)
#61 _PyRun_SimpleFileObject pythonrun.c:521 (python.exe:arm64+0x1003311f8)
#62 _PyRun_AnyFileObject pythonrun.c:81 (python.exe:arm64+0x100330950)
#63 pymain_run_file main.c:429 (python.exe:arm64+0x100370710)
#64 Py_RunMain main.c:772 (python.exe:arm64+0x10036fb44)
#65 pymain_main main.c:802 (python.exe:arm64+0x10036ffb0)
#66 Py_BytesMain main.c:826 (python.exe:arm64+0x100370084)
#67 main python.c:15 (python.exe:arm64+0x100000a04)
Thread T26298 (tid=709518, running) created by main thread at:
#0 pthread_create <null> (libclang_rt.tsan_osx_dynamic.dylib:arm64e+0x32b00)
#1 do_start_joinable_thread thread_pthread.h:279 (python.exe:arm64+0x10035698c)
#2 PyThread_start_joinable_thread thread_pthread.h:321 (python.exe:arm64+0x1003567d4)
#3 do_start_new_thread _threadmodule.c:1877 (python.exe:arm64+0x1003ff68c)
#4 thread_PyThread_start_joinable_thread _threadmodule.c:1992 (python.exe:arm64+0x1003fe41c)
#5 cfunction_call methodobject.c:564 (python.exe:arm64+0x10012009c)
#6 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#7 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#8 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#9 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#10 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#11 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#12 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#13 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#14 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#15 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#16 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#17 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#18 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#19 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#20 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#21 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#22 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#23 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#24 _PyObject_Call call.c:361 (python.exe:arm64+0x10007f418)
#25 PyObject_Call call.c:373 (python.exe:arm64+0x10007f4cc)
#26 _PyEval_EvalFrameDefault generated_cases.c.h:2656 (python.exe:arm64+0x10027b6b4)
#27 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#28 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#29 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#30 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#31 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#32 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#33 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#34 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#35 _PyEval_EvalFrameDefault generated_cases.c.h:3236 (python.exe:arm64+0x10027ab80)
#36 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#37 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#38 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#39 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#40 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#41 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#42 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#43 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#44 _PyEval_EvalFrameDefault generated_cases.c.h:2968 (python.exe:arm64+0x10027c754)
#45 _PyEval_Vector ceval.c:1977 (python.exe:arm64+0x10026a6d0)
#46 _PyFunction_Vectorcall call.c (python.exe:arm64+0x10007f7e0)
#47 _PyObject_VectorcallDictTstate call.c:146 (python.exe:arm64+0x10007e3c0)
#48 _PyObject_Call_Prepend call.c:504 (python.exe:arm64+0x10007fddc)
#49 call_method typeobject.c:3055 (python.exe:arm64+0x10018a4e8)
#50 slot_tp_call typeobject.c:10524 (python.exe:arm64+0x10018a308)
#51 _PyObject_MakeTpCall call.c:242 (python.exe:arm64+0x10007e658)
#52 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f244)
#53 _PyEval_EvalFrameDefault generated_cases.c.h:2968 (python.exe:arm64+0x10027c754)
#54 PyEval_EvalCode ceval.c:868 (python.exe:arm64+0x10026a2ac)
#55 builtin_exec bltinmodule.c.h:568 (python.exe:arm64+0x100263f80)
#56 cfunction_vectorcall_FASTCALL_KEYWORDS methodobject.c:465 (python.exe:arm64+0x10011f43c)
#57 PyObject_Vectorcall call.c:327 (python.exe:arm64+0x10007f1a8)
#58 _PyEval_EvalFrameDefault generated_cases.c.h:1620 (python.exe:arm64+0x1002774ac)
#59 PyEval_EvalCode ceval.c:868 (python.exe:arm64+0x10026a2ac)
#60 run_mod pythonrun.c:1436 (python.exe:arm64+0x1003359cc)
#61 _PyRun_SimpleFileObject pythonrun.c:521 (python.exe:arm64+0x1003311f8)
#62 _PyRun_AnyFileObject pythonrun.c:81 (python.exe:arm64+0x100330950)
#63 pymain_run_file main.c:429 (python.exe:arm64+0x100370710)
#64 Py_RunMain main.c:772 (python.exe:arm64+0x10036fb44)
#65 pymain_main main.c:802 (python.exe:arm64+0x10036ffb0)
#66 Py_BytesMain main.c:826 (python.exe:arm64+0x100370084)
#67 main python.c:15 (python.exe:arm64+0x100000a04)
SUMMARY: ThreadSanitizer: data race generated_cases.c.h:10411 in _PyEval_EvalFrameDefault
==================
```
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-136951
* gh-136953
* gh-136994
* gh-137082
<!-- /gh-linked-prs -->
|
f183996eb77fd2d5662c62667298c292c943ebf5
|
322442945084ea9055f86a17fa5096b11ba5b344
|
python/cpython
|
python__cpython-136864
|
# Improve `StrEnum` documentation
The `StrEnum` docs include a note that references `str(StrEnum.member)`, which can make it seem like `member` is a method of the `StrEnum` class.
Also, unlike `IntEnum`, there is no illustrative example.
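Something along these lines could work as an illustrative example (a sketch, not taken from the docs):
```python
>>> from enum import StrEnum
>>> class Color(StrEnum):
...     RED = 'red'
...     GREEN = 'green'
...
>>> Color.RED
<Color.RED: 'red'>
>>> str(Color.RED)
'red'
>>> Color.RED == 'red'      # members are also str instances
True
>>> Color.RED.upper()
'RED'
```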
<!-- gh-linked-prs -->
### Linked PRs
* gh-136864
* gh-136936
* gh-136937
<!-- /gh-linked-prs -->
|
5f9e38f9b9f2b82e841f1b11a8300f2cacd76a36
|
58d305cf387816c559602a95ba850856dc9b8129
|
python/cpython
|
python__cpython-136856
|
# Prevent `make venv` from saying it succeeded when it failed
As reported in https://github.com/python/devguide/issues/1607, if an error happens when executing `make venv`, it will still be reported as successful ("The venv has been created in the ./venv directory").
The `set -e` option should be enabled to exit on error.
<!-- gh-linked-prs -->
### Linked PRs
* gh-136856
* gh-136860
* gh-136861
<!-- /gh-linked-prs -->
|
9c2f91cde80a6758e0c1390323bf6f7eb4b5d6b5
|
dda9d0011fc3d3f561ca00ac83bf7a55a6325aa9
|
python/cpython
|
python__cpython-136853
|
# Emscripten buildbot should run against node 24
We want to support pyrepl (#124621), but it will require JSPI, which requires node 24. So to test this, we should run the tests against node 24.
<!-- gh-linked-prs -->
### Linked PRs
* gh-136853
* gh-136907
* gh-136909
<!-- /gh-linked-prs -->
|
9c7b2af73dee2b99793637c3b70f724641b84349
|
cf19b6435d02dd7be11b84a44f4a8a9f1a935b15
|
python/cpython
|
python__cpython-136811
|
# minor cleanup: dict .update({x: y}) calls with a single item dict literal
@disconnect3d was analyzing stdlib code and noted several places with an old code pattern where a dict's .update method is called with a single-element dict literal, `d.update({key: value})`, instead of a plain `d[key] = value` assignment. See the PR which cleans these up: https://github.com/python/cpython/pull/136811
It avoids an unnecessary temporary dict and method call, and makes the code less awkward.
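For clarity, the pattern in question looks roughly like this:
```python
headers = {}

# old pattern: builds a throwaway one-element dict and calls a method
headers.update({"Content-Type": "text/html"})

# equivalent, simpler assignment
headers["Content-Type"] = "text/html"
```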
<!-- gh-linked-prs -->
### Linked PRs
* gh-136811
* gh-136840
<!-- /gh-linked-prs -->
|
69ea1b3a8f45fec46add3272ad47f14ff5321ae8
|
67036f1ee1c23257d320a80c152090235b8ca892
|
python/cpython
|
python__cpython-136804
|
# Repl syntax highlighting fails in pattern matching when the previous case spans across multiple lines
# Bug report
### Bug description:
There is a bug in the new repl syntax highlighting when using pattern matching with cases spanning across multiple lines.
To reproduce:
```python
def status(code):
match code:
case 0: return "OK" # correct highlight
case -1: # correct highlight
return "Error"
case -2: # no highlight
return "Big Error"
case _: # no highlight
return "Unknown status"
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-136804
* gh-136813
<!-- /gh-linked-prs -->
|
3a648445337098abf22c7faa296389dab597797c
|
6a1c93af806d0ca5d3fb86cd183d00013bbf28d1
|
python/cpython
|
python__cpython-136829
|
# One million hertz
# Documentation
https://docs.python.org/3.15/whatsnew/3.15.html says of the new sampling profiler:
> This approach provides virtually zero overhead while achieving sampling rates of **up to 200,000 Hz**, making it the fastest sampling profiler available for Python (at the time of its contribution) and ideal for debugging performance issues in production environments.
But https://creators.spotify.com/pod/profile/corepy/episodes/The-Megahertz-e35ffoi says it can be up to 1,000,000 Hz.
@pablogsal Shall we change What's New to say "up to 1,000,000 Hz"?
<!-- gh-linked-prs -->
### Linked PRs
* gh-136829
<!-- /gh-linked-prs -->
|
1ba23244f3306aa8d19eb4b98cfee6ad4cf514c9
|
6293d8a1a648a498b7ac899631b74fa25c71c1ac
|
python/cpython
|
python__cpython-136802
|
# Align `ValueError` exception messages when a hash digest is not available
### Proposal:
Currently, we have a few different messages when a hash algorithm is not supported. It's annoying because the user does not necessarily know what is what. Also, unfortunately, since OpenSSL 3.0, when a digest is not supported by the FIPS provider, the reason message only contains "ValueError: [digital envelope routines] unsupported" and not the old "ValueError: [digital envelope routines: EVP_DigestInit_ex] disabled for FIPS", as functions are no longer indicated in OpenSSL errors.
This is a bit annoying, and especially very confusing in the following cases:
```py
>>> _hashlib.openssl_md5()
Traceback (most recent call last):
File "<python-input-5>", line 1, in <module>
_hashlib.openssl_md5()
~~~~~~~~~~~~~~~~~~~~^^
_hashlib.UnsupportedDigestmodError: [digital envelope routines] unsupported
>>> import hmac
>>> hmac.new(b"", b"", "shake_128")
...
ValueError: error in OpenSSL function HMAC_Init_ex()
```
This does not give any information about why it failed. So we need to do better here, for the user at least. On the other hand, with blocked built-in functions, the ValueError is raised by `__get_builtin_constructor`, which has a better message.
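For comparison, the built-in path at least names the algorithm (e.g. with a digest name that nothing provides):
```py
>>> import hashlib
>>> hashlib.new("nosuchhash")
Traceback (most recent call last):
  ...
ValueError: unsupported hash type nosuchhash
```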
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136802
<!-- /gh-linked-prs -->
|
6be49ee517258281357aa6846d2564bc5626b7ca
|
800d37feca2e0ea3343995b3b817b653db2f9034
|
python/cpython
|
python__cpython-136784
|
# ctypes docs should list fixed-size integer types in table for "fundamental data types"
See https://docs.python.org/3.15/library/ctypes.html#fundamental-data-types. Fixed-size types, like [c_int32](https://docs.python.org/3.15/library/ctypes.html#ctypes.c_int32), are missing here.
They can be found below in section [Fundamental data types](https://docs.python.org/3.15/library/ctypes.html#ctypes-fundamental-data-types-2), but I think they should be listed in the table as well. Or someone could again open an issue like https://github.com/python/cpython/issues/51108 ;-)
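For reference, the fixed-size types are already usable just like the ones in the table, e.g.:
```python
>>> import ctypes
>>> ctypes.sizeof(ctypes.c_int32)
4
>>> ctypes.sizeof(ctypes.c_uint64)
8
>>> ctypes.c_int32(-1).value
-1
```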
<!-- gh-linked-prs -->
### Linked PRs
* gh-136784
* gh-136785
* gh-136786
<!-- /gh-linked-prs -->
|
acefb978dcb5dd554e3c49a3015ee5c2ad6bfda1
|
263e451c4114ac98add1f1e8aa9ee030e054bdfd
|
python/cpython
|
python__cpython-136774
|
# Misleading comment in `enum.verify.__call__`
I saw some nonsense in a comment.
```py
if enum_type == 'flag':
# check for powers of two
for i in range(_high_bit(low)+1, _high_bit(high)):
if 2**i not in values:
missing.append(2**i)
elif enum_type == 'enum':
# check for powers of one
for i in range(low+1, high):
if i not in values:
missing.append(i)
```
The nonsense in this code, present in the class EnumCheck, is the comment about "powers of one". What is meant is rather "checking that the values are contiguous".
I suggest replacing the comment with something more suitable.
<!-- gh-linked-prs -->
### Linked PRs
* gh-136774
* gh-136841
* gh-136842
<!-- /gh-linked-prs -->
|
6a1c93af806d0ca5d3fb86cd183d00013bbf28d1
|
f575588ccf27d8d54a1e99cfda944f2614b3255c
|
python/cpython
|
python__cpython-136794
|
# IPv4 addresses in 0.0.0.0/8 should be marked reserved
# Bug report
### Bug description:
According to https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
and https://en.wikipedia.org/wiki/Reserved_IP_addresses 0.0.0.0/8 and other ranges should be reserved.
`ipaddress.IPv4Address("0.0.0.0").is_reserved` is `False` however.
The definition in the Python docs is
```
is_reserved
True if the address is otherwise IETF reserved.
```
So I believe there are several ranges missing.
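For comparison, a manual membership check against one of the missing blocks (just a sketch, not a proposed API):
```python
>>> import ipaddress
>>> addr = ipaddress.IPv4Address("0.0.0.0")
>>> addr.is_reserved
False
>>> addr in ipaddress.IPv4Network("0.0.0.0/8")   # the block the issue says should be covered
True
```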
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-136794
* gh-136827
* gh-136828
<!-- /gh-linked-prs -->
|
6293d8a1a648a498b7ac899631b74fa25c71c1ac
|
57acd65a30f8cb1f3a3cc01322f03215017f5caa
|
python/cpython
|
python__cpython-136709
|
# `os.chdir` docstring is invalid rst
# Bug report
### Bug description:
```python
>>> from docutils.core import publish_doctree
>>> from posix import chdir
>>> publish_doctree(chdir.__doc__)
<string>:5: (ERROR/3) Unexpected indentation.
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136709
* gh-136719
* gh-136721
<!-- /gh-linked-prs -->
|
bde808ad6ba5eee8a6201983cf071449d7ce7e39
|
2f0db9b05f0598548c0c136571c31065ecf961e5
|
python/cpython
|
python__cpython-136747
|
# `sys.monitoring.register_callback()` audit event not documented in the table
# Documentation
Summary
----
`sys.monitoring.register_callback()` is mentioned in the [`sys.monitoring` docs](https://docs.python.org/3/library/sys.monitoring.html#sys.monitoring.register_callback) as a function which generates audit events, but it doesn't have a corresponding entry in the [audit events table](https://docs.python.org/3/library/audit_events.html).
Cause
----
The corresponding `audit-event` ([definition](https://github.com/python/cpython/blob/cb59eaefeda5ff44ac0c742bff2b8afc023be313/Doc/tools/extensions/audit_events.py#L254); [usage example (`glob.glob`)](https://github.com/python/cpython/blob/cb59eaefeda5ff44ac0c742bff2b8afc023be313/Doc/library/glob.rst?plain=1#L71)) directive is missing from [`Doc/library/sys.monitoring.rst`](blob/main/Doc/library/sys.monitoring.rst).
Possible fix
----
According to [`Python/instrumentation.c`](https://github.com/python/cpython/blob/cb59eaefeda5ff44ac0c742bff2b8afc023be313/Python/instrumentation.c#L2284), the event is raised under the name `'sys.monitoring.register_callback'` with a single argument (the registered callback or `None`). So maybe something like:
```rst
.. audit-event:: sys.monitoring.register_callback func sys.monitoring.register_callback
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-136747
* gh-136749
* gh-136750
<!-- /gh-linked-prs -->
|
28937d3a21cf8168c853ae43374a8287c21f71c9
|
eddc8c0a1d274ff6393c6fa233e535360c0dd07b
|
python/cpython
|
python__cpython-136683
|
# os.path.samestat incorrectly states "Accepts a path-like object"
# Documentation
Unless I am in some way confused, that statement is incorrect. The function only accepts `stat` objects.
https://github.com/python/cpython/blob/624bf52c83abcb1f948f9059e29729fa94d38086/Lib/genericpath.py#L121-L126
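A quick demonstration of the actual behaviour (illustrative):
```python
>>> import os, os.path
>>> st = os.stat(".")
>>> os.path.samestat(st, os.stat("."))   # stat results work
True
>>> os.path.samestat(".", ".")           # a path-like object does not
Traceback (most recent call last):
  ...
AttributeError: 'str' object has no attribute 'st_ino'
```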
<!-- gh-linked-prs -->
### Linked PRs
* gh-136683
* gh-136684
* gh-136685
<!-- /gh-linked-prs -->
|
7e10a103dfe52feb0ef3d541e08abc2640838101
|
624bf52c83abcb1f948f9059e29729fa94d38086
|
python/cpython
|
python__cpython-136670
|
# build `_asyncio` module as static module
Currently `_asyncio` is built as a shared library module, which causes it to use slower function calls for getting the thread state, whereas if it is built as a static module it can read the current thread state directly via the faster segment registers. This affects both free-threading and normal builds; however, on free-threading builds, critical sections heavily use the thread state, so it has a larger impact.
After the change, the function calls to `_PyThreadState_GetCurrent` are completely eliminated and the thread state is read directly via the `fs` register.
Normal build:
Before:
```
(gdb) disassemble _asyncio__get_running_loop
Dump of assembler code for function _asyncio__get_running_loop:
0x00007ffff73e1d80 <+0>: push %rax
0x00007ffff73e1d81 <+1>: call 0x7ffff73e1500 <_PyThreadState_GetCurrent@plt>
0x00007ffff73e1d86 <+6>: mov 0x358(%rax),%rax
0x00007ffff73e1d8d <+13>: test %rax,%rax
0x00007ffff73e1d90 <+16>: je 0x7ffff73e1da2 <_asyncio__get_running_loop+34>
0x00007ffff73e1d92 <+18>: mov (%rax),%ecx
0x00007ffff73e1d94 <+20>: cmp $0xbfffffff,%ecx
0x00007ffff73e1d9a <+26>: ja 0x7ffff73e1da0 <_asyncio__get_running_loop+32>
0x00007ffff73e1d9c <+28>: inc %ecx
0x00007ffff73e1d9e <+30>: mov %ecx,(%rax)
0x00007ffff73e1da0 <+32>: pop %rcx
0x00007ffff73e1da1 <+33>: ret
0x00007ffff73e1da2 <+34>: mov 0xb1ef(%rip),%rax # 0x7ffff73ecf98
0x00007ffff73e1da9 <+41>: pop %rcx
0x00007ffff73e1daa <+42>: ret
End of assembler dump.
(gdb)
```
After:
```
(gdb) disassemble _asyncio__get_running_loop
Dump of assembler code for function _asyncio__get_running_loop:
0x0000555555853c80 <+0>: mov $0xfffffffffffffff0,%rax
0x0000555555853c87 <+7>: mov %fs:(%rax),%rax
0x0000555555853c8b <+11>: mov 0x358(%rax),%rax
0x0000555555853c92 <+18>: test %rax,%rax
0x0000555555853c95 <+21>: je 0x555555853ca6 <_asyncio__get_running_loop+38>
0x0000555555853c97 <+23>: mov (%rax),%ecx
0x0000555555853c99 <+25>: cmp $0xbfffffff,%ecx
0x0000555555853c9f <+31>: ja 0x555555853ca5 <_asyncio__get_running_loop+37>
0x0000555555853ca1 <+33>: inc %ecx
0x0000555555853ca3 <+35>: mov %ecx,(%rax)
0x0000555555853ca5 <+37>: ret
0x0000555555853ca6 <+38>: lea 0x1f377b(%rip),%rax # 0x555555a47428 <_Py_NoneStruct>
0x0000555555853cad <+45>: ret
End of assembler dump.
```
free-threading:
Before:
```
(gdb) disassemble _asyncio_Future_done
Dump of assembler code for function _asyncio_Future_done:
0x00007ffff7412c30 <+0>: push %r14
0x00007ffff7412c32 <+2>: push %rbx
0x00007ffff7412c33 <+3>: sub $0x18,%rsp
0x00007ffff7412c37 <+7>: mov %rdi,%rbx
0x00007ffff7412c3a <+10>: lea 0xa(%rdi),%r14
0x00007ffff7412c3e <+14>: mov $0x1,%cl
0x00007ffff7412c40 <+16>: xor %eax,%eax
0x00007ffff7412c42 <+18>: lock cmpxchg %cl,0xa(%rdi)
0x00007ffff7412c47 <+23>: jne 0x7ffff7412c74 <_asyncio_Future_done+68>
0x00007ffff7412c49 <+25>: call 0x7ffff740c540 <_PyThreadState_GetCurrent@plt>
0x00007ffff7412c4e <+30>: mov %r14,0x10(%rsp)
0x00007ffff7412c53 <+35>: mov 0xb0(%rax),%rcx
0x00007ffff7412c5a <+42>: mov %rcx,0x8(%rsp)
0x00007ffff7412c5f <+47>: lea 0x8(%rsp),%rcx
0x00007ffff7412c64 <+52>: mov %rcx,0xb0(%rax)
0x00007ffff7412c6b <+59>: cmpq $0x0,0x20(%rbx)
0x00007ffff7412c70 <+64>: jne 0x7ffff7412c88 <_asyncio_Future_done+88>
0x00007ffff7412c72 <+66>: jmp 0x7ffff7412c8e <_asyncio_Future_done+94>
0x00007ffff7412c74 <+68>: lea 0x8(%rsp),%rdi
0x00007ffff7412c79 <+73>: mov %r14,%rsi
0x00007ffff7412c7c <+76>: call 0x7ffff740c130 <_PyCriticalSection_BeginSlow@plt>
0x00007ffff7412c81 <+81>: cmpq $0x0,0x20(%rbx)
0x00007ffff7412c86 <+86>: je 0x7ffff7412c8e <_asyncio_Future_done+94>
0x00007ffff7412c88 <+88>: cmpl $0x0,0x78(%rbx)
0x00007ffff7412c8c <+92>: jne 0x7ffff7412ca1 <_asyncio_Future_done+113>
0x00007ffff7412c8e <+94>: mov 0x92f3(%rip),%rbx # 0x7ffff741bf88
0x00007ffff7412c95 <+101>: mov 0x10(%rsp),%rdi
0x00007ffff7412c9a <+106>: test %rdi,%rdi
0x00007ffff7412c9d <+109>: jne 0x7ffff7412cb2 <_asyncio_Future_done+130>
0x00007ffff7412c9f <+111>: jmp 0x7ffff7412cdf <_asyncio_Future_done+175>
0x00007ffff7412ca1 <+113>: mov 0x92f8(%rip),%rbx # 0x7ffff741bfa0
0x00007ffff7412ca8 <+120>: mov 0x10(%rsp),%rdi
0x00007ffff7412cad <+125>: test %rdi,%rdi
0x00007ffff7412cb0 <+128>: je 0x7ffff7412cdf <_asyncio_Future_done+175>
0x00007ffff7412cb2 <+130>: xor %ecx,%ecx
0x00007ffff7412cb4 <+132>: mov $0x1,%al
0x00007ffff7412cb6 <+134>: lock cmpxchg %cl,(%rdi)
0x00007ffff7412cba <+138>: je 0x7ffff7412cc1 <_asyncio_Future_done+145>
0x00007ffff7412cbc <+140>: call 0x7ffff740c550 <PyMutex_Unlock@plt>
0x00007ffff7412cc1 <+145>: call 0x7ffff740c540 <_PyThreadState_GetCurrent@plt>
0x00007ffff7412cc6 <+150>: mov 0x8(%rsp),%rcx
0x00007ffff7412ccb <+155>: mov %rcx,0xb0(%rax)
0x00007ffff7412cd2 <+162>: test $0x1,%cl
0x00007ffff7412cd5 <+165>: je 0x7ffff7412cdf <_asyncio_Future_done+175>
0x00007ffff7412cd7 <+167>: mov %rax,%rdi
0x00007ffff7412cda <+170>: call 0x7ffff740c390 <_PyCriticalSection_Resume@plt>
0x00007ffff7412cdf <+175>: mov %rbx,%rax
0x00007ffff7412ce2 <+178>: add $0x18,%rsp
0x00007ffff7412ce6 <+182>: pop %rbx
0x00007ffff7412ce7 <+183>: pop %r14
0x00007ffff7412ce9 <+185>: ret
End of assembler dump.
```
After:
```
(gdb) disassemble _asyncio_Future_done
Dump of assembler code for function _asyncio_Future_done:
0x0000555555892fc0 <+0>: push %rbx
0x0000555555892fc1 <+1>: sub $0x10,%rsp
0x0000555555892fc5 <+5>: mov %rdi,%rbx
0x0000555555892fc8 <+8>: lea 0xa(%rdi),%rsi
0x0000555555892fcc <+12>: mov $0x1,%cl
0x0000555555892fce <+14>: xor %eax,%eax
0x0000555555892fd0 <+16>: lock cmpxchg %cl,0xa(%rdi)
0x0000555555892fd5 <+21>: jne 0x555555893005 <_asyncio_Future_done+69>
0x0000555555892fd7 <+23>: mov $0xfffffffffffffff0,%rax
0x0000555555892fde <+30>: mov %fs:(%rax),%rax
0x0000555555892fe2 <+34>: mov %rsi,0x8(%rsp)
0x0000555555892fe7 <+39>: mov 0xb0(%rax),%rcx
0x0000555555892fee <+46>: mov %rcx,(%rsp)
0x0000555555892ff2 <+50>: mov %rsp,%rcx
0x0000555555892ff5 <+53>: mov %rcx,0xb0(%rax)
0x0000555555892ffc <+60>: cmpq $0x0,0x20(%rbx)
0x0000555555893001 <+65>: jne 0x555555893014 <_asyncio_Future_done+84>
0x0000555555893003 <+67>: jmp 0x55555589301a <_asyncio_Future_done+90>
0x0000555555893005 <+69>: mov %rsp,%rdi
0x0000555555893008 <+72>: call 0x5555557a4970 <_PyCriticalSection_BeginSlow>
0x000055555589300d <+77>: cmpq $0x0,0x20(%rbx)
0x0000555555893012 <+82>: je 0x55555589301a <_asyncio_Future_done+90>
0x0000555555893014 <+84>: cmpl $0x0,0x78(%rbx)
0x0000555555893018 <+88>: jne 0x555555893034 <_asyncio_Future_done+116>
0x000055555589301a <+90>: lea 0x1e9f57(%rip),%rbx # 0x555555a7cf78 <_Py_FalseStruct>
0x0000555555893021 <+97>: mov 0x8(%rsp),%rdi
0x0000555555893026 <+102>: test %rdi,%rdi
0x0000555555893029 <+105>: jne 0x555555893045 <_asyncio_Future_done+133>
0x000055555589302b <+107>: mov %rbx,%rax
0x000055555589302e <+110>: add $0x10,%rsp
0x0000555555893032 <+114>: pop %rbx
0x0000555555893033 <+115>: ret
0x0000555555893034 <+116>: lea 0x1e9f0d(%rip),%rbx # 0x555555a7cf48 <_Py_TrueStruct>
0x000055555589303b <+123>: mov 0x8(%rsp),%rdi
0x0000555555893040 <+128>: test %rdi,%rdi
0x0000555555893043 <+131>: je 0x55555589302b <_asyncio_Future_done+107>
0x0000555555893045 <+133>: xor %ecx,%ecx
0x0000555555893047 <+135>: mov $0x1,%al
0x0000555555893049 <+137>: lock cmpxchg %cl,(%rdi)
0x000055555589304d <+141>: je 0x555555893054 <_asyncio_Future_done+148>
0x000055555589304f <+143>: call 0x5555557e2ec0 <PyMutex_Unlock>
0x0000555555893054 <+148>: mov (%rsp),%rax
0x0000555555893058 <+152>: mov $0xfffffffffffffff0,%rcx
0x000055555589305f <+159>: mov %fs:(%rcx),%rdi
0x0000555555893063 <+163>: mov %rax,0xb0(%rdi)
0x000055555589306a <+170>: test $0x1,%al
0x000055555589306c <+172>: je 0x55555589302b <_asyncio_Future_done+107>
0x000055555589306e <+174>: call 0x5555557a4ae0 <_PyCriticalSection_Resume>
0x0000555555893073 <+179>: mov %rbx,%rax
0x0000555555893076 <+182>: add $0x10,%rsp
0x000055555589307a <+186>: pop %rbx
0x000055555589307b <+187>: ret
End of assembler dump.
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-136670
<!-- /gh-linked-prs -->
|
b7d722547bcc9e92dca4837b9fdbe7457788820b
|
69d8fe50ddc4dbe757c9929a532e2e882f0261ba
|
python/cpython
|
python__cpython-136664
|
# Signature of `PyFloat_Pack{2,4,8}` inconsistent with documentation
According to the documentation (https://docs.python.org/3/c-api/float.html#c.PyFloat_Pack2), the signature of `PyFloat_Pack{2,4,8}` is
```
int PyFloat_Pack2(double x, unsigned char *p, int le)
```
Note that the second parameter is of type `unsigned char *`
However, for the implementation, the type is actually `char *`
https://github.com/python/cpython/blob/db2032407a0c4928f3bdff63bba0456bf99e257e/Include/cpython/floatobject.h#L21
The commit message of https://github.com/python/cpython/commit/882d8096c262a5945e0cfdd706e5db3ad2b73543, says "Replace the "unsigned char*" type with "char*" which is more common and easy to use". So the fix is to update the docs to use `char *` as well.
<!-- gh-linked-prs -->
### Linked PRs
* gh-136664
* gh-136666
* gh-136667
<!-- /gh-linked-prs -->
|
e4654e0b3e7d802c8fe984cf39a36a42b67de1ad
|
db2032407a0c4928f3bdff63bba0456bf99e257e
|
python/cpython
|
python__cpython-136592
|
# Avoid using `ERR_func_error_string` and `EVP_MD_CTX_md` with OpenSSL 3.0+
# Feature or enhancement
### Proposal:
In `_hashlib`, we use `EVP_MD_CTX_md` and `ERR_func_error_string` but those functions are deprecated since OpenSSL 3.0. The task is to remove their usage while retaining backward compatibility, as for #134531.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136592
<!-- /gh-linked-prs -->
|
9be3649f5eccfbda1b3c9c3195927951a9ae9b90
|
be2c3d284ecce67474a260b8c37e2f1e0628a9cf
|
python/cpython
|
python__cpython-136587
|
# `winreg`'s docstring is not up to date
It's missing the newly added `CreateKeyEx`, `DeleteKeyEx`, `EnableReflectionKey`, and `QueryReflectionKey`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-136587
<!-- /gh-linked-prs -->
|
d53199101c7e74273d4d550456a994de01b6e3f1
|
377b78761814e7d848361e642d376881739d5a29
|
python/cpython
|
python__cpython-136573
|
# Convert more datetime constructors and methods to Argument Clinic
The `datetime` module was partially converted to Argument Clinic. The following PR converts more functions. This adds signatures for some classes and methods. As a side effect, this may improve performance.
Usually we avoid behavior changes in conversions to Argument Clinic, but there is one such change in the following PR. Currently, `fromisocalendar()` always raises ValueError for out-of-range arguments. After conversion, it will raise OverflowError for values that can't fit in a C `int`. This is a regression, but all other methods that take integer arguments raise OverflowError. It is better to raise ValueError, but this should be consistent for all methods.
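A rough, hedged illustration of the difference described above (the exact exception depends on which side of the conversion you are on; both are caught here so the snippet runs either way):

```python
from datetime import date

# Per the description above: before the Argument Clinic conversion, any
# out-of-range argument raises ValueError; after it, arguments that do not
# fit in a C int raise OverflowError instead.
for bad_year in (0, 10**100):  # 0 is simply out of range; 10**100 exceeds a C int
    try:
        date.fromisocalendar(bad_year, 1, 1)
    except (ValueError, OverflowError) as exc:
        print(type(exc).__name__)
```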
<!-- gh-linked-prs -->
### Linked PRs
* gh-136573
<!-- /gh-linked-prs -->
|
046a4e39b3f8ac5cb13ea292418c9c3767b0074d
|
af15e1d13ea26575afbb94b814e541586547a706
|
python/cpython
|
python__cpython-136780
|
# `Tools/cases_generator/interpreter_definition.md` lacks information about some prefixes
So far it's seem that we haven't any documentation for:
* `specializing`
* `replicate`
* ~`split`~ (Actually, it has been removed in 7ebd71ee14a497bb5dc7a693dd00f074a9f4831f)
* `no_save_ip`
<!-- gh-linked-prs -->
### Linked PRs
* gh-136780
<!-- /gh-linked-prs -->
|
406dc714f6b4dbc18d4e5119a10621386bccbee3
|
13e21b2fd6343ba8309ed857a2cbf6d6995ca5f2
|
python/cpython
|
python__cpython-136566
|
# Improve and amend `hashlib.__doc__`
# Feature or enhancement
### Proposal:
There are a few typos in `hashlib.__doc__` and I would like the usage example to be consistent (one example is MD5 while the other, introduced by "more condensed:" is using SHA-224). We also say:
> Choose your hash function wisely. Some have known collision weaknesses. sha384 and sha512 will be slow on 32 bit platforms.
But SHA-384 and SHA-512 won't necessarily be slow on 32-bit platforms. What we can say, however, is that, depending on how they're implemented, they are usually *faster* on 64-bit platforms compared to SHA-224 and SHA-256. The reason is that their internal computations use 64-bit words, whereas SHA-224 and SHA-256 use 32-bit words, even on 64-bit platforms.
As I haven't exactly reviewed the performance and the OpenSSL/HACL* implementations of SHA-384 and SHA-512, I suggest we remove this notice as it could be misleading. Saying that they will be faster than SHA-224 and SHA-256 should also be avoided as I can't test this. So, I would say:
> Choose your hash function wisely. Some have known collision weaknesses, while others may be slower depending on the CPU architecture.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136566
<!-- /gh-linked-prs -->
|
83d04a29a64eedc55d0a8d93aaae43d6069729e3
|
c7d24b81c376b0cf0b34f861cb18c1b1e4eac27b
|
python/cpython
|
python__cpython-136559
|
# Failure when running `test_inspect` locally
# Bug report
### Bug description:
When running `test_inspect` locally (either via `unittest` or via executing `Lib/test/test_inspect/test_inspect.py`), I see this failure:
```
ERROR: test_threading_module_has_signatures (__main__.TestSignatureDefinitions.test_threading_module_has_signatures) [supported] (builtin='excepthook')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/peter/develop/cpython/Lib/test/test_inspect/test_inspect.py", line 5717, in _test_module_has_signatures
self.assertIsNotNone(inspect.signature(obj))
~~~~~~~~~~~~~~~~~^^^^^
File "/home/peter/develop/cpython/Lib/inspect.py", line 3312, in signature
return Signature.from_callable(obj, follow_wrapped=follow_wrapped,
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
globals=globals, locals=locals, eval_str=eval_str,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
annotation_format=annotation_format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/peter/develop/cpython/Lib/inspect.py", line 3027, in from_callable
return _signature_from_callable(obj, sigcls=cls,
follow_wrapper_chains=follow_wrapped,
globals=globals, locals=locals, eval_str=eval_str,
annotation_format=annotation_format)
File "/home/peter/develop/cpython/Lib/inspect.py", line 2508, in _signature_from_callable
return _signature_from_builtin(sigcls, obj,
skip_bound_arg=skip_bound_arg)
File "/home/peter/develop/cpython/Lib/inspect.py", line 2294, in _signature_from_builtin
return _signature_fromstr(cls, func, s, skip_bound_arg)
File "/home/peter/develop/cpython/Lib/inspect.py", line 2150, in _signature_fromstr
raise ValueError("{!r} builtin has invalid signature".format(obj))
ValueError: <built-in function _excepthook> builtin has invalid signature
```
I'm able to reproduce this back to 3.13, so it's definitely not a recent failure.
Interestingly, this failure *does not* show up when running through regrtest (`python -m test test_inspect`), both in CI or locally. I'm not sure if this is a misconfiguration that regrtest fixes, or an actual bug that's being hidden.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136559
* gh-136589
* gh-136590
<!-- /gh-linked-prs -->
|
be2c3d284ecce67474a260b8c37e2f1e0628a9cf
|
5e1e21dee35b8e9066692d08033bbbdb562e2c28
|
python/cpython
|
python__cpython-136570
|
# Allow tests to temporarily disable specific hash algorithms
# Feature or enhancement
### Proposal:
This is a feature I needed in order to test my fix for #136134. The idea is to simulate an algorithm that is disabled for FIPS reasons on a machine that actually enables it. However, to that end, we need to mock multiple entry points. For instance, disabling MD5 means disabling both `hashlib.md5` & co and `hmac.new(..., 'md5')` & co.
I have a local branch with those changes that I will push tomorrow, but I needed an issue first. At the same time, it'll become useful for people who want to simulate FIPS builds. Note that the support will not be universal and that tests using those helpers need to be written by people who *know* how the hash algorithm is used (in general, we use `hashlib.new(NAME)` and `hmac.new(NAME)`, so the stdlib should avoid using the explicit constructor functions, as they would not necessarily be mocked).
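To make the idea concrete, here is a minimal, hypothetical sketch of such a helper (illustrative only; the names and the set of patched entry points are assumptions, not the actual implementation that will be proposed):

```python
import hashlib
import unittest.mock
from contextlib import contextmanager

@contextmanager
def block_algorithm(name):
    """Pretend a hash algorithm is unavailable (e.g. FIPS-disabled).

    Only hashlib.new() is patched here; as noted above, a real helper would
    also need to cover hashlib.md5 & co and hmac.new().
    """
    real_new = hashlib.new

    def fake_new(algo, *args, **kwargs):
        if algo.lower() == name.lower():
            raise ValueError(f"unsupported hash type {algo}")
        return real_new(algo, *args, **kwargs)

    with unittest.mock.patch.object(hashlib, "new", fake_new):
        yield

# Example: inside the context, hashlib.new("md5") raises ValueError.
with block_algorithm("md5"):
    try:
        hashlib.new("md5")
    except ValueError as exc:
        print(exc)
```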
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136570
* gh-136762
<!-- /gh-linked-prs -->
|
9e5cebd56d06e35faeca166813215d72f2f8906a
|
0d4fd10fbab2767fad3eb27639905c8885b88c89
|
python/cpython
|
python__cpython-136500
|
# perf trampolines are not reliable in aarch64
# Bug report
### Bug description:
The perf trampolines can randomly fail in some aarch64 systems
### CPython versions tested on:
3.15, 3.14, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136500
* gh-136544
* gh-136545
<!-- /gh-linked-prs -->
|
236f733d8ffb3d587e1167fa0a0248c24512e7fd
|
7de8ea7be6c19f21c090f44a01817fab26c1f095
|
python/cpython
|
python__cpython-136529
|
# Exception raised from `Wave_write.__del__()` after failed attempt to open file for write operation
# Bug report
### Bug description:
Consider a Python script with the following content:
```python
import wave
try:
with wave.open('/unwritable_path.wav', 'wb') as f:
pass # Not reachable, open() should have raised by now
except PermissionError:
pass
```
When executed in a Python interpreter, an exception is raised on exit from `Wave_write.__del__()`:
```
Exception ignored in: <function Wave_write.__del__ at 0x000001CA3478B740>
Traceback (most recent call last):
File "C:\Python313\Lib\wave.py", line 469, in __del__
self.close()
File "C:\Python313\Lib\wave.py", line 592, in close
if self._file:
AttributeError: 'Wave_write' object has no attribute '_file'
```
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-136529
* gh-136606
* gh-136607
<!-- /gh-linked-prs -->
|
171de05b4884d1353044417ea51a4efcb55ba633
|
42b251bcebd749eceeb62389e413a3be37cff343
|
python/cpython
|
python__cpython-136518
|
# Print uncollectable objects if DEBUG_UNCOLLECTABLE mode was set
# Bug report
### Bug description:
There was a typo: uncollectable objects were printed only if the DEBUG_COLLECTABLE mode was set, rather than DEBUG_UNCOLLECTABLE.
This affects main and 3.14 (183b020cb5960e17b87c34a98ec02fcf2b840578)
https://github.com/python/cpython/blob/59acdba820f75081cfb47ad6e71044d022854cbc/Python/gc.c#L1783-L1787
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136518
* gh-136522
<!-- /gh-linked-prs -->
|
c560df9658f1a24edea995fe6f9c84c55b37cfb3
|
59acdba820f75081cfb47ad6e71044d022854cbc
|
python/cpython
|
python__cpython-136546
|
# FrameLocalsProxy (PEP 667) is not registered as a subclass of abc.Mapping
# Bug report
### Update
Turns out it is correctly registered as a subclass of Mapping as of Python 3.13.2 (at least) - which was the original report - but then the type should be exposed in `types`.
### Bug description:
Although the FrameLocalsProxy object created when retrieving the `.f_locals` attribute
from a frame is a complete mutable mapping, and created so it could be a drop-in,
backwards compatible replacement for the plain dict that was retrieved from
there prior to Python 3.13, it can't be tested as a Mapping in either
dynamic or static checking.
In practical terms, if I have a function that will support receiving a mapping, and into which I would pass a FrameLocalsProxy, static checkers would fail.
This is testable in the REPL:
```python
import sys
from collections.abc import Mapping, MutableMapping
FrameLocalsProxy = (lambda: type(sys._getframe().f_locals))()
def gen():
yield
running_gen = gen()
next(running_gen)
f_locals = running_gen.gi_frame.f_locals
isinstance(f_locals, FrameLocalsProxy)
# True
isinstance(f_locals, Mapping)
# False
isinstance(f_locals, MutableMapping)
# False
```
Admittedly, it can't conform to `MutableMapping`, since it doesn't make sense to delete items from such a proxy (although values can be replaced and added); accordingly, it lacks the `clear` and `popitem` methods.
Therefore, anyone trying to accurately static-type a method that could receive a FrameLocalsProxy they intend to modify will have to resort to protocols. But that doesn't need to be the case for `Mapping`, since the proxy is fully conformant.
Also, code that wants to test at runtime whether an object is a mapping should be able to learn that with an isinstance check.
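As a point of reference, the registration this issue asks for can be done manually today on builds where it is missing; a minimal sketch (reusing the lambda trick from the snippet above):

```python
import sys
from collections.abc import Mapping

# Inside a function frame, f_locals is a FrameLocalsProxy (PEP 667).
FrameLocalsProxy = (lambda: type(sys._getframe().f_locals))()

# What this issue asks the stdlib to do (or to expose the type in `types`
# so users can do it themselves) -- harmless if it is already registered:
Mapping.register(FrameLocalsProxy)

def probe():
    return sys._getframe().f_locals

print(isinstance(probe(), Mapping))  # True once the type is registered
```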
---
I am willing to contribute this change if it is judged correct.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136546
<!-- /gh-linked-prs -->
|
8f59fbb082a4d64619aeededc47b3b45212d2341
|
e24c66d55a4fd2c56017f8f4e1bcb154db4ba50a
|
python/cpython
|
python__cpython-136483
|
# get_async_stack_trace is missing part of the graph
# Bug report
### Bug description:
In `Modules/_remote_debugging_module.c` the `get_async_stack_trace` is missing part of the graph because it doesn't properly recurse over all tasks.
### CPython versions tested on:
3.14, 3.15
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136483
* gh-136490
* gh-136494
* gh-136495
<!-- /gh-linked-prs -->
|
ea45a2f97cb1d4774a6f88e63c6ce0a487f83031
|
9c4d28777526e9975b212d49fb0a530f773a3209
|
python/cpython
|
python__cpython-136472
|
# `InterpreterPoolExecutor`'s default thread name prefix is invalid
# Bug report
### Bug description:
```python
from concurrent.futures import InterpreterPoolExecutor
def w():
import time
time.sleep(100)
executor1 = InterpreterPoolExecutor()
executor1.submit(w)
executor1.submit(w)
executor2 = InterpreterPoolExecutor()
executor2.submit(w)
executor2.submit(w)
executor1.shutdown()
executor2.shutdown()
```
With this code, htop (with "Show custom thread names" enabled) shows *ThreadPoolExecutor* instead of *InterpreterPoolExecutor*:
<img width="233" height="89" alt="htop output" src="https://github.com/user-attachments/assets/4cfb27e6-12e5-4dba-8019-980d7b9daf06" />
Other process monitoring tools may show the same result if they read the custom thread name from the OS.
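As a possible user-side workaround, an explicit prefix can be passed; this is a hedged sketch, assuming the constructor forwards `thread_name_prefix` to `ThreadPoolExecutor`, which it subclasses:

```python
from concurrent.futures import InterpreterPoolExecutor

# Hypothetical workaround: name the worker threads explicitly so process
# monitors show the intended executor type instead of the inherited default.
executor = InterpreterPoolExecutor(thread_name_prefix="InterpreterPoolExecutor")
executor.submit(print, "hello from a subinterpreter worker")
executor.shutdown()
```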
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS, Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136472
* gh-136889
<!-- /gh-linked-prs -->
|
246be21de1e2a51d757c747902108dfec13e0605
|
aec7f5f8b2e8b5e02869cdb4e1f8a9ef87c9f953
|
python/cpython
|
python__cpython-136461
|
# Consider enabling perf trampoline on macOS
# Feature or enhancement
### Proposal:
Currently the perf trampoline (with `PYTHONPERFSUPPORT=1`) is Linux only. I assume that's mainly because `perf` is a Linux-only tool. But since then, more non-Linux profilers have added support for Linux perf map files. For example, [`samply`](https://github.com/mstange/samply) has supported perf maps for a while now and is available on both Linux and macOS. I think there is no reason to keep this Linux only, especially since we now have tools that can use it.
I would be interested to work on this if there are no objections or concerns. I actually already have a working prototype, so I will polish it and submit a PR soon.
For example this currently works on Linux:
```
samply record PYTHONPERFSUPPORT=1 python test.py
```
But it doesn't output this perf map file for samply to use on macOS. It could be super useful to have it for macOS profiling.
Not related to Python, but we [enabled the perf trampoline support for macOS on SpiderMonkey](https://bugzilla.mozilla.org/show_bug.cgi?id=1827214). Also, v8 [enabled it on macOS recently](https://issues.chromium.org/issues/403765219). It would be great to have this for Python as well!
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136461
* gh-136500
* gh-137031
<!-- /gh-linked-prs -->
|
e41c1ce585827f92dab9b7a7fc3df2bda2f817fe
|
b13a5df52fc854d1097e8b5419cb8802dc4059e0
|
python/cpython
|
python__cpython-136448
|
# asyncio REPL: Use `self.loop` instead of global `loop` variable in `AsyncIOInteractiveConsole`
I'm trying to extend the asyncio REPL (yes, I know this is an unsupported use case) by doing the following:
- `from asyncio.__main__ import AsyncIOInteractiveConsole`
- Copy the `if __name__ == '__main__'` block into my own `__main__.py`
- Copy and edit `REPLThread` (subclassing isn't possible because it also refers to global variables)
However, this doesn't work because `AsyncIOInteractiveConsole` refers to `asyncio.__main__`'s global `loop` variable, despite it being passed and assigned to the instance attribute `self.loop`.
https://github.com/python/cpython/blob/3.13/Lib/asyncio/__main__.py#L65
I see no reason why `AsyncIOInteractiveConsole` shouldn't use `self.loop` consistently.
<!-- gh-linked-prs -->
### Linked PRs
* gh-136448
* gh-136457
* gh-136458
<!-- /gh-linked-prs -->
|
77fa7a4dcc771bf4d297ebfd4f357483d0750a1c
|
797abd1f7fdeb744bf9f683ef844e7279aad3d72
|
python/cpython
|
python__cpython-136812
|
# Different parameter names in `os.path` documentation vs. runtime
[normcase](https://docs.python.org/3/library/os.path.html#os.path.normcase) should be documented as `os.path.normcase(s)` to match the runtime parameter name.
[basename](https://docs.python.org/3/library/os.path.html#os.path.basename) should be documented as `os.path.basename(p)` to match the runtime parameter name.
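A quick way to check the runtime parameter names (output comments are hedged with "e.g." since the exact names are what this issue is about):

```python
import inspect
import os.path

print(inspect.signature(os.path.normcase))   # e.g. (s)
print(inspect.signature(os.path.basename))   # e.g. (p)
```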
<!-- gh-linked-prs -->
### Linked PRs
* gh-136812
* gh-136944
* gh-136945
* gh-136946
* gh-136947
* gh-136948
* gh-136949
* gh-136970
* gh-137000
* gh-137001
<!-- /gh-linked-prs -->
|
b5428bb0e786f5b67c6077472c0068cadd0b5ea9
|
a10960699a2b3e4e62896331c4f9cfd162ebf440
|
python/cpython
|
python__cpython-136435
|
# `./python.exe -OO -m test test_concurrent_futures` fails
# Bug report
This happens because `-OO` mode (which strips docstrings) is not handled. I have a PR ready.
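The tracebacks below all end in `cls.__doc__.replace(...)` failing because docstrings are stripped under `-OO`, so `cls.__doc__` is `None`. A hedged sketch of the kind of guard that addresses it (not necessarily what the PR does):

```python
def _make_doc(cls, kind):
    # Sketch only: tolerate cls.__doc__ being None (as under -OO, where
    # docstrings are stripped) before substituting the kind-specific wording.
    doc = cls.__doc__ or ""
    return doc.replace('cross-interpreter container', kind)

class _Demo:
    """a cross-interpreter container demo"""

print(_make_doc(_Demo, 'queue'))  # "a queue demo" (empty string under -OO)
```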
Output:
```
» ./python.exe -OO -m test test_concurrent_futures
Using random seed: 2576397461
0:00:00 load avg: 2.18 Run 9 tests sequentially in a single process
0:00:00 load avg: 2.18 [1/9] test_concurrent_futures.test_as_completed
test test_concurrent_futures.test_as_completed crashed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 210, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 155, in _load_run_test
test_mod = importlib.import_module(module_name)
File "/Users/sobolev/Desktop/cpython/Lib/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 762, in exec_module
File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/test_as_completed.py", line 11, in <module>
from .util import (
...<2 lines>...
create_future, create_executor_tests, setup_module)
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 87, in <module>
class InterpreterPoolMixin(ExecutorMixin):
...<3 lines>...
self.skipTest("InterpreterPoolExecutor doesn't support events")
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 88, in InterpreterPoolMixin
executor_type = futures.InterpreterPoolExecutor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/__init__.py", line 62, in __getattr__
from .interpreter import InterpreterPoolExecutor
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/interpreter.py", line 3, in <module>
from concurrent import interpreters
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/__init__.py", line 12, in <module>
from ._queues import (
...<2 lines>...
)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_queues.py", line 49, in <module>
UNBOUND = _crossinterp.UnboundItem.singleton('queue', __name__)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_crossinterp.py", line 43, in singleton
doc = cls.__doc__.replace('cross-interpreter container', kind)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'replace'
0:00:00 load avg: 2.18 [1/9/1] test_concurrent_futures.test_as_completed failed (uncaught exception)
0:00:00 load avg: 2.18 [2/9/1] test_concurrent_futures.test_deadlock
test test_concurrent_futures.test_deadlock crashed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 210, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 155, in _load_run_test
test_mod = importlib.import_module(module_name)
File "/Users/sobolev/Desktop/cpython/Lib/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 762, in exec_module
File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/test_deadlock.py", line 14, in <module>
from .util import (
create_executor_tests, setup_module,
ProcessPoolForkMixin, ProcessPoolForkserverMixin, ProcessPoolSpawnMixin)
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 87, in <module>
class InterpreterPoolMixin(ExecutorMixin):
...<3 lines>...
self.skipTest("InterpreterPoolExecutor doesn't support events")
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 88, in InterpreterPoolMixin
executor_type = futures.InterpreterPoolExecutor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/__init__.py", line 62, in __getattr__
from .interpreter import InterpreterPoolExecutor
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/interpreter.py", line 3, in <module>
from concurrent import interpreters
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/__init__.py", line 12, in <module>
from ._queues import (
...<2 lines>...
)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_queues.py", line 49, in <module>
UNBOUND = _crossinterp.UnboundItem.singleton('queue', __name__)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_crossinterp.py", line 43, in singleton
doc = cls.__doc__.replace('cross-interpreter container', kind)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'replace'
0:00:00 load avg: 2.18 [2/9/2] test_concurrent_futures.test_deadlock failed (uncaught exception)
0:00:00 load avg: 2.18 [3/9/2] test_concurrent_futures.test_future
test test_concurrent_futures.test_future crashed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 210, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 155, in _load_run_test
test_mod = importlib.import_module(module_name)
File "/Users/sobolev/Desktop/cpython/Lib/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 762, in exec_module
File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/test_future.py", line 11, in <module>
from .util import (
...<2 lines>...
BaseTestCase, create_future, setup_module)
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 87, in <module>
class InterpreterPoolMixin(ExecutorMixin):
...<3 lines>...
self.skipTest("InterpreterPoolExecutor doesn't support events")
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 88, in InterpreterPoolMixin
executor_type = futures.InterpreterPoolExecutor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/__init__.py", line 62, in __getattr__
from .interpreter import InterpreterPoolExecutor
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/interpreter.py", line 3, in <module>
from concurrent import interpreters
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/__init__.py", line 12, in <module>
from ._queues import (
...<2 lines>...
)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_queues.py", line 49, in <module>
UNBOUND = _crossinterp.UnboundItem.singleton('queue', __name__)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_crossinterp.py", line 43, in singleton
doc = cls.__doc__.replace('cross-interpreter container', kind)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'replace'
0:00:00 load avg: 2.18 [3/9/3] test_concurrent_futures.test_future failed (uncaught exception)
0:00:00 load avg: 2.18 [4/9/3] test_concurrent_futures.test_init
test test_concurrent_futures.test_init crashed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 210, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 155, in _load_run_test
test_mod = importlib.import_module(module_name)
File "/Users/sobolev/Desktop/cpython/Lib/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 762, in exec_module
File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/test_init.py", line 15, in <module>
from .util import ExecutorMixin, create_executor_tests, setup_module
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 87, in <module>
class InterpreterPoolMixin(ExecutorMixin):
...<3 lines>...
self.skipTest("InterpreterPoolExecutor doesn't support events")
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 88, in InterpreterPoolMixin
executor_type = futures.InterpreterPoolExecutor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/__init__.py", line 62, in __getattr__
from .interpreter import InterpreterPoolExecutor
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/interpreter.py", line 3, in <module>
from concurrent import interpreters
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/__init__.py", line 12, in <module>
from ._queues import (
...<2 lines>...
)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_queues.py", line 49, in <module>
UNBOUND = _crossinterp.UnboundItem.singleton('queue', __name__)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_crossinterp.py", line 43, in singleton
doc = cls.__doc__.replace('cross-interpreter container', kind)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'replace'
0:00:00 load avg: 2.18 [4/9/4] test_concurrent_futures.test_init failed (uncaught exception)
0:00:00 load avg: 2.18 [5/9/4] test_concurrent_futures.test_interpreter_pool
test test_concurrent_futures.test_interpreter_pool crashed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 210, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 155, in _load_run_test
test_mod = importlib.import_module(module_name)
File "/Users/sobolev/Desktop/cpython/Lib/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 762, in exec_module
File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/test_interpreter_pool.py", line 10, in <module>
from concurrent.futures.interpreter import BrokenInterpreterPool
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/interpreter.py", line 3, in <module>
from concurrent import interpreters
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/__init__.py", line 12, in <module>
from ._queues import (
...<2 lines>...
)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_queues.py", line 49, in <module>
UNBOUND = _crossinterp.UnboundItem.singleton('queue', __name__)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_crossinterp.py", line 43, in singleton
doc = cls.__doc__.replace('cross-interpreter container', kind)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'replace'
0:00:00 load avg: 2.18 [5/9/5] test_concurrent_futures.test_interpreter_pool failed (uncaught exception)
0:00:00 load avg: 2.18 [6/9/5] test_concurrent_futures.test_process_pool
test test_concurrent_futures.test_process_pool crashed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 210, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 155, in _load_run_test
test_mod = importlib.import_module(module_name)
File "/Users/sobolev/Desktop/cpython/Lib/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 762, in exec_module
File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/test_process_pool.py", line 16, in <module>
from .util import (
ProcessPoolForkMixin, ProcessPoolForkserverMixin, ProcessPoolSpawnMixin,
create_executor_tests, setup_module)
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 87, in <module>
class InterpreterPoolMixin(ExecutorMixin):
...<3 lines>...
self.skipTest("InterpreterPoolExecutor doesn't support events")
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 88, in InterpreterPoolMixin
executor_type = futures.InterpreterPoolExecutor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/__init__.py", line 62, in __getattr__
from .interpreter import InterpreterPoolExecutor
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/interpreter.py", line 3, in <module>
from concurrent import interpreters
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/__init__.py", line 12, in <module>
from ._queues import (
...<2 lines>...
)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_queues.py", line 49, in <module>
UNBOUND = _crossinterp.UnboundItem.singleton('queue', __name__)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_crossinterp.py", line 43, in singleton
doc = cls.__doc__.replace('cross-interpreter container', kind)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'replace'
0:00:00 load avg: 2.18 [6/9/6] test_concurrent_futures.test_process_pool failed (uncaught exception)
0:00:00 load avg: 2.18 [7/9/6] test_concurrent_futures.test_shutdown
test test_concurrent_futures.test_shutdown crashed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 210, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 155, in _load_run_test
test_mod = importlib.import_module(module_name)
File "/Users/sobolev/Desktop/cpython/Lib/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 762, in exec_module
File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/test_shutdown.py", line 11, in <module>
from .util import (
...<2 lines>...
create_executor_tests, setup_module)
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 87, in <module>
class InterpreterPoolMixin(ExecutorMixin):
...<3 lines>...
self.skipTest("InterpreterPoolExecutor doesn't support events")
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 88, in InterpreterPoolMixin
executor_type = futures.InterpreterPoolExecutor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/__init__.py", line 62, in __getattr__
from .interpreter import InterpreterPoolExecutor
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/interpreter.py", line 3, in <module>
from concurrent import interpreters
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/__init__.py", line 12, in <module>
from ._queues import (
...<2 lines>...
)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_queues.py", line 49, in <module>
UNBOUND = _crossinterp.UnboundItem.singleton('queue', __name__)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_crossinterp.py", line 43, in singleton
doc = cls.__doc__.replace('cross-interpreter container', kind)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'replace'
0:00:00 load avg: 2.18 [7/9/7] test_concurrent_futures.test_shutdown failed (uncaught exception)
0:00:00 load avg: 2.18 [8/9/7] test_concurrent_futures.test_thread_pool
test test_concurrent_futures.test_thread_pool crashed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 210, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 155, in _load_run_test
test_mod = importlib.import_module(module_name)
File "/Users/sobolev/Desktop/cpython/Lib/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 762, in exec_module
File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/test_thread_pool.py", line 12, in <module>
from .util import BaseTestCase, ThreadPoolMixin, setup_module
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 87, in <module>
class InterpreterPoolMixin(ExecutorMixin):
...<3 lines>...
self.skipTest("InterpreterPoolExecutor doesn't support events")
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 88, in InterpreterPoolMixin
executor_type = futures.InterpreterPoolExecutor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/__init__.py", line 62, in __getattr__
from .interpreter import InterpreterPoolExecutor
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/interpreter.py", line 3, in <module>
from concurrent import interpreters
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/__init__.py", line 12, in <module>
from ._queues import (
...<2 lines>...
)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_queues.py", line 49, in <module>
UNBOUND = _crossinterp.UnboundItem.singleton('queue', __name__)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_crossinterp.py", line 43, in singleton
doc = cls.__doc__.replace('cross-interpreter container', kind)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'replace'
0:00:00 load avg: 2.18 [8/9/8] test_concurrent_futures.test_thread_pool failed (uncaught exception)
0:00:00 load avg: 2.18 [9/9/8] test_concurrent_futures.test_wait
test test_concurrent_futures.test_wait crashed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 210, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 155, in _load_run_test
test_mod = importlib.import_module(module_name)
File "/Users/sobolev/Desktop/cpython/Lib/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 762, in exec_module
File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/test_wait.py", line 8, in <module>
from .util import (
...<4 lines>...
ProcessPoolForkMixin, ProcessPoolForkserverMixin, ProcessPoolSpawnMixin)
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 87, in <module>
class InterpreterPoolMixin(ExecutorMixin):
...<3 lines>...
self.skipTest("InterpreterPoolExecutor doesn't support events")
File "/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/util.py", line 88, in InterpreterPoolMixin
executor_type = futures.InterpreterPoolExecutor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/__init__.py", line 62, in __getattr__
from .interpreter import InterpreterPoolExecutor
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/futures/interpreter.py", line 3, in <module>
from concurrent import interpreters
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/__init__.py", line 12, in <module>
from ._queues import (
...<2 lines>...
)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_queues.py", line 49, in <module>
UNBOUND = _crossinterp.UnboundItem.singleton('queue', __name__)
File "/Users/sobolev/Desktop/cpython/Lib/concurrent/interpreters/_crossinterp.py", line 43, in singleton
doc = cls.__doc__.replace('cross-interpreter container', kind)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'replace'
0:00:00 load avg: 2.18 [9/9/9] test_concurrent_futures.test_wait failed (uncaught exception)
== Tests result: FAILURE ==
9 tests failed:
test_concurrent_futures.test_as_completed
test_concurrent_futures.test_deadlock
test_concurrent_futures.test_future
test_concurrent_futures.test_init
test_concurrent_futures.test_interpreter_pool
test_concurrent_futures.test_process_pool
test_concurrent_futures.test_shutdown
test_concurrent_futures.test_thread_pool
test_concurrent_futures.test_wait
Total duration: 213 ms
Total tests: run=0
Total test files: run=9/9 failed=9
Result: FAILURE
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-136435
* gh-136540
<!-- /gh-linked-prs -->
|
3343fce05acb29a772599ce586abd43edf40bae6
|
975b57d945c84000949f241ded8f44413ecc6217
|
python/cpython
|
python__cpython-136903
|
# Performance improvement to uuid8 on “What’s New” page
# Documentation
The [_What’s new in Python 3.14_ page](https://docs.python.org/3.14/whatsnew/3.14.html#id4) currently states:
> [uuid4()](https://docs.python.org/3.14/library/uuid.html#uuid.uuid4) and [uuid8()](https://docs.python.org/3.14/library/uuid.html#uuid.uuid8) are 30% and 40% faster respectively.
For `uuid4`, I’d interpret that as “faster than in Python 3.13”; but for `uuid8`, which is new in 3.14 [as mentioned a few paragraphs earlier on the same page](https://docs.python.org/3.14/whatsnew/3.14.html#uuid), the comparison is not clear.
Looking at GitHub, initial support for `uuid8` was [merged in November 2024](https://github.com/python/cpython/pull/123224) and the performance improvements were [merged in January 2025](https://github.com/python/cpython/pull/128151); so this is effectively a performance comparison between 3.14a2 and 3.14a4. Perhaps that’s something that should not be included in user-facing documentation?
<!-- gh-linked-prs -->
### Linked PRs
* gh-136903
* gh-136904
<!-- /gh-linked-prs -->
|
5798348a0739ccf46f690f5fa1443080ec5de310
|
c5e77af131aa0c8832a9ee50c4410731254e4209
|
python/cpython
|
python__cpython-136411
|
# Switching between the JIT and interpreter is too slow.
A key part of the original design thesis for the JIT was that it was OK to jit small sections of code, provided that the cost of entering and exiting jitted code was small enough.
Ideally, entering (and exiting) jitted code should cost no more than 2 or 3 instruction dispatches in the interpreter.
At the moment, we are nowhere near that.
To achieve that low overhead, we need transfers to perform minimal memory accesses and use reasonably easily predictable branches.
### What we have now
#### ENTER_EXECUTOR
This is where code enters the JIT. Currently this does an eval-breaker check (to avoid needing to perform an escaping call in the JIT), then increfs the executor to keep it alive, and calls the shim frame, which then calls the actual jitted code.
#### _EXIT_TRACE
This is where jitted code transfers control back to the interpreter or to other jitted code.
This uop contains complex logic to determine whether the exit is "hot", call the JIT compiler, or jump to other jitted code. Even in the case where jitted code already exists, it still needs to check for validity before making a doubly dependent load to find the jitted code: `exit->executor->jitted_code`.
### What we want:
First of all, the interpreter and jit need to use the same calling convention. We can do this by using the tailcalling interpreter and TOS caching, such that the jitted code and interpreter functions take the same parameters.
We also want to refactor the executor or calling conventions, to save an indirection. ie. `exit->jit` rather than `exit->executor->jit`.
#### ENTER_EXECUTOR
With the same calling convention, there should be no need for a shim frame.
So, apart from the eval-breaker check, `ENTER_EXECUTOR` only needs to find the executor and make the tailcall `executor->jit(...)`.
#### _EXIT_TRACE
By handling cold exits and invalid executors in stubs, `_EXIT_TRACE` can avoid complex control flow and tailcall directly into the exit: `exit->jit(...)`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-136411
<!-- /gh-linked-prs -->
|
e7b55f564dbf5a788e8f6edc55ef441d6afad01c
|
718e0c89ba0610bba048245028ac133bbf2d44c2
|
python/cpython
|
python__cpython-136525
|
# Bug with per-thread bytecode and profiling/instrumentation in freethreading
# Bug report
### Bug description:
A bunch of the instrumentation state is per-code-object, such as the active monitors. The modifications also typically happen lazily, when a code object is executed after instrumentation is enabled/disabled.
https://github.com/python/cpython/blob/0240ef4705d835e27beb2437dfadb5d34aa2db17/Python/instrumentation.c#L1812-L1814
However, if you create a new thread, then it will be initialized without the instrumented bytecodes. Here's an example that breaks:
1) Enable instrumentation and call some function. This will replace things like `CALL` with `INSTRUMENTED_CALL`.
2) Disable instrumentation. Note that this doesn't immediately change `INSTRUMENTED_CALL` back to `CALL`!
3) Start a new thread, enable instrumentation, and call that same function - uh oh!
In (3), the new thread gets a clean copy of the bytecode without instrumentation:
https://github.com/python/cpython/blob/0240ef4705d835e27beb2437dfadb5d34aa2db17/Objects/codeobject.c#L3333-L3341
However, the code object still has instrumentation enabled, so the `monitors_are_empty` check above returns without instrumenting the bytecode. Missing events!
Adapted from @pablogsal's repro:
```python
import sys
import threading
import dis
def looooooser(x):
print("I am a looooooser")
def LOSER():
looooooser(42)
TRACES = []
def tracing_function(frame, event, arg):
function_name = frame.f_code.co_name
TRACES.append((function_name, event, arg))
def func1():
sys.setprofile(tracing_function)
LOSER()
sys.setprofile(None)
TRACES.clear()
def func2():
def thread_body():
sys.setprofile(tracing_function)
LOSER()
sys.setprofile(None)
dis.dis(looooooser, adaptive=True)
# WHEN
bg_thread = threading.Thread(target=thread_body)
bg_thread.start()
bg_thread.join()
for trace in TRACES:
print(trace)
assert ('looooooser', 'call', None) in TRACES
func1()
func2()
```
cc @mpage
### CPython versions tested on:
CPython main branch, 3.14, 3.15
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136525
* gh-136657
<!-- /gh-linked-prs -->
|
d995922198304a6de19ac1bec3e36d1e886d8468
|
3d8c38f6db0fea7845aafb92fe6bc795b536a367
|
python/cpython
|
python__cpython-136432
|
# compression.zstd tests: test_compress_locking sometimes fail on free-threading builds
# Bug report
### Bug description:
I am suspicious about the purpose of the `test_compress_locking` test in `test_zstd.py`, and actually found a race condition in it, when using a freethreaded build.
From my understanding, it is doing the following thing:
1. allocate a `ZstdCompressor` instance
2. on each of 8 threads:
   i. call the `.compress` method with the string `b'a'*16384`
   ii. check if the result is empty
   iii. `.append` the result to a list common to all threads
3. (more steps here outside of the threading part)
4. check if the content of the list is the same as expected
Now, because the input of `.compress` method is the same on all threads, here is the output of that method:
- if it is the first call, then the zstd header will be added, so `b'(\xb5/\xfd\x00XL\x00\x00\x10aa\x01\x00\xfb\x9f\x07X'`
- else (for all 7 others): `b'\x02\x00\x02a'`
However, there is a logic issue: the thread whose `.compress` call in step 2.i runs first may not be the first one to `.append` in step 2.iii; this is a race condition. If that happens, the resulting list is not in the expected order and the test fails.
For some reason, this happens very infrequently, but it does (at least on free-threading build), here on attempt 307:
```
$ for i in `seq 1000`; do echo $i; ./python -m test.test_zstd -vv -k test_compress_locking || break; done
<<<redacted>>>
306
test_compress_locking (__main__.FreeThreadingMethodTests.test_compress_locking) ... FAIL
======================================================================
FAIL: test_compress_locking (__main__.FreeThreadingMethodTests.test_compress_locking)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/redacted/cpython/Lib/test/support/threading_helper.py", line 66, in decorator
return func(*args)
File "/redacted/cpython/Lib/test/test_zstd.py", line 2704, in test_compress_locking
self.assertEqual(expected, actual)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
AssertionError: b'(\xb5/\xfd\x00XL\x00\x00\x10aa\x01\x00\xf[109 chars]\x00' != b'\x02\x00\x02a\x02\x00\x02a(\xb5/\xfd\x00X[109 chars]\x00'
----------------------------------------------------------------------
Ran 1 test in 0.069s
FAILED (failures=1)
```
So here is my question: what is the test expected to be testing exactly? How can we rewrite the test to still perform the expected test, while at the same time not having the race condition mentioned above?
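One possible direction, as a hedged sketch (illustrative only, not the actual test): keep the concurrent `.compress` calls but make the final check order-independent, e.g. by comparing the multiset of outputs instead of the exact sequence.

```python
import threading
from collections import Counter
from compression.zstd import ZstdCompressor

def check_compress_thread_safety(nthreads=8):
    comp = ZstdCompressor()
    outputs = []
    lock = threading.Lock()

    def worker():
        out = comp.compress(b'a' * 16384)
        with lock:
            outputs.append(out)

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Order-independent check: per the analysis above, exactly one call emits
    # the frame header and the remaining calls emit identical short blocks;
    # which thread produced which no longer matters.
    counts = Counter(outputs)
    assert sorted(counts.values()) == [1, nthreads - 1], counts

check_compress_thread_safety()
```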
Also it may be worth looking in other tests of `FreeThreadingMethodTests` to see if they are impacted.
---
_Tested on main (0240ef4705d) with a free-threading build, but should be the same in 3.14.0b3.
Please add to https://github.com/orgs/python/projects/20/views/7
Paging @emmatyping_
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136432
* gh-136444
* gh-136506
<!-- /gh-linked-prs -->
|
f519918ec6c125715d4efc9713ba80e83346e466
|
d754f75f42f040267d818ab804ada340f55e5925
|
python/cpython
|
python__cpython-136381
|
# Inconsistent import behavior when concurrent.futures.InterpreterPoolExecutor not exist
# Bug report
### Bug description:
When the `_interpreters` module, which is used under the hood to implement `InterpreterPoolExecutor`, does not exist, importing `InterpreterPoolExecutor` from `concurrent.futures` results in `None` instead of raising `ImportError`.
A PR is on the way to address this.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136381
* gh-136420
<!-- /gh-linked-prs -->
|
490eea02819ad303a5042529af7507b7b1fdabdc
|
ba9c1986305517ed88470129fe7c71aaec22d08d
|
python/cpython
|
python__cpython-136319
|
# 3.14 regression with `typing._eval_type()`
# Bug report
### Bug description:
I'm in the process of adding 3.14 support to Pydantic. To evaluate type annotations, we make use of the private `typing._eval_type()` function. In an ideal world we shouldn't, but this is very hard to change now (and still, we need to evaluate "standalone" type expressions for several reasons, e.g. to be able to have `TypeAdapter(list['ForwardReference'])` working).
In 3.13, the following works:
```python
from typing import ForwardRef, _eval_type
MyList = list[int]
MyDict = dict[str, 'MyList']
fw = ForwardRef('MyDict', module=__name__)
print(_eval_type(fw, globalns=None, localns=None, type_params=()))
#> dict[str, list[int]]
```
However, starting with 3.14, this raises:
```python
from annotationlib import ForwardRef
from typing import _eval_type
MyList = list[int]
MyDict = dict[str, 'MyList']
fw = ForwardRef('MyDict', module=__name__)
print(_eval_type(fw, globalns=None, localns=None, type_params=()))
# NameError: name 'MyList' is not defined
```
Also explicitly passing `globalns=globals()` in 3.14 doesn't work, as `typing._eval_type()` is setting the globals to `None` if a `__forward_module__` is present on the `ForwardRef`:
https://github.com/python/cpython/blob/5de7e3f9739b01ad180fffb242ac57cea930e74d/Lib/typing.py#L432-L447
This affects _all_ recursive calls to `_eval_type()` from `evaluate_forward_ref()`.
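For illustration, continuing the 3.14 snippet above (this is the behaviour described here, not a new finding):
```python
# Reportedly this still raises on 3.14, because _eval_type() discards the
# caller's globalns whenever the ForwardRef has a __forward_module__ set.
print(_eval_type(fw, globalns=globals(), localns=None, type_params=()))
# NameError: name 'MyList' is not defined
```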
---
Sorry I couldn't find a repro not relying on this private function (nor I could get rid of the explicit `ForwardRef` usage). Feel free to close as not planned, we can live without this in Pydantic (I hope).
cc @JelleZijlstra
### CPython versions tested on:
3.14
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136319
* gh-136346
<!-- /gh-linked-prs -->
|
9312702d2e12c2f58f02bfa02877d0ec790d06bd
|
c89f76e6c4ca9b0200d5cc8cf0a675a76de50ba8
|
python/cpython
|
python__cpython-136320
|
# Test test_zstd_multithread_compress is always skipped
# Bug report
### Bug description:
The test `test_zstd_multithread_compress` is always skipped, even when it should be run.
This comes from [`SUPPORT_MULTITHREADING` being initialized to `False`](https://github.com/python/cpython/blob/v3.14.0b3/Lib/test/test_zstd.py#L66) when [the `@unittest.skipIf` decorator is evaluated](https://github.com/python/cpython/blob/v3.14.0b3/Lib/test/test_zstd.py#L321-L322). It [is changed to `True` in the global `setUp`](https://github.com/python/cpython/blob/v3.14.0b3/Lib/test/test_zstd.py#L74) afterwards, but by then it is too late.
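A self-contained sketch of the pattern (names are hypothetical, with `setUpModule` standing in for the test suite's global `setUp`):
```python
# Hypothetical illustration: the skipIf condition is evaluated while the class
# body executes at import time, long before any setUp machinery runs.
import unittest

SUPPORT_MULTITHREADING = False  # module-level default

def setUpModule():
    global SUPPORT_MULTITHREADING
    SUPPORT_MULTITHREADING = True  # too late for the decorator below

class Demo(unittest.TestCase):
    @unittest.skipIf(not SUPPORT_MULTITHREADING,   # already False here, frozen forever
                     "zstd build doesn't support multi-threaded compression")
    def test_always_skipped(self):
        self.assertTrue(SUPPORT_MULTITHREADING)

if __name__ == "__main__":
    unittest.main()
```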
Tested on 3.14.0b3 tag:
```sh
$ ./python -c 'from compression.zstd import *; print(CompressionParameter.nb_workers.bounds() != (0, 0))'
True
$ ./python -m test.test_zstd -k test_zstd_multithread_compress -v
test_zstd_multithread_compress (__main__.CompressorTestCase.test_zstd_multithread_compress) ... skipped "zstd build doesn't support multi-threaded compression"
----------------------------------------------------------------------
Ran 1 test in 0.061s
OK (skipped=1)
```
I'm opening this ticket instead of creating a PR because I don't know why the global `SUPPORT_MULTITHREADING` was introduced in the test. Maybe to handle the case where CPython is built without zstd support?
---
Introduced in gh-132983 @emmatyping
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136320
* gh-136322
<!-- /gh-linked-prs -->
|
5dac137b9f75c5c1d5096101bcd33d565d0526e4
|
887e5c8646dfc6dd3d64b482c5310a414ac9162b
|
python/cpython
|
python__cpython-136307
|
# Add support in SSL module for getting/setting groups used for key agreement
# Add support for getting/setting groups used for key agreement
### Proposal:
This feature proposal is an expansion of the feature proposed in issue #109945. It began as a discussion on the corresponding PR, where I suggested generalizing that feature to support more than just EC curves and provided some rough example code. Since then, I've put together a more complete version, which I'll be submitting shortly as a PR attached to this issue.
The basic idea is to add three new methods related to getting & setting groups used for key agreement:
```python
SSLContext.get_groups() -> List[str]:
"""Get a list of groups implemented for key agreement, taking into account
the SSLContext's current TLS `minimum_version` and `maximum_version` values."""
SSLContext.set_groups(groups: str) -> None:
"""Set the groups allowed for key agreement for sockets created with this context."""
SSLSocket.group() -> str:
"""Return the group used for key agreement, after the TLS handshake completes."""
```
These methods are designed to directly mimic the existing methods for getting and setting cipher suites. Prior to TLS 1.3, all of this could be done by just setting ciphers, but that's no longer the case.
This proposal provides a superset of the functionality requested in #109945, allowing not only multiple EC curves to be specified but also other mechanisms like finite field DHE and the post-quantum algorithms added in OpenSSL 3.5. In fact, once the `set_groups()` method is available, the existing `set_ecdh_curve()` method could be deprecated, as the underlying OpenSSL functions `set_groups()` relies on are available all the way back to OpenSSL 1.1.1, which is now the minimum OpenSSL version supported by Python.
The `group()` and `get_groups()` methods require later versions of OpenSSL (3.2 and 3.5, respectively), but the code can check for this and raise a `NotImplementedError` if the version of OpenSSL that Python is built against is too old to support them.
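For illustration, a hypothetical usage sketch of the proposed methods (the colon-separated group string is an assumption borrowed from OpenSSL's conventions, and none of these methods exist in the stdlib yet, so this will not run on current Python):
```python
# Hypothetical sketch of the proposed API, not an existing interface.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_groups("X25519:secp256r1")   # proposed: restrict key-agreement groups
print(ctx.get_groups())              # proposed: usable groups (needs OpenSSL 3.5+)

# After a completed handshake on a socket created from this context:
# print(tls_sock.group())            # proposed: negotiated group, e.g. "X25519" (OpenSSL 3.2+)
```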
### Links to previous discussion of this feature:
Previous discussion occurred in PR #119244, and it was suggested that it might be best to create a new issue and PR, since the previous request might not be monitored any more.
<!-- gh-linked-prs -->
### Linked PRs
* gh-136307
* gh-137405
<!-- /gh-linked-prs -->
|
377b78761814e7d848361e642d376881739d5a29
|
59e2330cf391a9dc324690f8579acd179e66d19d
|
python/cpython
|
python__cpython-136301
|
# Not all C tests conform to PEP-737
# Bug report
### Bug description:
A few C tests do not conform to PEP-737, in that they:
- Use `%s` with `Py_TYPE(x)->tp_name` instead of the `%T` format specifier.
- Use the legacy `%.200s` format specifier for truncating type names.
Example patch that needs applying:
```patch
PyErr_Format(PyExc_TypeError,
- "cannot index memory using \"%.200s\"",
- Py_TYPE(key)->tp_name);
+ "cannot index memory using \"%T\"",
+ key);
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136301
<!-- /gh-linked-prs -->
|
7de8ea7be6c19f21c090f44a01817fab26c1f095
|
3343fce05acb29a772599ce586abd43edf40bae6
|