See Python Initialization Configuration for details on how to configure the interpreter prior to initialization.
In an application embedding Python, the Py_Initialize()
function must
be called before using any other Python/C API functions; with the exception of
a few functions and the global configuration variables.
The following functions can be safely called before Python is initialized:
Functions that initialize the interpreter:
the runtime pre-initialization functions covered in Python Initialization Configuration
Configuration functions:
PyInitFrozenExtensions()
the configuration functions covered in Python Initialization Configuration
Informative functions:
Utilities:
the status reporting and utility functions covered in Python Initialization Configuration
Memory allocators:
Synchronization:
Note
Despite their apparent similarity to some of the functions listed above,
the following functions should not be called before the interpreter has
been initialized: Py_EncodeLocale(), Py_GetPath(), Py_GetPrefix(),
Py_GetExecPrefix(), Py_GetProgramFullPath(), Py_GetPythonHome(),
Py_GetProgramName(), PyEval_InitThreads(), and Py_RunMain().
Python has variables for the global configuration to control different features and options. By default, these flags are controlled by command line options.
When a flag is set by an option, the value of the flag is the number of times
that the option was set. For example, -b
sets Py_BytesWarningFlag
to 1 and -bb
sets Py_BytesWarningFlag
to 2.
This API is kept for backward compatibility: setting
PyConfig.bytes_warning
should be used instead, see Python
Initialization Configuration.
Issue a warning when comparing bytes or bytearray with str or bytes with
int. Issue an error if greater or equal to 2.
Set by the -b option.
Deprecated since version 3.12, will be removed in version 3.14.
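For the recommended replacement, a minimal sketch of setting PyConfig.bytes_warning before initialization; the value 2 here is illustrative and matches the effect of -bb:
PyConfig config;
PyStatus status;

PyConfig_InitPythonConfig(&config);
config.bytes_warning = 2;                   /* illustrative: same effect as the -bb option */
status = Py_InitializeFromConfig(&config);
PyConfig_Clear(&config);
if (PyStatus_Exception(status)) {
    Py_ExitStatusException(status);
}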
This API is kept for backward compatibility: setting
PyConfig.parser_debug
should be used instead, see Python
Initialization Configuration.
Turn on parser debugging output (for expert only, depending on compilation options).
Set by the -d option and the PYTHONDEBUG environment variable.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.write_bytecode
should be used instead, see Python
Initialization Configuration.
If set to non-zero, Python won't try to write .pyc
files on the
import of source modules.
Set by the -B option and the PYTHONDONTWRITEBYTECODE environment variable.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.pathconfig_warnings
should be used instead, see
Python Initialization Configuration.
Suppress error messages when calculating the module search path in
Py_GetPath()
.
Private flag used by _freeze_module
and frozenmain
programs.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.hash_seed
and PyConfig.use_hash_seed
should
be used instead, see Python Initialization Configuration.
Set to 1 if the PYTHONHASHSEED environment variable is set to a non-empty string.
If the flag is non-zero, read the PYTHONHASHSEED
environment
variable to initialize the secret hash seed.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.use_environment
should be used instead, see
Python Initialization Configuration.
Ignore all PYTHON* environment variables, e.g. PYTHONPATH and PYTHONHOME, that might be set.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.inspect
should be used instead, see
Python Initialization Configuration.
When a script is passed as first argument or the -c
option is used,
enter interactive mode after executing the script or the command, even when
sys.stdin
does not appear to be a terminal.
Set by the -i option and the PYTHONINSPECT environment variable.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.interactive
should be used instead, see
Python Initialization Configuration.
Set by the -i option.
Deprecated since version 3.12, will be removed in version 3.15.
This API is kept for backward compatibility: setting
PyConfig.isolated
should be used instead, see
Python Initialization Configuration.
Run Python in isolated mode. In isolated mode sys.path
contains
neither the script's directory nor the user's site-packages directory.
Set by the -I option.
Added in version 3.4.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyPreConfig.legacy_windows_fs_encoding
should be used instead, see
Python Initialization Configuration.
If the flag is non-zero, use the mbcs
encoding with replace
error
handler, instead of the UTF-8 encoding with surrogatepass
error handler,
for the filesystem encoding and error handler.
Set to 1 if the PYTHONLEGACYWINDOWSFSENCODING environment variable is set to a non-empty string.
See PEP 529 for more details.
Availability: Windows.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.legacy_windows_stdio
should be used instead, see
Python Initialization Configuration.
If the flag is non-zero, use io.FileIO
instead of
io._WindowsConsoleIO
for sys
standard streams.
Set to 1
if the PYTHONLEGACYWINDOWSSTDIO
environment
variable is set to a non-empty string.
See PEP 528 for more details.
Availability: Windows.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.site_import
should be used instead, see
Python Initialization Configuration.
Disable the import of the module site
and the site-dependent
manipulations of sys.path
that it entails. Also disable these
manipulations if site
is explicitly imported later (call
site.main()
if you want them to be triggered).
Set by the -S option.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.user_site_directory
should be used instead, see
Python Initialization Configuration.
Don't add the user site-packages directory to sys.path.
Set by the -s and -I options, and the PYTHONNOUSERSITE environment variable.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.optimization_level
should be used instead, see
Python Initialization Configuration.
Set by the -O option and the PYTHONOPTIMIZE environment variable.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.quiet
should be used instead, see Python
Initialization Configuration.
Don't display the copyright and version messages even in interactive mode.
Set by the -q option.
Added in version 3.2.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.buffered_stdio
should be used instead, see Python
Initialization Configuration.
Force the stdout and stderr streams to be unbuffered.
Set by the -u option and the PYTHONUNBUFFERED environment variable.
Deprecated since version 3.12, will be removed in version 3.14.
This API is kept for backward compatibility: setting
PyConfig.verbose
should be used instead, see Python
Initialization Configuration.
Print a message each time a module is initialized, showing the place
(filename or built-in module) from which it is loaded. If greater or equal
to 2
, print a message for each file that is checked for when
searching for a module. Also provides information on module cleanup at exit.
Set by the -v option and the PYTHONVERBOSE environment variable.
Deprecated since version 3.12, will be removed in version 3.14.
Initialize the Python interpreter. In an application embedding Python, this should be called before using any other Python/C API functions; see Before Python Initialization for the few exceptions.
This initializes the table of loaded modules (sys.modules
), and creates
the fundamental modules builtins
, __main__
and sys
.
It also initializes the module search path (sys.path
). It does not set
sys.argv
; use the Python Initialization Configuration
API for that. This is a no-op when called for a second time (without calling
Py_FinalizeEx()
first). There is no return value; it is a fatal
error if the initialization fails.
Use Py_InitializeFromConfig()
to customize the
Python Initialization Configuration.
Note
On Windows, changes the console mode from O_TEXT
to O_BINARY
,
which will also affect non-Python uses of the console using the C Runtime.
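A minimal embedding sketch using this function; the string passed to PyRun_SimpleString() and the exit code 120 are only illustrative choices:
#include <Python.h>

int
main(void)
{
    Py_Initialize();                        /* fatal error on failure; no return value */
    PyRun_SimpleString("print('embedded interpreter is running')");
    if (Py_FinalizeEx() < 0) {
        return 120;                         /* flushing buffered data failed */
    }
    return 0;
}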
This function works like Py_Initialize()
if initsigs is 1
. If
initsigs is 0
, it skips initialization registration of signal handlers,
which may be useful when CPython is embedded as part of a larger application.
Use Py_InitializeFromConfig()
to customize the
Python Initialization Configuration.
Initialize Python from the config configuration, as described in Initialization with PyConfig.
See the Python Initialization Configuration section for details on pre-initializing the interpreter, populating the runtime configuration structure, and querying the returned status structure.
Return true (nonzero) when the Python interpreter has been initialized, false
(zero) if not. After Py_FinalizeEx()
is called, this returns false until
Py_Initialize()
is called again.
Return true (non-zero) if the main Python interpreter is shutting down. Return false (zero) otherwise.
Added in version 3.13.
Undo all initializations made by Py_Initialize()
and subsequent use of
Python/C API functions, and destroy all sub-interpreters (see
Py_NewInterpreter()
below) that were created and not yet destroyed since
the last call to Py_Initialize()
. Ideally, this frees all memory
allocated by the Python interpreter. This is a no-op when called for a second
time (without calling Py_Initialize()
again first).
Since this is the reverse of Py_Initialize()
, it should be called
in the same thread with the same interpreter active. That means
the main thread and the main interpreter.
This should never be called while Py_RunMain()
is running.
Normally the return value is 0
.
If there were errors during finalization (flushing buffered data),
-1
is returned.
This function is provided for a number of reasons. An embedding application might want to restart Python without having to restart the application itself. An application that has loaded the Python interpreter from a dynamically loadable library (or DLL) might want to free all memory allocated by Python before unloading the DLL. During a hunt for memory leaks in an application a developer might want to free all memory allocated by Python before exiting from the application.
Bugs and caveats: The destruction of modules and objects in modules is done
in random order; this may cause destructors (__del__()
methods) to fail
when they depend on other objects (even functions) or modules. Dynamically
loaded extension modules loaded by Python are not unloaded. Small amounts of
memory allocated by the Python interpreter may not be freed (if you find a leak,
please report it). Memory tied up in circular references between objects is not
freed. Some memory allocated by extension modules may not be freed. Some
extensions may not work properly if their initialization routine is called more
than once; this can happen if an application calls Py_Initialize()
and
Py_FinalizeEx()
more than once.
Raises an auditing event cpython._PySys_ClearAuditHooks with no arguments.
Added in version 3.6.
This is a backwards-compatible version of Py_FinalizeEx()
that
disregards the return value.
Similar to Py_Main()
but argv is an array of bytes strings,
allowing the calling application to delegate the text decoding step to
the CPython runtime.
Added in version 3.8.
The main program for the standard interpreter, encapsulating a full
initialization/finalization cycle, as well as additional
behaviour to implement reading configuration settings from the environment
and command line, and then executing __main__ in accordance with
Command line.
This is made available for programs which wish to support the full CPython command line interface, rather than just embedding a Python runtime in a larger application.
The argc and argv parameters are similar to those which are passed to a
C program's main()
function, except that the argv entries are first
converted to wchar_t
using Py_DecodeLocale()
. It is also
important to note that the argument list entries may be modified to point to
strings other than those passed in (however, the contents of the strings
pointed to by the argument list are not modified).
The return value will be 0
if the interpreter exits normally (i.e.,
without an exception), 1
if the interpreter exits due to an exception,
or 2
if the argument list does not represent a valid Python command
line.
Note that if an otherwise unhandled SystemExit
is raised, this
function will not return 1
, but exit the process, as long as
Py_InspectFlag
is not set. If Py_InspectFlag
is set, execution will
drop into the interactive Python prompt, at which point a second otherwise
unhandled SystemExit
will still exit the process, while any other
means of exiting will set the return value as described above.
In terms of the CPython runtime configuration APIs documented in the
runtime configuration section (and without accounting
for error handling), Py_Main
is approximately equivalent to:
PyConfig config;
PyConfig_InitPythonConfig(&config);
PyConfig_SetArgv(&config, argc, argv);
Py_InitializeFromConfig(&config);
PyConfig_Clear(&config);
Py_RunMain();
In normal usage, an embedding application will call this function
instead of calling Py_Initialize()
, Py_InitializeEx()
or
Py_InitializeFromConfig()
directly, and all settings will be applied
as described elsewhere in this documentation. If this function is instead
called after a preceding runtime initialization API call, then exactly
which environmental and command line configuration settings will be updated
is version dependent (as it depends on which settings correctly support
being modified after they have already been set once when the runtime was
first initialized).
Executes the main module in a fully configured CPython runtime.
Executes the command (PyConfig.run_command
), the script
(PyConfig.run_filename
) or the module
(PyConfig.run_module
) specified on the command line or in the
configuration. If none of these values are set, runs the interactive Python
prompt (REPL) using the __main__
module's global namespace.
If PyConfig.inspect
is not set (the default), the return value
will be 0
if the interpreter exits normally (that is, without raising
an exception), or 1
if the interpreter exits due to an exception. If an
otherwise unhandled SystemExit
is raised, the function will immediately
exit the process instead of returning 1
.
If PyConfig.inspect
is set (such as when the -i
option
is used), rather than returning when the interpreter exits, execution will
instead resume in an interactive Python prompt (REPL) using the __main__
module's global namespace. If the interpreter exited with an exception, it
is immediately raised in the REPL session. The function return value is
then determined by the way the REPL session terminates: returning 0
if the session terminates without raising an unhandled exception, exiting
immediately for an unhandled SystemExit
, and returning 1
for
any other unhandled exception.
This function always finalizes the Python interpreter regardless of whether
it returns a value or immediately exits the process due to an unhandled
SystemExit
exception.
See Python Configuration for an example of a
customized Python that always runs in isolated mode using
Py_RunMain()
.
Register an atexit
callback for the target interpreter interp.
This is similar to Py_AtExit()
, but takes an explicit interpreter and
data pointer for the callback.
The GIL must be held for interp.
Added in version 3.13.
This API is kept for backward compatibility: setting
PyConfig.program_name
should be used instead, see Python
Initialization Configuration.
This function should be called before Py_Initialize()
is called for
the first time, if it is called at all. It tells the interpreter the value
of the argv[0]
argument to the main()
function of the program
(converted to wide characters).
This is used by Py_GetPath()
and some other functions below to find
the Python run-time libraries relative to the interpreter executable. The
default value is 'python'
. The argument should point to a
zero-terminated wide character string in static storage whose contents will not
change for the duration of the program's execution. No code in the Python
interpreter will change the contents of this storage.
Use Py_DecodeLocale()
to decode a bytes string to get a
wchar_t* string.
Deprecated since version 3.11.
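Since the function above is deprecated, a hedged sketch of the PyConfig.program_name replacement; the program path is purely illustrative:
PyConfig config;
PyStatus status;

PyConfig_InitPythonConfig(&config);
/* Decode the narrow string and store a copy in the configuration. */
status = PyConfig_SetBytesString(&config, &config.program_name,
                                 "/usr/local/bin/myapp");
if (!PyStatus_Exception(status)) {
    status = Py_InitializeFromConfig(&config);
}
PyConfig_Clear(&config);
if (PyStatus_Exception(status)) {
    Py_ExitStatusException(status);
}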
Return the program name set with PyConfig.program_name
, or the default.
The returned string points into static storage; the caller should not modify its
value.
This function should not be called before Py_Initialize(), otherwise it returns NULL.
Changed in version 3.10: The function now returns NULL if called before Py_Initialize().
Deprecated since version 3.13, will be removed in version 3.15: Get sys.executable
instead.
Return the prefix for installed platform-independent files. This is derived
through a number of complicated rules from the program name set with
PyConfig.program_name
and some environment variables; for example, if the
program name is '/usr/local/bin/python'
, the prefix is '/usr/local'
. The
returned string points into static storage; the caller should not modify its
value. This corresponds to the prefix variable in the top-level
Makefile
and the --prefix
argument to the configure
script at build time. The value is available to Python code as sys.base_prefix
.
It is only useful on Unix. See also the next function.
This function should not be called before Py_Initialize(), otherwise it returns NULL.
Changed in version 3.10: The function now returns NULL if called before Py_Initialize().
Deprecated since version 3.13, will be removed in version 3.15: Get sys.base_prefix
instead, or sys.prefix
if
virtual environments need to be handled.
Return the exec-prefix for installed platform-dependent files. This is
derived through a number of complicated rules from the program name set with
PyConfig.program_name
and some environment variables; for example, if the
program name is '/usr/local/bin/python'
, the exec-prefix is
'/usr/local'
. The returned string points into static storage; the caller
should not modify its value. This corresponds to the exec_prefix
variable in the top-level Makefile
and the --exec-prefix
argument to the configure script at build time. The value is
available to Python code as sys.base_exec_prefix
. It is only useful on
Unix.
Background: The exec-prefix differs from the prefix when platform dependent
files (such as executables and shared libraries) are installed in a different
directory tree. In a typical installation, platform dependent files may be
installed in the /usr/local/plat
subtree while platform independent files may
be installed in /usr/local
.
Generally speaking, a platform is a combination of hardware and software families, e.g. Sparc machines running the Solaris 2.x operating system are considered the same platform, but Intel machines running Solaris 2.x are another platform, and Intel machines running Linux are yet another platform. Different major revisions of the same operating system generally also form different platforms. Non-Unix operating systems are a different story; the installation strategies on those systems are so different that the prefix and exec-prefix are meaningless, and set to the empty string. Note that compiled Python bytecode files are platform independent (but not independent from the Python version by which they were compiled!).
System administrators will know how to configure the mount or
automount programs to share /usr/local
between platforms
while having /usr/local/plat
be a different filesystem for each
platform.
This function should not be called before Py_Initialize(), otherwise it returns NULL.
Changed in version 3.10: The function now returns NULL if called before Py_Initialize().
Deprecated since version 3.13, will be removed in version 3.15: Get sys.base_exec_prefix
instead, or sys.exec_prefix
if
virtual environments need to be handled.
Return the full program name of the Python executable; this is computed as a
side-effect of deriving the default module search path from the program name
(set by PyConfig.program_name
). The returned string points into
static storage; the caller should not modify its value. The value is available
to Python code as sys.executable
.
This function should not be called before Py_Initialize(), otherwise it returns NULL.
Changed in version 3.10: The function now returns NULL if called before Py_Initialize().
Deprecated since version 3.13, will be removed in version 3.15: Get sys.executable
instead.
Return the default module search path; this is computed from the program name
(set by PyConfig.program_name
) and some environment variables.
The returned string consists of a series of directory names separated by a
platform dependent delimiter character. The delimiter character is ':'
on Unix and macOS, ';'
on Windows. The returned string points into
static storage; the caller should not modify its value. The list
sys.path
is initialized with this value on interpreter startup; it
can be (and usually is) modified later to change the search path for loading
modules.
This function should not be called before Py_Initialize(), otherwise it returns NULL.
Changed in version 3.10: The function now returns NULL if called before Py_Initialize().
Deprecated since version 3.13, will be removed in version 3.15: Get sys.path
instead.
Return the version of this Python interpreter. This is a string that looks something like
"3.0a5+ (py3k:63103M, May 12 2008, 00:53:55) \n[GCC 4.2.3]"
The first word (up to the first space character) is the current Python version;
the first three characters are the major and minor version separated by a
period. The returned string points into static storage; the caller should not
modify its value. The value is available to Python code as sys.version
.
See also the Py_Version
constant.
Return the platform identifier for the current platform. On Unix, this is
formed from the "official" name of the operating system, converted to lower
case, followed by the major revision number; e.g., for Solaris 2.x, which is
also known as SunOS 5.x, the value is 'sunos5'
. On macOS, it is
'darwin'
. On Windows, it is 'win'
. The returned string points into
static storage; the caller should not modify its value. The value is available
to Python code as sys.platform
.
Return the official copyright string for the current Python version, for example
'Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam'
The returned string points into static storage; the caller should not modify its
value. The value is available to Python code as sys.copyright
.
Return an indication of the compiler used to build the current Python version, in square brackets, for example:
"[GCC 2.7.2.2]"
The returned string points into static storage; the caller should not modify its
value. The value is available to Python code as part of the variable
sys.version
.
Return information about the sequence number and build date and time of the current Python interpreter instance, for example
"#67, Aug 1 1997, 22:34:28"
The returned string points into static storage; the caller should not modify its
value. The value is available to Python code as part of the variable
sys.version
.
This API is kept for backward compatibility: setting
PyConfig.argv
, PyConfig.parse_argv
and
PyConfig.safe_path
should be used instead, see Python
Initialization Configuration.
Set sys.argv
based on argc and argv. These parameters are
similar to those passed to the program's main()
function with the
difference that the first entry should refer to the script file to be
executed rather than the executable hosting the Python interpreter. If there
isn't a script that will be run, the first entry in argv can be an empty
string. If this function fails to initialize sys.argv
, a fatal
condition is signalled using Py_FatalError()
.
If updatepath is zero, this is all the function does. If updatepath
is non-zero, the function also modifies sys.path
according to the
following algorithm:
If the name of an existing script is passed in argv[0]
, the absolute
path of the directory where the script is located is prepended to
sys.path
.
Otherwise (that is, if argc is 0
or argv[0]
doesn't point
to an existing file name), an empty string is prepended to
sys.path
, which is the same as prepending the current working
directory ("."
).
Use Py_DecodeLocale()
to decode a bytes string to get a
wchar_t* string.
See also PyConfig.orig_argv
and PyConfig.argv
members of the Python Initialization Configuration.
Note
It is recommended that applications embedding the Python interpreter
for purposes other than executing a single script pass 0
as updatepath,
and update sys.path
themselves if desired.
See CVE 2008-5983.
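A hedged sketch of that recommendation, decoding a single (purely illustrative) script name with Py_DecodeLocale() and leaving sys.path untouched:
wchar_t *script = Py_DecodeLocale("script.py", NULL);   /* illustrative script name */
if (script != NULL) {
    wchar_t *wargv[] = { script };      /* argv[0]: the script to run, or an empty string */

    Py_Initialize();
    PySys_SetArgvEx(1, wargv, 0);       /* updatepath == 0: sys.path is left untouched */
    /* ... run the embedded code ... */
    PyMem_RawFree(script);
}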
On versions before 3.1.3, you can achieve the same effect by manually
popping the first sys.path
element after having called
PySys_SetArgv()
, for example using:
PyRun_SimpleString("import sys; sys.path.pop(0)\n");
Added in version 3.1.3.
Deprecated since version 3.11.
This API is kept for backward compatibility: setting
PyConfig.argv
and PyConfig.parse_argv
should be used
instead, see Python Initialization Configuration.
This function works like PySys_SetArgvEx()
with updatepath set
to 1
unless the python interpreter was started with the
-I option.
Use Py_DecodeLocale()
to decode a bytes string to get a
wchar_t* string.
See also PyConfig.orig_argv
and PyConfig.argv
members of the Python Initialization Configuration.
Changed in version 3.4: The updatepath value depends on -I.
Deprecated since version 3.11.
This API is kept for backward compatibility: setting
PyConfig.home
should be used instead, see Python
Initialization Configuration.
Set the default "home" directory, that is, the location of the standard
Python libraries. See PYTHONHOME
for the meaning of the
argument string.
The argument should point to a zero-terminated character string in static storage whose contents will not change for the duration of the program's execution. No code in the Python interpreter will change the contents of this storage.
Use Py_DecodeLocale()
to decode a bytes string to get a
wchar_t* string.
Deprecated since version 3.11.
Return the default "home", that is, the value set by
PyConfig.home
, or the value of the PYTHONHOME
environment variable if it is set.
This function should not be called before Py_Initialize(), otherwise it returns NULL.
Changed in version 3.10: The function now returns NULL if called before Py_Initialize().
Deprecated since version 3.13, will be removed in version 3.15: Get PyConfig.home or the PYTHONHOME environment variable instead.
The Python interpreter is not fully thread-safe. In order to support multi-threaded Python programs, there's a global lock, called the global interpreter lock or GIL, that must be held by the current thread before it can safely access Python objects. Without the lock, even the simplest operations could cause problems in a multi-threaded program: for example, when two threads simultaneously increment the reference count of the same object, the reference count could end up being incremented only once instead of twice.
Therefore, the rule exists that only the thread that has acquired the
GIL may operate on Python objects or call Python/C API functions.
In order to emulate concurrency of execution, the interpreter regularly
tries to switch threads (see sys.setswitchinterval()
). The lock is also
released around potentially blocking I/O operations like reading or writing
a file, so that other Python threads can run in the meantime.
The Python interpreter keeps some thread-specific bookkeeping information
inside a data structure called PyThreadState
. There's also one
global variable pointing to the current PyThreadState
: it can
be retrieved using PyThreadState_Get()
.
Most extension code manipulating the GIL has the following simple structure:
Save the thread state in a local variable.
Release the global interpreter lock.
... Do some blocking I/O operation ...
Reacquire the global interpreter lock.
Restore the thread state from the local variable.
This is so common that a pair of macros exists to simplify it:
Py_BEGIN_ALLOW_THREADS
... Do some blocking I/O operation ...
Py_END_ALLOW_THREADS
The Py_BEGIN_ALLOW_THREADS
macro opens a new block and declares a
hidden local variable; the Py_END_ALLOW_THREADS
macro closes the
block.
The block above expands to the following code:
PyThreadState *_save;
_save = PyEval_SaveThread();
... Do some blocking I/O operation ...
PyEval_RestoreThread(_save);
Here is how these functions work: the global interpreter lock is used to protect the pointer to the current thread state. When releasing the lock and saving the thread state, the current thread state pointer must be retrieved before the lock is released (since another thread could immediately acquire the lock and store its own thread state in the global variable). Conversely, when acquiring the lock and restoring the thread state, the lock must be acquired before storing the thread state pointer.
Note
Calling system I/O functions is the most common use case for releasing
the GIL, but it can also be useful before calling long-running computations
which don't need access to Python objects, such as compression or
cryptographic functions operating over memory buffers. For example, the
standard zlib
and hashlib
modules release the GIL when
compressing or hashing data.
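As a hedged illustration of this pattern, here is a sketch of an extension function that releases the GIL around a blocking POSIX read(); the function name and its containing module are assumptions, not something defined in the text above:
#include <Python.h>
#include <unistd.h>

/* read_fd(fd, n) -> bytes: read up to n bytes with the GIL released. */
static PyObject *
read_fd(PyObject *self, PyObject *args)
{
    int fd;
    Py_ssize_t n;
    ssize_t got;
    char *buffer;
    PyObject *result;

    if (!PyArg_ParseTuple(args, "in", &fd, &n)) {
        return NULL;
    }
    if (n < 0) {
        PyErr_SetString(PyExc_ValueError, "n must be non-negative");
        return NULL;
    }
    buffer = PyMem_Malloc(n);
    if (buffer == NULL) {
        return PyErr_NoMemory();
    }
    Py_BEGIN_ALLOW_THREADS
    /* Only code that does not touch Python objects may run here. */
    got = read(fd, buffer, (size_t)n);
    Py_END_ALLOW_THREADS
    if (got < 0) {
        PyMem_Free(buffer);
        return PyErr_SetFromErrno(PyExc_OSError);
    }
    result = PyBytes_FromStringAndSize(buffer, got);
    PyMem_Free(buffer);
    return result;
}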
When threads are created using the dedicated Python APIs (such as the
threading
module), a thread state is automatically associated to them
and the code shown above is therefore correct. However, when threads are
created from C (for example by a third-party library with its own thread
management), they don't hold the GIL, nor is there a thread state structure
for them.
If you need to call Python code from these threads (often this will be part of a callback API provided by the aforementioned third-party library), you must first register these threads with the interpreter by creating a thread state data structure, then acquiring the GIL, and finally storing their thread state pointer, before you can start using the Python/C API. When you are done, you should reset the thread state pointer, release the GIL, and finally free the thread state data structure.
The PyGILState_Ensure()
and PyGILState_Release()
functions do
all of the above automatically. The typical idiom for calling into Python
from a C thread is:
PyGILState_STATE gstate;
gstate = PyGILState_Ensure();
/* Perform Python actions here. */
result = CallSomeFunction();
/* evaluate result or handle exception */
/* Release the thread. No Python API allowed beyond this point. */
PyGILState_Release(gstate);
Note that the PyGILState_*
functions assume there is only one global
interpreter (created automatically by Py_Initialize()
). Python
supports the creation of additional interpreters (using
Py_NewInterpreter()
), but mixing multiple interpreters and the
PyGILState_*
API is unsupported.
Another important thing to note about threads is their behaviour in the face
of the C fork()
call. On most systems with fork()
, after a
process forks only the thread that issued the fork will exist. This has a
concrete impact both on how locks must be handled and on all stored state
in CPython's runtime.
The fact that only the "current" thread remains
means any locks held by other threads will never be released. Python solves
this for os.fork()
by acquiring the locks it uses internally before
the fork, and releasing them afterwards. In addition, it resets any
Lock objects in the child. When extending or embedding Python, there
is no way to inform Python of additional (non-Python) locks that need to be
acquired before or reset after a fork. OS facilities such as
pthread_atfork()
would need to be used to accomplish the same thing.
Additionally, when extending or embedding Python, calling fork()
directly rather than through os.fork()
(and returning to or calling
into Python) may result in a deadlock by one of Python's internal locks
being held by a thread that is defunct after the fork.
PyOS_AfterFork_Child()
tries to reset the necessary locks, but is not
always able to.
The fact that all other threads go away also means that CPython's
runtime state there must be cleaned up properly, which os.fork()
does. This means finalizing all other PyThreadState
objects
belonging to the current interpreter and all other
PyInterpreterState
objects. Due to this and the special
nature of the "main" interpreter,
fork()
should only be called in that interpreter's "main"
thread, where the CPython global runtime was originally initialized.
The only exception is if exec()
will be called immediately
after.
These are the most commonly used types and functions when writing C extension code, or when embedding the Python interpreter:
This data structure represents the state shared by a number of cooperating threads. Threads belonging to the same interpreter share their module administration and a few other internal items. There are no public members in this structure.
Threads belonging to different interpreters initially share nothing, except process state like available memory, open file descriptors and such. The global interpreter lock is also shared by all threads, regardless of to which interpreter they belong.
This data structure represents the state of a single thread. The only public data member is:
This thread's interpreter state.
Deprecated function which does nothing.
In Python 3.6 and older, this function created the GIL if it didn't exist.
Changed in version 3.9: The function now does nothing.
Changed in version 3.7: This function is now called by Py_Initialize(), so you
don't have to call it yourself anymore.
Changed in version 3.2: This function cannot be called before Py_Initialize() anymore.
Deprecated since version 3.9.
Release the global interpreter lock (if it has been created) and reset the
thread state to NULL
, returning the previous thread state (which is not
NULL
). If the lock has been created, the current thread must have
acquired it.
Acquire the global interpreter lock (if it has been created) and set the
thread state to tstate, which must not be NULL
. If the lock has been
created, the current thread must not have acquired it, otherwise deadlock
ensues.
Note
Calling this function from a thread when the runtime is finalizing
will terminate the thread, even if the thread was not created by Python.
You can use Py_IsFinalizing()
or sys.is_finalizing()
to
check if the interpreter is in process of being finalized before calling
this function to avoid unwanted termination.
Return the current thread state. The global interpreter lock must be held.
When the current thread state is NULL
, this issues a fatal error (so that
the caller needn't check for NULL
).
Similar to PyThreadState_Get()
, but don't kill the process with a
fatal error if it is NULL. The caller is responsible for checking whether the result
is NULL.
Added in version 3.13: In Python 3.5 to 3.12, the function was private and known as
_PyThreadState_UncheckedGet()
.
Swap the current thread state with the thread state given by the argument
tstate, which may be NULL
.
The GIL does not need to be held, but will be held upon returning
if tstate is non-NULL
.
The following functions use thread-local storage, and are not compatible with sub-interpreters:
Ensure that the current thread is ready to call the Python C API regardless
of the current state of Python, or of the global interpreter lock. This may
be called as many times as desired by a thread as long as each call is
matched with a call to PyGILState_Release()
. In general, other
thread-related APIs may be used between PyGILState_Ensure()
and
PyGILState_Release()
calls as long as the thread state is restored to
its previous state before the Release(). For example, normal usage of the
Py_BEGIN_ALLOW_THREADS
and Py_END_ALLOW_THREADS
macros is
acceptable.
The return value is an opaque "handle" to the thread state when
PyGILState_Ensure()
was called, and must be passed to
PyGILState_Release()
to ensure Python is left in the same state. Even
though recursive calls are allowed, these handles cannot be shared - each
unique call to PyGILState_Ensure()
must save the handle for its call
to PyGILState_Release()
.
When the function returns, the current thread will hold the GIL and be able to call arbitrary Python code. Failure is a fatal error.
Note
Calling this function from a thread when the runtime is finalizing
will terminate the thread, even if the thread was not created by Python.
You can use Py_IsFinalizing()
or sys.is_finalizing()
to
check if the interpreter is in process of being finalized before calling
this function to avoid unwanted termination.
Release any resources previously acquired. After this call, Python's state will
be the same as it was prior to the corresponding PyGILState_Ensure()
call
(but generally this state will be unknown to the caller, hence the use of the
GILState API).
Every call to PyGILState_Ensure()
must be matched by a call to
PyGILState_Release()
on the same thread.
Get the current thread state for this thread. May return NULL
if no
GILState API has been used on the current thread. Note that the main thread
always has such a thread-state, even if no auto-thread-state call has been
made on the main thread. This is mainly a helper/diagnostic function.
Return 1
if the current thread is holding the GIL and 0
otherwise.
This function can be called from any thread at any time.
Only if it has had its Python thread state initialized and currently is
holding the GIL will it return 1
.
This is mainly a helper/diagnostic function. It can be useful
for example in callback contexts or memory allocation functions when
knowing that the GIL is locked can allow the caller to perform sensitive
actions or otherwise behave differently.
Added in version 3.4.
The following macros are normally used without a trailing semicolon; look for example usage in the Python source distribution.
This macro expands to { PyThreadState *_save; _save = PyEval_SaveThread();
.
Note that it contains an opening brace; it must be matched with a following
Py_END_ALLOW_THREADS
macro. See above for further discussion of this
macro.
This macro expands to PyEval_RestoreThread(_save); }
. Note that it contains
a closing brace; it must be matched with an earlier
Py_BEGIN_ALLOW_THREADS
macro. See above for further discussion of
this macro.
This macro expands to PyEval_RestoreThread(_save);
: it is equivalent to
Py_END_ALLOW_THREADS
without the closing brace.
This macro expands to _save = PyEval_SaveThread();
: it is equivalent to
Py_BEGIN_ALLOW_THREADS
without the opening brace and variable
declaration.
All of the following functions must be called after Py_Initialize()
.
Changed in version 3.7: Py_Initialize()
now initializes the GIL.
Create a new interpreter state object. The global interpreter lock need not be held, but may be held if it is necessary to serialize calls to this function.
Raises an auditing event cpython.PyInterpreterState_New with no arguments.
Reset all information in an interpreter state object. The global interpreter lock must be held.
Raises an auditing event cpython.PyInterpreterState_Clear with no arguments.
Destroy an interpreter state object. The global interpreter lock need not be
held. The interpreter state must have been reset with a previous call to
PyInterpreterState_Clear()
.
Create a new thread state object belonging to the given interpreter object. The global interpreter lock need not be held, but may be held if it is necessary to serialize calls to this function.
Reset all information in a thread state object. The global interpreter lock must be held.
Changed in version 3.9: This function now calls the PyThreadState.on_delete
callback.
Previously, that happened in PyThreadState_Delete()
.
Changed in version 3.13: The PyThreadState.on_delete callback was removed.
Destroy a thread state object. The global interpreter lock need not be held.
The thread state must have been reset with a previous call to
PyThreadState_Clear()
.
Destroy the current thread state and release the global interpreter lock.
Like PyThreadState_Delete()
, the global interpreter lock must
be held. The thread state must have been reset with a previous call
to PyThreadState_Clear()
.
Get the current frame of the Python thread state tstate.
Return a strong reference. Return NULL
if no frame is currently
executing.
See also PyEval_GetFrame().
tstate must not be NULL.
Added in version 3.9.
Get the unique thread state identifier of the Python thread state tstate.
tstate must not be NULL.
Added in version 3.9.
Get the interpreter of the Python thread state tstate.
tstate must not be NULL.
Added in version 3.9.
Suspend tracing and profiling in the Python thread state tstate.
Resume them using the PyThreadState_LeaveTracing()
function.
Added in version 3.11.
Resume tracing and profiling in the Python thread state tstate suspended
by the PyThreadState_EnterTracing()
function.
See also PyEval_SetTrace()
and PyEval_SetProfile()
functions.
Added in version 3.11.
Get the current interpreter.
Issue a fatal error if there is no current Python thread state or no current interpreter. It cannot return NULL.
The caller must hold the GIL.
Added in version 3.9.
Return the interpreter's unique ID. If there was any error in doing
so then -1
is returned and an error is set.
The caller must hold the GIL.
Added in version 3.7.
Return a dictionary in which interpreter-specific data may be stored.
If this function returns NULL
then no exception has been raised and
the caller should assume no interpreter-specific dict is available.
This is not a replacement for PyModule_GetState()
, which
extensions should use to store interpreter-specific state information.
Added in version 3.8.
Return a strong reference to the __main__
module object
for the given interpreter.
The caller must hold the GIL.
Added in version 3.13.
Type of a frame evaluation function.
The throwflag parameter is used by the throw()
method of generators:
if non-zero, handle the current exception.
Changed in version 3.9: The function now takes a tstate parameter.
Changed in version 3.11: The frame parameter changed from PyFrameObject*
to _PyInterpreterFrame*
.
Get the frame evaluation function.
See the PEP 523 "Adding a frame evaluation API to CPython".
Added in version 3.9.
Set the frame evaluation function.
See the PEP 523 "Adding a frame evaluation API to CPython".
Added in version 3.9.
Return a dictionary in which extensions can store thread-specific state
information. Each extension should use a unique key to use to store state in
the dictionary. It is okay to call this function when no current thread state
is available. If this function returns NULL
, no exception has been raised and
the caller should assume no current thread state is available.
Asynchronously raise an exception in a thread. The id argument is the thread
id of the target thread; exc is the exception object to be raised. This
function does not steal any references to exc. To prevent naive misuse, you
must write your own C extension to call this. Must be called with the GIL held.
Returns the number of thread states modified; this is normally one, but will be
zero if the thread id isn't found. If exc is NULL
, the pending
exception (if any) for the thread is cleared. This raises no exceptions.
Changed in version 3.7: The type of the id parameter changed from long to unsigned long.
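A hedged usage sketch; tid is an assumed unsigned long thread identifier, for example the value of threading.get_ident() in the target thread:
/* The GIL must be held. */
int modified = PyThreadState_SetAsyncExc(tid, PyExc_KeyboardInterrupt);
if (modified == 0) {
    /* no thread with that id was found */
}
/* Passing NULL instead clears a previously requested exception:
   PyThreadState_SetAsyncExc(tid, NULL); */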
Acquire the global interpreter lock and set the current thread state to
tstate, which must not be NULL
. The lock must have been created earlier.
If this thread already has the lock, deadlock ensues.
Note
Calling this function from a thread when the runtime is finalizing
will terminate the thread, even if the thread was not created by Python.
You can use Py_IsFinalizing()
or sys.is_finalizing()
to
check if the interpreter is in process of being finalized before calling
this function to avoid unwanted termination.
Changed in version 3.8: Updated to be consistent with PyEval_RestoreThread(),
,
Py_END_ALLOW_THREADS()
, and PyGILState_Ensure()
,
and terminate the current thread if called while the interpreter is finalizing.
PyEval_RestoreThread()
is a higher-level function which is always
available (even when threads have not been initialized).
Reset the current thread state to NULL
and release the global interpreter
lock. The lock must have been created earlier and must be held by the current
thread. The tstate argument, which must not be NULL
, is only used to check
that it represents the current thread state --- if it isn't, a fatal error is
reported.
PyEval_SaveThread()
is a higher-level function which is always
available (even when threads have not been initialized).
While in most uses, you will only embed a single Python interpreter, there are cases where you need to create several independent interpreters in the same process and perhaps even in the same thread. Sub-interpreters allow you to do that.
The "main" interpreter is the first one created when the runtime initializes.
It is usually the only Python interpreter in a process. Unlike sub-interpreters,
the main interpreter has unique process-global responsibilities like signal
handling. It is also responsible for execution during runtime initialization and
is usually the active interpreter during runtime finalization. The
PyInterpreterState_Main()
function returns a pointer to its state.
You can switch between sub-interpreters using the PyThreadState_Swap()
function. You can create and destroy them using the following functions:
Structure containing most parameters to configure a sub-interpreter.
Its values are used only in Py_NewInterpreterFromConfig()
and
never modified by the runtime.
Added in version 3.12.
Structure fields:
If this is 0
then the sub-interpreter will use its own
"object" allocator state.
Otherwise it will use (share) the main interpreter's.
If this is 0
then
check_multi_interp_extensions
must be 1
(non-zero).
If this is 1
then gil
must not be PyInterpreterConfig_OWN_GIL
.
If this is 0
then the runtime will not support forking the
process in any thread where the sub-interpreter is currently active.
Otherwise fork is unrestricted.
Note that the subprocess
module still works
when fork is disallowed.
If this is 0
then the runtime will not support replacing the
current process via exec (e.g. os.execv()
) in any thread
where the sub-interpreter is currently active.
Otherwise exec is unrestricted.
Note that the subprocess
module still works
when exec is disallowed.
If this is 0
then the sub-interpreter's threading
module
won't create threads.
Otherwise threads are allowed.
If this is 0
then the sub-interpreter's threading
module
won't create daemon threads.
Otherwise daemon threads are allowed (as long as
allow_threads
is non-zero).
If this is 0
then all extension modules may be imported,
including legacy (single-phase init) modules,
in any thread where the sub-interpreter is currently active.
Otherwise only multi-phase init extension modules
(see PEP 489) may be imported.
(Also see Py_mod_multiple_interpreters
.)
This must be 1
(non-zero) if
use_main_obmalloc
is 0
.
This determines the operation of the GIL for the sub-interpreter. It may be one of the following:
Use the default selection (PyInterpreterConfig_SHARED_GIL
).
Use (share) the main interpreter's GIL.
Use the sub-interpreter's own GIL.
If this is PyInterpreterConfig_OWN_GIL
then
PyInterpreterConfig.use_main_obmalloc
must be 0
.
Create a new sub-interpreter. This is an (almost) totally separate environment
for the execution of Python code. In particular, the new interpreter has
separate, independent versions of all imported modules, including the
fundamental modules builtins
, __main__
and sys
. The
table of loaded modules (sys.modules
) and the module search path
(sys.path
) are also separate. The new environment has no sys.argv
variable. It has new standard I/O stream file objects sys.stdin
,
sys.stdout
and sys.stderr
(however these refer to the same underlying
file descriptors).
The given config controls the options with which the interpreter is initialized.
Upon success, tstate_p will be set to the first thread state
created in the new
sub-interpreter. This thread state is made in the current thread state.
Note that no actual thread is created; see the discussion of thread states
below. If creation of the new interpreter is unsuccessful,
tstate_p is set to NULL
;
no exception is set since the exception state is stored in the
current thread state and there may not be a current thread state.
Like all other Python/C API functions, the global interpreter lock must be held before calling this function and is still held when it returns. Likewise a current thread state must be set on entry. On success, the returned thread state will be set as current. If the sub-interpreter is created with its own GIL then the GIL of the calling interpreter will be released. When the function returns, the new interpreter's GIL will be held by the current thread and the previous interpreter's GIL will remain released here.
Added in version 3.12.
Sub-interpreters are most effective when isolated from each other, with certain functionality restricted:
PyInterpreterConfig config = {
.use_main_obmalloc = 0,
.allow_fork = 0,
.allow_exec = 0,
.allow_threads = 1,
.allow_daemon_threads = 0,
.check_multi_interp_extensions = 1,
.gil = PyInterpreterConfig_OWN_GIL,
};
PyThreadState *tstate = NULL;
PyStatus status = Py_NewInterpreterFromConfig(&tstate, &config);
if (PyStatus_Exception(status)) {
Py_ExitStatusException(status);
}
Note that the config is used only briefly and does not get modified.
During initialization the config's values are converted into various
PyInterpreterState
values. A read-only copy of the config
may be stored internally on the PyInterpreterState
.
Extension modules are shared between (sub-)interpreters as follows:
For modules using multi-phase initialization,
e.g. PyModule_FromDefAndSpec()
, a separate module object is
created and initialized for each interpreter.
Only C-level static and global variables are shared between these
module objects.
For modules using single-phase initialization,
e.g. PyModule_Create()
, the first time a particular extension
is imported, it is initialized normally, and a (shallow) copy of its
module's dictionary is squirreled away.
When the same extension is imported by another (sub-)interpreter, a new
module is initialized and filled with the contents of this copy; the
extension's init
function is not called.
Objects in the module's dictionary thus end up shared across
(sub-)interpreters, which might cause unwanted behavior (see
Bugs and caveats below).
Note that this is different from what happens when an extension is
imported after the interpreter has been completely re-initialized by
calling Py_FinalizeEx()
and Py_Initialize()
; in that
case, the extension's initmodule
function is called again.
As with multi-phase initialization, this means that only C-level static
and global variables are shared between these modules.
Create a new sub-interpreter. This is essentially just a wrapper
around Py_NewInterpreterFromConfig()
with a config that
preserves the existing behavior. The result is an unisolated
sub-interpreter that shares the main interpreter's GIL, allows
fork/exec, allows daemon threads, and allows single-phase init
modules.
Destroy the (sub-)interpreter represented by the given thread state.
The given thread state must be the current thread state. See the
discussion of thread states below. When the call returns,
the current thread state is NULL
. All thread states associated
with this interpreter are destroyed. The global interpreter lock
used by the target interpreter must be held before calling this
function. No GIL is held when it returns.
Py_FinalizeEx()
will destroy all sub-interpreters that
haven't been explicitly destroyed at that point.
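A hedged sketch of the legacy pair of calls described above, running some throwaway code in a sub-interpreter and then restoring the calling thread state; the code string is purely illustrative:
PyThreadState *main_tstate = PyThreadState_Get();
PyThreadState *sub_tstate = Py_NewInterpreter();
if (sub_tstate == NULL) {
    /* creation failed; no exception is set */
}
else {
    /* The new sub-interpreter's thread state is now current. */
    PyRun_SimpleString("import sys; print(len(sys.modules))");
    Py_EndInterpreter(sub_tstate);        /* after this the current thread state is NULL */
    PyThreadState_Swap(main_tstate);      /* make the main interpreter current again */
}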
Using Py_NewInterpreterFromConfig()
you can create
a sub-interpreter that is completely isolated from other interpreters,
including having its own GIL. The most important benefit of this
isolation is that such an interpreter can execute Python code without
being blocked by other interpreters or blocking any others. Thus a
single Python process can truly take advantage of multiple CPU cores
when running Python code. The isolation also encourages a different
approach to concurrency than that of just using threads.
(See PEP 554.)
Using an isolated interpreter requires vigilance in preserving that
isolation. That especially means not sharing any objects or mutable
state without guarantees about thread-safety. Even objects that are
otherwise immutable (e.g. None
, (1, 5)
) can't normally be shared
because of the refcount. One simple but less-efficient approach around
this is to use a global lock around all use of some state (or object).
Alternately, effectively immutable objects (like integers or strings)
can be made safe in spite of their refcounts by making them immortal.
In fact, this has been done for the builtin singletons, small integers,
and a number of other builtin objects.
If you preserve isolation then you will have access to proper multi-core computing without the complications that come with free-threading. Failure to preserve isolation will expose you to the full consequences of free-threading, including races and hard-to-debug crashes.
Aside from that, one of the main challenges of using multiple isolated interpreters is how to communicate between them safely (not break isolation) and efficiently. The runtime and stdlib do not provide any standard approach to this yet. A future stdlib module would help mitigate the effort of preserving isolation and expose effective tools for communicating (and sharing) data between interpreters.
Added in version 3.12.
Because sub-interpreters (and the main interpreter) are part of the same
process, the insulation between them isn't perfect --- for example, using
low-level file operations like os.close()
they can
(accidentally or maliciously) affect each other's open files. Because of the
way extensions are shared between (sub-)interpreters, some extensions may not
work properly; this is especially likely when using single-phase initialization
or (static) global variables.
It is possible to insert objects created in one sub-interpreter into
a namespace of another (sub-)interpreter; this should be avoided if possible.
Special care should be taken to avoid sharing user-defined functions, methods, instances or classes between sub-interpreters, since import operations executed by such objects may affect the wrong (sub-)interpreter's dictionary of loaded modules. It is equally important to avoid sharing objects from which the above are reachable.
Also note that combining this functionality with PyGILState_*
APIs
is delicate, because these APIs assume a bijection between Python thread states
and OS-level threads, an assumption broken by the presence of sub-interpreters.
It is highly recommended that you don't switch sub-interpreters between a pair
of matching PyGILState_Ensure()
and PyGILState_Release()
calls.
Furthermore, extensions (such as ctypes
) using these APIs to allow calling
of Python code from non-Python created threads will probably be broken when using
sub-interpreters.
A mechanism is provided to make asynchronous notifications to the main interpreter thread. These notifications take the form of a function pointer and a void pointer argument.
Schedule a function to be called from the main interpreter thread. On
success, 0
is returned and func is queued for being called in the
main thread. On failure, -1
is returned without setting any exception.
When successfully queued, func will be eventually called from the main interpreter thread with the argument arg. It will be called asynchronously with respect to normally running Python code, but with both these conditions met:
on a bytecode boundary;
with the main thread holding the global interpreter lock (func can therefore use the full C API).
func must return 0
on success, or -1
on failure with an exception
set. func won't be interrupted to perform another asynchronous
notification recursively, but it can still be interrupted to switch
threads if the global interpreter lock is released.
This function doesn't need a current thread state to run, and it doesn't need the global interpreter lock.
To call this function in a subinterpreter, the caller must hold the GIL. Otherwise, the function func can be scheduled to be called from the wrong interpreter.
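A hedged sketch of a pending callback; the callback name and message are illustrative:
static int
report_done(void *arg)
{
    /* Runs later in the main interpreter thread, with the GIL held. */
    PySys_WriteStdout("pending call: %s\n", (const char *)arg);
    return 0;                 /* return 0 on success, -1 with an exception set on failure */
}

/* From any C thread: */
if (Py_AddPendingCall(report_done, (void *)"work finished") < 0) {
    /* -1: the call could not be queued; no exception is set */
}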
Warning
This is a low-level function, only useful for very special cases. There is no guarantee that func will be called as quick as possible. If the main thread is busy executing a system call, func won't be called before the system call returns. This function is generally not suitable for calling Python code from arbitrary C threads. Instead, use the PyGILState API.
Added in version 3.1.
Changed in version 3.9: If this function is called in a subinterpreter, the function func is now scheduled to be called from the subinterpreter, rather than being called from the main interpreter. Each subinterpreter now has its own list of scheduled calls.
The Python interpreter provides some low-level support for attaching profiling and execution tracing facilities. These are used for profiling, debugging, and coverage analysis tools.
This C interface allows the profiling or tracing code to avoid the overhead of calling through Python-level callable objects, making a direct C function call instead. The essential attributes of the facility have not changed; the interface allows trace functions to be installed per-thread, and the basic events reported to the trace function are the same as had been reported to the Python-level trace functions in previous versions.
The type of the trace function registered using PyEval_SetProfile()
and
PyEval_SetTrace()
. The first parameter is the object passed to the
registration function as obj, frame is the frame object to which the event
pertains, what is one of the constants PyTrace_CALL
,
PyTrace_EXCEPTION
, PyTrace_LINE
, PyTrace_RETURN
,
PyTrace_C_CALL
, PyTrace_C_EXCEPTION
, PyTrace_C_RETURN
,
or PyTrace_OPCODE
, and arg depends on the value of what:
Value of what | Meaning of arg
---|---
PyTrace_CALL | Always Py_None.
PyTrace_EXCEPTION | Exception information as returned by sys.exc_info().
PyTrace_LINE | Always Py_None.
PyTrace_RETURN | Value being returned to the caller, or NULL if caused by an exception.
PyTrace_C_CALL | Function object being called.
PyTrace_C_EXCEPTION | Function object being called.
PyTrace_C_RETURN | Function object being called.
PyTrace_OPCODE | Always Py_None.
The value of the what parameter to a Py_tracefunc
function when a new
call to a function or method is being reported, or a new entry into a generator.
Note that the creation of the iterator for a generator function is not reported
as there is no control transfer to the Python bytecode in the corresponding
frame.
The value of the what parameter to a Py_tracefunc
function when an
exception has been raised. The callback function is called with this value for
what after any bytecode is processed that causes the exception to become
set within the frame being executed. The effect of this is that as exception
propagation causes the Python stack to unwind, the callback is called upon
return to each frame as the exception propagates. Only trace functions receive
these events; they are not needed by the profiler.
The value passed as the what parameter to a Py_tracefunc
function
(but not a profiling function) when a line-number event is being reported.
It may be disabled for a frame by setting f_trace_lines
to
0 on that frame.
The value for the what parameter to Py_tracefunc
functions when a
call is about to return.
The value for the what parameter to Py_tracefunc
functions when a C
function is about to be called.
The value for the what parameter to Py_tracefunc
functions when a C
function has raised an exception.
The value for the what parameter to Py_tracefunc
functions when a C
function has returned.
The value for the what parameter to Py_tracefunc
functions (but not
profiling functions) when a new opcode is about to be executed. This event is
not emitted by default: it must be explicitly requested by setting
f_trace_opcodes
to 1 on the frame.
Set the profiler function to func. The obj parameter is passed to the
function as its first parameter, and may be any Python object, or NULL
. If
the profile function needs to maintain state, using a different value for obj
for each thread provides a convenient and thread-safe place to store it. The
profile function is called for all monitored events except PyTrace_LINE,
PyTrace_OPCODE, and PyTrace_EXCEPTION.
See also the sys.setprofile()
function.
The caller must hold the GIL.
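For illustration, a minimal profiling sketch is shown below; count_calls, install_counter and remove_counter are hypothetical names, and the global counters are deliberately simplistic (a real profiler would keep per-thread state reachable from obj):
#include <Python.h>

static Py_ssize_t py_calls = 0;   /* not thread-safe; illustration only */
static Py_ssize_t c_calls = 0;

/* A Py_tracefunc: count Python-level and C-level call events and
   ignore everything else. Returns 0 on success. */
static int
count_calls(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)
{
    (void)obj; (void)frame; (void)arg;   /* unused in this sketch */
    if (what == PyTrace_CALL) {
        py_calls++;
    }
    else if (what == PyTrace_C_CALL) {
        c_calls++;
    }
    return 0;
}

/* Registration must happen with the GIL held; obj may be NULL because
   this profiler keeps no per-thread state. */
static void
install_counter(void)
{
    PyEval_SetProfile(count_calls, NULL);
}

static void
remove_counter(void)
{
    PyEval_SetProfile(NULL, NULL);   /* unset the profile function */
}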
Like PyEval_SetProfile()
but sets the profile function in all running threads
belonging to the current interpreter instead of setting it only on the current thread.
The caller must hold the GIL.
As with PyEval_SetProfile()
, this function ignores any exceptions raised while
setting the profile functions in all threads.
Added in version 3.12.
Set the tracing function to func. This is similar to
PyEval_SetProfile()
, except the tracing function does receive line-number
events and per-opcode events, but does not receive any event related to C function
objects being called. Any trace function registered using PyEval_SetTrace()
will not receive PyTrace_C_CALL
, PyTrace_C_EXCEPTION
or
PyTrace_C_RETURN
as a value for the what parameter.
See also the sys.settrace() function.
The caller must hold the GIL.
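For illustration, a minimal tracing sketch that only reports line events is shown below; trace_lines and install_line_tracer are hypothetical names:
#include <Python.h>

/* A Py_tracefunc for PyEval_SetTrace(): unlike a profiler it receives
   PyTrace_LINE and PyTrace_OPCODE events, but never PyTrace_C_* events. */
static int
trace_lines(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)
{
    (void)obj; (void)arg;   /* unused in this sketch */
    if (what == PyTrace_LINE) {
        int lineno = PyFrame_GetLineNumber(frame);
        PySys_WriteStderr("executing line %d\n", lineno);
    }
    return 0;
}

/* The caller must hold the GIL. */
static void
install_line_tracer(void)
{
    PyEval_SetTrace(trace_lines, NULL);
}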
Like PyEval_SetTrace()
but sets the tracing function in all running threads
belonging to the current interpreter instead of setting it only on the current thread.
The caller must hold the GIL.
As with PyEval_SetTrace()
, this function ignores any exceptions raised while
setting the trace functions in all threads.
Added in version 3.12.
Added in version 3.13.
The type of the trace function registered using PyRefTracer_SetTracer()
.
The first parameter is a Python object that has been just created (when event
is set to PyRefTracer_CREATE
) or about to be destroyed (when event
is set to PyRefTracer_DESTROY
). The data argument is the opaque pointer
that was provided when PyRefTracer_SetTracer()
was called.
Added in version 3.13.
The value for the event parameter to PyRefTracer
functions when a Python
object has been created.
The value for the event parameter to PyRefTracer
functions when a Python
object has been destroyed.
Register a reference tracer function. The function will be called when a new
Python object has been created or when an object is going to be destroyed. If
data is provided it must be an opaque pointer that will be provided when
the tracer function is called. Return 0
on success. Set an exception and
return -1
on error.
Note that tracer functions must not create Python objects inside the callback, otherwise the call will be re-entrant. The tracer also must not clear any existing exception or set an exception. The GIL will be held every time the tracer function is called.
The GIL must be held when calling this function.
Added in version 3.13.
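For illustration, a sketch of a reference tracer that only counts events is shown below; RefStats, ref_tracer and install_ref_tracer are hypothetical names:
#include <Python.h>

/* Caller-owned storage reached through the opaque data pointer. */
typedef struct {
    size_t created;
    size_t destroyed;
} RefStats;

/* Runs with the GIL held; it must not create Python objects and must
   not set or clear exceptions. */
static int
ref_tracer(PyObject *op, PyRefTracerEvent event, void *data)
{
    RefStats *stats = (RefStats *)data;
    (void)op;
    if (event == PyRefTracer_CREATE) {
        stats->created++;
    }
    else if (event == PyRefTracer_DESTROY) {
        stats->destroyed++;
    }
    return 0;
}

static RefStats stats = {0, 0};

/* The GIL must be held when registering the tracer. */
static int
install_ref_tracer(void)
{
    return PyRefTracer_SetTracer(ref_tracer, &stats);
}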
Get the registered reference tracer function and the value of the opaque data
pointer that was registered when PyRefTracer_SetTracer()
was called.
If no tracer was registered this function will return NULL and will set the
data pointer to NULL.
The GIL must be held when calling this function.
Added in version 3.13.
These functions are only intended to be used by advanced debugging tools.
Return the interpreter state object at the head of the list of all such objects.
Return the main interpreter state object.
Return the next interpreter state object after interp from the list of all such objects.
Return the pointer to the first PyThreadState
object in the list of
threads associated with the interpreter interp.
Return the next thread state object after tstate from the list of all such
objects belonging to the same PyInterpreterState
object.
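For illustration, a debugging tool might walk these lists as sketched below; dump_thread_states is a hypothetical name, and the sketch assumes it runs while the target runtime is quiescent (for example, from a debugger):
#include <Python.h>
#include <stdio.h>

static void
dump_thread_states(void)
{
    for (PyInterpreterState *interp = PyInterpreterState_Head();
         interp != NULL;
         interp = PyInterpreterState_Next(interp)) {
        int is_main = (interp == PyInterpreterState_Main());
        fprintf(stderr, "interpreter %p%s\n",
                (void *)interp, is_main ? " (main)" : "");
        for (PyThreadState *tstate = PyInterpreterState_ThreadHead(interp);
             tstate != NULL;
             tstate = PyThreadState_Next(tstate)) {
            fprintf(stderr, "  thread state %p\n", (void *)tstate);
        }
    }
}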
The Python interpreter provides low-level support for thread-local storage
(TLS) which wraps the underlying native TLS implementation to support the
Python-level thread local storage API (threading.local
). The
CPython C level APIs are similar to those offered by pthreads and Windows:
use a thread key and functions to associate a void* value per
thread.
The GIL does not need to be held when calling these functions; they supply their own locking.
Note that Python.h
does not include the declaration of the TLS APIs;
you need to include pythread.h
to use thread-local storage.
Note
None of these API functions handle memory management on behalf of the void* values. You need to allocate and deallocate them yourself. If the void* values happen to be PyObject*, these functions don't do refcount operations on them either.
TSS API is introduced to supersede the use of the existing TLS API within the
CPython interpreter. This API uses a new type Py_tss_t
instead of
int to represent thread keys.
Added in version 3.7.
See also
"A New C-API for Thread-Local Storage in CPython" (PEP 539)
This data structure represents the state of a thread key, the definition of which may depend on the underlying TLS implementation, and it has an internal field representing the key's initialization state. There are no public members in this structure.
When Py_LIMITED_API is not defined, static allocation of
this type by Py_tss_NEEDS_INIT
is allowed.
This macro expands to the initializer for Py_tss_t
variables.
Note that this macro won't be defined with Py_LIMITED_API.
Dynamic allocation of the Py_tss_t
, required in extension modules
built with Py_LIMITED_API, where static allocation of this type
is not possible due to its implementation being opaque at build time.
Return a value which is the same state as a value initialized with
Py_tss_NEEDS_INIT
, or NULL
in the case of dynamic allocation
failure.
Free the given key allocated by PyThread_tss_alloc()
, after
first calling PyThread_tss_delete()
to ensure any associated
thread locals have been unassigned. This is a no-op if the key
argument is NULL
.
Note
A freed key becomes a dangling pointer. You should reset the key to
NULL
.
The parameter key of these functions must not be NULL
. Moreover, the
behaviors of PyThread_tss_set()
and PyThread_tss_get()
are
undefined if the given Py_tss_t
has not been initialized by
PyThread_tss_create()
.
Return a non-zero value if the given Py_tss_t
has been initialized
by PyThread_tss_create()
.
Return a zero value on successful initialization of a TSS key. The behavior
is undefined if the value pointed to by the key argument is not
initialized by Py_tss_NEEDS_INIT
. This function can be called
repeatedly on the same key -- calling it on an already initialized key is a
no-op and immediately returns success.
Destroy a TSS key to forget the values associated with the key across all
threads, and change the key's initialization state to uninitialized. A
destroyed key is able to be initialized again by
PyThread_tss_create()
. This function can be called repeatedly on
the same key -- calling it on an already destroyed key is a no-op.
Return a zero value to indicate successfully associating a void* value with a TSS key in the current thread. Each thread has a distinct mapping of the key to a void* value.
Return the void* value associated with a TSS key in the current
thread. This returns NULL
if no value is associated with the key in the
current thread.
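For illustration, a minimal sketch of per-thread storage with the TSS API is shown below; request_key, store_request_id and load_request_id are hypothetical names, and freeing the heap-allocated values (for example at thread exit) remains the caller's responsibility:
#include <Python.h>
#include <pythread.h>   /* the TSS declarations are not in Python.h */
#include <stdlib.h>

/* Statically allocated key; fine when Py_LIMITED_API is not defined. */
static Py_tss_t request_key = Py_tss_NEEDS_INIT;

static int
store_request_id(long id)
{
    /* Calling PyThread_tss_create() on an already initialized key is a
       no-op that returns success. */
    if (PyThread_tss_create(&request_key) != 0) {
        return -1;
    }
    long *slot = PyThread_tss_get(&request_key);
    if (slot == NULL) {
        slot = malloc(sizeof(long));
        if (slot == NULL || PyThread_tss_set(&request_key, slot) != 0) {
            free(slot);
            return -1;
        }
    }
    *slot = id;          /* visible only to the current thread */
    return 0;
}

static long
load_request_id(void)
{
    long *slot = PyThread_tss_get(&request_key);
    return (slot != NULL) ? *slot : -1;
}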
Deprecated since version 3.7: This API is superseded by the Thread Specific Storage (TSS) API.
Note
This version of the API does not support platforms where the native TLS key
is defined in a way that cannot be safely cast to int
. On such platforms,
PyThread_create_key()
will return immediately with a failure status,
and the other TLS functions will all be no-ops on such platforms.
Due to the compatibility problem noted above, this version of the API should not be used in new code.
The C-API provides a basic mutual exclusion lock.
A mutual exclusion lock. The PyMutex
should be initialized to
zero to represent the unlocked state. For example:
PyMutex mutex = {0};
Instances of PyMutex
should not be copied or moved. Both the
contents and address of a PyMutex
are meaningful, and it must
remain at a fixed, writable location in memory.
Note
A PyMutex
currently occupies one byte, but the size should be
considered unstable. The size may change in future Python releases
without a deprecation period.
Added in version 3.13.
Lock mutex m. If another thread has already locked it, the calling thread will block until the mutex is unlocked. While blocked, the thread will temporarily release the GIL if it is held.
Added in version 3.13.
Unlock mutex m. The mutex must be locked -- otherwise, the function will issue a fatal error.
Added in version 3.13.
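For illustration, a sketch guarding a shared counter with a PyMutex is shown below; counter_mutex, counter and increment_counter are hypothetical names:
#include <Python.h>

static PyMutex counter_mutex = {0};   /* zero-initialized: unlocked */
static long counter = 0;

static long
increment_counter(void)
{
    long value;
    PyMutex_Lock(&counter_mutex);     /* may release the GIL while blocked */
    value = ++counter;
    PyMutex_Unlock(&counter_mutex);   /* fatal error if the mutex is unlocked */
    return value;
}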
The critical section API provides a deadlock avoidance layer on top of per-object locks for free-threaded CPython. They are intended to replace reliance on the global interpreter lock, and are no-ops in versions of Python with the global interpreter lock.
Critical sections avoid deadlocks by implicitly suspending active critical
sections and releasing the locks during calls to PyEval_SaveThread()
.
When PyEval_RestoreThread()
is called, the most recent critical section
is resumed, and its locks reacquired. This means the critical section API
provides weaker guarantees than traditional locks -- they are useful because
their behavior is similar to the GIL.
The functions and structs used by the macros are exposed for cases where C macros are not available. They should only be used as in the given macro expansions. Note that the sizes and contents of the structures may change in future Python versions.
Note
Operations that need to lock two objects at once must use
Py_BEGIN_CRITICAL_SECTION2
. You cannot use nested critical
sections to lock more than one object at once, because the inner critical
section may suspend the outer critical sections. This API does not provide
a way to lock more than two objects at once.
Example usage:
static PyObject *
set_field(MyObject *self, PyObject *value)
{
Py_BEGIN_CRITICAL_SECTION(self);
Py_SETREF(self->field, Py_XNewRef(value));
Py_END_CRITICAL_SECTION();
Py_RETURN_NONE;
}
In the above example, Py_SETREF
calls Py_DECREF
, which
can call arbitrary code through an object's deallocation function. The critical
section API avoids potential deadlocks due to reentrancy and lock ordering
by allowing the runtime to temporarily suspend the critical section if the
code triggered by the finalizer blocks and calls PyEval_SaveThread()
.
Acquires the per-object lock for the object op and begins a critical section.
In the free-threaded build, this macro expands to:
{
PyCriticalSection _py_cs;
PyCriticalSection_Begin(&_py_cs, (PyObject*)(op))
In the default build, this macro expands to {
.
Added in version 3.13.
Ends the critical section and releases the per-object lock.
In the free-threaded build, this macro expands to:
PyCriticalSection_End(&_py_cs);
}
In the default build, this macro expands to }
.
Added in version 3.13.
Acquires the per-object locks for the objects a and b and begins a critical section. The locks are acquired in a consistent order (lowest address first) to avoid lock ordering deadlocks.
In the free-threaded build, this macro expands to:
{
PyCriticalSection2 _py_cs2;
PyCriticalSection2_Begin(&_py_cs2, (PyObject*)(a), (PyObject*)(b))
In the default build, this macro expands to {
.
Added in version 3.13.
Ends the critical section and releases the per-object locks.
In the free-threaded build, this macro expands to:
PyCriticalSection2_End(&_py_cs2);
}
In the default build, this macro expands to }
.
Added in version 3.13.
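For illustration, a sketch using the two-object form to copy a field between two objects is shown below, reusing the hypothetical MyObject type and field member from the earlier example:
static PyObject *
copy_field(MyObject *dst, MyObject *src)
{
    /* Both per-object locks are acquired in a consistent order, so this
       cannot deadlock with another thread copying in the other direction. */
    Py_BEGIN_CRITICAL_SECTION2(dst, src);
    Py_SETREF(dst->field, Py_XNewRef(src->field));
    Py_END_CRITICAL_SECTION2();
    Py_RETURN_NONE;
}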