gh-115999: Specialize LOAD_ATTR for instance and class receivers in free-threaded builds #128164

Open · mpage wants to merge 47 commits into main

@mpage (Contributor) commented on Dec 21, 2024

This PR finishes specialization for LOAD_ATTR in the free-threaded build by adding support for class and instance receivers.

The bulk of it is dedicated to making the specialized instructions and the specialization logic thread-safe. This consists of using atomics / locks in the appropriate places, avoiding races in specialization related to reloading versions, and ensuring that the objects stored in inline caches remain valid when accessed (by only storing deferred objects). See the section on "Thread Safety" below for more details.

Additionally, making this work required a few unrelated changes to fix existing bugs or to work around differences between the two builds that result from only storing deferred values (which causes specialization failures in the free-threaded build when a value that would be stored in the cache is not deferred):

  • Fixed a bug in the cases generator where it wasn't treating macro tokens as terminators when searching for the assignment target of an expression involving PyStackRef_FromPyObjectNew.
  • Refactored test_descr.MiscTests.test_type_lookup_mro_reference to work when specialization fails (and also to be a behavioral test).
  • Temporarily skip test_capi.test_type.TypeTests.test_freeze_meta when running refleaks tests on free-threaded builds. Specialization failure triggers an existing bug.

Single-threaded Performance

  • Performance is improved by ~12% on free-threaded builds.
  • Performance is ~neutral on default builds. (Technically the benchmark runner reports a 0.2% improvement, but that looks like noise to me.)

We're leaving a bit of perf on the table by only storing deferred objects: we can't specialize attribute lookups that resolve to class attributes (e.g. counters, settings). I haven't measured how much perf we're giving up, but I'd like to address that in a separate PR.

Scalability

The object_cfunction and pymethod benchmarks are improved (object_cfunction: 1.4x slower -> 14.3x faster; pymethod: 1.8x slower -> 13.4x faster). Other benchmarks appear unchanged.

I would expect cmodule_function to improve as well, but it looks like the benchmark is bottlenecked on increfing the int.__floor__ method that is returned from the call to _PyObject_LookupSpecial in math_floor (the incref happens in _PyType_LookupRef, which is called by _PyObject_LookupSpecial):

PyObject *method = _PyObject_LookupSpecial(number, state->str___floor__);
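For illustration, the shape of that hot path can be approximated from Python: several threads repeatedly calling math.floor on the same int, so every call funnels through the same int.__floor__ lookup. This is a hypothetical reduced repro, not the actual benchmark harness:

```python
import math
import threading

def worker(n, out, i):
    # Each math.floor(x) call looks up x's __floor__ via
    # _PyObject_LookupSpecial, which increfs the method returned by
    # _PyType_LookupRef; with many threads that shared incref is the
    # contention point described above.
    acc = 0
    for _ in range(n):
        acc += math.floor(7)
    out[i] = acc

threads, results = [], [0] * 4
for i in range(4):
    t = threading.Thread(target=worker, args=(10_000, results, i))
    threads.append(t)
    t.start()
for t in threads:
    t.join()
print(results)  # each worker accumulates 7 * 10_000 = 70000
```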

Raw numbers:

Running benchmarks with 28 threads
                   Merge-base    This PR
object_cfunction    1.4x slower  14.3x faster
cmodule_function    1.7x slower   1.7x slower
mult_constant      12.8x faster  13.6x faster
generator          11.5x faster  12.2x faster
pymethod            1.8x slower  13.4x faster
pyfunction         12.2x faster  13.0x faster
module_function    14.5x faster  14.8x faster
load_string_const  13.0x faster  13.2x faster
load_tuple_const   12.7x faster  14.1x faster
create_pyobject    12.8x faster  13.5x faster
create_closure     13.9x faster  14.8x faster
create_dict        10.5x faster  13.3x faster
thread_local_read   3.9x slower   4.0x slower

Thread Safety

Thread safety of specialized instructions is addressed in a few different ways:

  • Use atomics where needed.
  • Lock the instance dictionary in _LOAD_ATTR_WITH_HINT.
  • Only store deferred objects in the inline cache. All uops that retrieve objects from the cache are preceded by type version guards, which ensure that the (deferred) value retrieved from the cache is valid. The type version will be mutated before destroying the reference in the type. If the type held the last reference, then GC will need to run in order to reclaim the object. GC requires stopping the world, so if GC reclaims the object then we will see the new type version and the guard will fail.
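The guard protocol in the last bullet can be modeled in pure Python as a seqlock-style check. This is a toy model with invented names; the real code uses C atomics on the type's version tag, not Python-level objects:

```python
import threading

class VersionedType:
    """Toy stand-in for a type with a version tag (hypothetical names)."""
    def __init__(self, attr):
        self.version = 1
        self.attr = attr
        self._lock = threading.Lock()

    def mutate(self, new_attr):
        with self._lock:
            # Bump the version BEFORE dropping the old reference, mirroring
            # "the type version will be mutated before destroying the
            # reference in the type".
            self.version += 1
            self.attr = new_attr

def guarded_load(tp, cached_version, cached_attr):
    # Specialized-instruction shape: the version guard runs first; only if
    # it passes is the cached (deferred) object known to still be valid.
    if tp.version != cached_version:
        return None  # guard failed -> deopt to the generic path
    return cached_attr

tp = VersionedType("descr")
v, a = tp.version, tp.attr          # what the specializer put in the cache
assert guarded_load(tp, v, a) == "descr"
tp.mutate("other")
assert guarded_load(tp, v, a) is None   # stale cache is rejected
```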

Thread safety of specialization is addressed using similar techniques:

  • Atomics / locking is used to access shared state.
  • Specialization code is refactored to read versions at the start of specialization and store the same version in caches (as opposed to rereading the version). This ensures that the specialization is consistent with the version by avoiding races where the type or shared keys dictionary changes after we've queried it.
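The refactor in the second bullet can be sketched as follows (a toy specializer with invented helper names): the version is read once at the start, the lookup result is validated against that same value, and that captured value, never a re-read, is what gets written into the cache:

```python
def specialize_load_attr(tp, name, cache):
    """Toy specializer (hypothetical names): capture the version up front
    and store that same version, instead of re-reading it at write time."""
    version = tp.version            # read ONCE, at the start
    descr = tp.lookup(name)         # may race with a concurrent mutation
    if tp.version != version:
        return False                # type changed mid-specialization: bail
    cache["version"] = version      # store the captured value, not a re-read
    cache["descr"] = descr
    return True

class ToyType:
    def __init__(self):
        self.version = 1
        self._attrs = {"x": "descr-x"}
    def lookup(self, name):
        return self._attrs.get(name)

tp, cache = ToyType(), {}
assert specialize_load_attr(tp, "x", cache)
assert cache == {"version": 1, "descr": "descr-x"}
```

Re-reading the version at write time would admit the race the bullet describes: the type could change between the lookup and the write, leaving a cache whose version vouches for a stale descriptor.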

Stats

Following the instructions in the comment preceding specialize_attr_loadclassattr, I collected stats for both this PR and its merge base using ./python -m test test_typing test_re test_dis test_zlib and compared them using Tools/scripts/summarize_stats.py. The results for LOAD_ATTR are nearly identical and are consistent with results from comparing the merge base against itself.
