
CVE-2022-49700

High

Published: 26 February 2025

Modified: 25 March 2025
CVSS Score: 7.8 (CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H)
EPSS Score: 0.0001 (2.8th percentile)
Risk Priority: 16 (60% EPSS · 20% KEV · 20% CVSS)

Description

In the Linux kernel, the following vulnerability has been resolved:

mm/slub: add missing TID updates on slab deactivation

The fastpath in slab_alloc_node() assumes that c->slab is stable as long as the TID stays the same. However, two places in __slab_alloc() currently don't update the TID when deactivating the CPU slab.

If multiple operations race the right way, this could lead to an object getting lost; or, in an even more unlikely situation, it could even lead to an object being freed onto the wrong slab's freelist, messing up the `inuse` counter and eventually causing a page to be freed to the page allocator while it still contains slab objects. (I haven't actually tested these cases though, this is just based on looking at the code. Writing testcases for this stuff seems like it'd be a pain...)

The race leading to state inconsistency is (all operations on the same CPU and kmem_cache):

- task A: begin do_slab_free():
  - read TID
  - read pcpu freelist (==NULL)
  - check `slab == c->slab` (true)
- [PREEMPT A->B]
- task B: begin slab_alloc_node():
  - fastpath fails (`c->freelist` is NULL)
  - enter __slab_alloc()
  - slub_get_cpu_ptr() (disables preemption)
  - enter ___slab_alloc()
  - take local_lock_irqsave()
  - read c->freelist as NULL
  - get_freelist() returns NULL
  - write `c->slab = NULL`
  - drop local_unlock_irqrestore()
  - goto new_slab
  - slub_percpu_partial() is NULL
  - get_partial() returns NULL
  - slub_put_cpu_ptr() (enables preemption)
- [PREEMPT B->A]
- task A: finish do_slab_free():
  - this_cpu_cmpxchg_double() succeeds
  - [CORRUPT STATE: c->slab==NULL, c->freelist!=NULL]

From there, the object on c->freelist will get lost if task B is allowed to continue from here: it will proceed to the retry_load_slab label, set c->slab, then jump to load_freelist, which clobbers c->freelist.
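The interleaving above can be sketched with a minimal Python model. All names here (CpuSlab, fast_free, race) are hypothetical simplifications; the real kernel commits the free with this_cpu_cmpxchg_double() on the freelist/TID pair, which only succeeds if the TID still matches the snapshot taken before preemption:

```python
# Toy model of SLUB's TID-guarded fastpath (illustrative, not kernel code).

class CpuSlab:
    def __init__(self):
        self.tid = 0          # per-CPU transaction id (sequence counter)
        self.slab = "slab-1"  # current per-CPU slab
        self.freelist = None  # per-CPU freelist head

    def deactivate(self, bump_tid):
        # Deactivate the CPU slab; the bug was skipping the TID bump here.
        self.slab = None
        self.freelist = None
        if bump_tid:
            self.tid += 1

def fast_free(c, tid_snapshot, obj):
    # Models this_cpu_cmpxchg_double(): commit the freed object onto the
    # per-CPU freelist only if the TID is unchanged since the snapshot.
    if c.tid == tid_snapshot:
        c.freelist = obj
        c.tid += 1
        return True
    return False

def race(bump_tid_on_deactivate):
    c = CpuSlab()
    tid_snapshot = c.tid                      # task A: read TID, freelist empty
    c.deactivate(bump_tid_on_deactivate)      # task B: deactivates the CPU slab
    committed = fast_free(c, tid_snapshot, "obj")  # task A: resumes the free
    # Corrupt state: no CPU slab, yet the freelist holds an object.
    return committed and c.slab is None and c.freelist is not None

print(race(bump_tid_on_deactivate=False))  # True  -> corrupt state reachable
print(race(bump_tid_on_deactivate=True))   # False -> stale cmpxchg rejected
```

With the missing TID bump added (the fix), the stale commit fails and task A retries, never observing the inconsistent state.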
But if we instead continue as follows, we get worse corruption:

- task A: run __slab_free() on object from other struct slab:
  - CPU_PARTIAL_FREE case (slab was on no list, is now on pcpu partial)
- task A: run slab_alloc_node() with NUMA node constraint:
  - fastpath fails (c->slab is NULL)
  - call __slab_alloc()
  - slub_get_cpu_ptr() (disables preemption)
  - enter ___slab_alloc()
  - c->slab is NULL: goto new_slab
  - slub_percpu_partial() is non-NULL
  - set c->slab to slub_percpu_partial(c)
  - [CORRUPT STATE: c->slab points to slab-1, c->freelist has objects from slab-2]
  - goto redo
  - node_match() fails
  - goto deactivate_slab
  - existing c->freelist is passed into deactivate_slab()
  - inuse count of slab-1 is decremented to account for object from slab-2

At this point, the inuse count of slab-1 is 1 lower than it should be. This means that if we free all allocated objects in slab-1 except for one, SLUB will think that slab-1 is completely unused, and may free its page, leading to use-after-free.
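The mismatched accounting in this second interleaving can be illustrated with a small sketch (Slab, record_free, and premature_page_free are hypothetical names; this is a simplification of SLUB's per-slab accounting, not kernel code):

```python
# Toy model of the inuse-count corruption described above.

class Slab:
    def __init__(self, name, live_objects):
        self.name = name
        self.inuse = live_objects  # objects SLUB believes are allocated
        self.page_freed = False

    def record_free(self):
        # When inuse drops to zero, SLUB may return the backing page
        # to the page allocator.
        self.inuse -= 1
        if self.inuse == 0:
            self.page_freed = True

def premature_page_free():
    slab1 = Slab("slab-1", live_objects=3)
    # Buggy deactivation: an object belonging to slab-2 is credited
    # against slab-1, so its count is now one lower than reality.
    slab1.record_free()
    # Free two of slab-1's three real objects...
    slab1.record_free()
    slab1.record_free()
    # ...and the page is released while one live object remains.
    return slab1.page_freed

print(premature_page_free())  # True
```

The returned True models the use-after-free window: the page is handed back to the page allocator while one of slab-1's objects is still in use.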

Security Summary

CVE-2022-49700 is a race condition vulnerability in the Linux kernel's SLUB slab allocator, specifically in the mm/slub subsystem. The issue arises because the fastpath in slab_alloc_node() assumes the per-CPU slab (c->slab) remains stable as long as the TID (a per-CPU sequence counter) is unchanged. However, two locations in __slab_alloc() fail to update the TID when deactivating the CPU slab, allowing races that can result in an object being lost or, in rarer cases, freed onto the wrong slab's freelist. This corrupts the inuse counter, potentially causing a slab page to be prematurely freed to the page allocator while still containing live objects, leading to a use-after-free condition (CWE-416).

A local attacker with low privileges (PR:L) can exploit this vulnerability with low attack complexity (AC:L) and no user interaction (UI:N), as reflected in its CVSS v3.1 score of 7.8 (AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H). Exploitation requires carefully timed races between slab_free and slab_alloc operations on the same CPU and kmem_cache: for example, task A being preempted during do_slab_free() after reading an empty freelist, task B then deactivating the slab (setting c->slab to NULL), and task A finally completing the free against a now-inconsistent per-CPU structure. This can lead to lost objects, freelist corruption across slabs, or decremented inuse counts on the wrong slab, enabling a use-after-free when the affected page is reclaimed.

Mitigation involves applying kernel patches that add the missing TID updates during slab deactivation. Relevant stable backports are available at the following commit URLs: https://git.kernel.org/stable/c/0515cc9b6b24877f59b222ade704bfaa42caa2a6, https://git.kernel.org/stable/c/197e257da473c725dfe47759c3ee02f2398d8ea5, https://git.kernel.org/stable/c/308c6d0e1f200fd26c71270c6e6bfcf0fc6ff082, https://git.kernel.org/stable/c/6c32496964da0dc230cea763a0e934b2e02dabd5, and https://git.kernel.org/stable/c/d6a597450e686d4c6388bd3cdcb17224b4dae7f0. Security practitioners should ensure systems run patched kernels to prevent exploitation.

Details

CWE(s)
CWE-416

Affected Products

linux
linux kernel
5.19 · 3.1 — 4.9.323 · 4.10 — 4.14.288 · 4.15 — 4.19.252

References