A too trusty TrustZone and a few Linux Kernel bugs
Gerbv uses a fixed-size array to store
gerbv_aperture_t structures, but the array is indexed by an unrestricted integer, providing an out-of-bounds read and write. The index is the attacker-controlled value
tool_num; while that value is checked against the MIN and MAX bounds for the array, an out-of-bounds value only results in an error message, and processing continues.
Parsing later performs an out-of-bounds read at the attacker-controlled index, and if the value read there is null, a user-tainted value (
size) will be written to the
->parameter field of the structure, giving an out-of-bounds write. The value written is influenced by the attacker, though it is a floating-point variable that may be divided by
1000 depending on the unit being used, which is also user-controlled.
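The pattern described above can be sketched as follows. This is a hypothetical re-creation, not Gerbv's actual source: the names, bounds, and structure layout are invented for illustration.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical sketch of the bug pattern; names and bounds are invented. */
#define APERTURE_MIN 10
#define APERTURE_MAX 999

typedef struct {
    double parameter[5];
} aperture_t;

static aperture_t apertures[APERTURE_MAX]; /* fixed-size array */

/* Buggy shape: the range check only emits a warning, then the
 * out-of-range index is used anyway for the read and write. */
static void process_tool_buggy(int tool_num, double size) {
    if (tool_num < APERTURE_MIN || tool_num >= APERTURE_MAX)
        fprintf(stderr, "warning: tool %d out of bounds\n", tool_num);
    /* OOB read/write: tool_num is used regardless of the check above. */
    apertures[tool_num].parameter[0] = size;
}

/* Fixed shape: reject the index instead of merely warning about it. */
static int process_tool_fixed(int tool_num, double size) {
    if (tool_num < APERTURE_MIN || tool_num >= APERTURE_MAX)
        return -1; /* refuse out-of-range tool numbers */
    apertures[tool_num].parameter[0] = size;
    return 0;
}
```

The core mistake is that the bounds check and the indexing are decoupled: the check changes what gets logged, not what gets executed.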
The authors don’t dive too deeply into exploitation, but propose that an attacker could target the
drill_stats linked list in the heap, overwriting the
next field to inject forged nodes that will later be freed, providing an arbitrary-free primitive.
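The proposed primitive comes from an ordinary list-teardown loop. The sketch below is a hypothetical illustration (the node layout and names are invented, not Gerbv's): if a heap overflow lets an attacker overwrite a node's next pointer, the loop hands the forged pointer straight to free().

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical illustration of the arbitrary-free primitive; the node
 * layout and names are invented, not Gerbv's actual drill_stats type. */
struct stat_node {
    struct stat_node *next;
    int value;
};

/* Typical teardown loop: every `next` pointer it walks is trusted.
 * If an overflow smashed a node's `next`, the attacker's pointer is
 * passed directly to free(), yielding an arbitrary free. */
static int free_list(struct stat_node *head) {
    int freed = 0;
    while (head) {
        struct stat_node *next = head->next; /* attacker-controlled if smashed */
        free(head);                          /* arbitrary free if head is forged */
        head = next;
        freed++;
    }
    return freed;
}
```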
Multiple vulnerabilities in
tzdemuxerservice, a Trusted Application used by Samsung Smart TVs; five of the six issues share the same root cause. When a “normal world” application calls into the trusted execution environment (TEE), parameters can be passed either by value or by reference. In several locations the parameter type was not checked, so a buffer could point into TEE memory instead of normal-world memory, leading to various memory writes inside the TEE. That is a powerful primitive and may lead to code execution.
This vulnerability appeared in five of the TA’s command handlers.
To patch these issues, Samsung started enforcing that the parameter types were
TEE_PARAM_TYPE_MEMREF_INOUT, which prevents the buffers from pointing into TEE memory.
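The fix can be sketched with the GlobalPlatform parameter-type macros. The macro definitions below follow the TEE Internal Core API; the surrounding handler check is a hypothetical sketch, not Samsung's code. The point is that a handler must verify each parameter slot really is a memory reference (which the TEE OS validates against normal-world memory) before treating it as a pointer.

```c
#include <assert.h>
#include <stdint.h>

/* Parameter-type constants and macros per the GlobalPlatform TEE
 * Internal Core API; the check_param_types() sketch is hypothetical. */
#define TEE_PARAM_TYPE_NONE          0
#define TEE_PARAM_TYPE_VALUE_INPUT   1
#define TEE_PARAM_TYPE_MEMREF_INOUT  7
#define TEE_PARAM_TYPES(t0, t1, t2, t3) \
    ((t0) | ((t1) << 4) | ((t2) << 8) | ((t3) << 12))
#define TEE_PARAM_TYPE_GET(t, i) (((t) >> ((i) * 4)) & 0xF)

/* The fix: refuse to run the command unless every parameter slot has the
 * expected type, so a VALUE parameter (a raw attacker-chosen integer) can
 * never be misinterpreted as a validated memory reference. */
static int check_param_types(uint32_t param_types) {
    uint32_t expected = TEE_PARAM_TYPES(TEE_PARAM_TYPE_MEMREF_INOUT,
                                        TEE_PARAM_TYPE_NONE,
                                        TEE_PARAM_TYPE_NONE,
                                        TEE_PARAM_TYPE_NONE);
    return param_types == expected ? 0 : -1;
}
```

The original bugs skipped this check, so a by-value parameter, whose raw integer the attacker fully controls, could be read as a buffer pointer into TEE memory.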
The final vulnerability was that several locations did not check the return value of
malloc. Should an allocation fail, this could lead to a null-pointer dereference, which may be exploitable if an attacker can gain control of the memory at address zero.
A relatively trivial heap overflow in the Transparent Inter-Process Communication (TIPC) module of the Linux kernel. The
crypto_key_rcv function in the driver takes a received packet and parses it for key data. The packet contains a name, a key length, and then an auxiliary data buffer containing the key itself. The problem is that
keylen isn’t validated against the overall message size until after
keylen has been used to memcpy the key into its newly allocated buffer.
This bug comes with some blessings when it comes to exploitation. Because the attacker has influence over the size of the allocation via the message size, they can choose which kmalloc cache the overflow occurs in. The attacker also controls the data that gets written out of bounds: while the allocation uses the given message size, that size is only validated to be within the bounds of the received packet. By sending a large packet with a smaller message size, the data that will be written out of bounds can be staged in the packet.
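The validate-after-copy shape can be sketched as follows. Field and function names here are invented for illustration, not the kernel's; in the real bug, the memcpy happened before the bound check shown below.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the key-parsing pattern; names are invented. */
struct key_msg {
    uint32_t keylen;      /* attacker-controlled length field */
    uint8_t  keydata[64]; /* key bytes follow in the real packet */
};

/* Fixed shape: bound keylen against the bytes actually present BEFORE
 * allocating and copying.  The vulnerable code did the memcpy first and
 * only validated keylen against the message size afterwards. */
static uint8_t *parse_key_fixed(const struct key_msg *msg, size_t msg_size) {
    if (msg_size < sizeof(uint32_t))
        return NULL;
    size_t avail = msg_size - sizeof(uint32_t);
    if (msg->keylen > avail)   /* validate before copying */
        return NULL;
    uint8_t *key = malloc(msg->keylen ? msg->keylen : 1);
    if (!key)
        return NULL;
    memcpy(key, msg->keydata, msg->keylen);
    return key;
}
```

With the check moved after the memcpy, an oversized keylen would copy attacker data past the end of the allocation, which is exactly the overflow described above.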
Heap overflow in the AMD GPU driver’s debugfs write handler for DisplayPort test patterns. The driver allocates a 100-byte write buffer to copy data into, but uses the debugfs handler’s size parameter for the actual copy. This size is never checked against the 100-byte buffer, making it possible to overflow in the kmalloc-128 cache. The bug was powerful enough to build a full chain that bypasses modern mitigations by exploiting it twice. It was abused first for an infoleak, triggering an out-of-bounds read by smashing a groomed
msg_msg object, and again to corrupt the freelist and obtain an arbitrary write. The infoleak was used to leak the address of
modprobe_path, and the arbitrary write was used to redirect the path to an attacker-controlled script.
It’s worth noting this bug depends on the GPU driver’s debugfs being accessible to the attacker, which typically requires elevated privileges, as debugfs is root-only by default.
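The write-handler mistake can be sketched as follows. This is a hypothetical shape, not the actual amdgpu code; the buffer size matches the 100 bytes described above, and memcpy stands in for copy_from_user.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the debugfs write-handler pattern; names and the
 * exact fix are invented, not the real amdgpu code. */
#define WR_BUF_SIZE 100  /* a 100-byte allocation lands in kmalloc-128 */

/* Buggy shape: allocate WR_BUF_SIZE bytes, then copy `size` user bytes
 * into it with `size` unchecked.  Fixed shape: bound the copy first. */
static long dp_pattern_write_fixed(char dst[WR_BUF_SIZE],
                                   const char *user_buf, size_t size) {
    if (size >= WR_BUF_SIZE)       /* reject writes that would overflow */
        return -1;
    memcpy(dst, user_buf, size);   /* stands in for copy_from_user() */
    dst[size] = '\0';
    return (long)size;
}
```

Because `size` comes straight from the userspace write() call, the unchecked version gives the attacker both the overflow length and its contents, which is what made the two-stage msg_msg/freelist exploit possible.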
The paper details Rudra, a tool that analyzes unsafe Rust code and reports potential memory-safety bugs. While the tool itself is interesting and did uncover several issues, the more interesting part of the paper is the discussion of the types of issues Rudra looks for, and thus the types of bugs one might find in Rust code. It looks for three types of issues:
Panic Safety - In Rust,
panic is used as a signal that the program has entered an unrecoverable state. Upon
panic, Rust will unwind the call stack and release all the resources held by the thread on the way up. This helps ensure resources don’t leak when a panic happens; the problem is how that interacts with
unsafe code. Unsafe code will often temporarily violate Rust’s guarantees, like creating an object that bypasses Rust’s ownership system by extending the object’s lifetime, or creating uninitialized variables, and then fix up the inconsistency later in the code. If a
panic happens between the violation and the fix-up, the destructors of in-scope variables will still run during unwinding. Destructors that don’t account for the object being in a problematic state may lead to vulnerable situations such as double frees or use of uninitialized memory.
Higher-Order Safety Invariant - In Rust, a safe function cannot invoke any undefined behavior; it can only act within the safety invariants the compiler enforces. Unsafe code, however, must check its assumptions directly. The authors of the paper point to three kinds of incorrect assumptions that are often made by developers:
- Logical Consistency - Assuming the inputs will always respect certain logical constraints, such as the property of total order.
- Purity - Assuming the same input will always result in the same output
- Semantic Restrictions
If unsafe code makes any of these assumptions, it may introduce a vulnerability.
Propagating Send/Sync in Generic Types - The
Send and
Sync traits govern Rust’s thread safety:
Send indicates that a type can be sent to other threads, and
Sync that a type can be referenced concurrently by multiple threads. If a type only contains fields that have a trait, it will inherit that trait automatically; otherwise developers need to manually implement
Send and/or
Sync. It’s important to note that these implementations must be safe for all uses of the type, so when there is a manual implementation it must be revisited whenever new APIs are added to the type. A developer unaware of the manual implementation may forget to do so.
Another Send/Sync situation arises when generic types are involved. There might be a clear conditional bound, such as: if the inner type has the traits, then the generic type has them too. With
Vec<T>, for example, if the inner type
T has the traits, then the
Vec will as well. But the correct rules can become rather unintuitive and may not be properly captured by the bound the developer writes.
For example, the official futures library’s
MappedMutexGuard<_, T, U> had its
Send/Sync bounds tied only to the type
T and not to
U, leading to a thread-safety issue.