Timing attacks

What is HertzBleed?

HertzBleed is an overclocking attack, a type of timing attack. A demo of HertzBleed has extracted secret keys from the official software for SIKE running on various Intel and AMD CPUs.

What is SIKE?

The NIST Post-Quantum Cryptography Standardization Project is considering several "finalists" for possible standardization in 2022–2023, and is considering several "alternates" for possible standardization in 2023–2024. SIKE is a proposed encryption system, one of the alternates.

I don't use SIKE. Should I be worried?

Yes. The demo was for SIKE, but overclocking attacks are much more broadly applicable to any software handling secrets on these CPUs. Some secrets might be difficult to extract, but the best bet is that followup demos will extract many more secrets. Overclocking attacks are a real threat to security, even bigger than most HertzBleed reports indicate.

I'm a user. Should I do something right now?

Yes. It is normal, although not universal, for computer manufacturers to provide configuration options that let you take action right now. What's most obviously important is to disable overclocking, but for safety you should also disable underclocking.

If some of your devices do not have obvious ways to disable overclocking, you should try asking the operating-system distributor whether there is a way to disable overclocking, and you should avoid using those devices for any data that you care about.
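
For example, on Linux machines exposing the usual cpufreq interfaces, a Python sketch along the following lines (run as root) disables turbo boost and pins the minimum frequency. The exact sysfs paths depend on the CPU driver and kernel version, so treat this as a sketch rather than a recipe:

    import glob, pathlib

    def write(path, value):
        p = pathlib.Path(path)
        if p.exists():
            p.write_text(value)
            print("wrote", value, "to", p)

    # Disable turbo boost; which of these files exists depends on the driver
    # (intel_pstate vs. acpi-cpufreq / amd-pstate).
    write("/sys/devices/system/cpu/intel_pstate/no_turbo", "1")
    write("/sys/devices/system/cpu/cpufreq/boost", "0")

    # Pin each core's minimum frequency to its base (non-turbo) frequency so
    # the core does not underclock under load. base_frequency is only exposed
    # by some drivers; fall back to the current scaling_max_freq otherwise.
    for cpu in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq"):
        base = pathlib.Path(cpu, "base_frequency")
        fallback = pathlib.Path(cpu, "scaling_max_freq")
        target = (base if base.exists() else fallback).read_text().strip()
        write(pathlib.Path(cpu, "scaling_min_freq"), target)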

I'm an operating-system distributor. Should I do something right now?

Yes. By default, you should treat data from all physical monitors, including the power monitors and temperature monitors inside the CPU, as secret, and avoid copying that data to anywhere else. You should scan for OS scripts that check physical monitors, and disable those scripts by default. CPU frequencies are public, so by default you should not put the CPU into a mode where it chooses frequencies based on power consumption. In particular, you should disable overclocking by default. If it is not clearly documented that underclocking is sensor-independent then you should disable underclocking by default. If the CPU is underclocking because it reaches thermal limits then you should set it to minimum clock frequency and advise the user to fix the broken hardware (most commonly a broken fan).
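
As one concrete starting point, here is a sketch that scans the common Linux sensor interfaces (powercap/RAPL and hwmon) for power, energy, and temperature files readable by unprivileged users. Your systems may expose sensors through other interfaces as well:

    import glob, os, stat

    candidates = (glob.glob("/sys/class/powercap/*/energy_uj")
                  + glob.glob("/sys/class/hwmon/hwmon*/power*_input")
                  + glob.glob("/sys/class/hwmon/hwmon*/temp*_input"))

    for path in candidates:
        if os.stat(path).st_mode & stat.S_IROTH:
            print("world-readable sensor:", path)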

Is overclocking a bug? Is it a feature?

Overclocking is a short-term tradeoff. Descriptions of it as a bug or a feature are oversimplified.

The main function of overclocking is to produce a speedup, often around 2x for unoptimized software on current CPUs. The reason to label this as a "tradeoff" rather than an "optimization" is that there are many years of evidence that overclocking makes CPUs more likely to fail prematurely. The reason to label this as "short-term" is that it has very little impact on modern vectorized multithreaded software. For example, an overclocking proponent in 2022 advertised the "Max Turbo Frequency" 4.20GHz of the Intel Core i5-1135G7, which has base frequency 2.40GHz, but then measured the x265 video encoder as gaining only 1% from Turbo Boost on that CPU.

This tradeoff is very far from being a clear win, even without security considerations. Configuration options to disable overclocking are offered by Intel, AMD, and most (although not all) computer manufacturers and operating-system distributors using Intel/AMD CPUs. CPUs sold to the server market set lower limits for overclocking, presumably to avoid the risk that big cluster operators in a year or two will issue reports highlighting the number of malfunctioning CPUs and dead CPUs.

Why is vectorized multithreaded software the "modern" option if it limits the overclocking speedup?

Vectorization and multithreading typically improve response time by a factor 10x or more. The improvement increases as the number of CPU cores increases.
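
Here is a toy illustration, not a rigorous benchmark: it compares a plain Python loop with a single vectorized NumPy call. This shows the vectorization part of the speedup; multithreaded libraries give further speedups on multicore machines, and the exact numbers depend on the CPU and the libraries installed:

    import time
    import numpy as np

    x = np.random.rand(10_000_000)

    t0 = time.perf_counter()
    total = 0.0
    for v in x:                      # scalar loop: one element at a time
        total += v * v
    t1 = time.perf_counter()

    t2 = time.perf_counter()
    total_vec = float(np.dot(x, x))  # vectorized dot product
    t3 = time.perf_counter()

    print("loop:      ", round(t1 - t0, 3), "s")
    print("vectorized:", round(t3 - t2, 3), "s")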

From a performance perspective, it has been obvious for years that if the user is waiting for software (rather than something else, such as the network) then the software in question should be vectorized and multithreaded. Many applications have already made this upgrade. From a security perspective, this upgrade has the convenient side effect of undermining claims that overclocking brings critical benefits to the user.

Beware that reports online often cherry-pick the maximum possible overclocking speedup for any particular CPU. The overall system-wide overclocking speedup for the same CPU is much smaller, and continues to shrink as more and more applications upgrade to vectorization and multithreading.

How does vectorized multithreaded software end up limiting overclocking?

Running at higher clock speeds consumes much more power than running at normal clock speeds. A CPU configured to overclock will switch to a higher clock speed if it has power budget available for that clock speed (and the temperature is within limits). If something suddenly consumes more power, then the CPU will very quickly switch to a lower clock speed.
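
If you want to watch this happening on a Linux machine, a sketch like the following samples the kernel's report of each core's current frequency once per second; start a heavy vectorized workload in another terminal and the reported frequencies typically drop within a second or two. The path assumes the standard cpufreq interface:

    import glob, time

    paths = sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))

    while True:
        mhz = [int(open(p).read()) // 1000 for p in paths]   # kHz -> MHz
        print(" ".join(str(f) for f in mhz))
        time.sleep(1)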

The CPU is designed with a power budget (and cooling) supporting all cores running vectorized software at normal clock speeds. In this situation, there is very little room for overclocking. If there were room, the normal clock speeds would be higher!

Sometimes vectorized multithreaded software uses slightly less power than the budget allows, because of small variations in the exact operations being performed; but overclocking is effective primarily for unoptimized software, which at normal clock speeds uses much less power than the budget allows. Overclocking then has more room to increase clock speeds. Note that "less power" does not mean "less energy": the overclocked unoptimized software runs much longer than optimized software would have, and consumes much more energy overall.
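
Here is the power-versus-energy point with made-up numbers, purely for illustration:

    # Hypothetical numbers, purely for illustration.
    unoptimized_watts, unoptimized_seconds = 15, 10   # overclocked scalar code
    optimized_watts, optimized_seconds = 25, 1        # vectorized code, normal clocks
    print(unoptimized_watts * unoptimized_seconds, "joules")   # 150: less power, more energy
    print(optimized_watts * optimized_seconds, "joules")       # 25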

I've heard that multithreading and vectorization are for long-running computations and overclocking is for bursts of activity. Is that true?

No. Any CPU usage long enough for the user to be waiting is much more time than needed for the CPU to spin up vectors and threads. It doesn't matter whether this is a 2-second burst of activity or 2 hours of video encoding.

I've heard that turning off overclocking creates an "extreme system-wide performance impact". Will my computer be as slow as molasses?

Give it a 1-week trial and see what happens! You'll then be hyper-alert for slowdowns, but you won't encounter what normal people would describe as an "extreme system-wide performance impact".

Many applications are already vectorized and multithreaded and will run at almost exactly the same speed they did before. If there's an annoying slowdown in some unoptimized operation, try documenting which operation it was and how long it took, and try asking the software provider whether that operation can be sped up, for example with vectorization and multithreading.
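
If you want concrete numbers to include in such a report, a minimal timing wrapper is enough; slow_operation here is a placeholder to replace with the operation you actually care about:

    import time

    def slow_operation():
        # placeholder workload; substitute the operation you care about
        return sum(i * i for i in range(10_000_000))

    t0 = time.perf_counter()
    slow_operation()
    t1 = time.perf_counter()
    print("slow_operation took", round(t1 - t0, 2), "seconds")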

The source of the "extreme system-wide performance impact" claim has quietly downgraded "extreme" to "significant" and, as of 17 June 2022, still isn't providing any numbers to justify a recommendation of overclocking.

How does overclocking leak secret data?

A CPU's clock frequency directly affects the time taken by each operation. If the CPU is configured to overclock then the CPU's clock frequency at each moment depends on the CPU's power consumption. The CPU's power consumption depends on the data that the CPU is handling, including secret data.
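
To make the causal chain concrete, here is a toy simulation. It is not a model of any real CPU; the numbers and the power function are invented. It simply shows how data-dependent power turns into data-dependent frequency and therefore data-dependent, observable time:

    POWER_BUDGET = 30.0                    # arbitrary units
    BASE_FREQ, TURBO_FREQ = 2.4e9, 4.2e9   # Hz, arbitrary

    def power_draw(data):
        # Pretend power grows with the number of bits set in the data.
        return 20.0 + 0.4 * bin(data).count("1")

    def frequency(data):
        # Pretend the CPU stays at turbo only while within its power budget.
        return TURBO_FREQ if power_draw(data) <= POWER_BUDGET else BASE_FREQ

    def seconds_to_process(data, cycles=1_000_000):
        return cycles / frequency(data)

    light = 0x0000000000000001   # few bits set: less power, higher frequency
    heavy = 0xFFFFFFFFFFFFFFFF   # many bits set: more power, lower frequency
    print("light data:", seconds_to_process(light))
    print("heavy data:", seconds_to_process(heavy))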

To summarize, overclocking creates a roadway from secret data to visible timings. When information about secrets is disclosed to attackers, cryptographers presume that attackers can efficiently work backwards to recover the secrets, unless and until there have been years of study quantifying the difficulty of this computation and failing to find ways to speed it up.

Nothing here is specific to SIKE. The HertzBleed paper refers to various SIKE details as part of its demo working backwards from visible timings to secret data, but there are many papers demonstrating how to work backwards from power consumption to secrets in a much wider range of computations. The only safe presumption is that all information about power consumption necessary for those attacks is also leaked by overclocking.

Is it possible for software to control its power consumption?

There are many papers on techniques designed to make it more difficult for attackers to work backwards from power consumption to secrets. For example, "2-share XOR masking" replaces each secret bit s with two secret bits, r and XOR(r,s), where r is chosen randomly. There has been extensive investigation of the cost of masking various computations.
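
For concreteness, here is a minimal Python sketch of 2-share XOR masking of 64-bit values. Real masked implementations are far more involved: they need constant-time code, fresh randomness for nonlinear operations such as AND, and careful attention to how shares are stored and recombined:

    import secrets

    def mask(x):
        # Split the 64-bit secret x into two shares; neither alone reveals x.
        r = secrets.randbits(64)
        return (r, r ^ x)

    def unmask(shares):
        r, rx = shares
        return r ^ rx

    def masked_xor(a_shares, b_shares):
        # XOR is computed share-by-share, without ever recombining a secret.
        (a0, a1), (b0, b1) = a_shares, b_shares
        return (a0 ^ b0, a1 ^ b1)

    a, b = 0x0123456789ABCDEF, 0x0F0F0F0F0F0F0F0F
    c_shares = masked_xor(mask(a), mask(b))
    assert unmask(c_shares) == a ^ b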

It is, however, practically impossible for an auditor to obtain any real assurance that these techniques are secure. There are theorems proving security of masking, but only in limited models that bear little relationship to reality. There have been successful recent attacks against masked implementations of post-quantum cryptosystems. There is an auditing mechanism, "TVLA", that is useful for catching simple vulnerabilities in deterministic implementations, but is of little use in finding more sophisticated attacks against masked implementations.

If masked software is available for the computations that you want to perform on secret data, you should certainly consider using it: there's a good chance that the software doesn't cause any performance problems for you, and it's plausible that the software will slow down attacks. But you shouldn't believe any claims saying how much it slows down attacks, and you shouldn't be surprised to see attacks succeeding despite the masking. Masking is not a substitute for disabling overclocking. The complications of masked software also make correctness audits more difficult and increase the overall chance of bugs, although hopefully this problem will be eliminated in the not too distant future by computer-assisted verification of cryptographic software.

Isn't the new SIKE software supposed to stop HertzBleed?

The new SIKE software stops the HertzBleed demo. But the demo uses very simple models and very simple signal processing, making it adequate for showing that something is broken but useless for showing that something is secure.

A pattern we've seen before is the following: the first papers on a particular side channel focus on giving simple demonstrations that the side channel is of interest; there are then overconfident claims of limited impact; these claims are then debunked by subsequent papers. You should expect public overclocking attacks to follow the same path, and you should expect that large-scale attackers have already developed much more advanced attacks.

What about underclocking? Don't I need underclocking to save energy?

Idle CPU cores use much less power than busy cores, but idle cores can further reduce power consumption by running at very low frequency. Normally it is not a secret which programs are running, so it is acceptable from a security perspective to have idle cores automatically switch to low frequency.

One can also try to shave off power usage by having CPUs run at somewhat low frequency when they have a small amount of activity to do. This is acceptable from a security perspective as long as the frequency decision is made purely on the basis of which programs are running, not on physical sensors such as power sensors.
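
As an illustration of a sensor-independent policy, the following sketch caps the frequency whenever none of a configured list of programs is running, and lifts the cap when one is. The program names and frequencies are placeholders; the point is that the decision never consults a power or temperature sensor:

    import glob, pathlib, time

    FOREGROUND = {"ffmpeg", "x265", "blender"}   # placeholder program names
    LOW_KHZ, HIGH_KHZ = "800000", "2400000"      # placeholder frequencies

    def foreground_running():
        for comm in glob.glob("/proc/[0-9]*/comm"):
            try:
                if pathlib.Path(comm).read_text().strip() in FOREGROUND:
                    return True
            except OSError:
                pass                             # process exited mid-scan
        return False

    while True:
        cap = HIGH_KHZ if foreground_running() else LOW_KHZ
        for f in glob.glob(
                "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_max_freq"):
            pathlib.Path(f).write_text(cap)
        time.sleep(5)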

The primary rationale for basing overclocking on power measurements, namely to avoid exceeding the CPU's power budget, does not apply to underclocking. But this does not mean that CPUs actually make underclocking decisions independently of power.

Given current tools and documentation, there are clear reasons for concern that attempting to run CPUs at speeds below base speed could create power-dependent frequency variations, so, for the moment, turning off underclocking is the safest course of action for users.

For OS distributors, it should be reasonably straightforward to add support for "idleclocking", but attention to details is required (e.g., disabling HWP on Intel). OS distributors should ask CPU manufacturers for guarantees regarding power-independent handling of intermediate speeds.
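
As one small example of the details involved, the following sketch checks /proc/cpuinfo for the hwp flag, i.e., hardware-managed P-states. If the flag is present, the intel_pstate documentation describes a no_hwp boot parameter that keeps frequency selection under OS control; check the documentation for your kernel before relying on it:

    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break

    if "hwp" in flags:
        print("HWP available; consider booting with intel_pstate=no_hwp")
    else:
        print("no hwp flag in /proc/cpuinfo")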

Are overclocking and underclocking the only sources of variations in CPU frequency?

No. For example, as noted above, broken fans create underclocking based on temperature.

As a more subtle example, the physical speed of clock crystals is well known to depend slightly on temperature, and the temperature depends on the CPU's power consumption, which in turn depends on secret data. Instead of assuming that the resulting dependence on secret data is so small as to be unexploitable, CPU manufacturers should proactively protect their clock crystals, adding thermal isolation and active temperature regulation.


Version: This is version 2022.06.19 of the "Overclocking FAQ" web page.