|Performance impact of Spectre, Meltdown patches on Windows|
|By Thom Holwerda on 2018-01-09 18:03:36|
Last week the technology industry and many of our customers learned of new vulnerabilities in the hardware chips that power phones, PCs and servers. We (and others in the industry) had learned of this vulnerability under nondisclosure agreement several months ago and immediately began developing engineering mitigations and updating our cloud infrastructure. In this blog, I'll describe the discovered vulnerabilities as clearly as I can, discuss what customers can do to help keep themselves safe, and share what we've learned so far about performance impacts.
The basic gist here is this: the older your processor and the older your Windows version, the bigger the performance impact will be. Windows 10 users will experience a smaller performance impact than Windows 7 and 8 users, and anyone running Haswell or older processors will experience a bigger impact than users of newer processors.
|By Alfman on 2018-01-09 19:11:50|
I really wish they had given more data. Other parties have provided benchmarks: |
As expected, IO-intensive workloads that make frequent syscalls incur the highest penalties, while CPU-bound processes that don't cross the affected code paths incur almost none at all. For IO-bound processes like databases with random access patterns, the performance is just abysmal.
I have not found any studies that actually compare performance between CPU generations, the ones above were for a brand new system. Microsoft says:
> With Windows 10 on newer silicon (2016-era PCs with Skylake, Kabylake or newer CPU), benchmarks show single-digit slowdowns, but we don’t expect most users to notice a change because these percentages are reflected in milliseconds.
> With Windows 10 on older silicon (2015-era PCs with Haswell or older CPU), some benchmarks show more significant slowdowns, and we expect that some users will notice a decrease in system performance.
> With Windows 8 and Windows 7 on older silicon (2015-era PCs with Haswell or older CPU), we expect most users to notice a decrease in system performance.
Again, no data; I'd prefer to be shown rather than told. Anyway, from the vague information we've got about Intel's patch, it seems they were able to disable speculative execution specifically for indirect branches. Indirection is a crucial component of many object-oriented languages like C++. Disabling it shouldn't affect Windows and Linux that much, but I'm very curious how well polymorphic code scores under Intel's patch. Alas, all my computers are too old to be supported by Intel's patch, so I cannot test it.
|- Score: 4|
|By osvil on 2018-01-09 19:14:37|
... that older versions of Windows get a bigger hit. I was going to call it BS, thinking they could be adding some extra "spice" to favor upgrades. However, the blog entry actually explains why (and, at least to me, it makes sense). |
They should explain the hiccup with AMD processors (I hope it is not related to the Meltdown fix that in theory didn't affect them).
|- Score: 4|
|By JLF65 on 2018-01-09 19:58:19|
|The hiccup with AMD is that there are three variants of the speculation bug: Spectre (variants 1 and 2) affects all processors, not just Intel; Meltdown (variant 3) affects only Intel and was the most hyped, as it allows access to kernel space, where the other two just leak data between processes at the same privilege level.|
|- Score: 2|
|By galvanash on 2018-01-09 20:22:29|
He means why the Windows updates were blue-screening on older AMD machines, I think... |
I have not seen an actual explanation other than that something was documented incorrectly by AMD, and while the updates were written to AMD's documentation, the actual hardware (older FX series, I think) choked on it.
Edited 2018-01-09 20:25 UTC
|- Score: 5|
|By osvil on 2018-01-09 20:25:39|
Indeed, I was referring to the patch issues bricking some AMD machines. |
It is a bit perverse that the patch bricks computers using processors that don't even have the most critical bug.
|- Score: 5|
|By dionicio on 2018-01-09 20:31:59|
|- Score: 1|
|By Seeprime on 2018-01-09 20:39:06|
|Microsoft's statement implies that they didn't even test it. They relied on specs that may have been outdated; we don't know. I see this as Microsoft screwing up another update. More of our customers are asking if they should switch away from Windows. It's hard to advise against doing so when Linux Mint works so well.|
|- Score: -1|
|By JLF65 on 2018-01-09 20:52:22|
|Ah, sorry then. Didn't get that from the post. As far as how MS can goof up critical updates, they (and other OSes) support a ridiculous number of processors, processors that can have hundreds of issues in the errata (and sometimes not in the errata!). It's easy to miss some, especially when rushing to patch something this critical and big. They probably did minimal testing to get it out as fast as possible.|
|- Score: 4|
|By Alfman on 2018-01-09 21:56:18|
> Ah, sorry then. Didn't get that from the post. As far as how MS can goof up critical updates, they (and other OSes) support a ridiculous number of processors, processors that can have hundreds of issues in the errata (and sometimes not in the errata!). It's easy to miss some, especially when rushing to patch something this critical and big. They probably did minimal testing to get it out as fast as possible.
You make a good point: all these hardware variations and errata may be outpacing the ability of software vendors to keep up with and manage them. It's no longer enough just to write correct code; we also have to contend with erratic hardware. Even in Linux there are instances where code was written in a weird way to work around some weird hardware, and the errata just keep adding up to the point where nobody can understand the whole thing. It is like a house of cards: we can go in and "fix" a piece of code that seems wrong, but without a sufficient appreciation for why it was originally written that way, we may in fact be the ones to break it. The "it works for me" philosophy doesn't work when there are just too many combinations to grasp. It makes correctness less absolute and more probabilistic, since the same code can be factually correct in the author's context and nevertheless be broken in another context or on different hardware.
I know it's not going to happen because reasons, but I often wish we could re-architect our systems and come up with better standards, given what we know now in hindsight. It would simplify CPUs and operating systems greatly to get a fresh start without all the legacy baggage.
Edited 2018-01-09 22:04 UTC
|- Score: 4|
|By dionicio on 2018-01-09 22:23:00|
Hardware houses so often present documentation reflecting a conceptual model of their products. Errata are then forced to follow up. A guaranteed mess... |
That wouldn't be possible at
Edited 2018-01-09 22:40 UTC
|- Score: 1|