By now you’ve probably heard of the Meltdown and Spectre vulnerabilities. They’re a good example of how only the most prominent cyber security news breaks into the mainstream news channels; most of the time, when a story makes it that far, you know it’s bad. These flaws are big because they affect the core of how we build computers today. While there are dire predictions of the fixes costing us dearly in computing speed, we are still in the early stages of remediating these flaws. Let’s dive in a little deeper to understand why they are so frightening to IT professionals. There are really three concepts coming to a head here:
The most basic processor (CPU) diagram of a computer has a single pipeline for performing instructions: one instruction is performed after another until the program is complete. As computer engineers, this is what we learn in a basic computer architecture course. For many years this was the standard, and to make computers faster we simply increased the clock speed (measured in gigahertz, or GHz, these days). If you remember the days of the Pentium processors, you’ll remember ever-increasing clock speeds up to 4GHz (that’s 4 billion cycles per second) and slightly beyond. We eventually reached a point where it didn’t make sense to keep cramming more cycles into every second, so processor designers started finding other ways to make chips more efficient. One of those is running instructions in parallel. Most programs, however, aren’t written to use multiple processors or cores at once, so designers came up with ways for the chip itself to find that parallelism. The CPU takes chunks of the program from further down the code (“future” code) and runs them concurrently with the current part, a technique known as speculative execution. Unfortunately, with all the possible outcomes of the current code, the choice of “future” code to run isn’t always correct. Great strides have been made in predicting which parts of “future” code would be most efficient to run now, and those methods of prediction are what cause problems today.
That same processor runs instructions far faster than it can fetch them from memory (RAM, in this case), so it caches memory locations near the one currently in use. “Nearby” can mean several things, but that isn’t really germane to this discussion. Just know that the processor can read *any* memory location without restriction. The processor’s cache memory is faster than standard RAM, but more important is the fact that it reads ahead, guessing which other information or instructions it will need from memory next. This guessing, and the fact that the processor can read anywhere, is also part of the problem.
Lastly, and most recently, comes the direct cyber security part. In recent years we have worked to better protect the kernel (the brain of your Windows or Mac operating system that controls the hardware and all the involuntary processes of your computer) from your applications (e.g., Google Chrome, Microsoft Word, Apple Mail, Microsoft Outlook, etc.), and those same applications from one another. This is often referred to as sandboxing or isolation. It’s a key protection at a foundational level that prevents flaws in one program from affecting other programs or the whole computer. It’s why Google Chrome can crash without taking your entire computer down with it.
At its most basic level, the Meltdown and Spectre flaws use the first two concepts to bypass the isolation protections of the third. This allows carefully crafted code to do things like read memory belonging to other applications and find the usernames and passwords stored there (remember: everything you do on the computer passes through RAM).
So far, we have patches issued or being issued by major and minor vendors for both vulnerabilities. There was a big scare about performance degradation, or about having to slow processors way down to fix this; so far, we haven’t seen that with the patches issued. Unfortunately, the patches for Spectre are just workarounds. The real fix will have to come from the processor designers, and this is where the major performance hit was predicted. The people making those claims assumed the designers would have to rip out all the efficiencies that give us the speed we enjoy today. So far we’ve seen little from Intel (desktops, laptops, and servers) or ARM (mobile devices) on the fixes, but you can expect them to do everything possible not to give up those speed gains in their products. Some very new processors can be patched, but most of the changes will land in the new processors being designed today.
Update: We’ve started to see reports of CPU utilization taking a hit after these patches. The reports mostly come from cloud services, however; most everyday use of desktop and laptop computers does not keep the CPU constantly busy. If you host CPU-intensive services in the cloud, review your CPU load and adjust your subscription as necessary. If you run high-CPU-usage systems on-premises, you may want to consider adding more systems. Monitor your usage before making any decisions, though.
Update 2: More reports of performance hits are starting to trickle out. Most of what we’ve seen are measurements under high CPU load or benchmarks (which are meant to push a system to its limits). If you are pushing your system to the limit, expect a fairly significant hit to performance. If you are a more casual user, you may notice some slowness, but it shouldn’t be as bad as the benchmarks suggest.
Unfortunately for most, you’ve probably seen all you’re going to see in the mainstream news sources about this. They are all about the shock and sensational factors, with little follow-up. Make sure your IT or security staff are plugged into more security-focused news sources to stay abreast of new developments.
Need some help protecting your business against these flaws and others like them?
Contact us today for a free and confidential consultation.