Meltdown and Spectre — Enterprise Action Plan by SANS Senior Instructor Jake Williams
Blog originally posted January 4, 2018 by RenditionSec
Unless you've been living under a rock for the last 24 hours, you've heard about the Meltdown and Spectre vulnerabilities. I did a webcast with SANS about these vulnerabilities, how they work, and some thoughts on mitigation. I highly recommend that you watch the webcast and/or download the slides to understand more of the technical details. In this blog post, I would like to leave the technology behind and talk about action plans. Our goal is to keep this hyperbole-free and just talk about actionable steps to take moving forward. Action talks, hyperbole walks.
To that end, I introduce you to the "six-step action plan for dealing with Meltdown and Spectre."
- Step up your monitoring plan
- Reconsider cohabitation of data with different protection requirements
- Review your change management procedures
- Examine procurement and refresh intervals
- Evaluate the security of your hosted applications
- Have an executive communications plan
Step 1. Step up your monitoring plan
Meltdown allows attackers to elevate privileges on unpatched systems. This means that attackers who have a toehold in your network can elevate to a privileged user account on a system. From there, they could install backdoors and rootkits or take other anti-forensic measures. But at the end of the day, the vulnerability doesn't exfiltrate data from your network, encrypt your data, delete your backups, or extort a ransom. Vulnerabilities like Meltdown will enable attackers, but they are only a means to an end. Solid monitoring practices will catch attackers whether they use an 0-day or a misconfiguration to compromise your systems. As a wise man once told me, "if you can't find an attacker who exploits you with an 0-day, you need to be worried about more than 0-days."
Simply put, monitor like your network is already compromised. Keeping attackers out is so 1990. Today, we assume compromise and architect our monitoring systems to detect badness. The #1 goal of any monitoring program in 2018 must be to minimize attacker dwell time in the network.
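To make "detect illicit use of a privileged account" concrete, here's a minimal sketch of the kind of check I mean. It assumes Linux hosts shipping plain-text auth logs to a central collector; the log path, the allowlist, and the regex are placeholders you'd tailor to your own environment, not a drop-in detection.

```python
import re
from pathlib import Path

AUTH_LOG = Path("/var/log/auth.log")             # hypothetical collection point
EXPECTED_ROOT_USERS = {"ansible", "backup-svc"}  # accounts we expect to become root

# Matches pam_unix syslog lines like:
# "sudo: pam_unix(sudo:session): session opened for user root by jwilliams(uid=1004)"
PATTERN = re.compile(r"session opened for user root by (\w[\w.-]*)")

def suspicious_root_sessions(log_text: str):
    """Yield (user, line) for root sessions opened by accounts not on the allowlist."""
    for line in log_text.splitlines():
        match = PATTERN.search(line)
        if match and match.group(1) not in EXPECTED_ROOT_USERS:
            yield match.group(1), line

if __name__ == "__main__":
    for user, line in suspicious_root_sessions(AUTH_LOG.read_text(errors="replace")):
        print(f"[ALERT] unexpected root session by {user}: {line}")
```

In a real deployment this logic belongs in your SIEM or EDR, but the point stands: know what privileged activity is expected so you can alert on what isn't.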
Step 2. Reconsider cohabitation of data with different protection requirements
Don't assume OS controls are sufficient to separate data with different protection requirements. The Spectre paper introduces a number of other possible avenues for exploitation. The smart money says that at least some of those will be exploited eventually. Even if other exploitable CPU vulnerabilities are not discovered (unlikely, since this was already an area of active research before these vulnerabilities were disclosed), future OS privilege escalation vulnerabilities are a near certainty.
Reconsider your architecture and examine how effective your security is if an attacker (or insider) with unprivileged access can elevate to a privileged account. In particular, I worry about those systems that give a large number of semi-trusted insiders shell access to a system. Research hospitals are notorious for this. Other organizations with Linux mail servers have configured the servers so that everyone with an email address has a shell account. Obviously this is far from ideal, but when combined with a privilege escalation vulnerability the results can be catastrophic.
Take some time to determine if your security models collapse when a privilege escalation vulnerability becomes public. At Rendition, we're still running into systems that we can exploit with DirtyCOW (a Linux privilege escalation patched back in 2016), so Meltdown isn't going away any time soon either. While you're thinking about your security model, ask how you would detect illicit use of a privileged account (see step #1).
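If you're not sure how exposed you are, start by counting the accounts that can actually get a shell. Here's a minimal sketch for a stock Linux box; the UID threshold and the "no-login" shell list are assumptions, and a real audit would also cover SSH keys, sudoers entries, and group memberships.

```python
import pwd

# Shells that deny interactive login on most distributions (adjust for your builds).
NOLOGIN_SHELLS = {"/usr/sbin/nologin", "/sbin/nologin", "/bin/false", "/usr/bin/false"}

def interactive_accounts(min_uid: int = 1000):
    """Return non-system accounts that can get an interactive shell."""
    return [
        entry.pw_name
        for entry in pwd.getpwall()
        if entry.pw_uid >= min_uid and entry.pw_shell not in NOLOGIN_SHELLS
    ]

if __name__ == "__main__":
    users = interactive_accounts()
    print(f"{len(users)} accounts with shell access; on an unpatched kernel, "
          f"each one is a single privilege escalation away from root:")
    for name in sorted(users):
        print(f"  {name}")
```

If that number surprises you, that's the conversation to have with the application owners before the next privilege escalation bug drops.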
Step 3. Review your change management procedures
Every time there's a "big one," people worry about getting patches out. But this month Microsoft is patching several critical vulnerabilities. Some of these might be easier to exploit than Meltdown. When thinking about patches, measure your response time. We advise clients to keep three metrics/goals in mind:
- Normal Patch Tuesday
- Patch Tuesday with an "active exploit in the wild"
- "Out of cycle" patch
How you handle regular patches probably says more about your organization than how you handle out of cycle patches. But considering all three events (and having different targets for response) is wise.
Because of the performance impacts, the Meltdown patches definitely should go to a test environment first. Antivirus software has reportedly caused problems (BSODs) with the Windows patches for Spectre and Meltdown as well. This is a clear example of why "throw caution to the wind and patch now" is not advisable. Think about your test cycles for patches and figure out how long is "long enough" to test (both for performance and stability) in your test environment before pushing patches to production.
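As a concrete example of the antivirus issue: Microsoft's guidance at the time was that AV vendors signal compatibility with the January updates by setting a specific registry value, and Windows Update holds the patch back until that value exists. Below is a minimal sketch of a per-host check; treat the key path and GUID as something to verify against the current advisory rather than as gospel.

```python
import winreg

# Per Microsoft's January 2018 guidance; confirm against the current advisory.
KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat"
VALUE_NAME = "cadca5fe-87d3-4b96-b7fb-a231484277cc"  # set to 0 by compatible AV products

def av_signals_compatibility() -> bool:
    """True if the installed AV has set the compatibility flag for the January updates."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, value_type = winreg.QueryValueEx(key, VALUE_NAME)
            return value_type == winreg.REG_DWORD and value == 0
    except OSError:
        return False

if __name__ == "__main__":
    if av_signals_compatibility():
        print("AV compatibility flag present; Windows Update should offer the patch.")
    else:
        print("Flag missing; verify AV compatibility before forcing the update.")
```

Rolling a check like this across the fleet tells you quickly which machines will quietly never receive the patch, which is exactly the kind of gap a change management review should catch.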
Step 4. Examine procurement and refresh intervals
There is little doubt that future processors (yet to be released) will handle some functions more securely than today's models. If you're currently on a 5-year IT refresh cycle, should you compress that cycle? It's probably too early to tell for hardware, but there are a number of older operating systems that will never receive patches. You definitely need to re-evaluate whether leaving those unpatchable systems in place is wise. Performing a risk assessment in the past doesn't earn you a pass on this. You have new information today that likely wasn't available when you completed your last risk assessment. Make sure your choices still make sense in light of Meltdown and Spectre.
When budgeting for any IT refresh, don't forget about the costs to secure that newly deployed hardware. Every time a server is rebuilt, an application is installed, etc., there is some error/misconfiguration rate (hopefully small, often very large). Some of these misconfigurations are critical in nature and can be remotely exploited. Ensure that you budget for security review and penetration testing of newly deployed/redeployed assets. Once the assets are in production, ensure that they are monitored to detect anything that configuration review may have missed (see step #1).
Step 5. Evaluate the security of your hosted applications
You can delegate control, but you can't delegate responsibility. Particularly when it comes to hosted applications, cloud servers, and Platform as a Service (PaaS), ask some hard questions of your infrastructure providers. Sure, your patching plan is awesome and went off without a hitch (see step #3). What about your PaaS provider? How about your IaaS (Infrastructure as a Service) provider? Ask your infrastructure provider:
1. Did they know about the embargoed vulnerability?
2. If so, what did they do to address the issue ahead of patches being available?
3. Have they patched now?
4. If not, when will they be fully patched?
5. What steps are they taking to look for active exploitation of Meltdown? *
* #5 is sort of a trick question (Meltdown exploitation leaves few traces in traditional logs), but see what they tell you.
Putting an application, server, or database in the cloud doesn't make it "someone else's problem." It's still your problem; it's just off-prem. At Rendition, we've put out quite a few calls today to MSPs we work with. Some of them have been awesome: they've got an action plan for finishing patching quickly and for monitoring all of their assets. One called us last night for advice, another called us this morning. Others responded with "Melt-what?", leaving us to wonder what else was going on over there. Not all hosting providers are created equal. Even if you evaluated your hosting provider for security before you trusted them with your data, now is a great time to reassess how happy you are with the way they're handling your security. It is your security, after all.
Step 6. Have an executive communications plan
When any new vulnerability of this magnitude is disclosed, you will inevitably field questions from management about the scope and the impact. That's just a fact of life. You need to be ready to communicate with management. Unfortunately, in the early hours of any of these events there's a lot of misinformation out there. Many sources of information aren't wrong; they're just not on point. Diving into the register-level implementation details of a given attack won't help explain the business impact to an executive.
Spend some time today and select some sources you'll turn to for information the next time this happens (this won't be the last time). I'm obviously biased towards SANS, but I wouldn't be there if they didn't do great work cutting through the FUD (fear, uncertainty, and doubt) when it matters. The webcast today was written by me, but reviewed by other experts (some of them less technical) to make sure that it was useful to a broad audience. My #1 goal was to deliver actionable information you could use to educate a wide range of audiences. I think I hit that mark. I've seen other information sources today that missed that mark completely. Some were overly technical, others were completely lacking in actionable information.
Once you evaluate your data sources for the next "big one," walk through a couple of exercises with smaller vulnerabilities/issues to draft communications to executives and senior leadership. Don't learn how to "communicate effectively" under the pressure of a "big one." Your experience will likely be anything but "effective."
Closing thoughts
Take some time today to consider your security requirements. It's a new year and we certainly have new challenges to go with it. Even if you feel like you dodged this bullet, spend some time today thinking about how your organization will handle the next "big one." I think we all know it's not a matter of "if" but a matter of "when and how bad."
Of course, if I don't tell you to consider Rendition for your cyber security needs, my marketing guy is going to slap me silly tomorrow. So Dave, rest easy, my friend. I've got it covered.