As incident responders, we are often called upon not only to supply answers regarding "Who, What, When, Where, and How" an incident occurred, but also to recommend how the organization can protect itself against future attacks of a similar nature. In other words, what are the lessons learned and recommendations based on the findings?
A new paper from Microsoft titled "Best Practices for Securing Active Directory" provides a wealth of information and guidance that responders can use to answer these types of questions. The paper can be found at the following link: http://blogs.technet.com/b/security/archive/2013/06/03/microsoft-releases-new-mitigation-guidance-for-active-directory.aspx.
I've reviewed the paper and it is an excellent document in my opinion. As the foreword by Microsoft's CISO explains, the paper provides a "practitioner's perspective and contains a set of practical techniques to help IT executives [and IT architects] protect an enterprise Active Directory® environment". It is largely based on the experience of Microsoft's Information Security and Risk Management consulting team, advising both internal customers (MS IT) and external customers in the Global Fortune 500.
For responders, I think this document can serve two purposes:
- It can be a great resource to draw upon when crafting your own recommendations following a compromise.
- It can be used proactively to improve preventative and detective measures in your environment, and even help plan for recovery if a significant compromise does occur.
It's not an overstatement to say that many organizations could easily pay thousands of dollars for an assessment from an outside consultant and get a report that is no more thorough or effective than this set of recommendations released by Microsoft for free. Furthermore, due to the importance that many IT organizations place on Microsoft's recommendations, this paper will likely carry more weight than a similar paper authored by internal staff or just about any other external organization.
So with all of that said, of course I recommend you read the entire paper. However, at 120+ pages, it's not a quick read. If you are short on time, I recommend at least reviewing the 5-page Executive Summary, which includes a table of 22 best practices, ranked roughly in order of priority. The Executive Summary is designed to be a standalone document for circulation within your organization.
Next, take a look at the following synopsis where I'll provide additional details beyond those in the Executive Summary on the recommendations I found particularly interesting.
My Takeaways from "Best Practices for Securing Active Directory"
The following sections are named after the major sections from Microsoft's paper.
Avenues to Compromise
This section discusses many of the common problems that lead to initial compromise and, typically, quick privilege escalation within an Active Directory domain. The paper provides a good review of the issues you are probably well aware of, from poorly patched operating systems and applications, to shortcomings in antivirus tools, to credential theft and privilege elevation. The Executive Summary covers the "Avenues to Compromise" section well, so I won't spend any more time on it here.
Reducing the Active Directory Attack Surface
There were quite a few useful ideas discussed in this section. Here is a rundown of the ones that piqued my interest.
Reduce Admin Accounts and Admin Group Memberships
- Reduce the number of user accounts placed in the highest privileged groups. Typical default groups include Enterprise Admins, Domain Admins, and Built-in Administrators. Ideally, reduce membership to a point where there are no permanent members of these groups; users are added only when necessary and then removed when the task is complete. A couple of the third-party tools that can help with this are the privileged account management solutions from Cyber-Ark and Lieberman (there are others as well).
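As a starting point for this effort, you can enumerate the current membership of these groups with built-in Windows commands. A minimal sketch, run from any domain-joined machine (note that Enterprise Admins exists only in the forest root domain):

    REM List members of the most privileged default domain groups:
    net group "Enterprise Admins" /domain
    net group "Domain Admins" /domain
    REM List members of the local Administrators group on this host:
    net localgroup Administrators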
Establish Dedicated Administrative Hosts
- A significant emphasis is placed on establishing dedicated, secure administrative systems for accessing and interfacing with the most trusted hosts. The idea here is that users with privileged accounts should only use those privileged accounts on these dedicated, locked-down systems (i.e., no RunAs on everyday workstations). Suggested solutions include using dedicated physical workstations, using VMs, and/or using dedicated "jump point" servers (typically accessed via Remote Desktop).
Mitigate Physical Attack of DCs
- Do not overlook the possibility of physical compromise, particularly of domain controllers. As your cyber security becomes stronger, the weakest link may become physical security, particularly in remote offices in far-flung locations. One suggestion discussed is the use of Read-Only Domain Controllers (RODCs). "An RODC provides a way to deploy a domain controller more securely in locations that require fast and reliable authentication services but cannot ensure physical security for a writable domain controller."
- RODC features include:
- By default, an RODC does not store user or computer credentials. To configure the RODC for authentication, an administrator chooses specific users' credentials to replicate to specific RODCs. For example, if you want all users in the Beijing office to be able to authenticate to their local RODC, you can have only their credentials replicated to that RODC (see the sketch after this list). For any other user that comes to the Beijing office and needs to authenticate (such as a traveling executive), the local RODC forwards the request to a writable DC for authentication.
- Other than account credentials, by default an RODC holds all other Active Directory objects and attributes that a writable domain controller holds. This can be adjusted, however, as some applications will store their own custom credentials or other sensitive data as AD attributes. These sensitive attributes can be filtered if necessary.
- As the name implies, changes cannot be made to the database that is stored on the RODC. Changes must be made on a writable domain controller and then replicated back to the RODC.
- For more details on RODCs: http://blogs.technet.com/b/askds/archive/2008/01/18/understanding-read-only-domain-controller-authentication.aspx.
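To illustrate the credential-caching configuration mentioned above, an RODC's Password Replication Policy can be managed with the built-in repadmin tool. A minimal sketch, assuming a hypothetical RODC named BJ-RODC01 and a BeijingUsers group you have created for the local staff:

    REM Allow the Beijing users' credentials to be cached on the local RODC:
    repadmin /prp add BJ-RODC01 allow "CN=BeijingUsers,OU=Groups,DC=corp,DC=example,DC=com"
    REM Review which security principals the RODC is allowed to cache:
    repadmin /prp view BJ-RODC01 allow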
Implement Application Whitelisting
- Application whitelisting is presented as an effective tool in combating malware. Bit9 is the major player in this space and may be the best choice for an enterprise rollout (this is my opinion, not Microsoft's). However, Microsoft's AppLocker may work well for some organizations, including enterprises that want to start slow by deploying only to critical servers.
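If you want to experiment with AppLocker, keep in mind that its rules (defined via Group Policy or Local Security Policy) are enforced only while the Application Identity service is running. A minimal sketch of enabling that service from an elevated prompt:

    REM AppLocker enforcement depends on the Application Identity service (AppIDSvc):
    sc config AppIDSvc start= auto
    sc start AppIDSvc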
Utilize "Authentication Mechanism Assurance" for Smart Cards
- Two-factor authentication should be the norm, not the exception, particularly for VIPs and privileged users. Specifically, the paper discusses some of the benefits of smart cards. My own research suggests that smart cards don't solve all our problems, particularly because the user credentials are stored just like a normal password hash, and so pass-the-hash techniques are still in play. Nevertheless, smart cards certainly provide a higher hurdle to attackers in a number of situations.
- A relatively new feature that I was unaware of, called "Authentication Mechanism Assurance", appears to place the hurdle a notch higher. This feature, introduced in Windows Server 2008 R2, allows a user's access token to be designated as having logged on with a certificate-based method. Per Microsoft's documentation, "This makes it possible for network resource administrators to control access to resources, such as files, folders, and printers, based on whether the user logs on with a certificate-based logon method and the type of certificate that is used for logon. For example, when a user logs on by using a smart card, the user's access to resources on the network can be specified as different from what the access is when the user does not use a smart card (that is, when the user logs on by entering a user name and password)."
Utilize Templates for Secure Configurations
- For sensitive systems, such as domain controllers and privileged-account workstations, take advantage of Microsoft Security Compliance Manager. This is a free tool that consolidates the security configurations recommended by Microsoft. Templates are available for all major OS versions and Service Packs, as well as Exchange, IE, and Office. See the available templates here: http://technet.microsoft.com/en-us/library/cc677002.aspx. Furthermore, GPOs can be used to enforce the resulting settings (a quick illustration of applying a template follows this list).
- While not mentioned in the white paper, also consider deploying EMET on at least your secure workstations and possibly servers too. Brian Krebs wrote a nice overview of the latest version (4.0) in his recent blog article Windows Security 101: EMET 4.0.
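As one way to apply an exported template outside of Group Policy, the built-in secedit utility can analyze and apply an .inf security template on the local machine. A minimal sketch, assuming a hypothetical template file exported from Security Compliance Manager:

    REM Compare current settings against the template without changing anything:
    secedit /analyze /db %windir%\security\local.sdb /cfg "C:\Templates\DC-Baseline.inf"
    REM Apply the template (hypothetical file name) to the local machine:
    secedit /configure /db %windir%\security\local.sdb /cfg "C:\Templates\DC-Baseline.inf" /overwrite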
Address Poorly Coded Apps
- Not to be overlooked, there is a good discussion of the impact of poorly coded applications. The paper recommends effective security reviews within the Software Development Lifecycle (SDL); otherwise, you're left with expensive fixes or vulnerable software in production.
- Regarding the cost, "Some organizations place the full cost of fixing a security issue in production code above $10,000 per issue, and applications developed without an effective SDL can average more than ten high-severity issues per 100,000 lines of code. In large applications, the costs escalate quickly." By that math, a one-million-line application could average 100 or more high-severity issues, putting remediation costs above $1,000,000.
Make Security Easy for End Users
- Simplifying security for end users should always be a priority. An example given is to allow VIPs access to sensitive files only from designated secure systems. The VIPs retain access to the files they need, while analysts gain a greater ability to detect illegitimate access attempts from non-designated systems. This may not be convenient for the VIPs, but the security concept is easy for the end user to understand.
Monitoring Active Directory for Signs of Compromise
This section begins by discussing the various audit categories available to systems prior to Windows Vista and the new subcategories available from Vista forward. If you are on a system running Vista or higher, you can see the configuration of the audit subcategories by opening a command prompt as Administrator and running the command "auditpol.exe /get /category:*".
The paper goes on to suggest how to enforce the audit settings you choose to enable, typically via Group Policy or the auditpol.exe command. There are some tricks to this, since on a given system you can enable either the nine main legacy categories or the new subcategories, but not both. In other words, you need logic for configuring subcategories on systems running Vista and higher, while still configuring appropriate settings for the legacy categories on older systems.
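For example, on Vista and later you can both force the granular subcategories to take precedence and configure them directly with auditpol.exe. A minimal sketch, run from an elevated prompt (the registry value shown corresponds to the "Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings" Group Policy option):

    REM Force subcategory settings to override the legacy category settings:
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v SCENoApplyLegacyAuditPolicy /t REG_DWORD /d 1 /f
    REM Enable success and failure auditing for an example subcategory:
    auditpol /set /subcategory:"Logon" /success:enable /failure:enable
    REM Verify the resulting configuration:
    auditpol /get /category:*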
The paper then includes a detailed table of recommendations per audit subcategory and per operating system type (i.e., one set of recommendations for Windows 7 & 8 and another for Server 2008, 2008 R2, & 2012). The table includes the default settings, a set of baseline audit recommendations (which I believe came from the Windows Security Resource Kit), and a new "stronger" set of audit recommendations.
While I would love to say, "Go implement Microsoft's strongest audit recommendations and you're done!", unfortunately that would be bad advice. The simple truth is that you need to test in your own environment. I would suggest you use their strongest audit recommendations as a starting point and then scale back where you must according to your resource constraints. Keep in mind that higher auditing levels require not only robust hardware (specifically fast disks to support the increased I/O), but also more disk space so that events don't get overwritten quickly. For servers, ideally you will forward the events to a centralized log server, so disk space should not be an issue in that case. For workstations and laptops, on the other hand, log forwarding becomes logistically harder and more expensive to implement, so you may need to find a sweet spot between verbose logging and local storage limitations.
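If you do pursue centralized collection on Windows, the built-in Windows Event Forwarding components can get you started without third-party agents. A minimal sketch of the initial setup commands (a full deployment also involves creating subscriptions and pushing the configuration via Group Policy):

    REM On each source computer, enable the WinRM listener used for forwarding:
    winrm quickconfig -q
    REM On the collector server, configure the Windows Event Collector service:
    wecutil qc /q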
Following the recommendation tables is a general discussion of the types of events to be monitored. This is a quick discussion with just a couple of examples. The recommendation is to look through the detailed "Appendix L: Events to Monitor", which lists hundreds of individual event IDs that should be considered for monitoring.
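Once you've chosen the events to monitor, ad hoc queries with the built-in wevtutil command can help verify that the events are actually being generated. A minimal sketch querying the Security log for event ID 4724 (an attempt was made to reset an account's password), one example of an account-management event worth watching:

    REM Display the five most recent password-reset attempts, newest first:
    wevtutil qe Security /q:"*[System[(EventID=4724)]]" /c:5 /rd:true /f:text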
Planning for Compromise
Once you've had an attack that fully compromises Active Directory, there is really no option but to replace it. As the paper suggests, unless "you have a record of every modification the attacker makes or a known good backup, you can never restore the directory to a completely trustworthy state."
Furthermore, "even restoring to a known good state does not eliminate the flaws that allowed the environment to be compromised in the first place."
So, the suggestion is "instead of trying to remediate an environment filled with outdated, misconfigured systems and applications, consider creating a new small, secure environment into which you can safely port the users, systems, and information that are most critical to your business."
The following guidelines are discussed in-depth in order to create a new Active Directory forest that serves as a "secure cell" for your core business infrastructure:
- Identify principles for segregating and securing critical assets:
- Do not configure the new AD forest to trust the legacy forest. This means that legacy accounts cannot log on to the new forest. The paper says you can, however, configure the legacy environment to trust the new forest. I would argue against this trust as well, because it allows your new accounts to log into the compromised domain, putting them at risk if there is still active malware in the legacy environment, particularly if interactive logons to the legacy forest are allowed.
- Use "nonmigratory" approaches that avoid copying SID history. (More on this in #3)
- Implement fresh OS and application installs. Don't move systems to the new forest.
- Don't allow legacy operating systems or applications in the new forest. Use the latest OS and application versions for improved defenses.
- Define a limited, risk-based migration plan:
- Identify your strategy. Are you only going to migrate VIPs initially? Or perhaps a particular business unit or region? Or all users in one fell swoop?
- Determine what is truly business-critical; that should naturally define and prioritize your migration plan.
- Leverage "nonmigratory" migrations where necessary.
- Don't maintain the SID history from the legacy domains. Utilize tools that will map the new accounts to their corresponding accounts in the legacy forest.
- "Appendix J: Third-Party RBAC Vendors" and "Appendix K: Third-Party PIM Vendors" provide some vendors that can perform "nonmigratory" migrations.
- Implement "creative destruction":
- Eliminate legacy applications and systems not by upgrading them, but by building new, secure applications and systems to replace them. Migrate the data, but not the outdated applications.
- Isolate legacy systems and applications:
- For legacy systems and applications that can't be replaced with newer, updated versions, use a small dedicated domain to support them.
- Like the legacy forest, do not configure the new pristine domain to trust the legacy app domain. The best bet would be no trusts whatsoever, though you may need to configure the legacy app domain to trust the pristine domain; just don't allow interactive logons with the new pristine domain accounts (my advice).
- Simplify security for end users.
- Use simple techniques such as dedicated secure systems for accessing sensitive files.
- Consider alternative authentication methods, such as smart cards, biometrics, or even "authentication data that is secured by trusted platform module (TPM) chips in users' computers".
Creating Business-Centric Security Practices for Active Directory
This section provides strong justification for cooperation between the business and IT with regard to securing IT assets. This is one of those gray areas that is hard to put a finger on, and therefore one of the last areas we techies want to deal with. But the message is conveyed clearly in the following two paragraphs:
"In the past, information technology within many organizations was viewed as a support structure and a cost center. IT departments were often largely segregated from business users, and interactions limited to a request-response model in which the business requested resources and IT responded. "
"As technology has evolved and proliferated, the vision of "a computer on every desktop" has effectively come to pass for much of the world, and even been eclipsed by the broad range of easily accessible technologies available today. Information technology is no longer a support function, it is a core business function. If your organization could not continue to function if all IT services were unavailable, your organization's business is, at least in part, information technology. "
Here are a few more salient points from this section:
- "Like you define levels of service for system uptime, you should consider defining levels of security control and monitoring based on criticality of asset."
- Have well-defined processes to identify owners of data, applications, user accounts, and computer accounts.
- "The more routine it becomes for a business owner to attest to the validity or invalidity of data in Active Directory, the more equipped you are to identify anomalies that can indicate process failures or actual compromise. "
- Classify not only the data, but also servers hosting data. Monitor servers based on the classification of data they host. Do the same with applications.
- It is simply not possible in most organizations to monitor all users all the time. Like data, classify user accounts and closely monitor the most important accounts.
Summary of Best Practices
The final section provides the same 22 best practices listed in the Executive Summary. It includes hyperlinks to the sections that discuss each best practice. They are listed roughly in order of priority.
Hopefully you've found this summary useful and will have time to eventually review the entire paper. I think Microsoft's "Best Practices for Securing Active Directory" provides a solid roadmap and set of recommendations that can be used in many organizations for attack prevention, detection, and recovery.
Mike Pilkington is a Sr. Information Security Analyst and Incident Responder for a Fortune 500 company in Houston, TX, as well as a SANS Mentor and Community Instructor. Mike will be mentoring FOR508: Advanced Computer Forensic Analysis and Incident Response in Houston, TX, Oct 1 - Dec 3, 2013, and teaching FOR408: Computer Forensic Investigations - Windows In-Depth in Chicago, IL, Oct 28 - Nov 2, 2013. Mike will also be speaking at the 2013 SANS DFIR Summit held in Austin, TX, July 9 & 10.