BloodHound – AD Adversary Resilience Methodology

Last month I was introduced to BloodHound and the Active Directory Adversary Resilience Methodology via a special workshop put on by SpecterOps.

While much of the time and nit-picky technical detail centers on the Cypher query language, the overall technology and approach is so awesome that I found myself not really caring that it took a while to figure out how to express what I wanted.

Here’s the punch line: as a defender, this approach gives you a really excellent tool to figure out how attackers might compromise the high-value targets protected by your Active Directory. It includes a visual map of their potential paths, and a way to model how possible mitigations would change which paths remain. The tool itself has an excellent command of the possible exposures in an AD environment–one which I can almost guarantee will exceed your own awareness and ability to track them in your environment.

With a tool and approach like this, you can:

  • identify weak points in your environment that need extra attention
  • quantitatively evaluate proposed mitigations or changes
  • quantitatively compare your security posture between two points in time

The tool is not perfect, but for an open-source labor of love released within the last year, it’s pretty impressive–especially when you note the scientific methodology behind it.

Here’s how it generally works:

  1. You set up a neo4j database (and web interface) – https://www.youtube.com/watch?v=o22EMEUbrNk (walkthrough)
  2. You set up BloodHound to use that neo4j db – https://github.com/BloodHoundAD/BloodHound/releases/tag/2.0.3 (check for newer versions)
  3. You run BloodHound’s data collector in your environment to populate BloodHound’s db (look for SharpHound in the above repo)
  4. You use a combination of the BloodHound UI and the neo4j web interface to explore your environment and the possible attack paths

Neo4j is a graph database, with nodes and edges (relationships between nodes), which makes the needed modeling efficient. BloodHound defines a great set of AD-related nodes and edges in its schema, and the data collector goes about discovering that data in your environment.

Once you’ve got a database with data from your environment, you can use the BloodHound UI or the neo4j web interface (using the Cypher query language) to identify attack paths to high-value targets in your environment. An obvious path to look for is one from domain users to domain admins. Finding all such paths in a single query isn’t really practical–instead you might find the paths which are shortest in terms of the number of hops from node to node. For example, a domain user might be able to log into a computer where a domain admin has a session (i.e. has logged in)–that’s a short path to escalate to domain admin.
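
As a concrete illustration, here’s a minimal Cypher sketch of that kind of query. The domain name EXAMPLE.LOCAL is a placeholder (BloodHound stores object names in upper case), so adjust for your environment:

    // Find the shortest attack paths from Domain Users to Domain Admins
    // (EXAMPLE.LOCAL is a hypothetical domain name)
    MATCH (g1:Group {name:'DOMAIN USERS@EXAMPLE.LOCAL'})
    MATCH (g2:Group {name:'DOMAIN ADMINS@EXAMPLE.LOCAL'})
    MATCH p = allShortestPaths((g1)-[*1..]->(g2))
    RETURN p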

You also have the power to manually manipulate the nodes and edges in the database: you can add them, and you can remove them. You might do this to simulate what the environment would look like if you applied a given mitigation in your real environment.
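
For instance, here’s a sketch of simulating a mitigation by removing a session edge, and modeling a proposed change by adding a group membership (run each statement separately; all names here are hypothetical):

    // Simulate a mitigation: remove a domain admin's session from a workstation
    MATCH (c:Computer {name:'WORKSTATION01.EXAMPLE.LOCAL'})-[r:HasSession]->(u:User {name:'ADMIN01@EXAMPLE.LOCAL'})
    DELETE r

    // Or model a proposed change by adding an edge, e.g. a new group membership
    MATCH (u:User {name:'SVC01@EXAMPLE.LOCAL'}), (g:Group {name:'HELPDESK@EXAMPLE.LOCAL'})
    CREATE (u)-[:MemberOf]->(g)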

By iteratively applying manual manipulations and re-running queries for the shortest attack paths you can identify as many weak points in your environment as you care to find. This gives you a laboratory-like environment where you can explore and test a hypothesis using a scientific methodology.

If you layer an analysis tool like PowerBI on top of this, you can put together a dashboard which gives you an objective sense of the overall security stance of your environment in a potential configuration. You’d apply manual manipulations to your BloodHound database and check the PowerBI dashboard to see how much improvement resulted. Likewise, you might use this kind of approach to provide an independent analysis of the security risk profile of any proposed change to your environment before it is actually implemented.
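
A dashboard like that needs underlying metrics. One hypothetical metric, again assuming a domain named EXAMPLE.LOCAL, is the number of users with any attack path to domain admins–re-run it before and after each manipulation to quantify the change:

    // How many users currently have an attack path to Domain Admins?
    // (a sketch; on a large database this query can be slow)
    MATCH (u:User), (da:Group {name:'DOMAIN ADMINS@EXAMPLE.LOCAL'})
    MATCH p = shortestPath((u)-[*1..]->(da))
    RETURN count(DISTINCT u) AS usersWithPathToDA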

There are some operational questions you’ll need to figure out if you use BloodHound as a risk evaluation tool.

Database management

If you are making manual changes, you are likely to want some way to copy and roll back your database to a given state. Alternatively, you can save your SharpHound data collection and re-import it into a fresh database. Do you need multiple copies of the database to support more than one purpose?
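
If you take the re-import route, one simple (if blunt) way to get back to a clean slate is to clear the database first and then re-import the saved collection:

    // Wipe all nodes and edges so a saved SharpHound collection
    // can be re-imported into a clean database
    MATCH (n) DETACH DELETE n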

Data collection

How often do you do a fresh run of SharpHound, and how often do you import the results?
Which account runs SharpHound? From which host(s) is SharpHound run?
Do you notify others in your environment that they may see connections from the SharpHound host(s)?

Basic reading

https://posts.specterops.io/introducing-the-adversary-resilience-methodology-part-one-e38e06ffd604
https://posts.specterops.io/introducing-the-adversary-resilience-methodology-part-two-279a1ed7863d
https://posts.specterops.io/bloodhound-2-0-bc5117c45a99

Intermediate reading

Sharphound rewrite: https://blog.cptjesus.com/posts/newbloodhoundingestor
Low-level technical details on sharphound: https://blog.cptjesus.com/posts/sharphoundtechnical
Intro to Cypher: https://blog.cptjesus.com/posts/introtocypher
Cypher cheat sheet: https://neo4j.com/docs/cypher-refcard/current/

Engaging

Slack channel for Bloodhound: https://bloodhoundgang.herokuapp.com/

Following

Andy Robbins Twitter: @_wald0
Rohan Vazarkar Twitter: @CptJesus
SpecterOps Twitter: @SpecterOps

Pass the Hash Mitigations

Over the last couple of years, I’ve seen countless articles explaining Pass the Hash, and there are even a few high-level whitepapers about what to do about it. Here are some of the best of those:

There’s also a good paper covering the Kerberos equivalent:

Awareness and understanding are all well and good, but there isn’t enough thinking and public discussion about the specific ways to mitigate the risk.

In many ways, it feels like folks are holding their breath and hoping Microsoft will fix it all.

And of course, in the meantime more places are getting compromised. I have little doubt that yesterday’s announcement of the Premera compromise will turn out to be yet another PtH story.

We’ve closely looked at the Protected Users & Authentication Policies features. Like several notable 3rd parties, we’re not convinced they are especially effective given their design. I have high hopes for the Aorato Directory App Firewall (or whatever Microsoft chooses to call it when it re-releases it) and its ability to detect PtH. Likewise, I’m hopeful about the Next Generation Credentials that Joe Belfiore publicly revealed more about recently (though it isn’t yet clear that NGCs help with PtH). And there are the Just in Time administration capabilities that should be released soon with MIM, which would limit how long a given account has privs, instead of it always having privs. There’s also the recent guidance on how to reset the KRBTGT account password. Those are the major things Microsoft has cooking/available for this problem. But they are just part of a strategy, most aren’t available yet, and in some cases they require major changes that aren’t realistic to expect anytime soon.

Don’t get me wrong, I’m totally impressed with how much investment Microsoft has poured into this; it’s just that I don’t want to sit around idly anymore, and I don’t think anyone else should either.

We’ve been talking a bit here about potential ways to mitigate the risk that don’t require some new thing and wouldn’t necessarily be a major impact to implement. Not to eliminate the risk, but to limit it. Here are some ideas we’ve been tossing around:

  • Automatic user profile deletion after X days via a group policy setting. Eliminates cached creds lying idly around. Definitely a good idea for servers, maybe not so much for workstations given the roaming scenario.
  • Develop a strategy to block the ‘logon over the network’ user right. The idea here is to inhibit lateral movement. Still struggling with whether this should leverage the block setting or reduce who has the allow, and beginning to lean toward the latter. Things that need more thought are how to break your sets of computers into logical groupings by level of trust where you’d apply this–put another way, what are the trust boundaries for this idea? Also needing thought is which accounts would lose logon over the network–it seems like you’d want to target accounts that have any sensitive permissions. This idea seems to have lots of promise, but the details are a bog.
  • Push Group Managed Service Accounts more heavily. Perhaps to the point of eliminating “normal” user accounts for services, scheduled tasks, etc.
  • Have better education/practices for which admin accounts should be used in which scenarios. The “workstation admin” account is only used to log into workstations, the “server admin” account only to log into servers, and the “domain admin” account only to log into DCs. Never use these on the same computer, because then you’ve created an escalation bridge. If possible, detect when a type of admin account has been used improperly, and alert so that action can be taken to eliminate the resulting risk (see the sketch after this list).
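
For that last detection idea, session data like BloodHound’s (described in the first half of this post) offers one way to spot improper admin logons. A hypothetical Cypher sketch against the BloodHound schema, again assuming a domain named EXAMPLE.LOCAL:

    // Find domain admin sessions on computers that are not domain controllers
    // (a sketch; the domain name is a placeholder)
    MATCH (c:Computer)-[:HasSession]->(u:User)-[:MemberOf*1..]->(:Group {name:'DOMAIN ADMINS@EXAMPLE.LOCAL'})
    WHERE NOT (c)-[:MemberOf*1..]->(:Group {name:'DOMAIN CONTROLLERS@EXAMPLE.LOCAL'})
    RETURN DISTINCT c.name, u.name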

We’re open to hearing others’ ideas, hearing critiques, or collaborating on these ideas.