In this eighth installment of our Blue Team training series, we’ll step into Microsoft Sentinel – Microsoft’s cloud-native SIEM and SOAR platform – to supercharge our threat hunting and automated defenses. In previous posts, John (our IT pro protagonist) witnessed how attackers could breach a system named test001. John is putting on his blue team hat to hunt for those threats and set up alerts and auto-responses using Microsoft Defender XDR capabilities in Sentinel. We’ll cover four key areas in this post:

  • Sentinel Watchlists – how to create and use them to track known bad indicators.
  • KQL Queries – writing simple queries in Sentinel’s powerful query language to spot suspicious behavior.
  • Custom Analytics Rules – turning those queries into automated alerts that flag threats.
  • Automated Response (Logic Apps) – setting up playbooks that automatically take action (like disabling a user or tagging a device) when an alert is triggered.

Our goal is to keep it beginner-friendly and practical. By the end, you’ll see how John uses Sentinel to hunt threats on test001 and beyond, with clear steps and examples you can follow. Let’s dive in!

Overview of Threat Hunting Capabilities

Before we jump into John’s story, here’s a quick overview of the tools and capabilities we’ll be using and how they help the blue team:

  • Microsoft Sentinel – Cloud SIEM & SOAR platform that aggregates and analyzes logs from many sources. Blue team benefit: centralizes security data for threat detection and response (the “single pane of glass” for defenders).
  • Watchlists – Custom lists of data (IPs, users, devices, etc.) you can upload into Sentinel. Blue team benefit: easily reference known bad indicators or critical assets in queries and rules for quick correlation.
  • KQL Hunting Queries – Search queries using Kusto Query Language (KQL) to filter and find events in log data. Blue team benefit: proactively sift through mountains of logs to spot anomalies or suspicious events that weren’t automatically flagged.
  • Custom Analytics Rules – Detection rules (often built from KQL queries) that Sentinel runs automatically on a schedule. Blue team benefit: continuous monitoring – get alerted when specific threat patterns occur without running searches manually.
  • Logic Apps (Playbooks) – Automated workflows triggered by alerts/incidents (Sentinel uses Azure Logic Apps as “playbooks” for response). Blue team benefit: SOAR in action – immediate, consistent responses to threats (e.g., disable a compromised account, isolate a machine, notify the team).

Now that we know what’s in our toolkit, let’s walk through how John uses each in a simple threat-hunting scenario.

Using Sentinel Watchlists for Known Bad Indicators

John starts by leveraging Microsoft Sentinel Watchlists to keep track of known bad indicators. Think of a watchlist as a simple custom table of data you can upload (for example, a CSV of IP addresses, domain names, file hashes, user names, etc.). Sentinel watchlists let you quickly import this data and correlate it with your logs and alerts.

This is perfect for a blue team scenario: John can maintain a watchlist of “known bad IPs” or other threat indicators from threat intelligence feeds and use it to hunt for any occurrences of those indicators in his environment.

How to Create a Watchlist: John performs the following steps to create a new watchlist in Sentinel:

  1. Navigate to Watchlists: In the Microsoft Sentinel portal, under Configuration, he clicks on Watchlists.
  2. Add New Watchlist: He clicks the + New Watchlist button, which opens the Watchlist wizard, where he can upload a file or enter data.
  3. Name and Details: John names the watchlist “KnownBadIPs” and adds a brief description (e.g. “Indicators of known malicious IP addresses from threat intel”).
  4. Upload Data: For the Source, John selects Local file and uploads a CSV file containing a list of suspicious IP addresses his team is tracking. For example, his CSV might have a column for “IPAddress” (and perhaps a “Description” or “ThreatType”). He specifies IPAddress as the SearchKey, which tells Sentinel which column will be the primary key to match against log data.
  5. Review and Create: He reviews the entries and clicks Create. Sentinel imports the list, and now “KnownBadIPs” is available as a watchlist in his workspace.
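For reference, a minimal watchlist CSV for the steps above might look like the following. The column names mirror the example in step 4; the IP values here are illustrative documentation addresses, not real indicators:

```csv
IPAddress,Description,ThreatType
198.51.100.23,Suspected C2 server from threat intel feed,CommandAndControl
203.0.113.77,Reported phishing infrastructure,Phishing
```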

John’s watchlist is now ready. Watchlists can be used for many things, not just bad IPs. You could have a watchlist of high-value assets (servers you care about), terminated employees (accounts that should no longer be logging in), etc. For now, John is focusing on known bad indicators.

Hunting with a Watchlist: John can use the “KnownBadIPs” list in his hunting queries. The power here is that instead of manually checking each IP, he can have KQL check any log with an IP against the list. Sentinel provides a special function _GetWatchlist() for this. Let’s see it in action.

John wants to see if test001 (the system attacked in earlier posts) ever communicated with any IP in his bad IP list. He can run a query in Sentinel’s Logs view like this:


// KQL Query: Find any network connections from test001 to our Known Bad IPs watchlist
let badIPs = (_GetWatchlist('KnownBadIPs') | project IPAddress);
DeviceNetworkEvents
| where DeviceName == "test001"   // focus on the test001 machine
| where RemoteIP in (badIPs)      // RemoteIP is in our bad IP list
| project Timestamp, DeviceName, RemoteIP, InitiatingProcessFileName

In this KQL query, we first use _GetWatchlist('KnownBadIPs') and project the IPAddress column into a variable called badIPs. Then we filter the DeviceNetworkEvents table (which contains endpoint network connection logs from Defender for Endpoint) for any events on test001 where the RemoteIP matches one of those bad IPs. We project relevant fields like the time, the device, the remote IP, and the process that initiated the connection.

What does this tell us? If this query returns results, it means test001 tried to contact a known malicious IP – a big red flag! John can run this query interactively to hunt through historical data. If he finds a match, that’s an indicator that test001 might be compromised (for example, malware on test001 reaching out to a command-and-control server). In an actual situation, John could also run similar queries across all devices (not just test001) to see if any machine in the network contacted those bad IPs.

Sentinel watchlists make this easy by allowing a simple … | where RemoteIP in (badIPs) filter instead of hard-coding dozens of IP addresses into the query. They also make maintenance easier – update the watchlist in one place, and any query or rule using it will use the new data.
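To illustrate the environment-wide version mentioned above, here is a hedged sketch that reuses the same _GetWatchlist() function and DeviceNetworkEvents table, drops the DeviceName filter, and adds a summarize to rank machines by how often they touched a bad IP:

```kql
// Sketch: check every device against the watchlist, not just test001
let badIPs = (_GetWatchlist('KnownBadIPs') | project IPAddress);
DeviceNetworkEvents
| where RemoteIP in (badIPs)
| summarize Connections = count(), FirstSeen = min(Timestamp), LastSeen = max(Timestamp)
    by DeviceName, RemoteIP
| order by Connections desc
```

Sorting by connection count surfaces the noisiest offenders first, which is usually where triage should start.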

Hunting with KQL Queries for Suspicious Behavior

Next, John digs into KQL queries to hunt for other suspicious behaviors in Sentinel. KQL (Kusto Query Language) is the query language used by Microsoft Sentinel (and by Microsoft Defender’s advanced hunting). It might look like SQL, but it’s read-only and designed for filtering and pattern-matching on large log data sets. The good news is you don’t need to be a developer to use KQL – it’s pretty beginner-friendly, with its | pipe syntax chaining filters and operations.

Why KQL? As a blue teamer, John doesn’t want to wait for an alert to pop up for every possible threat (many sophisticated threats might not trigger a default alert). Instead, he proactively searches through the “mountains of data” for anomalies. Microsoft Sentinel’s hunting search tools let him query logs from multiple sources: endpoint logs, Azure AD sign-in logs, Office 365 activity, and firewall logs—all in one place. This proactive hunting helps discover issues that automated alerts might miss.

Let’s walk through a simple KQL query example. John decides to check for suspicious login activities in his environment. Specifically, he wants to identify if any user accounts are experiencing a high number of failed login attempts in a short time (which could indicate a password spraying or brute-force attack). This kind of pattern is suspicious: normally, users might mistype a password once or twice, but 5 failed attempts within a couple of minutes could be an attacker trying to guess the password.

John goes to the Logs section in Sentinel and uses a query like this:


// KQL Query: Detect multiple failed logins (Azure AD sign-in failures)
SigninLogs
| where ResultType == 50126   // 50126 means "Invalid username or password" in Azure AD sign-ins (login failure)
| summarize FailedCount = count() by UserPrincipalName, bin(TimeGenerated, 5m)
| where FailedCount >= 5

Let’s break down what this query does:

  • We’re looking at the SigninLogs table containing Azure AD authentication attempts.
  • We filter with ResultType == 50126. In Azure AD, result code 50126 signifies a bad password or invalid credentials (essentially a login failure due to a wrong password). By filtering on this code, we focus only on failed sign-in attempts.

  • We then use summarize … by UserPrincipalName and a time bin of 5 minutes. This groups the sign-in failures by user and by 5-minute time chunks, counting the number of failures each user had in each chunk (FailedCount).
  • Finally, we filter where FailedCount >= 5 – meaning we’re interested in cases where a user had five or more failed logins within a 5-minute window.

When John runs this query, if any user (say John@techmentor.com or any employee) shows up with a FailedCount of 5 in 5 minutes, that’s a suspicious event worth investigating. It could be an attacker trying to brute-force that user’s password. In a production environment, John might adjust the thresholds (maybe 10 failures in 5 minutes, etc., depending on what’s normal vs abnormal for his organization), but for our example, 5 in 5 is a clear sign of something wrong.

Using KQL Results: John can inspect the query results directly in the Sentinel query results pane. He’d see the username and count of failures for each suspicious burst of failures. He could extend the query to get details like the IP addresses those attempts came from, using additional columns in SigninLogs (e.g., IPAddress or Location). This could reveal if those failures are coming from an unusual country or a known malicious IP (which he could cross-check against his watchlist as well!).
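One way that extension might look is sketched below: make_set() collects the distinct source IPs and locations seen in each 5-minute burst, assuming the standard IPAddress and Location columns in SigninLogs:

```kql
// Sketch: add source IP and location context to the failed-login hunt
SigninLogs
| where ResultType == 50126
| summarize FailedCount = count(),
            SourceIPs = make_set(IPAddress),
            Locations = make_set(Location)
    by UserPrincipalName, bin(TimeGenerated, 5m)
| where FailedCount >= 5
```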

Beyond this specific query, John has a whole world of KQL hunting at his fingertips. He can search for processes or commands that attackers use, like looking at DeviceProcessEvents for command-line executions of tools (e.g., checking if powershell.exe was run with a Base64 encoded command, which might indicate obfuscated PowerShell usage), or search Windows event logs for things like new service installations (Event ID 7045) or suspicious scheduled tasks (Event ID 4698). The idea is to think like an attacker (what would they do?) and then query the logs to see if those activities happened. Many example hunting queries are available out of the box or from the community, but even simple ones like the above can uncover important clues.
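For instance, the encoded-PowerShell idea could be sketched roughly like this (the -EncodedCommand / -enc switches are common obfuscation markers; tune the filters for your environment, since this will also match legitimate admin scripts):

```kql
// Sketch: hunt for PowerShell launched with an encoded command line
DeviceProcessEvents
| where FileName =~ "powershell.exe"
| where ProcessCommandLine has_any ("-EncodedCommand", "-enc")
| project Timestamp, DeviceName, AccountName, ProcessCommandLine
```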

At this point, John has manually hunted and found some interesting signals: for instance, imagine he found that test001 did contact a known bad IP (from the first query) and also saw a burst of failed logins for the user “Admin@test001” from an IP in Russia (which shouldn’t be happening). These are things he wants to address. Hunting is excellent for one-time discovery, but John doesn’t want to run these queries by hand daily. So next, he will turn these queries into analytics rules that automatically detect and alert on these patterns.

Creating Custom Analytics Rules for Alerts

Custom Analytics Rules in Sentinel allow John to take a KQL query (like the ones we just used) and have Sentinel run it on a schedule, automatically generating an alert (and an incident ticket) if the query finds results. In other words, if our hunting query is a sound detection, we can “operationalize” it so John’s team will be notified when it triggers. This shifts us from purely proactive hunting to automated detection – a core part of Blue Team operations.

Microsoft Sentinel has many built-in rule templates (covering common threats like brute force attacks, malware outbreaks, etc.), but it also lets you create custom rules tailored to your environment. John creates a custom rule based on the query about failed logins we wrote above. This way, he’ll get an alert if any account has five failed sign-in attempts in 5 minutes.

Here’s how John creates the rule step by step:

  1. Open Analytics Blade: In the Sentinel portal, John navigates to Analytics (in the left menu). Here, he sees existing active rules and a button to create new ones.
  2. Create a Scheduled Rule: He clicks + Create and selects “Scheduled query rule” (since this will run periodically on a schedule).
  3. General Settings: On the General tab, John gives the rule a name and description. For example, he calls it “Multiple Failed Login Attempts (Brute Force Detect)” and describes it as “Detects 5 failed logins within 5 minutes for the same user – potential brute force attack.” He sets the severity to Medium or High (since this could indicate an attack in progress, he might choose High). He leaves the rule enabled (Active).
  4. Set Rule Logic: This is where John enters the KQL query logic. He copies the KQL from his hunting query and pastes it into the query editor. The wizard also asks for a few additional settings:
    • Run frequency: John chooses to run this query every 5 minutes.
    • Lookup period (or query period): He sets it to look at the last 5 minutes of data on each run. (This aligns with our query, which bins data in 5-minute intervals.)
    • Trigger threshold: John specifies when the rule should trigger an alert. He can choose “trigger on > 0 results” (meaning if the query returns anything, fire the alert) or set a threshold like “trigger when FailedCount >= 5”. In our case, the query already filters to users with five or more failures, so any result means an alert. He opts to alert on any result.
    • (Optional) Entity mapping: Sentinel allows mapping fields in the query to entities like Accounts, Hosts, and IP addresses. John maps the UserPrincipalName from his query to the Account entity so the alert will clearly show which user account is involved. He could also map an IP entity if relevant. (This step helps Sentinel attach contextual information to incidents, but the query results will be in the alert details even if skipped.)
  5. Incident Settings: John can define how incidents are created. Each rule run that finds something will create a new incident by default. He decides to keep it simple: one incident per detection. (Sentinel has options to group multiple alerts into a single incident to reduce noise – for example, group all failed login alerts for the same user into one incident – but John will tune that later once he sees the volume. For now, each alert is crucial enough to handle separately.)
  6. Automated Response: (We’ll dig deeper into this in the next section.) On this tab, John can attach a playbook (Logic App) to run automatically when the alert is generated. John has a playbook in mind to disable users with suspected brute force attacks, so he selects that here. (If he hasn’t created it yet, he could still finish the rule and add the playbook later.)
  7. Review and Create: John reviews the rule settings and hits Create. The new analytics rule is now active in Sentinel.

Once this rule is live, Sentinel will continuously run the query in the background. If, say, an attacker tries to brute-force the password of John’s account or any user tomorrow and fails five times in five minutes, Sentinel will immediately create an alert/incident for John’s team. They’ll see something like “Multiple Failed Login Attempts (Brute Force Detect)” triggered, with details on which user account and how many failures.

John no longer has to hunt for this pattern manually – he’s effectively taught Sentinel to watch out for it 24/7. He can create additional custom analytics rules for other things he wants to catch. For example, he might create a rule using the watchlist query “Outbound Connection to Known Bad IP” that triggers if any device connects to an IP in his KnownBadIPs watchlist. The process is similar: write a KQL query using the watchlist, decide on frequency (maybe every 1 hour for that one), and create the rule. This way, he gets alerts for brute force logins and any machine contacting malicious IPs (which could indicate malware beaconing out).
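The rule logic for that watchlist-based detection could be as simple as the earlier hunting query with the per-device filter removed (a sketch; the projected columns give the rule useful fields for entity mapping):

```kql
// Sketch: scheduled-rule query for "Outbound Connection to Known Bad IP"
let badIPs = (_GetWatchlist('KnownBadIPs') | project IPAddress);
DeviceNetworkEvents
| where RemoteIP in (badIPs)
| project Timestamp, DeviceName, RemoteIP, InitiatingProcessFileName
```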

John is strengthening his organization’s security monitoring by building a library of these custom detections. He’s essentially encoding the knowledge he gains from threat hunting into automated rules. Let’s take it one step further – automating the response when those alerts fire.

Automated Response with Logic Apps (SOAR Playbooks)

Detecting threats quickly is great, but responding quickly is just as important for the blue team. Microsoft Sentinel helps here through Automation Rules and Playbooks, which are built on Azure Logic Apps. A playbook is essentially a workflow of actions that can be executed in response to an alert or incident. This is Sentinel’s SOAR (Security Orchestration, Automation, and Response) capability in action.

Why automate? Imagine John’s failed login alert fires at 3 AM while he is asleep. Instead of waiting for him to wake up and react, a playbook could automatically disable the affected user account, block the attacker’s IP, and notify the on-call analyst—all within seconds of the alert. This can contain a threat early and prevent further damage. Automation also ensures consistent responses every time.

Sentinel provides hundreds of ready-to-use playbook templates in its Content Hub, covering everyday response actions. These include things like:

  • Disabling a compromised user account – e.g., immediately block the user in Azure AD (Microsoft Entra ID) when an incident indicates the account is compromised.
  • Tagging or isolating a device – e.g., if a host (like test001) is confirmed infected, isolate it from the network via Microsoft Defender for Endpoint or add a “Compromised” tag to it for tracking.
  • Sending notifications – send an email to the security team or a message to a Microsoft Teams channel with details of the incident.
  • Collecting more info – trigger a script to gather forensic data, or create a ticket in an ITSM system, etc.

John implements an automated response for the brute force detection rule he created. He uses a playbook template called “Block Azure AD User,” which will disable a user in Azure AD. He also adds steps to this playbook to notify the IT team via Teams. Here’s how John sets it up:

  1. Obtain/Configure the Playbook: In the Content Hub (or Automation blade), John finds the “Block Azure AD User” playbook template. He deploys it to his Sentinel workspace. This creates an Azure Logic App with predefined steps. He opens the Logic App designer to tweak it – for example, specifying his tenant’s directory and customizing the Teams message text.
  2. Link to Analytics Rule: John goes to the Automated Response tab in the Analytics rule (either during creation or by editing it after). He adds a new Automation Rule that says: When an incident is created by this analytics rule, run the “Block Azure AD User” playbook. He sets it to run automatically. He could also set conditions (e.g., only run if the alert is High), but in this case, it’s always high severity, so it’s okay.
  3. Test the Flow: To be safe, John tests the playbook with a sample incident (Sentinel allows you to trigger a playbook on a past incident manually). The playbook runs, and he verifies that the test user account gets disabled and that a Teams message appears as expected.

Now, the entire chain is automated. Let’s replay the scenario: an attacker tries to brute force an account’s password. Within minutes, Sentinel’s analytics rule detects it and creates an alert. Instantly, the linked playbook triggers: Azure AD blocks the user account (preventing the attacker from getting in even if they guess the password later), and the security team gets a Teams notification that “User X was disabled due to 5 failed login attempts in 5 minutes on host test001 – possible brute force attack.” The team can then investigate the incident first thing in the morning (or if someone is on call, they’ve already been notified in Teams).

John can set up similar automation for other scenarios. For the “Known Bad IP connection” rule, he might attach a playbook that isolates the machine (e.g., disconnects test001 from the network via Defender for Endpoint) to stop any communication with the malicious IP. He could also have a playbook that creates a ticket in their helpdesk system whenever a high-severity incident is generated, ensuring IT is tracking it.

It’s worth noting that while automation is powerful, John will use it judiciously – some alerts might warrant a human review before taking action (to avoid false positives locking out a user accidentally). Sentinel’s automation rules can be configured to run only for certain incident types, during off-hours, etc., to balance caution and speed. In our brute force example, disabling a user is a safe response (the user can always call IT to unlock their account, and it’s better than letting an attacker in). For isolating a server like test001, John might choose to have that require manual approval in some cases, depending on the confidence of the alert.

Wrapping Up: Blue Team Wins with Sentinel

In this post, we saw how John used Microsoft Sentinel’s threat hunting and automation capabilities to greatly enhance his blue team operations in the test001 scenario. To recap:

  • Watchlists allowed John to import a list of known bad IPs and effortlessly check his logs for any occurrences of those indicators. This turned external threat intel into actionable data in his SIEM.
  • KQL queries empowered John to sift through log data and uncover suspicious patterns (like repeated failed logins) that might otherwise hide in the noise. Even as a beginner, John found the queries logical to write and extremely powerful in results.
  • Custom Analytics Rules let him automate those discoveries, essentially teaching Sentinel what to look out for. Now, potential attacks (like brute force attempts or connections to malicious IPs) will set off alarms for his team immediately rather than being found days later.
  • Automated responses (Logic App Playbooks) allowed John to respond to threats in seconds. By pre-defining actions like disabling accounts or alerting the team, he ensured that when alerts fire, they’re not just noisy bells – they trigger concrete containment and notification measures. This closed the loop from detection to response, embodying the “XDR” vision (eXtended Detection and Response).

These tools might seem advanced for an entry-level IT professional in a blue team role, but hopefully, this walk-through showed that they can be used in a simple, step-by-step way to improve security significantly. John started with a straightforward scenario (failed logins and bad IPs) and gradually built an end-to-end solution in Sentinel. You can start small, too: create a watchlist of something you care about, write a basic query, and turn it into an alert. Then, you can add a playbook to automate an email notification to yourself for that alert. Step by step, you’ll become comfortable with the technology and the workflow.

By implementing Microsoft Defender XDR capabilities in Sentinel as described, John has turned test001 from an initially compromised system into a well-monitored and protected asset. More importantly, he’s gained visibility and control across his environment. The blue team now has the upper hand – with Sentinel watching their backs and ready to pounce on threats, attackers will have a more challenging time going undetected.

Stay tuned for the next part of the series, where we’ll continue to build on these blue team skills. In the meantime, happy threat hunting! The tools might be high-tech, but the strategy is simple: know what’s bad, look for it, and respond fast. Microsoft Sentinel makes doing that a lot easier for you and John. As always, this post is companion content to our new book, Red Teaming and Blue Teaming with Defender XDR.

Thanks,

John Sr.