The Hidden Dangers of Microsoft Teams Open Federation
Microsoft Teams has become a backbone of enterprise collaboration, connecting employees across departments and even across organizations. However, its open federation feature – enabled by default – can be a double-edged sword. Open federation allows external users (even from entirely different domains or companies) to find, call, and message your employees by default. This convenience opens the door to serious security risks if left unchecked. In this post, we explore how threat actors – including Iranian state-sponsored groups like Storm-0133 – are exploiting Teams’ open federation by impersonating IT staff to launch phishing and social engineering attacks. We’ll break down how open federation works, real-world attacker tactics, techniques, and procedures (TTPs) and known indicators of compromise (IOCs), and conclude with urgent steps to mitigate these threats.
What is Open Federation in Teams (and Why It’s Risky)?
By design, Microsoft Teams is meant to federate with external organizations. In open federation mode (the default setting), any external Teams user can initiate a chat or call with your employees, unless you explicitly restrict or disable it. In practice, this means an attacker can create a Microsoft Teams account (for example, through another Office 365 tenant) and contact people in your organization by username – no special access required. Microsoft’s own documentation confirms that with open federation, external users “find, call and chat with people in your organization in any domain” by default.
From a security standpoint, open federation exposes organizations to unsolicited external communications by default. Many companies are unaware that strangers (or threat actors) can directly reach employees via Teams without any email phishing or malware needed. This external access is intended for trusted business partners – but threat actors can abuse it. Attackers leverage these default settings to their advantage, impersonating trusted parties and blending into what looks like normal internal collaboration.
Why is this such a problem? Because users inherently trust their corporate chat apps. An unexpected email from outside the company might raise suspicion, but a message on Teams often feels internal and legitimate. Attackers exploit this trust. They count on the fact that employees don’t always notice the small “External” label on a contact, or understand that by default any external entity can message them. In short, open federation can trick employees into letting the wolf right through the front door.
How Threat Actors Exploit Teams’ Open Federation
Cybercriminals and nation-state hackers alike have begun abusing Teams’ external communication features in clever phishing and social engineering schemes. Phishing is no longer just an email problem – it’s now happening through Teams chats and calls. Here’s a high-level look at how these attacks work and who is behind them:
- Impersonating Internal IT Staff: The most common ruse is posing as a help desk or IT support representative. Adversaries send a Teams message (or call via Teams) to an employee while pretending to be from the company’s IT department. Because the communication arrives through Teams – the same channel real internal IT might use – victims are more likely to trust it. The attacker often uses a display name and profile picture that mirror a real IT employee or a generic “IT Support” persona, exploiting the trust and authority those roles carry.
- Leveraging Iranian APT Social Engineering: Sophisticated state-aligned groups are in on the action too. Iranian threat actors like Storm-0133 (also known as Lyceum/Hexane) are infamous for crafty social engineering and impersonation schemes. In previous campaigns, the Lyceum APT built entire fake infrastructures to impersonate a target company’s staff and lure victims. It’s a logical progression for such groups to turn their sights on collaboration platforms like Teams. Microsoft reports Storm-0133 is an Iranian Ministry of Intelligence and Security (MOIS)–affiliated cluster active in targeting organizations in sectors like government, defense, and telecom. Given their penchant for impersonation and phishing, it’s no surprise they would abuse open federation to masquerade as insiders on Teams. In short, nation-state attackers are now using the same tricks as scammers – and vice versa – on corporate chat platforms.
- Financially Motivated Hackers: It’s not just nation-states. Cybercriminal gangs are exploiting Teams for phishing as well. For example, researchers recently uncovered a phishing campaign on Teams where attackers posed as IT support to deploy malware, linked to a financially motivated group dubbed “EncryptHub”. Another group known as Scattered Spider (linked to ransomware operations) has been impersonating IT staff on both Microsoft Teams and Slack to phish employees, according to a joint FBI cybersecurity advisory. These incidents highlight that multiple threat actors – from APTs to ransomware crews – have realized that Teams with open federation is a soft target.
Phishing in Teams: Tactics, Techniques, and Procedures (TTPs)
How exactly do these attacks play out? Threat actors have developed a repeatable playbook for abusing Teams’ external access. Below we outline the common TTPs observed, step-by-step:
- Initial Setup – Fake Accounts and Domains: The attacker first acquires a Microsoft 365 account outside the target organization. This can be done by compromising an existing account in another tenant or by registering a new Office 365 tenant (often using a free trial or cheap license). Notably, attackers have been seen creating new Azure AD (Entra ID) tenants with the default .onmicrosoft.com domain. They often choose tenant names or account aliases that resemble the target company or IT department. For example, an attacker might register an account like Helpdesk@Something.onmicrosoft.com, or a variation of the company’s name, to appear legitimate. Indicator: Be wary of external user accounts with “.onmicrosoft.com” domains, especially those containing keywords like “helpdesk”, “ITSupport”, or your company name.
- “Smoke Screen” Distraction (Email Bombing): Before reaching out on Teams, some attackers create a diversion. A spam or email-bombing attack floods the employee’s inbox with junk mail – sometimes thousands of emails in minutes. This overwhelming barrage is intentional: it creates a sense of urgency and frustration. The victim is primed for someone to step in and “help” with the issue. Indicator: A sudden flood of spam to an employee’s inbox, followed shortly by a message from IT via Teams, is a red flag. Real IT staff wouldn’t normally use Teams to resolve an email spam issue out of the blue.
- Initiating Contact via Teams (External Chat or Call): With the stage set, the attacker (posing as IT) initiates a one-on-one chat or Teams call with the target. Because external messaging and calling are enabled by default in Teams, this goes through unless the organization has restricted it. Microsoft Teams will typically show a small notice that the user is external and may even flash an “external user” warning banner. However, users often click past these warnings. In fact, once the target accepts the chat or call, the warning banner disappears from the Teams interface.
- In some cases, attackers bypass the warning entirely by using a clever trick: they schedule an instant Teams meeting and invite/tag the user, causing a chat window to pop up without the usual external user prompts. On voice calls, it’s even stealthier – by default, Teams does not show an external warning for incoming calls. Indicator: Any unsolicited Teams chat or call claiming to be from internal IT – especially if it comes right after an “incident” like a spam flood – should be treated with skepticism. Always verify if the person is actually from your organization.
- Exploiting Trust in Conversation: Once the chat or call is accepted, the attacker’s goal is to socially engineer the victim. They often claim there is an urgent problem (virus infection, account issue, the aforementioned email flood) that needs immediate fixing. Posing as IT, they then guide the user through steps that actually compromise security. For example, the attacker might say “We noticed unusual activity on your PC; I need to run a scan” and then ask the user to share their screen or grant remote control. Microsoft Teams supports screen sharing in chats. Although external users cannot take remote control of your screen by default, a well-coached victim might still be tricked into running a remote assistance tool. Attackers commonly instruct targets to open Microsoft’s Quick Assist (a legitimate remote desktop tool) or have them install a remote management app like AnyDesk or a similar Remote Monitoring and Management (RMM) tool. Tactic: Under the guise of “tech support,” the attacker is effectively walking the victim through the steps of compromising their own machine. Once the attacker has screen view or remote access, it’s game over – they can steal passwords, sensitive data, or deploy malware.
- Malware Delivery via Teams: In some cases, attackers take it a step further by transferring malicious files through the Teams chat. By default, external one-to-one chats in Teams don’t easily support file attachments (to prevent abuse), but threat actors have found workarounds. For instance, they intercept the chat’s web requests and manipulate them to include a file hosted on the attacker’s SharePoint (associated with their fake tenant). The victim sees what looks like a standard file icon in Teams, clicks it, and fetches the payload from SharePoint – a clever way to slip past some defenses. In a recent phishing campaign, criminals delivered a malicious ZIP file via Teams that, when opened, installed backdoor malware on the system. Indicator: Unexpected file share via Teams, especially from an external contact, should be treated as suspicious. Security teams can look for SharePoint file URLs in Teams message logs as an indicator of potential malicious file sharing.
- Follow-On Exploitation: With a foothold on the victim’s machine (either through stolen credentials, remote access, or malware), the attacker can perform a variety of actions:
- Credential Theft & Lateral Movement: If the user divulges login credentials (e.g., to “verify your account”), the attacker will attempt to use them to access VPNs, email, or other internal systems. If they gained remote desktop control, they may dump password hashes or install keyloggers. Iranian actors in particular often seek valid credentials for deeper network access.
- Privilege Escalation: Some attackers, like Scattered Spider, combine techniques – after initial Teams-based social engineering, they have been known to bypass multifactor authentication (MFA) through prompt bombing (MFA fatigue) or SIM-swapping, allowing them to escalate privileges in the network.
- Deploy Malware or Ransomware: In financially motivated cases, once the attacker has admin access, they may deploy ransomware. Coalition’s incident response team observed attackers using the remote access gained via Teams to execute PowerShell commands, establish a backdoor, and attempt (though ultimately fail) to launch ransomware. The EncryptHub campaign mentioned earlier tried to install remote admin tools to control the system and drop additional malware.
- Quiet Espionage: State-aligned attackers might use the access more stealthily – planting custom malware (for example, an Iranian tool codenamed “Mango” was used by Storm-0133 in other intrusions) to surveil the organization, exfiltrate data, or maintain persistence for later operations.
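To make the “fake helpdesk account” indicator above concrete, here is a minimal detection sketch in Python. The helper name, keyword list, and scoring logic are illustrative assumptions, not an official Microsoft detection – tune them to your own environment:

```python
# Hypothetical heuristic: flag external Teams sender addresses that combine a
# default *.onmicrosoft.com tenant domain with IT-support-themed keywords or
# the target company's own name. Keyword list is an assumption; extend as needed.
SUSPICIOUS_KEYWORDS = ("helpdesk", "itsupport", "it-support", "servicedesk", "support")

def is_suspicious_external_sender(upn: str, company_name: str) -> bool:
    """Return True if an external UPN looks like a fake IT-support account."""
    upn = upn.lower()
    local, _, domain = upn.partition("@")
    # Default tenant domains are cheap to register and recur in these campaigns.
    if not domain.endswith(".onmicrosoft.com"):
        return False
    haystack = local + domain
    if any(keyword in haystack for keyword in SUSPICIOUS_KEYWORDS):
        return True
    # Tenant names that borrow the target company's name are also suspect.
    return company_name.lower() in domain

print(is_suspicious_external_sender("Helpdesk@contoso-it.onmicrosoft.com", "Fabrikam"))  # True
```

A rule like this produces leads, not verdicts – legitimate small businesses also use default .onmicrosoft.com domains, so flagged senders should feed a triage queue rather than an automatic block.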
Indicators of Compromise (IOCs) for Teams Federation Abuse
Defenders should keep an eye out for several telltale signs that external actors are abusing Teams. Some known IOCs and patterns from documented attacks include:
- Unfamiliar External Domains or Tenant Names: Logs or user reports showing external contacts from domains you don’t usually do business with. Especially new or odd-looking .onmicrosoft.com addresses are suspect. In past incidents, attackers have used tenant names that include terms like your company name or IT-related words. E.g., Helpdesk@<random>.onmicrosoft.com contacting an employee is highly suspicious. Security teams should monitor for newly seen external domains in Teams communications.
- External User Display Names Impersonating Staff: Attackers often set their display name to an internal entity. If a user reports a chat from someone named “IT Support” or the name of an actual IT staff member, check if that account was external. Teams will label external users, but employees may overlook it. Any instance of an external account using a company executive’s name or an IT role name is an IOC for an impersonation attempt.
- Burst of Teams “ChatCreated” Events with External Initiators: In Microsoft 365 audit logs, every new Teams 1:1 chat generates a ChatCreated event. If you suddenly see chat creation events initiated by users from outside tenants (identified by an unfamiliar OrganizationId in the log) targeting multiple employees, that’s a sign of an ongoing campaign. In particular, multiple employees getting first-time chats from the same external tenant warrants investigation.
- “TeamsImpersonationDetected” Alerts: Microsoft 365 can sometimes flag suspected impersonation. In at least one simulation, a TeamsImpersonationDetected log was generated when an attacker’s tenant name closely resembled the target’s name. This is not guaranteed to trigger, but if you see this event type in your logs, treat it as an urgent incident (someone may be imitating your org in Teams).
- User Reports of Strange IT Requests: Often, the “IOC” is a human one – an employee might report that “someone on Teams asked me to share my screen, claiming to be IT.” Any such report should be taken extremely seriously and investigated. It likely indicates the early stages of an attack. Educate users that IT will rarely (if ever) cold-call via Teams for urgent fixes – especially not without prior notice or multi-factor verification.
- Concurrent Email Bombing (TIMailData spikes): As noted, adversaries sometimes couple email flooding with their Teams phish. If your mail security logs show a sudden spike of junk mail to a user (or multiple users) and around the same time Teams logs show external chats starting with those users, you may have a targeted attack underway. The correlation of these two events is a strong indicator of the fake helpdesk tactic in action.
- Presence of Remote Support Tools or Scripts: After a social engineering attack, you might detect unusual software on the user’s machine. Look for installations or executions of tools like Quick Assist (which is built into Windows), AnyDesk, TeamViewer, DWAgent, or other remote admin utilities that the user typically doesn’t use. In one case, PowerShell scripts and a ZIP file appeared on the system immediately after a Teams support session, indicating malware setup. These are IOCs for post-compromise activity resulting from a Teams phishing lure.
In summary, watch for the early warning signs: unknown external contacts, IT impersonators, user reports of unusual support chats, and technical breadcrumbs in logs. The faster you detect the social engineering attempt, the better chance you have of stopping the whole attack chain.
Not Just Teams: Other Collaboration Platforms at Risk
While this post focuses on Microsoft Teams, it’s worth noting that similar risks exist in other collaboration and chat platforms. Any tool that connects organizations or allows external messaging can be abused in similar ways:
- Slack: Slack’s cross-org direct messaging (Slack Connect) can potentially allow strangers to send you messages. In fact, the Scattered Spider group mentioned earlier has also infiltrated Slack at some victims, using the same IT support impersonation ruse to phish employees. Attackers exploit the trust in Slack messages just like Teams, sometimes dropping malicious links or files in DMs. Slack workspace invites or shared channels could likewise be used to masquerade as a trusted partner.
- Zoom & Other Meeting Apps: Video conferencing platforms can be used for phishing by impersonating meeting invites or tech support. For example, an attacker could send a Zoom meeting request appearing to be from IT, or pose as a co-worker asking for help on a call. Zoom chats are less commonly used for unsolicited messaging, but Zoom meetings themselves have been used in vishing (voice phishing) schemes – e.g., “Zoom support” calling to troubleshoot an issue. Also, consider platforms like WebEx, Google Meet, etc., which all allow external invites.
- Other Corporate Messaging (Webex, Skype, etc.): Any system where an external user can send a message or call can be a vector. Some organizations have been targeted via older platforms like Skype for Business or even LinkedIn and WhatsApp messages from someone impersonating internal staff. The common thread is impersonation and abuse of trust within a communication channel that employees consider “safe” or internal.
The key point is that collaboration tools are the new frontier for social engineering. Email is still heavily used by attackers, but as defenders shore up email security, adversaries are moving to where the defenses are weaker. Collaboration apps present an appealing attack surface – they often bypass email filters entirely and reach users in real-time. Many organizations haven’t trained users to be skeptical of in-app messages the same way they have for emails.
Note: This doesn’t mean you should abandon external collaboration features altogether. Slack and Teams are powerful business enablers. But it does mean you should approach open communications with caution and policies. As we’ll discuss next, there are ways to securely configure and monitor these tools to reduce the risk dramatically.
Mitigation: How to Defend Against Teams Federation Abuse
Given the urgency of this threat, IT teams and CISOs should take immediate steps to harden Microsoft Teams and educate their users. Here is a high-level mitigation plan:
1. Restrict or Disable Open Federation on Teams
The simplest way to prevent these attacks is to shut the door on external Teams communications, or at least narrow the opening. Evaluate if your organization really needs open federation enabled by default. If not, consider turning it off or switching to an allowlist model for external access:
- Disable External Access if Not Needed: In the Teams admin center, you can turn off external access entirely. This means Teams chats/calls will be confined to your tenant only. If your business doesn’t have regular communication via Teams with outside partners, this is the safest setting. Microsoft notes that if you turn it off, “your users cannot communicate with external Teams users via chat or calls.”
- Use Allow/Deny Lists for Domains: If outright disabling is too extreme, use the federation allow/block list. Set external access to “Allow specific domains” and input only your known partner domains. All other domains will be blocked. Conversely, you could “Block specific domains” by listing known malicious domains, but given that attackers can spin up new domains easily, an allowlist is far more effective for security.
- Audit and Tighten Federation Settings: Double-check that open federation isn’t enabled unintentionally. Sometimes organizations think it’s off when it’s on. Go to Teams admin center > Users > External access and review the policy. If you must keep federation open, consider disabling certain features for external contacts (such as preventing them from sending files). Microsoft is also rolling out features to better manage external meeting access and chat invitations – stay updated on those and configure them for stricter control.
By reducing the exposure of Teams to only vetted external domains (or none at all), you dramatically lower the risk of rogue actors reaching your users. Many of the recent fake IT support attacks would have failed if external chats were blocked, since the impostor’s message would not be delivered.
2. Enable Teams Security Features and Monitoring
Microsoft has some built-in protections for Teams – make sure they are enabled and leverage them:
- Impersonation and Spam Filtering in Teams: Recently, Microsoft introduced improved spam detection for external Teams chats. For instance, when an external user attempts to message, Teams might flag “potential phishing” and require the user to click an accept banner. Encourage users to pay attention to these warnings. Additionally, ensure that Teams is set to alert users about external contacts (these are on by default, but verify no one turned off those banners). Microsoft documentation suggests users double-check the sender’s email address on external chat requests – educate staff on how to do this.
- Monitor Teams Audit Logs: As part of your SOC monitoring or SIEM, ingest the Microsoft 365 audit logs related to Teams activities. Specifically, track events like ChatCreated, MessageSent, UserAccepted, and any TeamsImpersonationDetected. By analyzing these, you can spot anomalies (e.g., an unusual number of new chats with external tenants in a short span, or a user accepting external chats they usually wouldn’t). Set up alerts for patterns indicative of phishing – for example, multiple employees receiving chats from the same new external tenant, or a single user suddenly engaging with an external “HelpDesk” account. Hunters’ researchers suggest building detection rules that flag onmicrosoft.com external senders, especially with IT-themed names, and spikes in TIMailData (email) events around the same time.
- Leverage Defender for Office 365 (if available): If you have advanced Microsoft 365 security licenses, see if features like Safe Links/Safe Attachments extend to Teams (Microsoft has been working on Safe Links for Teams). Some Defender for O365 policies now cover files and links in Teams chats. Ensure these policies are turned on to scan any files or URLs that external users send.
- Consider Third-Party Security Tools: There are emerging solutions specifically for collaboration security (CASBs and others that monitor Slack/Teams). These can sometimes catch abnormal behavior in real-time (like an external user suddenly messaging many people). If budget allows, augment your defenses with a tool that focuses explicitly on Teams/Slack security events, or integrate your EDR/XDR to watch for processes that commonly follow a Teams social engineering (e.g., OneNote.exe spawning PowerShell, or Teams spawning QuickAssist).
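One of the monitoring rules suggested above – flagging an external tenant that opens first-time chats with several different employees in a short span – can be sketched in a few lines of Python. The event shape (`initiator_tenant`, `target_user`) is an assumption standing in for whatever fields your SIEM extracts from the ChatCreated audit records:

```python
from collections import defaultdict

def detect_campaign(chat_created_events, home_tenant_id, min_targets=3):
    """Group ChatCreated events by the initiating tenant and flag any
    external tenant that contacted at least `min_targets` distinct employees."""
    targets_by_tenant = defaultdict(set)
    for event in chat_created_events:
        if event["initiator_tenant"] != home_tenant_id:
            targets_by_tenant[event["initiator_tenant"]].add(event["target_user"])
    # Return only tenants fanning out to multiple employees (campaign pattern).
    return {tenant: sorted(users)
            for tenant, users in targets_by_tenant.items()
            if len(users) >= min_targets}
```

A single external chat is normal business; the fan-out pattern is what distinguishes a campaign, which is why the rule keys on distinct recipients per tenant rather than raw message counts.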
3. Cut Off Common Attack Techniques
Next, try to neutralize the tools and techniques attackers rely on once they reach a user:
- Disable or Secure Remote Assistance Tools: If your organization doesn’t need Microsoft Quick Assist, consider disabling it or restricting its use. Microsoft Intune or group policy can remove Quick Assist or limit local user permissions to run it. If you require remote support, use tools that have better logging and security (and train users that IT will only use those official tools). Similarly, lock down other remote desktop tools – e.g., block TeamViewer, AnyDesk, etc., by policy unless explicitly allowed.
- Tune Spam Filters to Prevent Email Bombs: The email bombing isn’t always easy to prevent (since the spam may come from various sources), but ensure your email spam filtering is aggressive. If attackers can’t flood the inbox, they lose one pretext. Some security teams set up rules to detect sudden spikes in inbound email to a single user and automatically quarantine or rate-limit messages. At a minimum, prepare an incident response playbook for email bombing so your help desk can quickly assist users if it occurs (clearing out the junk and reassuring the user it’s okay, so they don’t panic and fall for the fake IT person).
- Lock Down File Sharing with Externals: Review Teams’ policies on file sharing with external users. You might enforce that external users cannot share files in one-on-one chats, or that any files shared are scanned. If your DLP or cloud security solution can scan files in SharePoint/OneDrive, ensure it scans the folder where Teams chat files are typically saved (often a user’s OneDrive “Microsoft Teams Chat Files” folder).
- Apply Conditional Access: If feasible, use Azure AD Conditional Access policies to scope down external communication. For example, you could require that external access to Teams is only allowed from compliant devices or specific locations. However, this approach is tricky, as external contacts are, by definition, outside your organization. Conditional access can potentially block personal account logins on corporate devices. This is an advanced approach and may not directly stop external chats, but it can help contain damage if an employee’s account is being targeted or if an attacker tries to use a stolen session token, etc.
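To complement the remote-tool lockdown above, an EDR-style hunting rule can flag launches of the remote-assistance tools these attackers favor. The sketch below is a simplified illustration – the process names are common defaults, and the allowed-host list is a placeholder for wherever your organization legitimately runs such tools:

```python
# Process names commonly abused in fake-IT-support attacks (assumption: your
# telemetry reports the executable name; adjust for your EDR's field names).
REMOTE_TOOLS = {"quickassist.exe", "anydesk.exe", "teamviewer.exe", "dwagent.exe"}

# Hosts where remote-support tooling is expected, e.g. the real help desk's
# machines. This is an illustrative placeholder, not a recommended baseline.
ALLOWED_HOSTS = {"it-jumpbox-01"}

def flag_remote_tool_launches(process_events):
    """Return process events where a remote-support tool started on a host
    that is not an approved IT support machine."""
    return [event for event in process_events
            if event["process"].lower() in REMOTE_TOOLS
            and event["host"] not in ALLOWED_HOSTS]
```

Correlating these hits with recent external Teams activity on the same user’s account (as in the earlier IOC discussion) turns a noisy process alert into a high-fidelity signal.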
4. Educate and Simulate: Build User Awareness
Technology defenses alone are not enough – these attacks exploit human trust, so your people are the first line of defense. Conduct targeted security awareness training around this new threat:
- Alert Employees to the Scheme: Send out a security bulletin explaining that “IT will never randomly reach out via Teams asking for your password or to remote into your PC.” Instruct users on how to verify an IT person’s identity: for instance, recommend that if anyone gets an unsolicited Teams message or call from someone claiming to be IT, they should hang up and independently contact your IT help desk (via official phone or internal directory) to confirm. Establish clear protocols: e.g., IT will always reference an existing ticket number, or IT will never ask you to install software via chat. Empower employees to say no if they feel unsure – a legitimate IT support person won’t mind someone being cautious.
- Train Users to Spot the Signs: Include examples in training of what an external Teams chat looks like (highlight the external label), and how attackers might mimic profile pictures or language. Emphasize the context – e.g., “If you’re flooded with emails and then someone on Teams offers help, that’s a scenario attackers use.” Teach staff to recognize urgent language and pressure tactics even in chat. Just as with email phishing training (“verify the sender address, beware of urgent requests for credentials”), create a checklist for Teams: verify the external tag, confirm the person’s identity via a known company channel, never share screens or click links from unsolicited chats, etc.
- Simulated Phishing Exercises on Teams: If possible, run an internal phishing simulation using Teams (with leadership buy-in). This could involve someone from security, posing as “IT support” on Teams, messaging a few employees to see if they report it. Be very careful and transparent with such tests to avoid confusion or eroding trust, but if done right, it can be eye-opening. Microsoft doesn’t yet have an official Teams phish sim tool (as they do for email), but you can do controlled tests manually. The result will help you gauge how many users might fall for it and adjust training accordingly.
- Encourage Reporting and No Penalty for Mistakes: Ensure employees understand that reporting a suspected phishing attempt won’t result in punishment, even if they almost fell for it or took action and later realized their mistake. The sooner IT knows, the faster you can respond. Create an easy channel for reporting suspicious Teams messages – for example, a dedicated Teams channel or a one-click button if available. Some organizations integrate a bot that users can @mention to report phishing in Teams. The key is to foster a culture where people are vigilant and comfortable raising an alarm.
5. Response Plan: Be Ready to React
Finally, be prepared in case an incident occurs despite all precautions:
- Incident Response for Teams Phishing: Update your incident response playbooks to include scenarios of an intruder via Teams/social engineering. Identify what logs to pull (Teams chat logs, call records, device logs from the affected machine, etc.), and have a plan to triage access (e.g., reset the user’s passwords, scan their device for any malware installed). Time is of the essence – if an employee reports they just allowed an external “IT” to remote in, assume compromise and isolate that machine immediately for forensics.
- Collect IOCs and Notify Others: If you catch a malicious external account, gather IOCs, including the external user’s Teams ID, tenant ID, and any domain names or files they used. You may choose to report it to Microsoft or share it with industry peers (via an ISAC) so others can block the known bad actors. Microsoft support can sometimes provide additional information if the tenant was fraudulent.
- Cross-Platform Checks: If an attacker got in via Teams, check other systems too. Often, these attackers won’t stop at Teams – they might also send phishing emails or try Slack if you have it. Ensure your IR process looks for any other related alerts around that time (failed login attempts, unusual VPN activity, etc.), which might indicate the same actor probing multiple angles.
- Learn and Iterate: After any incident or near-miss, update your training and defenses. For example, if a particular new tactic was used (say, the attacker used a Zoom invite after being blocked on Teams), incorporate that into your strategy. The threat landscape is evolving, and so should your mitigations.
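When gathering IOCs for sharing, a small structured record keeps the artifacts consistent across reports. The sketch below is a minimal illustration – the field set mirrors the artifacts mentioned above (Teams UPN, tenant ID, domains, file hashes), but the schema itself is an assumption, not a standard exchange format like STIX:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class TeamsIoc:
    """Illustrative IOC record for a malicious external Teams account."""
    external_upn: str                       # e.g. the fake helpdesk UPN
    tenant_id: str                          # attacker's Entra ID tenant GUID
    display_name: str                       # impersonated name shown in Teams
    domains: list = field(default_factory=list)      # related domains
    file_hashes: list = field(default_factory=list)  # hashes of shared files

def to_report(ioc: TeamsIoc) -> str:
    """Serialize the IOC record as JSON for sharing with peers or an ISAC."""
    return json.dumps(asdict(ioc), indent=2)
```

Even this simple structure makes it easier to hand a complete, machine-readable package to Microsoft, an ISAC, or industry peers instead of a free-text email.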
Conclusion: Act Now to Secure Your Collaboration Platforms
Microsoft Teams’ open federation is a powerful feature for productivity – but as we’ve seen, it can also be a conduit for highly convincing phishing and social engineering attacks if left wide open. Threat actors, including Iranian APTs like Storm-0133 and financially driven ransomware crews, are actively exploiting this “feature-turned-weakness” by impersonating internal staff and abusing the inherent trust of corporate chat. They have already succeeded in tricking employees into installing malware, sharing screens, and giving up credentials by posing as IT support in Teams.
The good news is that we are not powerless against this threat. By taking urgent action – tightening external access settings, educating users, and monitoring aggressively – you can significantly reduce the risk. In this post, we provided a high-level roadmap: disable or limit Teams federation, deploy technical controls to detect impostors (watch for those odd onmicrosoft.com domains!), and ensure your people are aware of the danger and know how to respond. Don’t forget to apply similar diligence to other collaboration tools like Slack or Zoom, which face analogous risks.
In cybersecurity, trust is a vulnerability that attackers seek to exploit. Open federation in Teams extends the circle of trust of your internal communications to the entire internet – unless you rein it in. Given the rise of these “fake IT support” attacks, treating Teams security as seriously as email security is now mandatory. As CISOs and IT leaders, we must adapt our defenses to cover this new vector. Close the gaps by locking down what you can, keeping a watchful eye on what you can’t, and arming your users with knowledge. By doing so, you’ll ensure that Microsoft Teams and similar platforms remain the productive collaboration tools they’re meant to be – without becoming the weakest link in your security chain.
Stay Safe out there,
Dave Kawula – MVP