Responsive design benefits in Azure Sentinel
Without responsive design, security operations center (SOC) analysts trying to use Azure Sentinel would have difficulty navigating the interface, especially on a mobile device. For example, they would need to scroll horizontally to view pages with large amounts of text, increasing the friction they experience while trying to get their work done. With Azure Sentinel incorporating responsive design in the user interface, users can now expect an enriched experience in the following key areas:
Mobile access
Responsive design now enhances the usability of the Azure Sentinel portal from any device, including browsers on mobile phones. This greatly improves the convenience and mobility of the experience, allowing users to access the portal from the lightweight devices they typically carry with them. When it comes to incident response, time is of the essence—the ability to respond from anywhere on a portable device is of great benefit. Below is a screenshot of an incident in Azure Sentinel opened from a mobile phone.
Figure 1: Azure Sentinel incident opened on a mobile device.
Enhanced zoom
It is now possible to zoom in up to 400 percent without distorting user interface elements. This capability moves away from the constraints of fixed-width designs to one that adjusts screen elements without distorting them, even at such high zoom levels. As a result, it significantly improves the accessibility of the user interface for users with low vision, or for anyone who prefers to read larger text. For users with limited dexterity, enlarging text also enlarges user interface elements, making selections easier.
Figure 2: Azure Sentinel Analytics blade at 400 percent zoom at 1920×1080 display resolution.
Content reflow
The ability to accommodate different viewport sizes across devices without requiring the user to perform multiple scrolling operations is of significant benefit to anyone with accessibility needs, and a desirable user experience for every other user as well. With content reflow, the content automatically adjusts to fit the screen size, eliminating the need for horizontal scrolling, as depicted below:
Figure 3: Example of how text reflows from a large to small glass device and vice versa.
Linear order
Linear order is important for structure, as it maintains predictability when navigating through content (for example, the order in which columns appear in the source determines how screen readers such as Windows Narrator read out the content). With reflow, the order of item presentation in the user interface is preserved, which makes for a consistent and accessible experience. For example, users typically expect the flow to be from left to right, top to bottom, as depicted in the image below.
Figure 4: Example of the linear order for mobile screen view.
One billion. This is the number of people with disabilities across the world. Designing software or hardware with this population in mind pushes the limits of creativity to new boundaries, resulting in improved products and user experiences for all. Additionally, it increases the chances for people with disabilities to be gainfully employed with jobs that have been enabled by accessible technology. By proactively building accessibility into product designs right at the onset, we at Microsoft make technology adapt to user preferences as opposed to the other way round. We are excited that the new reflow-powered features in Azure Sentinel will make the product more usable and the experience more portable for our customers. Log in to your Azure Sentinel portal today from a device of any size and respond to incidents from the convenience of your favorite device.
Learn more
- Doubling down on accessibility: Microsoft’s next steps to expand accessibility in technology, the workforce and workplace
- Responsive design techniques
- Microsoft Inclusive Design
- Azure Sentinel
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
Special thanks to Ishan Soni for his input and Menny Mezamar-Tov and the rest of the accessibility engineering team for building the reflow capability into Azure Sentinel.
The post Accessibility and usability for all in Azure Sentinel appeared first on Microsoft Security Blog.
Learn how to plan and prepare for migration from a traditional on-premises SIEM to Microsoft’s cloud-native SIEM for intelligent security analytics at cloud scale.
The post Preparing for your migration from on-premises SIEM to Azure Sentinel appeared first on Microsoft Security Blog.
The pandemic of 2020 has reshaped how we engage in work, education, healthcare, and more, accelerating the widespread adoption of cloud and remote-access solutions. In today’s workplace, the security perimeter extends to the home, airports, the gym—wherever you are. To keep pace, organizations require a security solution that delivers centralized visibility and automation; one that can scale to meet their needs across a decentralized digital estate.
As a cloud-native security information and event management (SIEM) solution, Microsoft Azure Sentinel is designed to fill that need, providing the scope, flexibility, and real-time analysis that today’s business demands. In this blog series, we’ll look at planning and undertaking a migration from an on-premises SIEM to Azure Sentinel, beginning with the advantages of moving to a cloud-native SIEM, as well as preliminary steps to take before starting your migration.
Why move to a cloud-native SIEM?
Many organizations today are making do with siloed, patchwork security solutions even as cyber threats are becoming more sophisticated and relentless. As the industry’s first cloud-native SIEM and SOAR (security orchestration, automation, and response) solution on a major public cloud, Azure Sentinel uses machine learning to dramatically reduce false positives, freeing up your security operations (SecOps) team to focus on real threats.
Moving to the cloud allows for greater flexibility—data ingestion can scale up or down as needed, without requiring time-consuming and expensive infrastructure changes. Because Azure Sentinel is a cloud-native SIEM, you pay for only the resources you need. In fact, The Forrester Total Economic Impact (TEI) of Microsoft Azure Sentinel found that Azure Sentinel is 48 percent less expensive than traditional on-premises SIEMs. And Azure Sentinel’s AI and automation capabilities provide time-saving benefits for SecOps teams, combining low-fidelity alerts into potential high-fidelity security incidents to reduce noise and alert fatigue. The Forrester TEI study showed that deploying Azure Sentinel led to a 79 percent decrease in false positives over three years—reducing SecOps workloads and generating $2.2 million in efficiency gains.
So, when you’re ready to make your move to the cloud, how should you get started? There are a few key considerations for planning your migration journey to Azure Sentinel.
Understanding the key stages of SIEM migration
Ingesting data into Azure Sentinel only requires a few clicks. However, migrating your SIEM at scale requires some careful planning to get the most from your investment. There are three basic architecture stages of the migration process:
- On-premises SIEM architecture: The classic model, with analytics and database functions both residing on-premises. This type of SIEM has limited scalability and is typically not designed with AI, so it may overwhelm your SecOps team with alerts. The on-premises SIEM can be seen as your “before” state prior to the migration.
- Side-by-side architecture: In this configuration, your on-premises SIEM and Azure Sentinel operate at the same time. Typically, the on-premises SIEM is used for local resources, while Azure Sentinel’s cloud-based analytics are used for cloud resources or new workloads. Most commonly, this state is a temporary transition period, though sometimes organizations will choose to run two SIEMs side-by-side for an extended period or indefinitely. We will be talking more about this in the next blog.
- Cloud-native architecture (full Azure Sentinel deployment): In this model, both security analytics and data storage use native cloud services. For this blog series, we are considering this to be the end state: a full Azure Sentinel deployment.
Note: the side-by-side phase can be a short-term transitional phase or a medium-to-long-term operational model, leading to a completely cloud-hosted SIEM architecture. While the short-term side-by-side transitional deployment is our recommended approach, Azure Sentinel’s cloud-native nature makes it easy to operate side-by-side with your traditional SIEM if needed—giving you the flexibility to approach migration in a way that best fits your organization.
Identify and prioritize your use cases
Before you start your migration, you will first want to identify your key core capabilities, also known as “P0 requirements.” Look at the key use cases deployed with your current SIEM, as well as the detections and capabilities that will be vital to maintaining effectiveness with your new SIEM.
The key here is not to approach migration as a one-to-one lift-and-shift. Be intentional and thoughtful about which content you migrate first, which you de-prioritize, and which might not need to be migrated at all. Your team may have an overwhelming number of detections and use cases running in your current SIEM. Use this time to decide which ones are actively useful to your business (and which do not need to be migrated). A good starting place is to look at which detections have produced results within the last year (the false positive versus true positive rate). Our recommendation is to focus on detections that would achieve at least a 90 percent true positive rate on their alert feeds.
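As an illustration, this triage can be sketched in a few lines of Python; the detection names and alert counts below are hypothetical, not drawn from any real SIEM:

```python
# Sketch: rank existing SIEM detections by true positive rate to decide
# which ones to migrate first. All names and counts are hypothetical.
def true_positive_rate(true_positives, false_positives):
    total = true_positives + false_positives
    return true_positives / total if total else 0.0

# (detection name, true positives, false positives) over the last year
history = [
    ("Brute-force sign-in", 95, 5),
    ("Impossible travel", 40, 60),
    ("Malware on host", 180, 10),
]

# Keep detections at or above a 90 percent true positive rate
to_migrate = [name for name, tp, fp in history
              if true_positive_rate(tp, fp) >= 0.90]
print(to_migrate)  # ['Brute-force sign-in', 'Malware on host']
```

Detections that fall below the threshold are candidates for tuning or retirement rather than migration.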
Compare and contrast your SIEMs
Over the course of your migration, as you are running Azure Sentinel and your on-premises SIEM side-by-side, plan to continue to compare and evaluate the two SIEMs. This allows you to refine your criteria for completing the migration, as well as learn where you can extract more value through Azure Sentinel (for example, if you are planning on a long-term or indefinite side-by-side deployment). Based on Microsoft’s experience with real-world attacks, we’ve built a list of key areas to evaluate:
- Attack detection coverage: Compare how well each SIEM is able to detect the full range of attacks using MITRE ATT&CK or a similar framework.
- Responsiveness: Measure the mean time to acknowledge (MTTA): the time between when an alert appears in the SIEM and when an analyst first starts working on it. This will likely be similar between any two SIEMs.
- Mean time to remediate (MTTR): Compare the time to remediate incidents investigated in each SIEM (with analysts at an equivalent skill level).
- Hunting speed and agility: Measure how fast your teams can hunt—from hypothesis to querying data, to getting the results on each SIEM.
- Capacity growth friction: Compare the level of difficulty in adding capacity as your cloud use grows. Cloud services and applications tend to generate more log data than traditional on-premises workloads.
- Security orchestration, automation, and remediation: Assess the cohesiveness and integration of the toolsets in place for rapid threat remediation.
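To compare the two SIEMs on the same yardstick, MTTA and MTTR can be computed directly from incident timestamps. A minimal Python sketch, using hypothetical incident data and field names:

```python
# Sketch: compute MTTA and MTTR (in minutes) from incident timestamps.
# The incident records and field names here are hypothetical examples.
from datetime import datetime

incidents = [
    {"alerted":      datetime(2021, 6, 1, 9, 0),
     "acknowledged": datetime(2021, 6, 1, 9, 12),
     "remediated":   datetime(2021, 6, 1, 11, 0)},
    {"alerted":      datetime(2021, 6, 2, 14, 0),
     "acknowledged": datetime(2021, 6, 2, 14, 8),
     "remediated":   datetime(2021, 6, 2, 15, 30)},
]

def mean_minutes(incidents, start, end):
    deltas = [(i[end] - i[start]).total_seconds() / 60 for i in incidents]
    return sum(deltas) / len(deltas)

mtta = mean_minutes(incidents, "alerted", "acknowledged")
mttr = mean_minutes(incidents, "alerted", "remediated")
print(f"MTTA: {mtta:.0f} min, MTTR: {mttr:.0f} min")  # MTTA: 10 min, MTTR: 105 min
```

Running the same calculation against incidents from each SIEM gives a like-for-like comparison during the side-by-side period.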
In the next two installments of this series, we’ll get more in-depth on running your legacy SIEM side by side with Azure Sentinel, as well as provide some best practices for migrating your data and what to consider when finishing your migration.
For a complete overview of the migration journey, download the white paper: Azure Sentinel Migration Fundamentals.
The post Microsoft finds new NETGEAR firmware vulnerabilities that could lead to identity theft and full system compromise appeared first on Microsoft Security Blog.
The continuous improvement of security solutions has forced attackers to explore alternative ways to compromise systems. The rising number of firmware attacks and ransomware attacks via VPN devices and other internet-facing systems are examples of attacks initiated outside and below the operating system layer. As these types of attacks become more common, users must look to secure even the single-purpose software that runs their hardware—like routers. We have recently discovered vulnerabilities in NETGEAR DGN-2200v1 series routers that can compromise a network’s security—opening the gates for attackers to roam untethered through an entire organization.
We discovered the vulnerabilities while researching device fingerprinting in the new device discovery capabilities in Microsoft Defender for Endpoint. We noticed a very odd behavior: a device owned by an employee outside the IT department was trying to access a NETGEAR DGN-2200v1 router’s management port. The communication was flagged as anomalous by machine learning models, but the communication itself was TLS-encrypted and private to protect customer privacy, so we decided to focus on the router and investigate whether it exhibited security weaknesses that could be exploited in a possible attack scenario.
In our research, we unpacked the router firmware and found three vulnerabilities that can be reliably exploited. We shared our findings with NETGEAR through coordinated vulnerability disclosure via Microsoft Security Vulnerability Research (MSVR), and worked closely with NETGEAR security and engineering teams to provide advice on mitigating these issues while maintaining backward compatibility. The critical security issues (those with CVSS scores of 7.1–9.4) have been fixed by NETGEAR. See NETGEAR’s Security Advisory for Multiple HTTPd Authentication Vulnerabilities on DGN2200v1.
We are sharing details from our research with the broader community to emphasize the importance of securing the full range of platforms and devices, including IoT, and how cross-domain visibility continues to help us uncover new and unknown threats to continually improve security.
Obtaining and unpacking the firmware
The firmware was available from the vendor’s website, making it easy for us to obtain a copy for examination. It is a simple .zip file containing release notes (.html) and the firmware image itself (a .chk file). Running binwalk on the .chk file extracted the filesystem (SquashFS).
Figure 1. Extracting the filesystem from the firmware
The filesystem itself is a standard Linux root filesystem, with some minor additions. The relevant ones for our research are:
- /www – contains html pages and .gif pictures
- /usr/sbin – contains various custom binaries by NETGEAR, including HTTPd, FTPC, and others
Since we saw the anomalous communication use the standard port that HTTPd serves, we focused on HTTPd. The HTTPd binary itself is a 32-bit big-endian MIPS ELF, compiled against uClibc (the standard libc for embedded devices) and stripped of symbols. It seems the entire server-side logic (CGI) was compiled into HTTPd.
Figure 2. HTTPd information with some symbols
Exploration
When exploring an embedded web service, the first few questions that come to mind are:
- Does the web service present some pages without authentication? If so, how are they governed?
- How does the web service perform authentication?
- Does the web service handle requests correctly (that is, with no memory corruption bugs)?
- Does the web service implement certain security measures, such as anti-cross-site request forgery (CSRF) tokens or a Content Security Policy?
To answer these questions, we performed a static analysis of the HTTPd binary, along with some dynamic analysis by running QEMU, an open-source emulator, and hooking the specialized invocations (for example, NVRAM getters and setters).
Vulnerabilities in DGN-2200v1 routers
Accessing router management pages using authentication bypass
While examining how HTTPd dictates which pages should be served without authentication, we found the following pseudo code:
Figure 3. Pseudo code in HTTPd
This code is the first page-handling code inside HTTPd, and it automatically approves certain pages such as form.css or func.js. While there is no harm in approving those pages, one thing that stood out was that NETGEAR decided to use strstr to check whether a page contains the substrings “.jpg”, “.gif”, or “ess_”, matching against the entire URL rather than just the requested filename.
We can therefore access any page on the device, including those that require authentication, by appending a GET variable with the relevant substring (like “?.gif”). For example: hxxps://10[.]0[.]138/WAN_wan.htm?pic.gif. This is a complete and fully reliable authentication bypass.
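A minimal Python sketch of the flawed allow-list logic (not the router’s actual code, which is a MIPS binary) shows why appending a matching substring anywhere in the URL defeats the check:

```python
# Sketch of the flawed allow-list: strstr-style substring matching over the
# whole URL approves any request whose URL merely *contains* one of the
# needles, including in the query string.
ALLOWED_SUBSTRINGS = (".jpg", ".gif", "ess_")

def is_served_without_auth(url):
    # Mirrors strstr(url, needle): matches anywhere in the URL
    return any(needle in url for needle in ALLOWED_SUBSTRINGS)

assert not is_served_without_auth("/WAN_wan.htm")      # auth required
assert is_served_without_auth("/WAN_wan.htm?pic.gif")  # bypassed!
```

The correct check would match only the extension of the actual resource being requested, after stripping the query string.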
Deriving saved router credentials via a cryptographic side-channel
At this stage, we already had complete control over the router, but we continued investigating how the authentication itself was implemented.
If a page had to be authenticated, HTTPd would require HTTP basic authentication. The username and password would be encoded as a base64 string (delimited by a colon), sent in the HTTP header, and finally verified against the saved username and password in the router’s memory. The router stores this information (along with the majority of its configuration) in NVRAM, that is, outside the filesystem that we had extracted.
However, when we examined the authentication itself, we discovered a side-channel attack that can let an attacker get the right credentials:
Figure 4. Authentication process
Note that the username and the password are compared using strcmp. The libc implementation of strcmp works by comparing character-by-character until a NUL terminator is observed or until a mismatch happens.
An attacker could take advantage of the latter by measuring the time it takes to get a failure. For example, when measuring the times of the first character, we get the following graph:
Figure 5. Time of reply per character attempt
This indicates that the first character is “n”. An attacker could repeat this process (“na”, “nb”, “nc” and so on) to get the second character, until the entire username and password is revealed.
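A simplified Python model of strcmp’s early-exit behavior illustrates the leak: the amount of work grows with the length of the correctly guessed prefix. A real attack measures response time rather than counting comparisons, and the secret below is purely hypothetical:

```python
# Sketch of why strcmp leaks timing: it exits at the first mismatch, so
# the number of character comparisons (a proxy for time) reveals how long
# the correctly guessed prefix is.
def strcmp_comparisons(secret, guess):
    count = 0
    for s, g in zip(secret, guess):
        count += 1
        if s != g:
            break
    return count

SECRET = "netgear1"  # hypothetical stored password

assert strcmp_comparisons(SECRET, "a_______") == 1  # wrong first char: fast
assert strcmp_comparisons(SECRET, "na______") == 2  # 'n' correct: slower
assert strcmp_comparisons(SECRET, "ne______") == 3  # 'ne' correct: slower still
```

Each additional correct character adds one comparison, so the slowest guess at each position reveals the next character of the secret.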
We recommended to NETGEAR that they can avoid such attacks by performing XOR-based memory comparison, as such:
Figure 6. XOR-based memory comparison
This function continues even upon a byte mismatch. Similar approaches can be seen in cryptographically secure libraries, such as OpenSSL’s CRYPTO_memcmp.
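In Python terms, the recommended comparison looks roughly like the following sketch (in practice, the standard library’s hmac.compare_digest provides the same guarantee):

```python
# Sketch of a constant-time comparison: XOR every byte pair and OR the
# results together, so the loop always runs to the end regardless of where
# (or whether) a mismatch occurs.
def constant_time_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y  # accumulates any mismatch without branching early
    return diff == 0

assert constant_time_equal(b"admin", b"admin")
assert not constant_time_equal(b"admin", b"admix")
```

Because the running time no longer depends on the position of the first mismatch, the timing side channel described above disappears.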
Retrieving secrets stored in the device
After using the first authentication bypass vulnerability, we still wanted to see whether we could recover the username and password used by the router through other existing weaknesses. To that end, we turned to the router’s configuration backup/restore feature. We can abuse the authentication bypass mentioned earlier to simply get the file: hxxp://router_addr:8080/NETGEAR_DGN2200[.]cfg?pic[.]gif.
The file itself has high entropy, which suggests it was either compressed or encrypted, so we couldn’t read it directly. Additionally, binwalk did not produce any meaningful results:
Figure 7. High-entropy configuration file
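A file’s entropy can be estimated with a short script. The following Shannon-entropy sketch is illustrative of the kind of check involved (values near 8 bits per byte suggest compressed or encrypted data, as with this configuration backup):

```python
# Sketch: Shannon entropy of a byte string, in bits per byte.
# Plaintext and structured formats typically score well below 8;
# compressed or encrypted data approaches 8.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

assert shannon_entropy(b"AAAA" * 256) == 0.0        # fully repetitive
assert shannon_entropy(bytes(range(256)) * 4) == 8.0  # uniform bytes
```

Tools like binwalk apply the same idea over a sliding window to flag encrypted or compressed regions inside a firmware image.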
Our suspicion was confirmed when we reverse-engineered the backup/restore functionality:
Figure 8. Constant password used for DES encryption
After some preparatory steps, the contents are DES-encrypted with the constant key “NtgrBak”. This allows an attacker to remotely obtain the plaintext password, which is stored in the encrypted NVRAM. The username, which may well be a variation of “admin”, can be retrieved the same way.
Enhancing router security through CVD and threat intelligence-sharing
As modern operating system security continues to advance, attackers are forced to look for alternative ways to compromise networks, and network devices such as routers are a prime candidate. This makes an endpoint discovery solution a critical asset to any security operations team.
The new endpoint and network device discovery capability in Microsoft Defender for Endpoint locates unmanaged devices to ensure organizations have comprehensive visibility into their environment. This lets security operators detect anomalous network activity, in this case, the attacker’s anomalous connection to the router’s management port.
Figure 9. Device inventory in Microsoft 365 Defender
In addition, with ReFirm Labs recently joining Microsoft, we continue to enrich our firmware analysis and security capabilities across devices. ReFirm’s firmware analysis technology will enhance existing capabilities to detect firmware vulnerabilities and help secure IoT and OT devices via Azure Defender for IoT.
With this research, we have shown how a simple anomalous connection to a router, found through the endpoint discovery service, drove us to find several vulnerabilities on a popular router.
Routers are integral to networking, so it is important to secure the programs that support their functions. Collaboration between vulnerability researchers, software vendors, and other players is crucial to helping secure the overall user experience. This includes disclosing vulnerabilities to vendors under the guiding principles of Coordinated Vulnerability Disclosure (CVD). We would like to thank the NETGEAR security and engineering teams for their cooperation.
Learn how Microsoft Defender for Endpoint delivers a complete endpoint security solution that covers preventative protection, post-breach detection, automated investigation, and response.
Jonathan Bar Or
Microsoft 365 Defender Research Team
US Executive Order on Cybersecurity delivers valuable guidance for both public and private organizations to make the world safer for all.
The post The critical role of Zero Trust in securing our world appeared first on Microsoft Security Blog.
We are operating in the most complex cybersecurity landscape that we’ve ever seen. While our current ability to detect and respond to attacks has matured incredibly quickly in recent years, bad actors haven’t been standing still. Large-scale attacks like those pursued by Nobelium1 and Hafnium, alongside ransomware attacks on critical infrastructure, indicate that attackers have become increasingly sophisticated and coordinated. It is abundantly clear that the work of cybersecurity and IT departments is critical to our national and global security.
Microsoft has a unique level of access to data on cyber threats and attacks globally, and we are committed to sharing this information and insights for the greater good. As illustrated by recent attacks, we collaborate across the public and private sectors, as well as with our industry peers and partners, to create a stronger, more intelligent cybersecurity community for the protection of all.
This collaborative relationship includes the United States government, and we celebrate the fast-approaching milestones of the US Cybersecurity Executive Order2 (EO). The EO specifies concrete actions to strengthen national cybersecurity and address increasingly sophisticated threats across federal agencies and the entire digital ecosystem. This order directs agencies and their suppliers to improve capabilities and coordination on information sharing, incident detection, incident response, software supply chain security, and IT modernization, which we support wholeheartedly.
With these national actions set in motion and a call for all businesses to enhance cybersecurity postures, Microsoft and our extensive partner ecosystem stand ready to help protect our world. The modern framework for protecting critical infrastructure, minimizing future incidents, and creating a safer world already exists: Zero Trust. We have helped many public and private organizations to establish and implement a Zero Trust approach, especially in the wake of the remote and hybrid work tidal wave of 2020-2021. And Microsoft remains committed to delivering comprehensive, integrated security solutions at scale and supporting customers on every step of their security journey, including detailed guidance for Zero Trust deployment.
Zero Trust’s critical role in helping secure our world
The evidence is clear—the old security paradigm of building an impenetrable fortress around your resources and data is simply not viable against today’s challenges. Remote and hybrid work realities mean people move fluidly between work and personal lives, across multiple devices, and with increased collaboration both inside and outside of organizational boundaries. Entry points for attacks—identities, devices, apps, networks, infrastructure, and data—live outside the protections of traditional perimeters. The modern digital estate is distributed, diverse, and complex.
This new reality requires a Zero Trust approach.
Section 3 of the EO calls for “decisive steps” for the federal government “to modernize its approach to cybersecurity” by accelerating the move to secure cloud services and Zero Trust implementation, including a mandate of multifactor authentication and end-to-end encryption of data. We applaud this recognition of the Zero Trust strategy as a cybersecurity best practice, as well as the White House encouragement of the private sector to take “ambitious measures” in the same direction as the EO guidelines.
Per Section 3, federal standards and guidance for Zero Trust are developed by the National Institute of Standards and Technology (NIST) of the US Department of Commerce, as with other industry and scientific standards. NIST has defined Zero Trust in terms of several basic tenets:
- All resource authentication and authorization are dynamic and strictly enforced before access is allowed.
- Trust in the requester is evaluated before access is granted. Access should also be granted with the least privileges needed to complete the task.
- Assets should always act as if an attacker is present on the enterprise network.
At Microsoft, we have distilled these Zero Trust tenets into three principles: verify explicitly, use least privileged access, and assume breach. We use these principles for our strategic guidance to customers, software development, and global security posture.
Organizations that operate with a Zero Trust mentality are more resilient, consistent, and responsive to new attacks. A true end-to-end Zero Trust strategy not only makes it harder for attackers to get into the network but also minimizes potential blast radius by preventing lateral movement.
While preventing bad actors from gaining access is critical, it’s only part of the Zero Trust equation. Being able to detect a sophisticated actor inside your environment is key to minimizing the impact of a breach. Sophisticated threat intelligence and analytics are critical for a rapid assessment of an attacker’s behavior, eviction, and remediation.
Resources for strengthening national security in the public and private sectors
We believe President Biden’s EO is a timely call-to-action, not only for government agencies but as a model for all businesses looking to become resilient in the face of cyber threats. The heightened focus on incident response, data handling, collaboration, and implementation of Zero Trust should be a call-to-action for every organization—public and private—in the mission to better secure our global supply chain, infrastructure resources, information, and progress towards a better future.
Microsoft is committed to supporting federal agencies in answering the nation’s call to strengthen inter- and intra-agency coordination, unlocking the government’s full cyber capabilities. Recommended next steps for federal agencies have been outlined by my colleague Jason Payne, Chief Technology Officer of Microsoft Federal. As part of this responsibility, we have provided federal agencies with key Zero Trust Scenario Architectures mapped to NIST standards, as well as a Zero Trust Rapid Modernization Plan.
Microsoft is also committed to supporting customers in staying up to date with the latest security trends and developing the next generation of security professionals. We have developed a set of skilling resources to train teams on the capabilities identified in the EO and be ready to build a more secure, agile environment that supports every mission.
In addition to EO resources for federal government agencies, we are continuing to publish guidance, share learnings, develop resources, and invest in new capabilities to help organizations accelerate their Zero Trust adoption and meet their cybersecurity requirements.
Here are our top recommended Zero Trust resources:
- For details on how Microsoft defines Zero Trust and breaks down solutions across identities, endpoints, apps, networks, infrastructure, and data, download the Zero Trust Maturity Model.
- To assess your organization’s progress in the Zero Trust journey and receive suggestions for technical next steps, use our Zero Trust Assessment tool.
- For technical guidance on deployment, integration, and development, visit our Zero Trust Guidance Center for step-by-step guidance on implementing Zero Trust principles.
- If you’d like to learn from our own Zero Trust deployment journey at Microsoft, our Chief Information Security Officer Bret Arsenault and team share their stories at Microsoft Digital Inside Track.
Tackling sophisticated cyber threats together
The EO is an opportunity for all organizations to improve cybersecurity postures and act rapidly to implement Zero Trust, including multifactor authentication and end-to-end encryption. The White House has provided clear direction on what is required, and the Zero Trust framework can also be used as a model for private sector businesses, state and local governments, and organizations around the world.
We can only win as a team against these malicious attackers and significant challenges. Every step your organization takes in advancing a Zero Trust architecture not only secures your assets but also contributes to a safer world for all. We applaud organizations of every size for embracing Zero Trust, and we stand committed to partnering with you all on this journey.
1Nobelium Resource Center, Microsoft Security Response Center. 04 March 2021.
2President Signs Executive Order Charting New Course to Improve the Nation’s Cybersecurity and Protect Federal Government Networks, The White House, 12 May 2021.
Microsoft is pleased to announce the publication of the Security Stack Mappings for Azure project in partnership with the Center for Threat-Informed Defense.
The post MITRE ATT&CK® mappings released for built-in Azure security controls appeared first on Microsoft Security Blog.
The Security Stack Mappings for Azure research project was published today, introducing a library of mappings that link built-in Azure security controls to the MITRE ATT&CK® techniques they mitigate against. Microsoft once again worked with the Center for Threat-Informed Defense and other Center members to publish the mappings, which pair the familiar language of the ATT&CK framework with the concrete coverage Azure provides to protect organizations’ attack surfaces. Microsoft is pleased that community interest in seeing such mappings for Azure led to its use as the pilot cloud platform for this endeavor.
The project aims to fill an information gap for organizations seeking proactive security awareness about the scope of coverage available natively in Azure. The project does this by creating independent data showing how built-in security controls for a given technology platform, in this case Azure, secure their assets against the adversary tactics, techniques, and procedures (TTPs) most likely to target them.
Microsoft has worked to expand the suite of built-in security controls in Azure. While these controls are highly effective for protecting customer environments, understanding them across an organization’s entire Azure estate can feel overwhelming. MITRE has developed the ATT&CK framework into a highly respected, community-supported tool for clarifying adversary TTPs. Pairing the two together provides a helpful view for organizations to understand their readiness against today’s threats in a familiar vocabulary that enables easy communication to their stakeholders.
Aside from the mapping files, key project deliverables include a methodology that describes how the project team assessed the mappings and a scoring rubric to explain how the mapping scores were decided. Accompanying the methodology and scoring rubric are a YAML data format that houses each mapping and a mapping tool that validates and produces corresponding ATT&CK Navigator layer files for easy visualization. The ATT&CK Navigator view is particularly useful for assessing multiple security controls concurrently to identify similarities or differences in coverage capabilities.
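To make the deliverables concrete, here is a minimal sketch of how a parsed mapping file could be turned into an ATT&CK Navigator layer for visualization. The field names and scoring values below are illustrative assumptions, not the project’s actual YAML schema; consult the published methodology for the real format.

```python
import json

# A control mapping as it might look after parsing one of the project's
# YAML files. Field names here are assumptions for illustration only.
mapping = {
    "name": "Azure Sentinel",
    "platform": "Azure",
    "attack_version": "8.2",
    "techniques": [
        {"id": "T1110", "score_category": "Detect", "score_value": "Significant"},
        {"id": "T1078", "score_category": "Detect", "score_value": "Partial"},
    ],
}

def to_navigator_layer(mapping):
    """Produce a minimal ATT&CK Navigator layer from a parsed mapping."""
    # Shade techniques by coverage level so gaps stand out visually.
    colors = {"Minimal": "#fde9d9", "Partial": "#fcd5b5", "Significant": "#e26b0a"}
    return {
        "name": f"{mapping['name']} coverage",
        "versions": {"layer": "4.2", "attack": mapping["attack_version"]},
        "domain": "enterprise-attack",
        "techniques": [
            {
                "techniqueID": t["id"],
                "color": colors[t["score_value"]],
                "comment": f"{t['score_category']}: {t['score_value']}",
            }
            for t in mapping["techniques"]
        ],
    }

layer = to_navigator_layer(mapping)
print(json.dumps(layer, indent=2))
```

Loading the resulting JSON into ATT&CK Navigator gives the kind of side-by-side coverage view described above.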
Azure’s built-in security controls map to broad ATT&CK technique coverage
The Security Stack Mappings research project was undertaken in response to the lack of data available to explain how a technology platform’s built-in security controls mitigate against adversary TTPs as described by ATT&CK techniques. Azure was chosen for the inaugural iteration of this research, with plans for the Center to repeat the process for other technology platforms in the future.
The methodology distinguishes between security controls that function independently and features that can be enabled as part of another product or service, choosing only the former as candidates for mappings.
The scoring rubric comprises three main factors:
- The intended function of the security control—whether it is meant to protect, detect, or respond to an adversary behavior.
- The coverage level of the control for the mapped ATT&CK technique—minimal, partial, or significant.
- Factors found to be useful considerations for assessing a mapping—coverage, temporal (real-time, periodic, or externally triggered), and accuracy (such as false positive or false negative rates).
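The three factors above can be encoded as a simple data structure. This is a hypothetical encoding for illustration; the project’s scoring rubric document defines the authoritative categories and values.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the rubric's three factors; the real project
# defines these in its published scoring rubric.
class Function(Enum):
    PROTECT = "Protect"
    DETECT = "Detect"
    RESPOND = "Respond"

class Coverage(Enum):
    MINIMAL = "Minimal"
    PARTIAL = "Partial"
    SIGNIFICANT = "Significant"

@dataclass
class MappingScore:
    technique_id: str    # ATT&CK technique, e.g. "T1078"
    function: Function   # what the control is meant to do
    coverage: Coverage   # how completely it addresses the technique
    temporal: str        # "real-time", "periodic", or "externally triggered"
    notes: str = ""      # accuracy caveats such as false-positive rates

score = MappingScore("T1078", Function.DETECT, Coverage.PARTIAL, "real-time")
```

A structure like this makes it straightforward to aggregate scores per control or per technique when building a coverage report.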
The list of Azure security controls that were mapped was compiled by the Center, with input from Microsoft and other Center members. Part of the list was based on the Azure Security Benchmark (v2), previously published by Microsoft, which provides guidance on best practices and recommendations for improving the security of workloads, data, and services on Azure. As many organizations are now using ATT&CK to keep track of their overall security posture, Microsoft is pleased to be able to support them through the ready-made mappings produced through this project.
For a list of the Azure security controls that were mapped, see the Center’s list of Azure controls.
Each control was mapped to one or more techniques and categorized using thematic tags for an alternate coverage view. For example, the “Analytics” tag returns the following set of controls:
- Azure Alerts for Network Layer
- Azure Network Traffic Analytics
- Azure Sentinel
Whereas the “Containers” tag returns this set:
- Azure Defender for Container Registries
- Azure Defender for Kubernetes
- Docker Host Hardening
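The tag-based view above amounts to a simple filter over the control library. The sketch below uses the control names from this article, but the data structure itself is an assumption, not the project’s format.

```python
# Illustrative sketch of the thematic-tag coverage view described above.
# Control names are from the article; the data structure is an assumption.
controls = {
    "Azure Alerts for Network Layer": {"Analytics"},
    "Azure Network Traffic Analytics": {"Analytics"},
    "Azure Sentinel": {"Analytics"},
    "Azure Defender for Container Registries": {"Containers"},
    "Azure Defender for Kubernetes": {"Containers"},
    "Docker Host Hardening": {"Containers"},
}

def by_tag(tag):
    """Return the controls carrying a given thematic tag, sorted by name."""
    return sorted(name for name, tags in controls.items() if tag in tags)

print(by_tag("Containers"))
```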
Microsoft previously partnered with the Center and other Center members to develop the ATT&CK for Containers matrix, which used the threat matrix for Kubernetes developed by the Azure Security Center team for Azure Defender for Kubernetes, as a starting point to expand on. You may notice that the mapped techniques for the container security controls above don’t reference some of those newly added in the ATT&CK for Containers matrix as part of the release of ATT&CK v9 in April 2021. As this ATT&CK version was released mid-project, the team chose to focus on ATT&CK v8 instead, with plans to update mappings to ATT&CK v9 already in the works.
Just the beginning for technology platform mappings
With the successful completion of the Security Stack Mappings for Azure research project, Microsoft and the rest of the industry now have a consistent, repeatable approach available to use for mapping built-in security controls to ATT&CK techniques. As Microsoft continues to expand the built-in tools available to protect customer workloads, data, and services in Azure, and as MITRE continues to expand the number of adversary TTPs described by the ATT&CK matrices, there will be new mappings available to help organizations make sense of their security coverage level. Microsoft will continue to support community efforts such as this to enable easier understanding and communication of security coverage wherever we can.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
The post MITRE ATT&CK® mappings released for built-in Azure security controls appeared first on Microsoft Security Blog.
The cybersecurity challenges of today require a diversity of skills, perspectives, and experiences, yet women remain underrepresented in this field. On International Women’s Day, some Microsoft Security women leaders penned a powerful blog highlighting the underrepresentation of women in cybersecurity (women make up just 24 percent of the cybersecurity workforce, according to the 2019 (ISC)² report Cybersecurity Workforce Study: Women in Cybersecurity1) and the critical need for diverse perspectives in solving 21st-century cybersecurity challenges. While recent studies2 indicate an increase in the percentage of women in cybersecurity, they remain a minority of the workforce.
Women in cybersecurity
How do girls identify their superpowers in cybersecurity while women continue to make gains? To explore this key question, Microsoft Security, in partnership with Girl Security, a nonpartisan, nonprofit organization preparing girls, women, and gender minorities for careers in national security, co-hosted an event on April 27, 2021, with more than thirty girls and women in high school and university from across the United States and around the world.
Joining the Girl Security participants was an extraordinary panel of women in cybersecurity from Microsoft Security, including Amy Hogan-Burney, General Manager of the Digital Crimes Unit, Associate General Counsel, Microsoft; Vasu Jakkal, Corporate Vice President, Microsoft Security, Compliance, and Identity; Ann Johnson, Corporate Vice President of Security, Compliance, and Identity, Business Development; Edna Conway, Vice President, Chief Security and Risk Officer, Azure Microsoft Corporation; and Valecia Maclin, General Manager Engineering, Customer Security and Trust, Microsoft Corporation.
Girl Security and Microsoft Security are forging a new fellowship around a shared commitment to make cybersecurity more accessible to all, especially girls and women who remain underrepresented in the cybersecurity workforce. This first co-hosted event offered an exciting opportunity for participants to ask firsthand questions and participate in intimate breakout sessions with a diverse group of leading experts. Importantly, participants were able to hear women experts share personal narratives that described their unique—and often non-linear—pathways into cybersecurity.
Security requires all
Vasu Jakkal, who leads Microsoft Security, Compliance, and Identity strategy, kicked off the event with a simple but powerful message: “Security is for all, and security requires all.” Over the one-hour program, the group explored a wide-ranging series of topics that included career-oriented questions, such as the importance of certifications to recruiters and pathways into cybersecurity, as well as topics on advances in the field, such as emerging cybersecurity threats, the impact of quantum cryptography and artificial intelligence on the digital landscape, and the intersection of policy and security.
When asked how someone with a humanities background might consider a pathway in cybersecurity, Valecia Maclin poignantly noted that cybersecurity is a complex field with many components, including law, policy, technical competencies, and more. She emphasized the need for professionals who can bridge the gaps between those areas, but also work within them. In addition, she added that beginning one’s career in one area of cybersecurity does not preclude a transition into other areas of cybersecurity. In response to a question posed to all panelists, Maclin noted that her cybersecurity “sheros” included the long unsung African American women codebreakers who provided crucial intelligence to the United States during WWII.
The many pathways of a cybersecurity career
The narrative that pathways into cybersecurity are nonlinear is a crucial message for girls and women who may not perceive their own technical competencies or seek to pursue more technical careers, but whose strengths and interests may lie in the field’s myriad tracks. Girl Security works with girls, women, and gender minorities across the United States and globally to convey the message that girls already have the competencies they need to excel in security’s many pathways. Additionally, Girl Security is exploring the best analytical approaches to better understand girls’ interests in cybersecurity. Combining an equity-informed approach with existing STEM models assessing girls’ interests and pathways can offer important insights into needed interventions.
As the field continues to forge new, crucial approaches to supporting girls’ interests in cybersecurity, reaffirming that there is no one “right” path into cybersecurity offers timely reassurance to girls and women amid a more challenging pandemic economy. One high school participant noted that she was graduating high school and pursuing community college. Another participant, who was transferring from community college into UC Berkeley, jumped in to reassure her that community colleges offer many pathways into the field. Jakkal, in response to observing the participants’ positive support, highlighted the importance of building peer and lateral networks at the onset and throughout one’s career.
Edna Conway, who began her career in law, emphasized the value of career twists and turns. Detours, she noted, can provide invaluable experience. She added that the most important aspect of any career is bringing one’s whole self to the job and appreciating the process. She explained, “Understand what gives you energy and follow that.” Ann Johnson, who leads Microsoft’s security and compliance road map across industries worldwide, agreed: “Bring who you are to what you do.” Conway also noted that women tend to have an inclination toward critical thinking and problem solving, making them particularly qualified for cybersecurity challenges. And in the event of a professional roadblock? Johnson reassured participants: “Ask for help, don’t take no for an answer. You’re going to stumble, and that’s OK.”
Amy Hogan-Burney, who holds a law degree and began her career as an attorney with the U.S. Department of Justice and Federal Bureau of Investigation, now leads Microsoft’s Digital Crimes Unit, a global team of attorneys, investigators, engineers, and analysts working to fight cybercrime. She encouraged participants to trust their instincts, noting, “It is easy to make things hard and hard to make things easy, so trust yourself and trust your capabilities and ask questions.”
Girl Security participants offered meaningful feedback following the event about the importance of visible women role models in cybersecurity, the value of “face-to-face” (albeit virtual) interaction with women leaders, and the need for additional programming that highlights the field’s diverse pathways. As one participant noted, “I never realized how broad the field is. It’s exciting to think my interests could lead to a career!”
What’s next
As part of this exciting new partnership, Girl Security and Microsoft Security will continue to host programming and cybersecurity education. On June 28, 2021, at 4 PM CST, Girl Security and Sara Manning Dawson, Chief Technology Officer, Enterprise Security at Microsoft, will conduct a session on disinformation, cybersecurity, and national security alongside budding cybersecurity leader Kyla Guru for Girl Con—an international tech conference (for high school students, by high school students) aiming to empower the next generation of female leaders. In addition, Girl Security and Microsoft Security will join forces for a new leadership program on cybersecurity for the Girl Scouts Greater Chicago, Northwest Indiana, and more. Sign up to be the first to learn about new Girl Security and Microsoft Security events.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
1Cybersecurity Workforce Study: Women in Cybersecurity, (ISC)² Cybersecurity Workforce Report, 2019.
2(ISC)² survey shows women increasingly embracing cybersecurity as a career path, Security, BNP Media, July 21, 2020.
The post Encouraging women to embrace their cybersecurity superpowers appeared first on Microsoft Security Blog.
Over the last year, PCs have kept us connected to family and friends and enabled businesses to continue to run. This new hybrid work paradigm has us thinking about how we will continue to deliver the best possible quality, experience, and security for the more than 1 billion people who use Windows. While we have adapted to working from home, it’s been rare to get through a day without reading an account of a new cybersecurity threat. Phishing, ransomware, supply chain, and IoT vulnerabilities—attackers are constantly developing new approaches to wreak digital havoc.
But as attacks have increased in scope and sophistication, so have we. Microsoft has a clear vision for how to help protect our customers now and in the future and we know our approach works.
Today, we are announcing Windows 11 to raise security baselines, with new built-in hardware security requirements that will give our customers the confidence that they are even more protected from the chip to the cloud on certified devices. Windows 11 is redesigned for hybrid work and security with built-in hardware-based isolation, proven encryption, and our strongest protection against malware.
Security by design: Built-in and turned on
Security by design has long been a priority at Microsoft. It’s why we invest more than $1 billion a year in security and employ more than 3,500 dedicated security professionals.
We’ve made significant strides in that journey to create chip-to-cloud Zero Trust out of the box. In 2019, we announced secured-core PCs that apply security best-practices to the firmware layer, or device core, that underpins Windows. These devices combine hardware, software, and OS protections to help provide end-to-end safeguards against sophisticated and emerging threats like those against hardware and firmware that are on the rise according to the National Institute of Standards and Technology as well as the Department of Homeland Security. Our Security Signals report found that 83 percent of businesses experienced a firmware attack, and only 29 percent are allocating resources to protect this critical layer.
With Windows 11, we’re making it easier for customers to get protection from these advanced attacks out of the box. All certified Windows 11 systems will come with a TPM 2.0 chip to help ensure customers benefit from security backed by a hardware root-of-trust.
The Trusted Platform Module (TPM) is a chip that is either integrated into your PC’s motherboard or added separately into the CPU. Its purpose is to help protect encryption keys, user credentials, and other sensitive data behind a hardware barrier so that malware and attackers can’t access or tamper with that data.
PCs of the future need this modern hardware root-of-trust to help protect against both common attacks like ransomware and more sophisticated attacks from nation-states. Requiring TPM 2.0 elevates the standard for hardware security by making that built-in root-of-trust mandatory.
TPM 2.0 is a critical building block for providing security with Windows Hello and BitLocker to help customers better protect their identities and data. In addition, for many enterprise customers, TPMs help facilitate Zero Trust security by providing a secure element for attesting to the health of devices.
Windows 11 also has out-of-the-box support for Microsoft Azure Attestation (MAA), bringing hardware-based Zero Trust to the forefront of security and allowing customers to enforce Zero Trust policies when accessing sensitive resources in the cloud with supported mobile device management (MDM) solutions like Intune, or on-premises.
- Raising the security baseline to meet the evolving threat landscape. This next generation of Windows will raise the security baseline by requiring more modern CPUs, with protections like virtualization-based security (VBS), hypervisor-protected code integrity (HVCI), and Secure Boot built in and enabled by default to protect against common malware and ransomware as well as more sophisticated attacks. Windows 11 will also come with new security innovations like hardware-enforced stack protection for supported Intel and AMD hardware, helping to proactively protect our customers from zero-day exploits. Innovations like the Microsoft Pluton security processor, when used by our partners in the Windows ecosystem, help raise the strength of the fundamentals at the heart of robust Zero Trust security.
- Ditch passwords with Windows Hello to help keep your information protected. For enterprises, Windows Hello for Business supports simplified passwordless deployment models for achieving a deploy-to-run state within a few minutes. This includes granular control of authentication methods by IT admins while securing communication between cloud tools to better protect corporate data and identity. And for consumers, new Windows 11 devices will be passwordless by default from day one.
- Security and productivity in one. All these components work together in the background to help keep users safe without sacrificing quality, performance, or experience. The new set of hardware security requirements that comes with this new release of Windows is designed to build a foundation that is even stronger and more resistant to attacks on certified devices. We know this approach works—secured-core PCs are twice as resistant to malware infection.
- Comprehensive security and compliance. Out of the box support for Microsoft Azure Attestation enables Windows 11 to provide evidence of trust via attestation, which forms the basis of compliance policies organizations can depend upon to develop an understanding of their true security posture. These Azure Attestation-backed compliance policies validate both the identity, as well as the platform, and form the backbone for the Zero Trust and Conditional Access workflows for safeguarding corporate resources.
This next level of hardware security is compatible with upcoming Pluton-equipped systems and also any device using the TPM 2.0 security chip, including hundreds of devices available from Acer, Asus, Dell, HP, Lenovo, Panasonic, and many others.
Windows 11 is a smarter way for everyone to collaborate, share, and present—with the confidence of hardware-backed protections.
Learn more
For more information, check out the other features that come with Windows 11:
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
The post Windows 11 enables security by design from the chip to the cloud appeared first on Microsoft Security Blog.
Red Canary Director of Intelligence Katie Nickels shares her thoughts on strategies, tools, and frameworks to build an effective threat intelligence team.
The post Strategies, tools, and frameworks for building an effective threat intelligence team appeared first on Microsoft Security Blog.
How to think about building a threat intelligence program
The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Red Canary Director of Intelligence Katie Nickels, a certified instructor with the SANS Institute. In this blog, Katie shares strategies, tools, and frameworks for building an effective threat intelligence team.
Natalia: Where should cyber threat intelligence (CTI) teams start?
Katie: Threat intelligence is all about helping organizations make decisions and understand what matters and what doesn’t. Many intelligence teams start with tools or an indicator feed that they don’t really need. My recommendation is to listen to potential consumers of the intel team, understand the problems they are facing, and convert their challenges into requirements. If you have security operations center (SOC) analysts, talk to them about their pain points. They may have a flood of alerts and don’t know which ones are the most important. Talk to systems administrators who don’t know what to do when something big happens. It could be as simple as helping an administrator understand important vulnerabilities.
The intel team can then determine how to achieve those requirements. They may need a way to track tactics, techniques, procedures (TTPs), and threat indicators, so they decide to get a threat intelligence platform. Or maybe they need endpoint collection to understand what adversaries are doing in their networks. They may decide they need a framework or a model to help organize those adversary behaviors. Starting with the requirements and asking what problems the team needs to solve is key to figuring out how to make a big impact.
Also, threat intel analysts must be selfless people. We produce intelligence for others, so setting requirements is more about listening than telling.
Natalia: What should security teams consider when selecting threat intelligence tools?
Katie: I always joke that one of the best CTI tools of all time is a spreadsheet. Of course, spreadsheets have limitations. Many organizations will use a threat intelligence platform, either free, open-source software, like MISP, or a commercial option.
For tooling, CTI analysts need a way to pull on all these threads. I recommend that organizations start with free tools. Twitter is an amazing source of threat intelligence. There are researchers who track malware families like Qbot and get amazing intelligence just by following hashtags on Twitter. There are great free resources, like online sandboxes. VirusTotal has a free version and a paid version.
As teams grow, they may get to a level where they have tried the free tools and are hitting a wall. There are commercial tools that provide a lot of value because they can collect domain information for many years. There are commercial services that let you look at passive Domain Name Server (DNS) information or WHOIS information so you can pivot. This can help teams correlate and build out what they know about threats. Maltego has a free version of a graphing and link analysis tool that can be useful.
Natalia: How should threat intelligence teams select a framework? Which ones should they consider?
Katie: The big three frameworks are the Lockheed Martin Cyber Kill Chain®, the Diamond Model, and MITRE ATT&CK. If there’s a fourth, I would add VERIS, which is the framework that Verizon uses for their annual Data Breach Investigations Report. I often get asked which framework is the best, and my favorite answer as an analyst is always, “It depends on what you’re trying to accomplish.”
The Diamond Model offers an amazing way for analysts to cluster activity together. It’s very simple and covers the four parts of an intrusion event. For example, if we see an adversary today using a specific malware family plus a specific domain pattern, and then we see that combination next week, the Diamond Model can help us realize those look similar. The Kill Chain framework is great for communicating how far an incident has gotten. We just saw reconnaissance or an initial phish, but did the adversary take any actions on objectives? MITRE ATT&CK is really useful if you’re trying to track down to the TTP level. What are the behaviors an adversary is using? You can also incorporate these different frameworks.
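The Diamond Model clustering Katie describes can be sketched in a few lines: record each intrusion event with its four diamond vertices (adversary, capability, infrastructure, victim) and bucket events whose malware family and infrastructure pattern recur. The event data and the normalization rule below are invented for illustration.

```python
import re
from collections import defaultdict

# Invented intrusion events, each with the Diamond Model's four vertices.
events = [
    {"adversary": None, "capability": "Qbot", "infrastructure": "ab-12.example", "victim": "org-a"},
    {"adversary": None, "capability": "Qbot", "infrastructure": "cd-34.example", "victim": "org-b"},
    {"adversary": None, "capability": "Cobalt Strike", "infrastructure": "ef-56.example", "victim": "org-c"},
]

def normalize_infra(domain):
    # Collapse the variable parts of "ab-12.example"-style names into a
    # pattern so recurring domain-registration schemes cluster together.
    return re.sub(r"[a-z]", "x", re.sub(r"\d", "N", domain))

def cluster_key(event):
    # Same malware family plus same domain pattern seen weeks apart
    # lands in the same bucket, flagging likely related activity.
    return (event["capability"], normalize_infra(event["infrastructure"]))

clusters = defaultdict(list)
for e in events:
    clusters[cluster_key(e)].append(e)

for key, members in clusters.items():
    print(key, len(members))
```

Real clustering weighs all four vertices and analyst judgment; this sketch only shows the mechanical part of spotting the “same combination next week” case.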
Natalia: How do you design a threat model?
Katie: There are very formal software engineering approaches to threat modeling, in which you think of possible threats to software and how to design it securely. My approach is, let’s simplify it. Threat modeling is the intersection of what an organization has that an adversary might target. A customer might say to us, “We’re really worried about the Lazarus Group and North Korean threats.” We’d say, “You’re a small coffee shop in the middle of the country, and that threat might not be the most important to you based on what we’ve seen this group do in the past. I think a more relevant threat for you is probably ransomware.” Ransomware is far worse than anyone expected. It can affect almost every organization; big and small organizations are affected equally by ransomware.
If teams focus on all threats, they’re going to get burnt out. Instead, ask, “What does our organization have that adversaries might want?” When prioritizing threats, talking to your peers is a great place to start. There’s a wealth of information out there. If you’re a financial company, go talk to other financial companies. One thing I love about this community is that most people, even if they’re competitors, are willing to share. Also, realize that people in security operations, who aren’t necessarily named threat intel analysts, still do intelligence. You don’t have to have a threat intel team to do threat intel.
Natalia: What is the future of threat intelligence?
Katie: Cyber threat intelligence has been around for maybe a few decades, but in the scope of history, that’s a very short time. With frameworks like ATT&CK or the Diamond Model, we’re starting to see a little more formalization. I hope that builds, and there’s more professionalization of the industry with standards for what practices we do and don’t do. For example, if you’re putting out an analysis, here are the things that you should consider. There’s no standard way we communicate except for those few frameworks like ATT&CK. When there are standards, it’s much easier for people to trust what’s coming out of an industry.