Geopolitical Tensions Fuel Worsening Cyberattack Scenario

Ongoing Geopolitical Tensions Involving China, Russia, North Korea and Iran Are Leading to Cyberattacks

A new report suggests that China and Russia alone account for 47% of cyber-attacks throughout 2019. Across all cyber-attacks, the use of custom malware and process hollowing is growing, destructive/integrity attacks are increasing, and a worrying new form of island hopping is emerging.

While it is problematic to attribute sources of attacks, these details come from the latest VMware Carbon Black Global Incident Response Threat Report (PDF). “What makes this study unique,” VMware Carbon Black’s head security strategist Tom Kellermann told SecurityWeek, “is that it is not limited to VMware Carbon Black facts and figures. The thirty largest incident response and MSSP firms in the world contributed to this study, from SecureWorks to Booz Allen and Deloitte, and across all of their investigations over the last six months.”

The findings described in the report come from incident responders working in the field rather than from a survey of opinions or a single company’s private telemetry. “These incident responders,” continued Kellermann, “are suggesting Russian and Chinese sources from the nature of the forensic footprints and the secondary C2 locations, not just the primary C2 locations, they discover. That doesn’t dismiss the possibility of false flag and proxying operations, but combined with human intelligence on the malicious code, possible motives and cui bono? [who benefits?] applied to the situations, this is probably quite accurate.”

Underlying the source of attacks is the current global geopolitical tension. Attacks from Iran are increasing — and indeed, from the U.S. itself. This latter, suggests Kellermann, “is probably because there is an undercurrent of disillusionment in America right now, and we’re seeing this expressed in cyberspace.” While this includes ‘domestic terrorism’, it is likely to be exacerbated in future years as an unintended consequence of attempts to close the skills gap. Kids are being taught cyber skills in high school — but unless they go all the way through university to degree level, they will not find employment in security. It is almost inevitable that dropouts will look elsewhere to use their skills — and Kellermann believes that this has already happened in Brazil: successful attempts to improve Brazilian cyber skills over the last two decades are the root cause of the growth of a robust and skilled Brazilian hacking community today.

The global geopolitical condition is a particular concern for Kellermann, with the 2020 presidential elections approaching. “Current listings for state voter database dumps are available for sale on the dark web,” says the report. One bundle of data from 27 states has been sold at least 47 times as of October 2019, suggesting that criminals see multiple ways of monetizing the personal information.

But what worries Kellermann most is the growing use of ‘access mining’. Stolen data is being sold by the criminals for near-term profit; but the criminals concerned, he suggests, “are waiting for the big fish buyer to come and say, ‘sell me the access to the system where you got this data’.”

He is worried that nation states and politically motivated non-states will get access to those systems and start manipulating details to cause voter disenfranchisement.

“This can be done quite simply by changing the integrity of the records,” he said. “When a person attempts to vote, they are denied because their name or address has been misspelled. That is a viable way to curtail democracy.” Given the tightness of voting in the swing states, it wouldn’t take a great deal to change the outcome — and just a few swing states could choose the president. “On the list of the databases for sale,” said Kellermann, “there are three of the most significant swing states with the greater part of their voting data: Michigan, Ohio, and Florida are included. Whoever wins those three states will win the election.”

Within current cyber-attacks, two new developments are highlighted in the report. The first is an increasing use of destructive malware. Criminal gangs are seeking stealth and persistence — perhaps with a mind to future access mining. When stealth is lost and incident responders are detected, rather than simply leaving, the criminals increasingly destroy the environment. Forty-one percent of attacks, up 10% on the previous two quarters, now include destructive/integrity attacks.

“In some ways,” says the report, “destruction is the ultimate in counter-incident response: As a victim calls the police during a home invasion, the attacker decides to burn the house down. Once the house is burnt down, detectives aren’t likely to figure out how the thieves broke in or what was stolen, thus erasing the evidence.” This is easily achieved with wiper malware or ransomware with no ransom.

The second development is a new form of island hopping that employs reverse business email compromise. “The commandeering of mail servers,” Kellermann told SecurityWeek, “is followed by selectively deploying fileless malware against a handful of targets, particularly the board members of customer corporations. For executives, the worst-case scenario is no longer the theft of data; it is island hopping, as your brand will be used to attack your customers,” he continued. “This is the dark side of digital transformation.”

While the attacks themselves are becoming more sophisticated, so too are the criminals and the malware they use. For several years there have been reports of cyber criminals using old malware. This seems to have reversed. Custom malware is now used in 41% of attacks, an increase of 5% over Q1, 2019.

“Improved attack capabilities from the elite hackers has multiple causes,” Kellermann told SecurityWeek. “Firstly, they are aligned with their national governments and act like cyber militia for those governments. Secondly we’re living in an era of improvement and enhancement to the sophisticated attacks leaked by Shadow Brokers. And because of the absence of international norms of behavior and the global political angst that exists across the world between the haves and the have-nots, this manifests in cyber space in the form of various activities.”

Despite the clearly worsening cyber outlook, Kellermann believes there are steps that can and must be taken to improve things. “One of the biggest problems for business,” he said, “is that the security industry is fragmented and competes with itself rather than against the dark web. The criminals are more united and cooperative. We need to concentrate on two constructs as an industry: we need to open up our APIs and integrate with each other as much as possible so that we can really get to a single pane of glass; and we also need to change the way we protect things.”

The failure of perimeter defense is not a new idea. But, says Kellermann, “We need to protect things from inside out. We should be using an architectural model similar to a modern penitentiary rather than a castle-like fortress.”

Achieving this begins with really understanding viable attack paths and limiting lateral movement, “which in turn,” he suggests, “would limit our worst-case scenario — which is island hopping. We need to baseline vulnerabilities, we should be using just-in-time administration, and we should employ application control.”

Microsegmentation is imperative for inhibiting lateral movement, and particularly attacks like process hollowing, and we should be using big data. “But we cannot be completely dependent on machine learning or AI,” he added, “because they can be polluted. Finally, when we collaborate, not only do we need to embrace MITRE ATT&CK, we need to appreciate the fact that in order to be predictive on future TTPs and the combination of future TTPs, we need to get away from the linear kill chain and embrace the cognitive attack loop.”

Related: Securing the 2020 Elections From Multifarious Threats

Related: The Increasing Effect of Geopolitics on Cybersecurity

Related: The Geopolitical Influence on Business Risk Management

Related: Geopolitical Context a Prerequisite for Finished Intelligence


Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.


Source: Geopolitical Tensions Fuel Worsening Cyberattack Scenario

Ongoing Research Project Examines Application of AI to Cybersecurity

Project Blackfin: Multi-Year Research Project Aims to Unlock the Potential of Machine Intelligence in Cybersecurity

Project Blackfin is ongoing artificial intelligence (AI) research challenging the current automatic assumption that deep-learning neural network principles are the best way to teach a system to detect anomalous behavior or malicious activity on a network. Run by security firm F-Secure, the project is examining the alternative applicability of distributed swarm intelligence in decision making.

“People’s expectations that ‘advanced’ machine intelligence simply mimics human intelligence is limiting our understanding of what AI can and should do,” explains Matti Aksela, F-Secure’s VP of artificial intelligence. “Instead of building AI to function as though it were human, we can and should be exploring ways to unlock the unique potential of machine intelligence, and how that can augment what people do.”

Project Blackfin is being run by F-Secure with collaboration between in-house engineers, researchers, data scientists and academic partners. “We created Project Blackfin,” continued Aksela, “to help us reach that next level of understanding about what AI can achieve.” Although it is a long-term project, some early principles are already being incorporated into F-Secure’s own products.

The primary problem with many current anomaly detection AI systems is well-known: too many false positives or too many false negatives. This is difficult to solve simply because of the way the systems work. Streams of data from endpoints and network traffic are centralized and analyzed on arrival, and then stored for later audit or forensic analysis. Because the data arrives from multiple sources, correlating events across those sources is difficult. And since attackers often build delays into their attacks, new events may also need to be related to historical events to contextualize possibly malicious activity.
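The correlation problem can be illustrated with a toy sketch (the event format and field names here are invented for illustration): events from different sources are grouped by a shared indicator, with a lookback window intended to catch deliberately delayed activity.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def correlate(events, window=timedelta(hours=24)):
    """Group events that share an indicator (e.g. a hash or C2 address),
    even when they arrive from different sources hours apart."""
    by_indicator = defaultdict(list)
    for ev in events:
        by_indicator[ev["indicator"]].append(ev)
    clusters = []
    for indicator, evs in by_indicator.items():
        evs.sort(key=lambda e: e["time"])
        cluster = [evs[0]]
        for ev in evs[1:]:
            if ev["time"] - cluster[-1]["time"] <= window:
                cluster.append(ev)  # close enough in time: same incident
            else:
                clusters.append((indicator, cluster))
                cluster = [ev]      # delayed activity starts a new cluster
        clusters.append((indicator, cluster))
    return clusters
```

An attacker who waits days between stages lands in a separate cluster unless the lookback window is widened — which is exactly the tuning problem the next paragraph describes.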

The result is that finding the best sensitivity setting for behavior detection is critical. Set it too high, to ensure nothing is missed, and the security team must manually triage huge numbers of false positives; set it too low, to reduce the false positives, and the potential for false negatives increases.
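The trade-off can be made concrete with a toy example (the scores and thresholds are invented): counting false positives and false negatives at two different alerting thresholds.

```python
def alert_errors(scores, is_attack, threshold):
    """Count false positives and false negatives when alerting on any
    anomaly score at or above the threshold."""
    fp = sum(1 for s, mal in zip(scores, is_attack) if s >= threshold and not mal)
    fn = sum(1 for s, mal in zip(scores, is_attack) if s < threshold and mal)
    return fp, fn

# Toy anomaly scores (higher = more anomalous); True marks real attacks.
scores = [0.2, 0.4, 0.5, 0.7, 0.9]
is_attack = [False, False, True, False, True]

print(alert_errors(scores, is_attack, 0.3))  # sensitive: (2, 0) - noisy but complete
print(alert_errors(scores, is_attack, 0.8))  # lax: (0, 1) - quiet but misses an attack
```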

Blackfin is exploring the distribution of the AI as collaborative agents within each endpoint and server of a network. Each intelligence becomes expert in the acceptable use of its own host. The model is inspired by the patterns of collective behavior found in nature, such as the swarm intelligence found in ant colonies or schools of fish. “The project aims to develop these intelligent agents to run on individual hosts,” says F-Secure. “Instead of receiving instructions from a single, centralized AI model, these agents would be intelligent and powerful enough to communicate and work together to achieve common goals.”

Consider the machine learning predictive text input capabilities of individual phones. They learn the text habits of their owners very quickly, rapidly offering probable word completions based on their owners’ habits. This is the type of distributed intelligence being explored by Blackfin, with the intelligence located in the device — but with the added ability for each local intelligence to collaborate with those on adjacent devices. What may be merely suspicious activity in the context of one endpoint can be confirmed as malicious or benign in the context of its action on adjacent endpoints — each of which has its own endpoint-specific intelligence.

This improves the correlation and contextualization of suspicious activity, since the event is immediately seen, in situ, in the context of both the source and destination hosts. In our phone example, it would be as if the text input intelligence on one phone could collaborate with the intelligence on the destination phone and say, ‘Stop. You should not use that language with your grandmother.’

“Essentially,” said Aksela, “you’ll have a colony of fast local AIs adapting to their own environment while working together, instead of one big AI making decisions for everyone.”
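A minimal sketch of such a colony, under assumed semantics (the class, method names and voting rule are invented for illustration, not F-Secure's design): each agent keeps a baseline of what is normal on its own host and polls its neighbors before escalating.

```python
class HostAgent:
    """A per-host agent: expert in its own host's normal behavior,
    but able to ask neighboring agents for a second opinion."""

    def __init__(self, host, baseline):
        self.host = host
        self.baseline = set(baseline)  # processes considered normal here
        self.neighbors = []            # agents on adjacent hosts

    def local_verdict(self, process):
        return "normal" if process in self.baseline else "suspicious"

    def verdict(self, process):
        if self.local_verdict(process) == "normal":
            return "normal"
        # Locally suspicious: escalate only if most neighbors agree,
        # instead of deferring to a single central model.
        votes = sum(1 for n in self.neighbors
                    if n.local_verdict(process) == "suspicious")
        return "malicious" if votes > len(self.neighbors) / 2 else "suspicious"

web = HostAgent("web01", {"sshd", "nginx"})
db = HostAgent("db01", {"sshd", "postgres"})
app = HostAgent("app01", {"sshd"})
app.neighbors = [web, db]

print(app.verdict("sshd"))   # normal everywhere
print(app.verdict("nginx"))  # unknown here, normal on a neighbor: stays suspicious
print(app.verdict("xmrig"))  # unknown everywhere: escalated to malicious
```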

F-Secure has published the first of what it expects to be regular papers on the progress of Blackfin (PDF). For now, it is exploring different anomaly detection models to detect specific phenomena. “By combining the outputs of multiple different models associated with each of the [different categories],” says the paper, “a contextual understanding of what is happening on a system can be derived, enabling downstream logic to more accurately predict whether a specific event or item is anomalous, and if it is, if it is worth alerting on. This approach enables generic methodologies for detecting attacker actions (or sequences of actions), without baking specific logic into the detection system itself.”
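One way to read that combining step, as a rough sketch (the category names, weights and thresholds are invented, not taken from the paper): fuse several per-category anomaly scores and alert only when both the combined score and the number of agreeing models clear a bar.

```python
def should_alert(scores, combined_threshold=0.6, min_agreeing=2):
    """Fuse per-category anomaly scores (0..1) into a single decision,
    so no single model has to alert on its own noisy output."""
    combined = sum(scores.values()) / len(scores)
    agreeing = sum(1 for s in scores.values() if s >= 0.5)
    return combined >= combined_threshold and agreeing >= min_agreeing

# Process and network models agree something is wrong: alert.
print(should_alert({"process": 0.9, "network": 0.8, "file": 0.2}))  # True
# Only one model is excited: likely a false positive, stay quiet.
print(should_alert({"process": 0.9, "network": 0.1, "file": 0.1}))  # False
```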

Research is ongoing and will continue for several years. Nevertheless, says F-Secure, through Blackfin it has “identified a rich set of interactions between models running on endpoints, servers, and the network that have the potential to vastly improve breach detection mechanisms, forensic analysis capabilities, and response capabilities in future cyber security solutions… we expect to regularly report new results and findings as they present themselves.”

Related: Artificial Intelligence in Cybersecurity is Not Delivering on its Promise

Related: Are AI and Machine Learning Just a Temporary Advantage to Defenders?

Related: The Malicious Use of Artificial Intelligence in Cybersecurity

Related: The Role of Artificial Intelligence in Cyber Security



DoppelPaymer Ransomware Spreads via Compromised Credentials: Microsoft

The DoppelPaymer ransomware spreads via existing Domain Admin credentials, not exploits targeting the BlueKeep vulnerability, Microsoft says.

The malware, which security researchers believe to have been involved in the recent attack on Mexican state-owned oil company Petróleos Mexicanos (Pemex), has been making the rounds since June 2019, with some earlier samples dated as far back as April 2019.

Initially detailed in July this year, DoppelPaymer is said to be a forked version of BitPaymer, likely the work of some members of the TA505 threat group (the hackers behind Dridex and Locky) who decided to leave the cybercrime gang and start their own illegal operation.

In a new blog post, Dan West and Mary Jensen, both senior security program managers at Microsoft’s Security Response Center, explain that while DoppelPaymer represents a real threat to organizations, circulating information on its spreading method is misleading.

Specifically, the tech company says that information regarding DoppelPaymer spreading across internal networks via Microsoft Teams and the Remote Desktop Protocol (RDP) vulnerability BlueKeep is incorrect.

“Our security research teams have investigated and found no evidence to support these claims. In our investigations we found that the malware relies on remote human operators using existing Domain Admin credentials to spread across an enterprise network,” Microsoft’s researchers explain.

The company recommends that security administrators enforce good credential hygiene, apply the principle of least privilege, and implement network segmentation to keep their environments protected.

These best practices, Microsoft notes, can help prevent not only DoppelPaymer attacks but also other malware from compromising networks, disabling security tools, and leveraging privileged credentials to steal or destroy data.

Microsoft, which has already included protection from DoppelPaymer and other malware in Windows Defender, says it will continue to enhance protections as new emerging threats are identified.

“Globally, ransomware continues to be one of the most popular revenue channels for cybercriminals as part of a post-compromise attack,” Microsoft warns.

Attackers, the company says, typically use social engineering to compromise enterprises. The practice involves tricking an employee into visiting a malicious site or opening downloaded or emailed documents that drop malware onto their computers.

Related: Cyber Hygiene 101: Implementing Basics Can Go a Long Way

Related: Mexican Oil Company Pemex Hit by Ransomware

Related: The Growing Threat of Targeted Ransomware


Ionut Arghire is an international correspondent for SecurityWeek.


Source: DoppelPaymer Ransomware Spreads via Compromised Credentials: Microsoft

New IBM Cloud Security Solution Combines Data From Existing Tools

IBM on Wednesday announced the general availability of Cloud Pak for Security, a solution designed to help organizations hunt for threats by combining data from multiple tools and clouds.

Cloud Pak for Security can obtain and translate security data from existing tools. It currently supports tools from IBM, Carbon Black, Elastic, Tenable, Splunk and BigFix. However, since it leverages open source technology, customers can connect other data sources as well, IBM says.

IBM is one of the founding members of the recently launched Open Cybersecurity Alliance (OCA), which has allowed it to create partnerships with many important vendors.

According to the company, Cloud Pak for Security can be easily installed on-premises or in private or public cloud environments. The solution works with IBM Cloud, AWS and Microsoft Azure clouds.

One of the main advantages of Cloud Pak for Security, IBM says, is that organizations can obtain useful insights without having to actually move any data to the platform for analysis.

“Moving data in order to analyze it often drives untenable cost, complexity, and compliance issues,” explained Justin Youngblood, VP of strategy and offering management at IBM Security.

The new solution’s Data Explorer application should help security analysts quickly find threat indicators across their environments without the need to conduct manual searches in each environment.

Cloud Pak for Security also provides incident response orchestration and automation capabilities, which can help security teams save valuable time.

“Organizations have rapidly adopted new security technologies to keep up with the latest threats, but are now juggling dozens of disconnected tools which don’t always work well together,” said Jon Oltsik, senior principal analyst at Enterprise Strategy Group. “The industry needs to solve this issue for customers by shifting to more open technologies and unified platforms that can serve as the connective glue between security point tools. IBM’s approach aligns with this requirement and has the potential to bring together every layer of the security stack within a single, simplified interface.”

Related: Cloud is Creating Security and Network Convergence

Related: IBM Launches z15 Mainframe With New Data Protection Capabilities

Related: IBM Adds New Service to Cloud Identity Offering

Related: Trend Micro Unveils New Cloud Security Platform

Eduard Kovacs (@EduardKovacs) is a contributing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.


Source: New IBM Cloud Security Solution Combines Data From Existing Tools

NSA Issues Advisory on Mitigation of Risks Associated With TLSI

The U.S. National Security Agency (NSA) has published an advisory to provide information on possible mitigations for risks associated with Transport Layer Security Inspection (TLSI).

Also known as TLS break and inspect, TLSI is a mechanism that allows for the inspection of encrypted traffic within a network and involves the decryption of that traffic, inspection of contents, and re-encryption.

TLSI is usually performed by a proxy device to expose the underlying plaintext of a TLS session and allow firewalls and intrusion detection/prevention systems (IDS/IPS) to detect indicators of threat or compromise. Legacy Secure Sockets Layer (SSL) traffic is also inspected.

According to the NSA’s advisory (PDF), one of the risks associated with TLSI is improper control and external processing of decrypted traffic when in a forward proxy or near the enterprise boundary.

A forward proxy is a device that intercepts requests from internal network clients and forwards them to external servers. It also receives responses from those servers and sends them to internal network clients.

In a forward proxy, the TLSI mechanism manages forward proxy traffic flows, establishes TLS sessions, and issues trusted certificates. Thus, it can protect enterprise clients from the high risk environment outside the forward proxy.

However, the forward proxy could misroute the traffic, thus exposing it to unauthorized or weakly protected networks, the NSA advisory says.

Deploying firewalls and monitoring network traffic flow can protect the TLSI implementation from potential exploits, while implementing analytics on the logs ensures the system is operating as expected. Moreover, these mitigations can also detect abuse by security admins and misrouted traffic.

TLSI occurs in real-time and replaces the end-to-end TLS session with a “TLS chain” that consists of two independently negotiated TLS connections: one with the internal network client and another with the external server. The TLS traffic, however, flows as if there is a single connection.

The TLS chaining could result in a TLS protection downgrade, as the TLS version or cipher suites could differ from one connection to the other. This, the NSA points out, could lead to a passive exploitation of the session or of vulnerabilities in the weaker TLS versions or cipher suites.

To prevent that, admins should properly configure TLS security settings, including version, cipher suites, and certificates; disable weak TLS versions and cipher suites on the server side; and prevent clients from using them.
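In Python's standard `ssl` module, for example, such a hardened client-side configuration for the proxy's external leg might look like the sketch below (the cipher string is one illustrative choice, not an NSA recommendation):

```python
import ssl

def hardened_client_context():
    """TLS client context: TLS 1.2 as a floor, certificate and hostname
    verification enabled, and only forward-secret AEAD cipher suites."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2    # refuse SSLv3, TLS 1.0/1.1
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")  # drop weak cipher suites
    ctx.verify_mode = ssl.CERT_REQUIRED             # reject unverified servers
    ctx.check_hostname = True
    return ctx
```

A context like this would be used for every connection the proxy negotiates with external servers, so the external leg of the TLS chain can never silently downgrade below the internal one.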

Where outdated applications that require weak TLS versions and cipher suites are used, admins should constrain that usage so that they are negotiated for exempted clients only.

Applying certificate pinning should help detect unauthorized changes to TLS certificates received from external servers — this might be an indicator of man-in-the-middle attacks against the proxy — and certificate transparency can be used to report unauthorized certificates to the owners of the external servers.
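The pin check itself is small; a sketch in Python (the helper name is invented), comparing the SHA-256 fingerprint of the certificate the proxy received — e.g. the DER bytes returned by `SSLSocket.getpeercert(binary_form=True)` — against a previously stored pin:

```python
import hashlib

def pin_matches(cert_der: bytes, pinned_sha256: str) -> bool:
    """True if the certificate's SHA-256 fingerprint equals the stored pin.
    A mismatch for a previously pinned server may indicate a
    man-in-the-middle between the proxy and the external server."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    return fingerprint == pinned_sha256.replace(":", "").lower()
```

A mismatch should raise an alert for investigation rather than trigger a silent reconnect.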

TLSI forward proxy devices also include a certification authority (CA) function to issue and sign new certificates that represent the external servers to the TLS client — the client is configured to trust the CA.

This mechanism, however, can be abused to issue unauthorized certificates trusted by the TLS clients, which allows an attacker to sign malicious code and bypass host IDS/IPSs.

Protections include issuing the embedded CA’s signing certificate for TLS inspection purposes only, not using default or self-signed certificates, and monitoring traffic for unexpectedly issued certificates.

Certificate revocation services should be enabled for certificates cached by the TLSI system, so that any unauthorized certificates can be easily revoked. The embedded CA should be configured to issue only TLS server authentication certificates.

“Configure TLS clients to trust the external CA so they only trust the certificates the TLSI system issued for TLS server authentication. Issue the embedded CA with a certificate that has name constraints to reinforce limitations of the inspection authorization and prevent impersonation of enterprise services,” the advisory reads.

Adversaries, the NSA points out, could focus on exploiting only the device where potential traffic of interest is decrypted. To mitigate this risk, admins should set a policy to enforce decryption and inspection of traffic only as authorized, and to ensure that decrypted traffic is contained to an isolated segment of the network.

To prevent internal abuse by the security administrators who manage the TLSI implementation, the principles of least privilege and separation of duties should be applied, ensuring that only authorized admins have access to decrypted data. A separate auditor role should be used to detect policy modifications and other potential abuse.

The NSA also points out that, in the United States, enterprises operating TLSI capabilities are subject to privacy laws, policies, and regulations and that they should be aware of requirements and prevent unauthorized exposure of data.

To minimize risks associated with TLSI, the NSA notes, breaking and inspecting TLS traffic should only be conducted once within the network, as that is enough to detect encrypted traffic threats. Performing the inspection multiple times could complicate diagnosing network issues with TLS traffic.

“Also, multi-inspection further obscures certificates when trying to ascertain whether a server should be trusted. In this case, the “outermost” proxy makes the decisions on what server certificates or CAs should be trusted and is the only location where certificate pinning can be performed,” the advisory reads.

Related: Over 100,000 Fake Domains With Valid TLS Certificates Target Major Retailers

Related: Study Finds Rampant Sale of SSL/TLS Certificates on Dark Web



Source: NSA Issues Advisory on Mitigation of Risks Associated With TLSI