Whether It’s Stealing Your Information or Selling it Online, the Holidays are a Bonanza for Cybercriminals
The joy of the season, at least as expressed through the growing dollar volume of Black Friday and Cyber Monday online sales, was huge this year. Final numbers aren’t yet available, but according to Salesforce, U.S. online sales exceeded $8 billion, while Adobe Analytics puts the figure closer to $9.4 billion in U.S. online orders. Either way, the total would represent a new record.
But the surge in bargain hunting and online buying isn’t just limited to traditional holiday shoppers. It is also a bonanza for cybercriminals whose own sales and purchases of contraband on the dark web mirror the one-day-only specials of their consumer-facing counterparts.
How do I know that? My company’s primary mission involves peering around the digital shadows of the internet that most people never see – variously known as the Dark Web or the Deep Web – to determine whether a client’s credentials, credit cards, or other high-value data have been stolen and offered for sale through online black marketplaces. And the volume of contraband material available is huge.
Take, for example, BriansClub, which specializes in the sale of stolen credit cards and other financial information. In October, the site offered customers who spent $500 or more in the shop a Black Friday bonus and eligibility for special discounts. We suspect that this largesse was a public relations effort to win customers back after the bad publicity that followed an attack on the site’s data center, which exposed 26 million credit and debit cards.
Like their above-ground counterparts, cybercriminals are always on the lookout for good deals. And as vendors of contraband, they use many of the same tools as legitimate businesses to attract customers. For example, BlackHatWorld is a members-only forum focused on black hat search engine optimization strategies and other dark marketing tactics for drawing prospective buyers to a marketplace. Those members, in turn, use the forum to track news about other deals they come across. In fact, so many members clamor for others to follow a particular thread that clandestine forum moderators often intervene in an attempt to impose order on the shopping chaos.
Display advertising using high-profile banners also proliferates around the holidays, drawing attention to dark web Black Friday deals. One marketplace, UnderMarket 2.0, boasts a variety of goods, including stolen credit cards, counterfeit products, and drugs. During the Black Friday/Cyber Monday weekend, its deals typically include bargains like 30 percent off everything, with extra discounts available for buyers who spend more than $2,000.
Not surprisingly, the volume of dark web traffic reaches a peak on Black Friday but continues through the holiday season and beyond. By tracking across chat messages, forum posts, and other dark web pages, we have found that mentions of Black Friday spike sharply in the days immediately after U.S. Thanksgiving. But the Black Friday concept has now grown beyond its calendar limits. The term has become so widely understood as a synonym for bargains that we have seen “Black Friday” sales pop up well outside of the winter shopping season.
Dark web vendors are usually, and appropriately, associated with the sale of stolen credit cards and other illegal products. But Black Friday can also be an opportunity for cybercriminals to improve the tools of their trade, even when they’re not offering wares for sale. Discounts are frequently available on SEO kits, HTTPS proxies, and virtual private network services – all of which can be used to trick and defraud unsuspecting targets. For example, a black hat SEO strategy coupled with backlink software could allow cybercriminals to push malicious websites higher in Bing or Google search results, drawing much larger audiences. And those tools can be used all year long.
So, what are businesses to do in order to remain safe? First, it’s important to understand that nothing is perfectly secure and that the techniques of cybercriminals will continue to evolve and become even harder to defend against as time goes on. That said, here are some guidelines that can narrow the opportunity for becoming a victim.
• Be diligent about your supply chain: Point of sale (POS) devices are prime targets, so make sure they are protected and monitored regularly for suspicious activity. Besides POS devices, don’t forget about third-party vendors such as your HVAC vendor, IT services, third-party software, etc. Have a defined supply chain onboarding process to include a robust vendor review, implement least privilege access, ensure there are strict security controls, and remember to revisit every step on a regular basis or if the scope of the vendor partnership changes.
• Use payer authentication and validation: Requiring card verification numbers (CVNs), using an address verification service (AVS), or using a 3-D Secure payer authentication service can help reduce the use of stolen credit cards.
• Monitor dark web forums and marketplaces for mentions of your company: The presence of your company domain on a criminal forum is a good indication you are being targeted by credential stuffing tools.
• Use anti-CNP (Card-Not-Present) tools to validate transactions: Device fingerprinting, customer history, velocity monitoring, and negative lists (in-house or shared) are all valuable tools to disrupt fraudsters.
• Plan ahead and stay one step ahead of cybercrime: Have a process in place to handle compromised customer accounts, be prepared to deal with extortion scenarios, and use threat intelligence to track actors and understand their threat level.
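Two of the anti-CNP controls listed above, velocity monitoring and negative lists, can be sketched in a few lines. This is a hypothetical illustration, not production fraud screening: the window size, attempt limit, blocked card number, and the function name `screen_transaction` are all invented for the example.

```python
from collections import defaultdict, deque
import time

VELOCITY_WINDOW_SECS = 3600   # illustrative look-back window
VELOCITY_LIMIT = 5            # illustrative max attempts per card per window
NEGATIVE_LIST = {"4000111122223333"}  # example entry on an in-house negative list

_attempts = defaultdict(deque)  # card number -> timestamps of recent attempts

def screen_transaction(card_number, now=None):
    """Return 'block', 'review', or 'allow' for a card-not-present attempt."""
    now = time.time() if now is None else now
    if card_number in NEGATIVE_LIST:
        return "block"                 # negative-list hit
    window = _attempts[card_number]
    while window and now - window[0] > VELOCITY_WINDOW_SECS:
        window.popleft()               # drop attempts older than the window
    window.append(now)
    if len(window) > VELOCITY_LIMIT:
        return "review"                # unusually rapid reuse of one card
    return "allow"
```

In practice these signals would be combined with device fingerprinting and customer history rather than used alone, and "review" would feed a manual or secondary automated check.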
Enjoy your success this holiday season, but take these steps to avoid sharing the joy with cybercriminals.
Source: Cybercriminals Celebrate the Holidays
Legislation that aims to protect the U.S. energy grid from cyberattacks passed the House this week after being added to the 2020 National Defense Authorization Act (NDAA).
The 2020 NDAA passed the House by a vote of 377 to 48 and President Donald Trump is expected to sign it soon.
The annual military bill includes the Securing Energy Infrastructure Act, which establishes a two-year pilot program within Energy Department national laboratories with the goal of identifying vulnerabilities and isolating critical grid systems.
The Securing Energy Infrastructure Act was introduced by Sen. Angus King and Sen. Jim Risch, and a companion bill has been introduced in the House of Representatives by Rep. Dutch Ruppersberger and Rep. John Carter.
The bill proposes solutions such as the use of analog backup systems, which could prevent cyberattacks from causing too much damage.
“This approach seeks to thwart even the most sophisticated cyber-adversaries who, if they are intent on accessing the grid, would have to actually physically touch the equipment, thereby making cyber-attacks much more difficult,” according to a press release from Sen. King’s office.
The bill also requires the creation of a working group that would analyze the solutions proposed by national laboratories and develop a national strategy for protecting the energy grid.
“The energy grid powers our financial transactions, communications networks, healthcare services and most of our daily life– so if this critical infrastructure is compromised by a hacker, these building blocks of American life are at risk,” said Senator King. “Protecting our energy grid is commonsense, bipartisan, and vital to national security, and I’m happy this year’s NDAA will enshrine this needed provision into law.”
The cyber and physical security of North America’s energy grid was tested recently as part of a major exercise called GridEx V. More than 6,500 participants representing more than 425 government and energy sector organizations in the United States, Canada and Mexico took part in the two-day exercise.
Earlier this year, a power utility in the U.S. reported interruptions to electrical system operations as a result of a denial-of-service (DoS) attack that involved the exploitation of a known vulnerability in Cisco firewalls.
Related: House Passes Bill to Enhance Industrial Cybersecurity
Related: U.S. Energy Firm Fined $2.7 Million Over Data Security Incident
Related: U.S. to Help Secure Baltic Energy Grid Against Cyber Attacks
Source: Bill to Protect U.S. Energy Grid From Cyberattacks Passes With NDAA
New Jersey’s largest hospital system said Friday that a ransomware attack last week disrupted its computer network and that it paid a ransom to stop it.
Hackensack Meridian Health did not say in its statement how much it paid to regain control over its systems but said it holds insurance coverage for such emergencies.
The attack forced hospitals to reschedule nonemergency surgeries and doctors and nurses to deliver care without access to electronic records.
The system said it was advised by experts not to disclose until Friday that it had been the victim of a ransomware attack. It said that its network’s primary clinical systems had returned to being operational, and that information technology specialists were working to bring all of its applications back online.
Hackensack Meridian said it had no indication that any patient information was subject to unauthorized access or disclosure.
It quickly notified the FBI and other authorities and spoke with cybersecurity and forensic experts, it said.
Hackensack Meridian operates 17 acute care and specialty hospitals, nursing homes, outpatient centers, and the psychiatric facility Carrier Clinic.
Related: The Case for Cyber Insurance
Source: Large Hospital System Hit by Ransomware Attack
Ransomware was detected after a suspected cyberattack prompted a shutdown of city government computers in New Orleans on Friday, officials said.
However, the city had not received any ransom demands as of Friday afternoon, Mayor LaToya Cantrell said at a news conference. City officials said the shutdown was done out of “an abundance of caution.”
Cantrell said city employees were ordered to shut down computers around 11 a.m. — an order that rang out through the speakers of a public address system in City Hall. City officials said suspicious activity was noticed as early as 5 a.m. They didn’t go into detail but said the activity included “phishing” emails designed to obtain passwords.
As of Friday afternoon, there was no indication that any city employee had provided passwords or other information that might have inadvertently led to a breach, said City IT director Kim LaGrue.
Officials said they couldn’t say when computers would be back online or whether any important files were compromised. They stressed that city financial records are backed up through a cloud-based system, and said all city emergency services were operating with telephones and radios.
State officials are investigating along with the FBI and Secret Service, Cantrell said.
The hurricane-vulnerable city is prepared for the loss of internet, said the city’s homeland security director, Collin Arnold.
“We will go back to marker boards. We will go back to paper,” he said.
The governor’s office said in an email that the Louisiana National Guard and state police were helping the city gauge the effects of the suspected attack, the second in a matter of days. Last week, a suspected cyberattack was reported in the city of Pensacola, Florida. City officials there confirmed Friday that hackers had tried to extort the city for money, but they have not said whether they planned to pay.
Last month, the Louisiana Office of Motor Vehicle operations was hobbled by a cyberattack.
Source: Cyberattack, Ransomware Hobbles New Orleans City Government
A new approach could make it easier to train computers for “extreme classification problems” like speech translation and answering general questions, researchers say.
The divide-and-conquer approach to machine learning can slash the time and computational resources required.
Online shoppers typically string together a few words to search for the product they want, but in a world with millions of products and shoppers, the task of matching those unspecific words to the right product is one of the biggest challenges in information retrieval.
The researchers will present their work at the 2019 Conference on Neural Information Processing Systems in Vancouver. The results include tests from 2018 when lead researcher Anshumali Shrivastava and lead author Tharun Medini, both of Rice University, visited Amazon Search in Palo Alto, California.
In tests on an Amazon search dataset that included some 70 million queries and more than 49 million products, the researchers showed their approach of using “merged-average classifiers via hashing” (MACH) required a fraction of the training resources of some state-of-the-art commercial systems.
“Our training times are about 7-10 times faster, and our memory footprints are 2-4 times smaller than the best baseline performances of previously reported large-scale, distributed deep-learning systems,” says Shrivastava, an assistant professor of computer science.
Machine learning for better search
Medini, a PhD student, says product search is challenging, in part, because of the sheer number of products. “There are about 1 million English words, for example, but there are easily more than 100 million products online.”
There are also millions of people shopping for those products, each in their own way. Some type a question. Others use keywords. And many aren’t sure what they’re looking for when they start. But because millions of online searches are performed every day, tech companies like Amazon, Google, and Microsoft have a lot of data on successful and unsuccessful searches. And using this data for a type of machine learning called deep learning is one of the most effective ways to give better results to users.
Deep learning systems, or neural network models, are vast collections of mathematical equations that take a set of numbers called input vectors, and transform them into a different set of numbers called output vectors. The networks are composed of matrices with several parameters, and state-of-the-art distributed deep learning systems contain billions of parameters that are divided into multiple layers. During training, data is fed to the first layer, vectors are transformed, and the outputs are fed to the next layer and so on.
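The layer-by-layer transformation described here can be illustrated with a toy sketch. The layer sizes and the ReLU nonlinearity below are illustrative choices for the example, not details taken from the researchers’ system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy layers: parameter matrices mapping a 16-dim input
# to an 8-dim intermediate vector, then to a 4-dim output.
layers = [rng.standard_normal((16, 8)), rng.standard_normal((8, 4))]

def forward(x):
    """Feed a vector through each layer in turn, as described above."""
    for W in layers:
        x = np.maximum(x @ W, 0.0)  # linear transform, then ReLU nonlinearity
    return x

output = forward(rng.standard_normal(16))  # output vector has shape (4,)
```

Training adjusts the entries of each matrix; with billions of parameters spread over many layers and devices, it is the volume of these matrices, and the statistics optimizers keep about them, that drives the memory figures quoted below.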
“Extreme classification problems” are ones with many possible outcomes, and thus, many parameters. Deep learning models for extreme classification are so large that they typically must train on what is effectively a supercomputer, a linked set of graphics processing units (GPUs) where parameters are distributed and run in parallel, often for several days.
“A neural network that takes search input and predicts from 100 million outputs, or products, will typically end up with about 2,000 parameters per product,” Medini says. “So you multiply those, and the final layer of the neural network is now 200 billion parameters. And I have not done anything sophisticated. I’m talking about a very, very dead simple neural network model.”
“It would take about 500 gigabytes of memory to store those 200 billion parameters,” Medini says. “But if you look at current training algorithms, there’s a famous one called Adam that takes two more parameters for every parameter in the model, because it needs statistics from those parameters to monitor the training process. So, now we are at 200 billion times three, and I will need 1.5 terabytes of working memory just to store the model. I haven’t even gotten to the training data. The best GPUs out there have only 32 gigabytes of memory, so training such a model is prohibitive due to massive inter-GPU communication.”
A better way to tackle extreme classification problems
MACH takes a very different approach. Shrivastava describes it with a thought experiment that randomly divides the 100 million products into three classes, which take the form of buckets. “I’m mixing, let’s say, iPhones with chargers and T-shirts all in the same bucket,” he says. “It’s a drastic reduction from 100 million to three.”
In the thought experiment, the 100 million products are randomly sorted into three buckets in two different worlds, which means that products can wind up in different buckets in each world. A classifier is trained to assign searches to the buckets rather than the products inside them, meaning the classifier only needs to map a search to one of three classes of product.
“Now I feed a search to the classifier in world one, and it says bucket three, and I feed it to the classifier in world two, and it says bucket one,” he says. “What is this person thinking about? The most probable class is something that is common between these two buckets. If you look at the possible intersection of the buckets, there are three in world one times three in world two, or nine possibilities,” he says. “So I have reduced my search space to one over nine, and I have only paid the cost of creating six classes.”
Adding a third world, and three more buckets, increases the number of possible intersections by a factor of three. “There are now 27 possibilities for what this person is thinking,” he says. “So I have reduced my search space by one over 27, but I’ve only paid the cost for nine classes. I am paying a cost linearly, and I am getting an exponential improvement.”
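The thought experiment can be sketched in code. This is a toy reconstruction of the bucketing idea, with far smaller numbers than Amazon’s catalog and with the classifiers replaced by a direct lookup of each product’s bucket, since the point here is only the intersection arithmetic:

```python
import random

N, B, R = 1000, 10, 3   # toy values: products, buckets per world, worlds
rng = random.Random(42)

# One independent random bucket assignment per "world".
worlds = [[rng.randrange(B) for _ in range(N)] for _ in range(R)]

def candidates(bucket_per_world):
    """Products landing in the predicted bucket in every world (the intersection)."""
    return [p for p in range(N)
            if all(worlds[w][p] == bucket_per_world[w] for w in range(R))]

# Identify one product by the tuple of buckets it was hashed into.
target = 123
signature = [worlds[w][target] for w in range(R)]
hits = candidates(signature)
# The target always survives the intersection; on average only
# about N / B**R other products collide with it in all R worlds.
assert target in hits
```

With these toy numbers the search space shrinks by a factor of B**R = 1,000 while only R × B = 30 classes are trained, mirroring the linear-cost, exponential-gain trade-off Shrivastava describes.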
In their experiments with Amazon’s training database, the researchers randomly divided the 49 million products into 10,000 classes, or buckets, and repeated the process 32 times. That reduced the number of parameters in the model from around 100 billion to 6.4 billion. And training the model took less time and less memory than some of the best reported training times on models with comparable parameters, including Google’s Sparsely-Gated Mixture-of-Experts (MoE) model, Medini says.
He says MACH’s most significant feature is that it requires no communication between parallel processors. In the thought experiment, that is what the separate, independent worlds represent.
“They don’t even have to talk to each other,” Medini says. “In principle, you could train each of the 32 on one GPU, which is something you could never do with a nonindependent approach.”
“In general, training has required communication across parameters, which means that all the processors that are running in parallel have to share information,” says Shrivastava.
“Looking forward, communication is a huge issue in distributed deep learning. Google has expressed aspirations of training a 1 trillion parameter network, for example. MACH, currently, cannot be applied to use cases with a small number of classes, but for extreme classification, it achieves the holy grail of zero communication.”
Support for the research came from the National Science Foundation, the Air Force Office of Scientific Research, Amazon Research, and the Office of Naval Research.
Source: Rice University
Source: How to train computers faster for ‘extreme’ datasets