Firefox 72 Blocks Fingerprinting Scripts by Default

Mozilla this week released Firefox 72 to the stable channel, adding privacy protections that block fingerprinting scripts by default.

Long focused on protecting users’ privacy when browsing the Internet, Mozilla launched Enhanced Tracking Protection (ETP) last year, which keeps users safe from cross-site tracking.

Last week, it also announced that it would let users delete telemetry data, a reaction to the California Consumer Privacy Act (CCPA).

The release of Firefox 72 this week marked another milestone in the organization’s effort toward a more private browsing experience, by expanding the protection to also include browser fingerprinting.

Fingerprinting scripts collect unique characteristics of a user’s browser and device in order to identify that user. Collected details include screen size, browser and operating system, installed fonts, and other device properties.

The collected information is then used to differentiate one user’s browser from another, allowing companies to track users for long periods, even after they have cleared their browsing data.
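As a rough illustration of why fingerprinting survives a cookie purge, the technique boils down to hashing a handful of device attributes into one stable identifier. This is a toy sketch, not any specific tracker’s code, and the attribute values are made up:

```python
import hashlib

def fingerprint(attributes):
    """Hash a set of device attributes into one stable identifier."""
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "screen": "1920x1080",
    "os": "Windows 10",
    "fonts": "Arial,Calibri,Georgia",
    "timezone": "UTC-8",
}
# The same attributes always yield the same ID -- clearing cookies changes nothing.
print(fingerprint(device))
```

Because the identifier is derived from the device itself rather than stored in the browser, only changing the reported attributes (or blocking the script) breaks the link.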

Both standards bodies and browser vendors agree that fingerprinting is harmful, but its use has increased across the web over the past ten years, Mozilla says.

Protecting users from fingerprinting without breaking websites, the organization explains, involves blocking parties that participate in fingerprinting and modifying or removing APIs used for fingerprinting.

With the release of Firefox 72, the organization is now blocking third-party requests to companies known to engage in fingerprinting.

Thus, these companies should no longer be able to gather device details using JavaScript and will not receive information revealed through network requests either — such as the user’s IP address or the user agent header.

The protection is provided in partnership with Disconnect, which maintains a list of companies known for cross-site tracking and a list of those that fingerprint users. Firefox now blocks all parties at the intersection of these two classifications.
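The blocking policy described above can be sketched in a few lines. The domain names here are hypothetical stand-ins, not entries from Disconnect’s actual lists:

```python
# Hypothetical list excerpts -- these domains are made up, not Disconnect data.
cross_site_trackers = {"tracker.example", "ads.example", "fp-metrics.example"}
fingerprinters = {"fp-metrics.example", "canvas-id.example"}

# Firefox 72's policy: block only domains appearing on BOTH lists.
blocked = cross_site_trackers & fingerprinters

def should_block(domain):
    return domain in blocked

print(should_block("fp-metrics.example"))  # True: on both lists
print(should_block("canvas-id.example"))   # False: fingerprinting list only
```

Requiring membership in both classifications is a conservative choice: it limits blocking to parties for which there is evidence of both tracking and fingerprinting, reducing the risk of breaking legitimate sites.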

Mozilla also adapted measurement techniques from previous academic research to help find new fingerprinting domains, and explains that Disconnect performs a rigorous evaluation of each potential domain that is added to the list.

Following this first step, Mozilla plans on expanding the fingerprinting protection through both script blocking and API-level protections.

“We will continue to monitor fingerprinting on the web, and will work with Disconnect to build out the set of domains blocked by Firefox. Expect to hear more updates from us as we continue to strengthen the protections provided by ETP,” Mozilla concludes.

In addition to this privacy enhancement, Firefox 72 includes patches for 11 vulnerabilities: five rated high severity, five medium severity, and one low severity.

The high-severity bugs include a memory corruption in parent processes during new process initialization on Windows, bypass of @namespace CSS sanitization during pasting, type confusion in XPCVariant.cpp, and memory safety bugs in both Firefox 71 and Firefox ESR 68.3.

Medium-severity flaws patched this month include the Windows keyboard retaining word suggestions in Private Browsing mode; Python files that could be inadvertently executed when opening a download; Content Security Policy not being applied to XSL stylesheets in XML documents; a heap address disclosure in the parent process during content process initialization on Windows; and CSS sanitization failing to escape HTML tags.

The low-severity bug patched in this release could result in an invalid state transition in the TLS state machine, as the client may negotiate a protocol version lower than TLS 1.3 after a HelloRetryRequest has been sent.

Related: Firefox 72 Will Let Users Delete Telemetry Data

Related: Mozilla Hardens Firefox Against Injection Attacks

Ionut Arghire is an international correspondent for SecurityWeek.


How to train computers faster for ‘extreme’ datasets

A new approach could make it easier to train computers for “extreme classification problems” like speech translation and answering general questions, researchers say.

The divide-and-conquer approach to machine learning can slash the time and computational resources required.

Online shoppers typically string together a few words to search for the product they want, but in a world with millions of products and shoppers, the task of matching those unspecific words to the right product is one of the biggest challenges in information retrieval.

The researchers will present their work at the 2019 Conference on Neural Information Processing Systems in Vancouver. The results include tests from 2018 when lead researcher Anshumali Shrivastava and lead author Tharun Medini, both of Rice University, visited Amazon Search in Palo Alto, California.

In tests on an Amazon search dataset that included some 70 million queries and more than 49 million products, the researchers showed their approach of using “merged-average classifiers via hashing” (MACH) required a fraction of the training resources of some state-of-the-art commercial systems.

“Our training times are about 7-10 times faster, and our memory footprints are 2-4 times smaller than the best baseline performances of previously reported large-scale, distributed deep-learning systems,” says Shrivastava, an assistant professor of computer science.

Machine learning for better search

Medini, a PhD student, says product search is challenging, in part, because of the sheer number of products. “There are about 1 million English words, for example, but there are easily more than 100 million products online.”

There are also millions of people shopping for those products, each in their own way. Some type a question. Others use keywords. And many aren’t sure what they’re looking for when they start. But because millions of online searches are performed every day, tech companies like Amazon, Google, and Microsoft have a lot of data on successful and unsuccessful searches. And using this data for a type of machine learning called deep learning is one of the most effective ways to give better results to users.

Deep learning systems, or neural network models, are vast collections of mathematical equations that take a set of numbers called input vectors, and transform them into a different set of numbers called output vectors. The networks are composed of matrices with several parameters, and state-of-the-art distributed deep learning systems contain billions of parameters that are divided into multiple layers. During training, data is fed to the first layer, vectors are transformed, and the outputs are fed to the next layer and so on.
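That layer-by-layer transformation can be sketched minimally. The toy two-layer network below, with random weights, is only meant to show vectors flowing through matrices, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two layers: an 8-dim input vector -> 16-dim hidden vector -> 4-dim output.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # first layer transforms, ReLU keeps positives
    return W2 @ h + b2                # second layer produces the output vector

x = rng.normal(size=8)  # input vector (e.g., an encoded query)
print(forward(x).shape)  # (4,)
```

In an extreme classification setting, the final output vector would have one entry per possible outcome, which is exactly where the parameter counts discussed below explode.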

“Extreme classification problems” are ones with many possible outcomes, and thus, many parameters. Deep learning models for extreme classification are so large that they typically must train on what is effectively a supercomputer: a linked set of graphics processing units (GPUs) across which parameters are distributed and run in parallel, often for several days.

“A neural network that takes search input and predicts from 100 million outputs, or products, will typically end up with about 2,000 parameters per product,” Medini says. “So you multiply those, and the final layer of the neural network is now 200 billion parameters. And I have not done anything sophisticated. I’m talking about a very, very dead simple neural network model.”

“It would take about 500 gigabytes of memory to store those 200 billion parameters,” Medini says. “But if you look at current training algorithms, there’s a famous one called Adam that takes two more parameters for every parameter in the model, because it needs statistics from those parameters to monitor the training process. So, now we are at 200 billion times three, and I will need 1.5 terabytes of working memory just to store the model. I haven’t even gotten to the training data. The best GPUs out there have only 32 gigabytes of memory, so training such a model is prohibitive due to massive inter-GPU communication.”
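Medini’s arithmetic can be checked directly. The bytes-per-parameter figure below is inferred from his quoted numbers (about 500 GB for 200 billion parameters) rather than stated in the article:

```python
products = 100_000_000        # "easily more than 100 million products online"
params_per_product = 2_000    # "about 2,000 parameters per product"
bytes_per_param = 2.5         # implied by "500 GB for 200 billion parameters"

final_layer = products * params_per_product   # 200 billion parameters
model_gb = final_layer * bytes_per_param / 1e9  # ~500 GB for the weights alone

# Adam keeps two extra statistics per parameter, tripling the footprint.
adam_tb = model_gb * 3 / 1e3                    # ~1.5 TB of working memory

print(final_layer, model_gb, adam_tb)  # 200000000000 500.0 1.5
```

Against a 32 GB GPU, the 1.5 TB figure makes clear why such a model cannot fit on one device and why the parameters must be sharded, with the inter-GPU communication costs that follow.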

A better way to tackle extreme classification problems

MACH takes a very different approach. Shrivastava describes it with a thought experiment that randomly divides the 100 million products into three classes, which take the form of buckets. “I’m mixing, let’s say, iPhones with chargers and T-shirts all in the same bucket,” he says. “It’s a drastic reduction from 100 million to three.”

In the thought experiment, the 100 million products are randomly sorted into three buckets in two different worlds, which means that products can wind up in different buckets in each world. A classifier is trained to assign searches to the buckets rather than the products inside them, meaning the classifier only needs to map a search to one of three classes of product.

“Now I feed a search to the classifier in world one, and it says bucket three, and I feed it to the classifier in world two, and it says bucket one,” he says. “What is this person thinking about? The most probable class is something that is common between these two buckets. If you look at the possible intersection of the buckets, there are three in world one times three in world two, or nine possibilities,” he says. “So I have reduced my search space to one over nine, and I have only paid the cost of creating six classes.”

Adding a third world, and three more buckets, increases the number of possible intersections by a factor of three. “There are now 27 possibilities for what this person is thinking,” he says. “So I have reduced my search space to one over 27, but I’ve only paid the cost for nine classes. I am paying a cost linearly, and I am getting an exponential improvement.”
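The thought experiment can be sketched in code. Here the trained per-world classifiers are replaced by hand-picked bucket predictions, so this only illustrates how intersecting a few coarse buckets narrows 1,000 toy products down to a small candidate set:

```python
import random

random.seed(1)
N_PRODUCTS, N_BUCKETS, N_WORLDS = 1_000, 3, 2

# Each "world" independently assigns every product to a random bucket.
worlds = [
    {p: random.randrange(N_BUCKETS) for p in range(N_PRODUCTS)}
    for _ in range(N_WORLDS)
]

def candidates(predicted_buckets):
    """Products consistent with the bucket predicted in every world."""
    per_world = [
        {p for p, b in world.items() if b == bucket}
        for world, bucket in zip(worlds, predicted_buckets)
    ]
    return set.intersection(*per_world)

# Pretend the per-world classifiers mapped a query to bucket 2 and bucket 0:
hits = candidates([2, 0])
print(len(hits))  # a small fraction (roughly 1/9) of the 1,000 products
```

Because the random assignments are independent, each extra world multiplies the number of distinguishable intersections while adding only a handful of classes, which is the linear-cost, exponential-gain trade Shrivastava describes.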

In their experiments with Amazon’s training database, the researchers randomly divided the 49 million products into 10,000 classes, or buckets, and repeated the process 32 times. That reduced the number of parameters in the model from around 100 billion to 6.4 billion. And training the model took less time and less memory than some of the best reported training times on models with comparable parameters, including Google’s Sparsely-Gated Mixture-of-Experts (MoE) model, Medini says.

He says MACH’s most significant feature is that it requires no communication between parallel processors. In the thought experiment, that is what the separate, independent worlds represent.

“They don’t even have to talk to each other,” Medini says. “In principle, you could train each of the 32 on one GPU, which is something you could never do with a nonindependent approach.”

“In general, training has required communication across parameters, which means that all the processors that are running in parallel have to share information,” says Shrivastava.

“Looking forward, communication is a huge issue in distributed deep learning. Google has expressed aspirations of training a 1 trillion parameter network, for example. MACH currently cannot be applied to use cases with a small number of classes, but for extreme classification, it achieves the holy grail of zero communication.”

Support for the research came from the National Science Foundation, the Air Force Office of Scientific Research, Amazon Research, and the Office of Naval Research.

Source: Rice University

The post How to train computers faster for ‘extreme’ datasets appeared first on Futurity.

Doctors give electronic health records an ‘F’

Source: Yale University

Electronic health records may improve quality and efficiency for doctors and patients alike—but physicians give them an “F” for usability and they may contribute to burnout, according to new research.

By contrast, in similar but separate studies, Google’s search engine earned an “A” and ATMs a “B.” The spreadsheet software Excel got an “F.”

“A Google search is easy,” says Edward R. Melnick, assistant professor of emergency medicine and director of the Clinical Informatics Fellowship at Yale University. “There’s not a lot of learning or memorization; it’s not very error-prone. Excel, on the other hand, is a super-powerful platform, but you really have to study how to use it. EHRs mimic that.”

[Figure: bar graph] Usability ratings for everyday products, as measured with the System Usability Scale. Google: 93%; microwave: 87%; ATM: 82%; Amazon: 82%; Microsoft Word: 76%; digital video recorder: 74%; global positioning system: 71%; Microsoft Excel: 57%; electronic health records: 45%. (Credit: Michael S. Helfenbein)

There are various electronic health record systems that hospitals and other medical clinics use to digitally manage patient information. These systems replace hard-copy files, storing clinical data, such as medications, medical history, lab and radiology reports, and physician notes.

The systems were developed to improve patient care by making health information easy for healthcare providers to access and share, reducing medical error.

But the rapid rollout of EHRs following the Health Information Technology for Economic and Clinical Health Act of 2009, which pumped $27 billion of federal incentives into the adoption of EHRs in the US, forced doctors to adapt quickly to often complex systems, leading to increasing frustration.

Two hours of personal time

According to the study, physicians spend one to two hours on EHRs and other deskwork for every hour spent with patients, and an additional one to two hours daily of personal time on EHR-related activities.

“As recently as 10 years ago, physicians were still scribbling notes,” Melnick says. “Now, there’s a ton of structured data entry, which means that physicians have to check a lot of boxes.

“Often this structured data does very little to improve care; instead, it’s used for billing. And looking for communication from another doctor or a specific test result in a patient’s chart can be like trying to find a needle in a haystack. The boxes may have been checked, but the patient’s story and information have been lost in the process.”

For the current study, published in Mayo Clinic Proceedings, Melnick zeroed in on the effect of EHRs on physician burnout.

The AMA, along with researchers at the Mayo Clinic and Stanford University, surveys over 5,000 physicians every three years on topics related to burnout. Most recently, the burnout rate was 43.9%—a drop from the 54.4% of 2014, but still worryingly high, researchers say. The same survey found that burnout for the general US population was 28.6%.

Electronic health records and burnout

Researchers also asked one quarter of the respondents to rate their EHR’s usability using the System Usability Scale (SUS), a measure previously applied in over 1,300 other usability studies across various industries.

Users in other studies ranked Google’s search engine an “A.” Microwave ovens, ATMs, and Amazon got “Bs.” Microsoft Word, DVRs, and GPS got “Cs.” Microsoft Excel, with its steep learning curve, got an “F.”

In Melnick’s study, EHRs came in last, with a score of 45—an even lower “F” score than Excel’s 57.
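For reference, a SUS score is computed from ten 1-to-5 ratings using the scale’s standard scoring rule. This formula comes from the SUS itself, not from Melnick’s study:

```python
def sus_score(responses):
    """System Usability Scale: ten items rated 1-5, scored out of 100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded, even-numbered negatively.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All-neutral answers (3s) land exactly mid-scale:
print(sus_score([3] * 10))  # 50.0
```

On this 0-to-100 scale, the EHRs’ score of 45 sits well below the mid-point a fully neutral respondent would produce.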

Further, EHR usability ratings correlated highly with burnout—the lower physicians rated their EHR, the higher the likelihood that they also reported symptoms of burnout.

The study found that certain physician specialties rated their EHRs especially poorly—among them, dermatology, orthopedic surgery, and general surgery.

Specialties with the highest SUS scores included anesthesiology, general pediatrics, and pediatric subspecialties.

Demographic factors like age and location matter, too. Older physicians found EHRs less usable, and doctors working in veterans’ hospitals rated their EHR higher than physicians in private practice or in academic medical centers.

Benchmarking physicians’ feelings about EHRs will make it possible to track the effect of technology improvements on usability and burnout, Melnick says.

“We’re trying to improve and standardize EHRs,” Melnick says. “The goal is that with future work, we won’t have to ask doctors how they feel about the EHR or even how burned out they are, but that we can see how doctors are interfacing with the EHR and, when it improves, we can see that improvement.”

The post Doctors give electronic health records an ‘F’ appeared first on Futurity.

How proteins stabilize and repair broken DNA

New research shows how some types of proteins stabilize damaged DNA and thereby preserve DNA function and integrity.

Two proteins called 53BP1 and RIF1 engage to build a three-dimensional “scaffold” around the broken DNA strands. This scaffold then locally concentrates special repair proteins, which are in short supply, and that are critically needed to repair DNA without mistakes.

“This could be compared to putting a plaster cast on a broken leg.”

The finding also explains why people with congenital or acquired defects in certain proteins cannot keep their DNA stable and develop diseases such as cancer.

Every day, the body’s cells divide millions of times, and the maintenance of their identity requires that a mother cell passes complete genetic information to daughter cells without mistakes.

This is not a small task because our DNA is constantly under attack, not only from the environment but also from the cell’s own metabolic activities. As a result, DNA strands can break at least once during each cell division cycle and this frequency can be increased by certain lifestyles, such as smoking, or in individuals who are born with defects in DNA repair.

In turn, this can lead to irreversible genetic damage and ultimately cause diseases such as cancer, immune deficiency, dementia, or developmental defects.

“Understanding the body’s natural defense mechanisms enables us to better understand how certain proteins communicate and network to repair damaged DNA,” says professor Jiri Lukas, director of the Novo Nordisk Foundation Center for Protein Research.

“This could be compared to putting a plaster cast on a broken leg; it stabilizes the fracture and prevents the damage from getting worse and reaching a point where it can no longer heal,” says first author Fena Ochs, a postdoctoral researcher at the Novo Nordisk Foundation Center for Protein Research.

The previous assumption was that proteins such as 53BP1 and RIF1 act only in the closest neighborhood of the DNA fracture. However, with the help of super-resolution microscopes, the scientists could see that error-free repair of broken DNA requires a much larger structure.

“Roughly speaking, the difference between the proportions of the protein-scaffolding and the DNA fracture corresponds to a basketball and a pin head,” says Ochs.

According to the researchers, the fact that the supporting protein scaffold is so much bigger than the fracture underlines how important it is for the cell to not only stabilize the DNA wound, but also the surrounding environment.

This allows it to preserve the integrity of the damaged site and its neighborhood and increases the likelihood of attracting the highly specialized “workers” in the cell to perform the actual repair.

One of the most notable benefits of basic research such as the new study is that it provides scientists with molecular tools to simulate, and thus better understand, conditions that happen during development of a real disease.

When the scientists prevented cells from building the protein scaffold around fractured DNA, they observed that large parts of the neighboring chromosome rapidly fell apart.

This caused the damaged cells to attempt alternative repair strategies, which were often futile and exacerbated the destruction of the genetic material.

According to the researchers, this can explain why people who lack the scaffold proteins are prone to diseases caused by unstable DNA.

The study appears in the journal Nature.

Source: University of Copenhagen

The post How proteins stabilize and repair broken DNA appeared first on Futurity.

DDoS Attack Hits Amazon Web Services

Amazon Web Services (AWS) customers experienced service interruptions yesterday as the company struggled to fight off a distributed denial-of-service (DDoS) attack.

In such an assault, attackers flood the target with traffic until the service becomes unreachable.

While customers complained that they could not reach AWS S3 buckets, the company revealed on its status page yesterday that it was having issues resolving AWS Domain Name System (DNS) names.

The issues, AWS said, lasted around eight hours, between 10:30 AM and 6:30 PM PDT. A very small number of specific DNS names, the company revealed, experienced a higher error rate starting at 5:16 PM.

While reporting on Twitter that it was investigating reports of intermittent DNS resolution errors with Route 53 and external DNS providers, Amazon also sent notifications to customers to inform them of an ongoing DDoS attack.

“We are investigating reports of occasional DNS resolution errors. The AWS DNS servers are currently under a DDoS attack. Our DDoS mitigations are absorbing the vast majority of this traffic, but these mitigations are also flagging some legitimate customer queries at this time,” AWS told customers.

The company also explained that the DNS resolution issues were also intermittently impacting other AWS Service endpoints, including ELB, RDS, and EC2, given that they require public DNS resolution.

During the outage, AWS was redirecting users to its status page, which currently shows that all services are operating normally.

One of the affected companies was Digital Ocean, which experienced issues accessing S3/RDS resources inside Droplets across several regions starting October 22.

“Our Engineering team is continuing to monitor the issue impacting accessibility to S3/RDS/ELB/EC2 resources across all regions,” the company wrote on the incident’s status page at 23:25 UTC on Oct 22.

Access to the impacted resources has been restored, but the company announced yesterday that it was still monitoring for possible issues.

Related: Compromised AWS API Key Allowed Access to Imperva Customer Data

Related: AWS S3 Buckets Exposed Millions of Facebook Records

Related: Mirai-Based Botnet Launches Massive DDoS Attack on Streaming Service

Microsoft, NIST to Partner on Best Practice Patch Management Guide

The NIST National Cybersecurity Center of Excellence (NCCoE) has partnered with Microsoft to develop concise industry guidance and standards on enterprise patch management best practices.

The pair is also calling on vendors and organizations to join the effort, including those that provide technology offerings for patch management support or those with successful enterprise patch management experience.

According to Mark Simos, Microsoft’s Cybersecurity Solutions Group lead cybersecurity architect, the effort began following the massive 2017 WannaCry cyberattack. Microsoft released a patch for the targeted flaw months before the global cyber incident, but many organizations failed to patch, which allowed the malware to proliferate.

“We learned a lot from this journey, including how important it is to build clearer industry guidance and standards on enterprise patch management,” Simos wrote.

Over the last year, NCCoE and Microsoft have worked closely with the Center for Internet Security, Department of Homeland Security, and the Cybersecurity and Infrastructure Security Agency (CISA) to better understand the risks and necessary patching processes.

The groups also sat down with their customers to better understand the challenges and just why organizations aren’t applying timely patches. Microsoft found that many organizations were struggling to determine the right type of testing to use for patches, as well as just how quickly patches should be applied.

The project will include building common enterprise patch management reference architectures and processes. Vendors will also build and validate implementation instructions in the NCCoE lab, and the results will be shared in a NIST Special Publication as a practice guide.

For the healthcare sector, a patch management guide would be critical, as industry stakeholders have long stressed that patching issues have added significant vulnerabilities to a sector that relies heavily on legacy platforms.

In March, CHIME told Sen. Mark Warner, D-Virginia, that patching, data inventory, and a lack of regulatory alignment are some of healthcare’s greatest vulnerabilities.

To NIST, the issue goes beyond awareness, as there is widespread agreement that patching can be effective at mitigating some security risks. Organizations are challenged by the resource-intensive patching process, as well as by the concern that patching can reduce system and service availability.

Often, attempts to expedite the process, like not testing patches before production deployment, can inadvertently break system functionality and disrupt business operations, NIST officials explained. However, patching delays increase the risk that a hacker will take advantage of system vulnerabilities.

For NIST, the partnership with Microsoft will examine how both commercial and open-source tools can help with some of the biggest challenges of patching, including “system characterization and prioritization, patch testing, and patch implementation tracking and verification.”

Ultimately, this project will result in a NIST Cybersecurity Practice Guide, a publicly available description of the practical steps needed to implement a cybersecurity reference design that addresses this challenge throughout the device lifecycle.

“Applying patches is a critical part of protecting your system, and we learned that while it isn’t as easy as security departments think, it isn’t as hard as IT organizations think,” Simos explained. “In many ways, patching is a social responsibility because of how much society has come to depend on technology systems that businesses and other organizations provide.”

“This situation is exacerbated today as almost all organizations undergo digital transformations, placing even more social responsibility on technology,” he added. “Ultimately, we want to make it easier for everyone to do the right thing and are issuing this call to action.”

Interested stakeholders should visit the NCCoE posting in the Federal Register for more information.
