Tool monitors flu mutations in real time

A new tool can monitor influenza A virus mutations in real time, researchers report.

The tool could help virologists learn how to stop viruses from replicating, according to the new study.

The gold nanoparticle-based probe measures viral RNA in living cells infected with influenza A. It is the first time virologists have used gold nanoparticle-based imaging to monitor influenza mutations, and the approach offers unparalleled sensitivity.

“Our probe will provide important insight on the cellular features that lead a cell to produce abnormally high numbers of viral offspring and on possible conditions that favor stopping viral replication,” says senior author Laura Fabris, an associate professor in the materials science and engineering department in the School of Engineering at Rutgers University-New Brunswick.

Viral infections are a leading cause of illness and death. The new coronavirus, for example, has led to more than 24,000 confirmed cases globally, including more than 3,200 severe cases and nearly 500 deaths as of February 5, according to a World Health Organization report.

Influenza A, a highly contagious virus that returns every year, is a particular concern because the effectiveness of its vaccine is unpredictable. The virus mutates rapidly, growing resistant to drugs and vaccines as it replicates.

The new study highlights a promising new tool for virologists to study the behavior of influenza A, as well as other RNA viruses, in host cells and to identify the external conditions or cell properties that affect them.

Until now, studying mutations in cells has required destroying them to extract their contents. The new tool enables analysis without killing cells, allowing researchers to get snapshots of viral replication as it occurs.

Next steps include studying multiple segments of viral RNA and monitoring the influenza A virus in animals.

Additional researchers from Rutgers and the University of Illinois at Urbana-Champaign contributed to the study, which appears in the Journal of Physical Chemistry.

Source: Rutgers University

Flexible tech harvests body heat to power health wearables

A flexible device can harvest heat energy from the human body to power health-monitoring wearables, researchers report.

The device surpasses all other flexible harvesters that use body heat as their sole energy source.

In a paper in Applied Energy, the researchers report significant enhancements to the flexible body heat harvester they first reported in 2017. The harvesters use heat energy from the human body to power wearable technologies—think of smart watches that measure your heart rate, blood oxygen, glucose, and other health parameters—that never need to have their batteries recharged. The technology relies on the same principles governing rigid thermoelectric harvesters that convert heat to electrical energy.
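
Those principles boil down to two textbook relations: the device develops an open-circuit voltage proportional to the temperature difference across it, and it can deliver at most V^2/(4R) into an electrically matched load. The short Python sketch below works through that arithmetic; the couple count, Seebeck coefficient, temperature difference, and internal resistance are illustrative assumptions, not values from the paper.

    # Back-of-the-envelope output estimate for an idealized thermoelectric
    # generator (TEG). All numbers are illustrative assumptions, not values
    # reported in the paper.

    def teg_output(n_couples, seebeck_uV_per_K, delta_t_K, internal_resistance_ohm):
        """Open-circuit voltage and maximum power into a matched load."""
        v_oc = n_couples * seebeck_uV_per_K * 1e-6 * delta_t_K   # volts
        p_max = v_oc ** 2 / (4 * internal_resistance_ohm)        # watts
        return v_oc, p_max

    # Hypothetical wearable: 100 thermoelectric couples at 400 uV/K each,
    # a ~3 K skin-to-air temperature difference, 10 ohm internal resistance.
    v, p = teg_output(100, 400, 3.0, 10.0)
    print(f"open-circuit voltage: {v * 1e3:.0f} mV, matched-load power: {p * 1e6:.0f} uW")

Getting only hundreds of microwatts out of a few degrees of temperature difference is why the thermal engineering discussed below matters so much: the more of that temperature drop lands across the thermoelectric elements, the more power the harvester can produce.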

Flexible harvesters that conform to the human body are highly desired for use with wearable technologies. Superior skin contact, as well as ergonomic and comfort considerations for the wearer, are the core reasons for building flexible thermoelectric generators, or TEGs, says corresponding author Mehmet Ozturk, a professor of electrical and computer engineering at North Carolina State University.

The performance and efficiency of flexible harvesters, however, currently trail well behind rigid devices, which have been superior in their ability to convert body heat into usable energy.

“The flexible device reported in this paper is significantly better than other flexible devices reported to date and is approaching the efficiency of rigid devices, which is very encouraging,” Ozturk says.

The proof-of-concept TEG originally reported in 2017 employed semiconductor elements that were connected electrically in series using liquid-metal interconnects made of EGaIn—a nontoxic alloy of gallium and indium. EGaIn provided both metal-like electrical conductivity and stretchability. Researchers embedded the entire device in a stretchable silicone elastomer.

The upgraded device employs the same architecture but significantly improves on the thermal engineering of the previous version while increasing the density of the semiconductor elements that convert heat into electricity. One key change is a new silicone elastomer—essentially a type of rubber—that encapsulates the EGaIn interconnects.

“The key here is using a high thermal conductivity silicone elastomer doped with graphene flakes and EGaIn,” Ozturk says. The elastomer provides mechanical robustness against punctures while improving the device’s performance.

“Using this elastomer allowed us to boost the thermal conductivity—the rate of heat transfer—by six times, allowing improved lateral heat spreading,” he says.

Ozturk adds that one of the strengths of the technology is that it eliminates the need for device manufacturers to develop new flexible, thermoelectric materials because it incorporates the very same semiconductor elements used in rigid devices. Ozturk says future work will focus on further improving the efficiencies of these flexible devices.

The research group has a recent patent on the technology. Funding for the work came from NC State’s National Science Foundation-funded Advanced Self-Powered Systems of Integrated Sensors and Technologies Center.

Source: NC State

Liquid Metal Biosensors for Healthcare Monitoring

Flexible biosensors are a popular new field of research. Soft pressure sensors are of particular interest because they have many applications in healthcare. Most flexible pressure sensors are based on solid-state components that rely on carbon nanotubes or graphene: the nanotubes or flakes are seeded through a stretchy material to maintain conductivity while it is squeezed and pulled, but the signal passing through changes when the material is deformed, which makes sensing with such materials somewhat inaccurate. Now researchers at KAIST, the Korea Advanced Institute of Science and Technology in South Korea, have used a liquid metal to make highly accurate flexible pressure sensors that can be manufactured relatively inexpensively.

Liquid metals, such as Galinstan, an alloy of gallium, indium, and tin, have been tried inside flexible pressure sensors but the devices produced were not sensitive enough to detect heartbeats and other biological signals. The KAIST team created a 3D printed sensor that integrates liquid metal and a rigid microbump array to produce accurate, highly sensitive pressure readings.

3D printing makes manufacturing such devices relatively easy, in particular the integration of the microbump array and the channel that holds the liquid metal. The design achieves high sensitivity, enough to detect heartbeats through the skin, with signal drift that is next to nonexistent even after 10,000 stretching cycles.
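
Purely as an illustration of how a sensor like this is typically read out, the sketch below assumes the measurable quantity is a small resistance change in the liquid-metal channel and converts it to pressure through a pre-recorded calibration curve. The resistance values and calibration points are hypothetical, not figures from the study.

    # Hypothetical readout for a liquid-metal-channel pressure sensor.
    # The calibration points and resistance values below are made up for
    # illustration; they are not published values from the study.
    import bisect

    # (relative resistance change dR/R0, pressure in kPa), assumed calibration data
    CALIBRATION = [(0.00, 0.0), (0.02, 5.0), (0.05, 12.0), (0.10, 25.0), (0.20, 50.0)]

    def pressure_from_resistance(r_measured_ohm, r_baseline_ohm):
        """Linearly interpolate pressure from the relative resistance change."""
        dr = (r_measured_ohm - r_baseline_ohm) / r_baseline_ohm
        xs = [x for x, _ in CALIBRATION]
        ys = [y for _, y in CALIBRATION]
        if dr <= xs[0]:
            return ys[0]
        if dr >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_left(xs, dr)
        x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
        return y0 + (y1 - y0) * (dr - x0) / (x1 - x0)

    print(pressure_from_resistance(10.35, 10.0))  # ~8.5 kPa for a 3.5% resistance rise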

These sensors can withstand moisture and other environmental variables and have already been demonstrated in proof-of-concept devices: a wristband that monitors pulse rate, a heel pressure monitor, and a non-invasive blood pressure sensor that estimates readings from pulse travel times.

“It was possible to measure health indicators including pulse and blood pressure continuously as well as pressure of body parts using our proposed soft pressure sensor,” said Inkyu Park, the senior author of the study published in the journal Advanced Healthcare Materials. “We expect it to be used in health care applications, such as the prevention and the monitoring of the pressure-driven diseases such as pressure ulcers in the near future. There will be more opportunities for future research including a whole-body pressure monitoring system related to other physical parameters.”

Study in Advanced Healthcare Materials: Wearable Sensors: Highly Sensitive and Wearable Liquid Metal‐Based Pressure Sensor for Health Monitoring Applications

Via: KAIST

The More Authentication Methods, the Merrier

An Increasingly Diverse, Dynamic Workforce Is Driving Dramatic Change in How Users Authenticate

Remember when being part of an organization’s workforce meant being an employee of that organization, and being “at work” meant sitting in an office at a desktop? In today’s digital age, the latter hasn’t been the case for many people for quite a long time, and in the growing gig economy, the former is becoming less and less common. The workforce is growing more distributed, diverse and dynamic every day, which is driving dramatic change in who’s working, where they’re working, and how they’re connecting with the resources they need to do their work. And if you’re in the business of enabling those connections, it’s driving dramatic change for you.

There are not only more users, but also more kinds of users working in more places, all needing to authenticate in a way that keeps resources secure without making access unduly difficult or time-consuming. And there’s the rub: There’s no one way to achieve that. You need an authentication solution that allows you to authenticate users in multiple ways, both to meet different users’ needs for convenient access and to make multi-factor authentication possible for security purposes. I touched on this in an earlier column about how to evaluate and choose authentication methods; now, let’s take a closer look at some examples of diverse users and their needs, and at what an authentication solution must deliver to meet those needs.

Meet Greg, the Fast-Moving Sales Exec Who’s Never in One Place for Long

We all know this type of user, who is constantly on the go and relies almost entirely on a mobile phone or tablet for access. To make that access easy for him, and secure for the organization, authentication methods that are made for mobility make the most sense. After all, if he has a device in his hand all the time, why not take advantage of it for authentication purposes? Phone-based biometrics, like fingerprint or face recognition, make it easy for this kind of user to quickly authenticate and connect. And on the rare occasions when he needs access through an office workstation or laptop, all he has to do is walk up to it for the device to unlock; as long as he has his authenticating mobile device at hand, proximity authentication does the rest.

Then There’s Judy, Who’s Only in One Place… and Can’t Use a Mobile Device There

Mobile authentication may work perfectly for Greg, but it’s not an option for Judy, a helpdesk representative who works in a call center where mobile devices are prohibited. In this scenario, a physical authenticator like an employer-issued USB security key may be ideal. Hardware-based one-time passcode (OTP) keys may also be great options. There’s also a place for risk-based authentication that takes location into account. Since Judy works in the same building and at the same workstation every day, as long as she logs in from that workstation, she can be quickly authenticated using location services that confirm where she is. This makes authenticating quick and simple, yet secure for the organization. If there’s ever an attempt to log in from a different location using Judy’s credentials, an additional layer of authentication could be required to prove the person attempting to log in is really her. Or the organization could elect to have access automatically denied when a request comes from a different location – which would be reasonable in this case, since Judy only works from one location, without exception.
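
A minimal sketch of that kind of location-aware, risk-based decision logic is shown below; the user names, locations, and policy flag are illustrative and do not correspond to any particular vendor's API.

    # Minimal sketch of a location-aware, risk-based authentication policy.
    # Names, locations, and actions are illustrative, not any vendor's API.

    KNOWN_LOCATIONS = {
        "judy": {"call-center-ws-42"},    # Judy only ever works from one workstation
        "greg": {"mobile", "hq-laptop"},  # Greg roams; more locations are acceptable
    }

    def authentication_decision(user, location, single_location_only=False):
        """Return 'allow', 'step_up' (ask for another factor), or 'deny'."""
        known = KNOWN_LOCATIONS.get(user, set())
        if location in known:
            return "allow"       # expected location: low risk
        if single_location_only:
            return "deny"        # e.g. Judy: never logs in from anywhere else
        return "step_up"         # unexpected location: require an additional factor

    print(authentication_decision("judy", "call-center-ws-42", single_location_only=True))  # allow
    print(authentication_decision("judy", "home-laptop", single_location_only=True))        # deny
    print(authentication_decision("greg", "airport-kiosk"))                                 # step_up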

And Let’s Not Forget the Contractor Who Relies Entirely on Devices You Don’t Control

What about contractors or gig workers who aren’t traditional employees? How do you provide them with the access they require, absent direct control of the devices they’re using to access your organization’s resources? This is a perfect use case for a hardware or software token. A hardware token-based one-time passcode, or a software app that generates passcodes on a mobile phone, will make it possible for non-employees to prove they are who they say they are, no matter what devices they use for access.

Hardware- and software-based OTP solutions also work well for all types of users in environments with no network or internet connectivity. They’re ideal replacements for desktop passwords when the work environment provides no easy way for laptop, desktop or infrastructure components to connect to remote authentication services. In fact, I’m writing this on a flight that has limited Wi-Fi capabilities, and I was able to use my trusty software OTP on my iPhone (in airplane mode) to securely log into my laptop. This is especially important at a time when a lot of attention is paid to protecting connections to web-based applications or cloud-based SaaS applications. We all need to remember the critical nature of information that exists on people’s devices, including laptops, and the need to protect that information.
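
Software tokens like these commonly implement the standard time-based one-time passcode algorithm (TOTP, RFC 6238), which needs only a shared secret and the device clock, so it keeps working with no network connection at all. The sketch below shows the core of that algorithm using Python's standard library; the secret is a placeholder, and this is a simplified illustration rather than any specific product's implementation.

    # Core of a standard time-based one-time passcode (TOTP, RFC 6238) generator,
    # the kind of algorithm a software token app can run entirely offline.
    # The shared secret below is a placeholder, not a real credential.
    import base64, hashlib, hmac, struct, time

    def totp(secret_base32, period=30, digits=6, at_time=None):
        key = base64.b32decode(secret_base32, casefold=True)
        counter = int((at_time if at_time is not None else time.time()) // period)
        msg = struct.pack(">Q", counter)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # server and (offline) token compute the same code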

As the examples above illustrate, diversity in the workforce drives the need for diversity in authentication. As the workforce continues to evolve, a one-size-fits-all approach won’t work for different identity and access management needs across organizations. Managing access in ways that keep diverse users productive and engaged while also keeping your organization’s information secure will continue to be a challenge. Meeting that challenge depends on identity teams understanding the needs of different users and choosing a solution that provides a unified platform for secure enrollment, flexible choices for authentication and identity assurance, and features to reduce the burden on the IT help desk when users lose their credentials or obtain new mobile devices. Keep in mind, too, that adding a layer of risk-based authentication to augment all the options for authentication can further increase security and also reduce user friction.

In my next column, I’ll share ways risk-based authentication can make access experiences better for all the users I’ve described here. As always, awareness is the first step, and I hope the information provided is helpful to you in your journey.

Large Hospital System Hit by Ransomware Attack

New Jersey’s largest hospital system said Friday that a ransomware attack last week disrupted its computer network and that it paid a ransom to stop it.

Hackensack Meridian Health did not say in its statement how much it paid to regain control over its systems but said it holds insurance coverage for such emergencies.

The attack forced hospitals to reschedule nonemergency surgeries and doctors and nurses to deliver care without access to electronic records.

The system said it was advised by experts not to disclose until Friday that it had been the victim of a ransomware attack. It said that its network’s primary clinical systems were once again operational, and that information technology specialists were working to bring all of its applications back online.

Hackensack Meridian said it had no indication that any patient information was subject to unauthorized access or disclosure.

It quickly notified the FBI and other authorities and spoke with cybersecurity and forensic experts, it said.

Hackensack Meridian operates 17 acute care and specialty hospitals, nursing homes, outpatient centers, and the psychiatric facility Carrier Clinic.

How to train computers faster for ‘extreme’ datasets

A new approach could make it easier to train computers for “extreme classification problems” like speech translation and answering general questions, researchers say.

The divide-and-conquer approach to machine learning can slash the time and computational resources required.

Online shoppers typically string together a few words to search for the product they want, but in a world with millions of products and shoppers, the task of matching those unspecific words to the right product is one of the biggest challenges in information retrieval.

The researchers will present their work at the 2019 Conference on Neural Information Processing Systems in Vancouver. The results include tests from 2018 when lead researcher Anshumali Shrivastava and lead author Tharun Medini, both of Rice University, visited Amazon Search in Palo Alto, California.

In tests on an Amazon search dataset that included some 70 million queries and more than 49 million products, the researchers showed their approach of using “merged-average classifiers via hashing” (MACH) required a fraction of the training resources of some state-of-the-art commercial systems.

“Our training times are about 7-10 times faster, and our memory footprints are 2-4 times smaller than the best baseline performances of previously reported large-scale, distributed deep-learning systems,” says Shrivastava, an assistant professor of computer science.

Machine learning for better search

Medini, a PhD student, says product search is challenging, in part, because of the sheer number of products. “There are about 1 million English words, for example, but there are easily more than 100 million products online.”

There are also millions of people shopping for those products, each in their own way. Some type a question. Others use keywords. And many aren’t sure what they’re looking for when they start. But because millions of online searches are performed every day, tech companies like Amazon, Google, and Microsoft have a lot of data on successful and unsuccessful searches. And using this data for a type of machine learning called deep learning is one of the most effective ways to give better results to users.

Deep learning systems, or neural network models, are vast collections of mathematical equations that take a set of numbers called input vectors and transform them into a different set of numbers called output vectors. The networks are composed of matrices of parameters, and state-of-the-art distributed deep learning systems contain billions of parameters divided across multiple layers. During training, data is fed to the first layer, vectors are transformed, and the outputs are fed to the next layer, and so on.
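
A minimal sketch of that layered arithmetic, with made-up layer sizes, looks like the following; a real extreme-classification model would be orders of magnitude larger.

    # Minimal sketch of the layered matrix arithmetic described above.
    # Layer sizes are made up; real extreme-classification models are far larger
    # (roughly 100 million outputs rather than 10,000).
    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [512, 1024, 1024, 10_000]   # input -> hidden -> hidden -> output classes

    # Each layer is a matrix of parameters (weights) plus a bias vector.
    weights = [rng.standard_normal((m, n)) * 0.01 for m, n in zip(layer_sizes, layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        """Transform the input vector layer by layer, feeding each output onward."""
        for w, b in zip(weights[:-1], biases[:-1]):
            x = np.maximum(x @ w + b, 0.0)    # linear transform + ReLU nonlinearity
        return x @ weights[-1] + biases[-1]   # final layer: one score per output class

    scores = forward(rng.standard_normal(512))
    n_params = sum(w.size for w in weights) + sum(b.size for b in biases)
    print(scores.shape, "total parameters:", n_params)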

“Extreme classification problems” are ones with many possible outcomes, and thus, many parameters. Deep learning models for extreme classification are so large that they typically must be trained on what is effectively a supercomputer: a linked set of graphics processing units (GPUs) across which parameters are distributed and run in parallel, often for several days.

“A neural network that takes search input and predicts from 100 million outputs, or products, will typically end up with about 2,000 parameters per product,” Medini says. “So you multiply those, and the final layer of the neural network is now 200 billion parameters. And I have not done anything sophisticated. I’m talking about a very, very dead simple neural network model.”

“It would take about 500 gigabytes of memory to store those 200 billion parameters,” Medini says. “But if you look at current training algorithms, there’s a famous one called Adam that takes two more parameters for every parameter in the model, because it needs statistics from those parameters to monitor the training process. So, now we are at 200 billion times three, and I will need 1.5 terabytes of working memory just to store the model. I haven’t even gotten to the training data. The best GPUs out there have only 32 gigabytes of memory, so training such a model is prohibitive due to massive inter-GPU communication.”
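
Medini's back-of-the-envelope arithmetic is easy to reproduce. The sketch below redoes it; the bytes-per-value figure is an assumption, since the quote does not specify a numerical precision.

    # Reproducing the rough memory estimate quoted above. The bytes-per-value
    # figure is an assumption; the article does not state a numerical precision.
    outputs           = 100_000_000   # products the final layer must score
    params_per_output = 2_000         # rough figure quoted by Medini
    adam_state_factor = 3             # weights plus two Adam statistics per weight

    final_layer_params = outputs * params_per_output             # 200 billion
    values_with_adam   = final_layer_params * adam_state_factor  # 600 billion

    for bytes_per_value in (2, 4):    # half vs. single precision
        weights_gb = final_layer_params * bytes_per_value / 1e9
        total_tb   = values_with_adam * bytes_per_value / 1e12
        print(f"{bytes_per_value} B/value: weights ~{weights_gb:.0f} GB, with Adam state ~{total_tb:.1f} TB")

The roughly 500 gigabytes and 1.5 terabytes in the quote fall between the half- and single-precision cases; either way, the total dwarfs the 32 gigabytes of memory available on the best GPUs.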

A better way to tackle extreme classification problems

MACH takes a very different approach. Shrivastava describes it with a thought experiment in which the 100 million products are randomly divided into three classes, which take the form of buckets. “I’m mixing, let’s say, iPhones with chargers and T-shirts all in the same bucket,” he says. “It’s a drastic reduction from 100 million to three.”

In the thought experiment, the 100 million products are randomly sorted into three buckets in two different worlds, which means that products can wind up in different buckets in each world. A classifier is trained to assign searches to the buckets rather than the products inside them, meaning the classifier only needs to map a search to one of three classes of product.

“Now I feed a search to the classifier in world one, and it says bucket three, and I feed it to the classifier in world two, and it says bucket one,” he says. “What is this person thinking about? The most probable class is something that is common between these two buckets. If you look at the possible intersection of the buckets there are three in world one times three in world two, or nine possibilities,” he says. “So I have reduced my search space to one over nine, and I have only paid the cost of creating six classes.”

Adding a third world, and three more buckets, increases the number of possible intersections by a factor of three. “There are now 27 possibilities for what this person is thinking,” he says. “So I have reduced my search space by one over 27, but I’ve only paid the cost for nine classes. I am paying a cost linearly, and I am getting an exponential improvement.”
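
The thought experiment is easy to mimic in code. The toy sketch below hashes a small catalog into buckets across three independent "worlds" and intersects the predicted buckets; the hashing scheme, the perfect-classifier shortcut, and the catalog size are illustrative stand-ins, not the actual MACH implementation.

    # Toy illustration of the bucket-intersection idea behind MACH.
    # The hash-based bucket assignment and the "perfect classifier" shortcut are
    # illustrative stand-ins, not the implementation from the paper.
    import hashlib

    PRODUCTS = [f"product-{i}" for i in range(1000)]   # stand-in for 100 million products
    N_WORLDS, N_BUCKETS = 3, 3                         # three independent groupings into three buckets

    def bucket(product, world):
        """Deterministically (pseudo-randomly) assign a product to a bucket in one world."""
        digest = hashlib.sha256(f"{world}:{product}".encode()).hexdigest()
        return int(digest, 16) % N_BUCKETS

    # In reality, a small classifier per world maps a *query* to a bucket; here we
    # pretend each world's classifier perfectly predicts the target product's bucket.
    target = "product-123"
    predicted = [bucket(target, w) for w in range(N_WORLDS)]

    # Candidates are the products that land in the predicted bucket in every world.
    candidates = [p for p in PRODUCTS
                  if all(bucket(p, w) == predicted[w] for w in range(N_WORLDS))]

    print(f"{len(PRODUCTS)} products -> {len(candidates)} candidates; target kept: {target in candidates}")

Each product survives only if it falls into the predicted bucket in all three worlds, so the candidate set shrinks by roughly a factor of 27 while only nine bucket classes are ever trained: the linear-cost, exponential-gain trade-off Shrivastava describes.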

In their experiments with Amazon’s training database, the researchers randomly divided the 49 million products into 10,000 classes, or buckets, and repeated the process 32 times. That reduced the number of parameters in the model from around 100 billion to 6.4 billion. And training the model took less time and less memory than some of the best reported training times on models with comparable parameters, including Google’s Sparsely-Gated Mixture-of-Experts (MoE) model, Medini says.

He says MACH’s most significant feature is that it requires no communication between parallel processors. In the thought experiment, that is what the separate, independent worlds represent.

“They don’t even have to talk to each other,” Medini says. “In principle, you could train each of the 32 on one GPU, which is something you could never do with a nonindependent approach.”

“In general, training has required communication across parameters, which means that all the processors that are running in parallel have to share information,” says Shrivastava.

“Looking forward, communication is a huge issue in distributed deep learning. Google has expressed aspirations of training a 1 trillion parameter network, for example. MACH, currently, cannot be applied to use cases with small number of classes, but for extreme classification, it achieves the holy grail of zero communication.”

Support for the research came from the National Science Foundation, the Air Force Office of Scientific Research, Amazon Research, and the Office of Naval Research.

Source: Rice University
