A new tool can monitor influenza A virus mutations in real time, researchers report.
The tool could help virologists learn how to stop viruses from replicating, according to the new study.
The gold nanoparticle-based probe measures viral RNA in living cells infected with influenza A. It is the first time virologists have used gold nanoparticle imaging tools to monitor mutations in influenza, and the probe does so with unparalleled sensitivity.
“Our probe will provide important insight on the cellular features that lead a cell to produce abnormally high numbers of viral offspring and on possible conditions that favor stopping viral replication,” says senior author Laura Fabris, an associate professor in the materials science and engineering department in the School of Engineering at Rutgers University-New Brunswick.
Viral infections are a leading cause of illness and death. The new coronavirus, for example, has led to more than 24,000 confirmed cases globally, including more than 3,200 severe ones and nearly 500 deaths as of February 5, according to a World Health Organization report.
Influenza A, a highly contagious virus that emerges anew every year, is concerning because the effectiveness of its vaccine is unpredictable. The virus mutates rapidly, growing resistant to drugs and vaccines as it replicates.
The new study highlights a promising new tool that virologists can use to study the behavior of influenza A, as well as other RNA viruses, in host cells, and to identify the external conditions or cell properties affecting them.
Until now, studying mutations in cells has required destroying them to extract their contents. The new tool enables analysis without killing cells, allowing researchers to get snapshots of viral replication as it occurs.
Next steps include studying multiple segments of viral RNA and monitoring the influenza A virus in animals.
Additional researchers from Rutgers and the University of Illinois at Urbana-Champaign contributed to the study, which appears in the Journal of Physical Chemistry.
A flexible device can harvest the heat energy from the human body to monitor health, researchers report.
The device surpasses all other flexible harvesters that use body heat as their sole energy source.
In a paper in Applied Energy, the researchers report significant enhancements to the flexible body heat harvester they first reported in 2017. The harvesters use heat energy from the human body to power wearable technologies—think of smart watches that measure your heart rate, blood oxygen, glucose, and other health parameters—that never need to have their batteries recharged. The technology relies on the same principles governing rigid thermoelectric harvesters that convert heat to electrical energy.
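For context, the principle the harvesters rely on can be sketched with the standard textbook thermoelectric relations (generic formulas, not figures from the paper). A generator built from $n$ thermocouple legs with Seebeck coefficient $S$, held across a temperature difference $\Delta T$, produces an open-circuit voltage and, into an electrically matched load, a maximum power of

$$V_{\mathrm{oc}} = n\,S\,\Delta T, \qquad P_{\mathrm{max}} = \frac{V_{\mathrm{oc}}^{2}}{4R_{\mathrm{int}}},$$

where $R_{\mathrm{int}}$ is the device's internal resistance. Skin-to-air temperature differences are only a few degrees, which is why getting as much of $\Delta T$ as possible across the thermoelectric legs dominates the design problem.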
Flexible harvesters that conform to the human body are highly desired for use with wearable technologies. Superior skin contact, along with ergonomics and comfort for the wearer, is the core reason for building flexible thermoelectric generators, or TEGs, says corresponding author Mehmet Ozturk, a professor of electrical and computer engineering at North Carolina State University.
The performance and efficiency of flexible harvesters, however, currently trail well behind those of rigid devices, which have been superior at converting body heat into usable energy.
“The flexible device reported in this paper is significantly better than other flexible devices reported to date and is approaching the efficiency of rigid devices, which is very encouraging,” Ozturk says.
The proof-of-concept TEG originally reported in 2017 employed semiconductor elements that were connected electrically in series using liquid-metal interconnects made of EGaIn—a nontoxic alloy of gallium and indium. EGaIn provided both metal-like electrical conductivity and stretchability. Researchers embedded the entire device in a stretchable silicone elastomer.
The upgraded device employs the same architecture but significantly improves on the thermal engineering of the previous version, while increasing the density of the semiconductor elements responsible for converting heat into electricity. One of the changes is a new silicone elastomer—essentially a type of rubber—that encapsulates the EGaIn interconnects.
“The key here is using a high thermal conductivity silicone elastomer doped with graphene flakes and EGaIn,” Ozturk says. The elastomer provides mechanical robustness against punctures while improving the device’s performance.
“Using this elastomer allowed us to boost the thermal conductivity—the rate of heat transfer—by six times, allowing improved lateral heat spreading,” he says.
Ozturk adds that one of the strengths of the technology is that it eliminates the need for device manufacturers to develop new flexible, thermoelectric materials because it incorporates the very same semiconductor elements used in rigid devices. Ozturk says future work will focus on further improving the efficiencies of these flexible devices.
The research group holds a recent patent on the technology. Funding for the work came from NC State’s National Science Foundation-funded Advanced Self-Powered Systems of Integrated Sensors and Technologies Center.
Flexible biosensors are a popular new field of research. Soft pressure sensors are of particular interest because they have many applications in healthcare. Most flexible pressure sensors are based on solid-state components, typically carbon nanotubes or graphene flakes seeded through a stretchy material to maintain conductivity while being squeezed and pulled. The signal passing through such materials changes when they deform, however, which makes sensing with them somewhat inaccurate. Now researchers at KAIST, South Korea’s institute of science and technology, have used a liquid metal to make highly accurate flexible pressure sensors that can be manufactured relatively inexpensively.
Liquid metals, such as Galinstan, an alloy of gallium, indium, and tin, have been tried inside flexible pressure sensors but the devices produced were not sensitive enough to detect heartbeats and other biological signals. The KAIST team created a 3D printed sensor that integrates liquid metal and a rigid microbump array to produce accurate, highly sensitive pressure readings.
The 3D printing makes manufacturing such devices relatively easy, in particular the integration of the microbump array with a channel for the liquid metal. The design achieves high sensitivity, enough to detect heartbeats on the skin, with next-to-nonexistent signal drift even after 10,000 stretching cycles.
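As a hedged aside on the operating principle (a textbook relation, not details from the KAIST paper): a liquid-metal channel behaves as a resistor with

$$R = \frac{\rho L}{A},$$

where $\rho$ is the resistivity of the liquid metal, $L$ the channel length, and $A$ its cross-sectional area. Pressure on the rigid microbumps locally narrows $A$, raising $R$ in a repeatable way, and because the conductor is liquid, the geometry and baseline resistance recover when the pressure is released, consistent with the negligible drift reported above.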
These sensors can withstand moisture and other environmental variables, and they have already been integrated into proof-of-concept devices: a wristband that monitors pulse rate, a heel pressure monitor, and a non-invasive blood pressure sensor that estimates readings from pulse travel times.
“It was possible to measure health indicators including pulse and blood pressure continuously as well as pressure of body parts using our proposed soft pressure sensor,” said Inkyu Park, senior author of the study published in the journal Advanced Healthcare Materials. “We expect it to be used in health care applications, such as the prevention and the monitoring of the pressure-driven diseases such as pressure ulcers in the near future. There will be more opportunities for future research including a whole-body pressure monitoring system related to other physical parameters.”
New Jersey’s largest hospital system said Friday that a ransomware attack last week disrupted its computer network and that it paid a ransom to stop it.
Hackensack Meridian Health did not say in its statement how much it paid to regain control over its systems but said it holds insurance coverage for such emergencies.
The attack forced hospitals to reschedule nonemergency surgeries and left doctors and nurses delivering care without access to electronic records.
The system said experts had advised it not to disclose until Friday that it had been the victim of a ransomware attack. It said its network’s primary clinical systems were operational again, and that information technology specialists were working to bring all of its applications back online.
Hackensack Meridian said it had no indication that any patient information was subject to unauthorized access or disclosure.
It quickly notified the FBI and other authorities and spoke with cybersecurity and forensic experts, it said.
Hackensack Meridian operates 17 acute care and specialty hospitals, nursing homes, outpatient centers, and the psychiatric facility Carrier Clinic.
A new approach could make it easier to train computers for “extreme classification problems” like speech translation and answering general questions, researchers say.
The divide-and-conquer approach to machine learning can slash the time and computational resources required.
Online shoppers typically string together a few words to search for the product they want, but in a world with millions of products and shoppers, the task of matching those unspecific words to the right product is one of the biggest challenges in information retrieval.
The researchers will present their work at the 2019 Conference on Neural Information Processing Systems in Vancouver. The results include tests from 2018 when lead researcher Anshumali Shrivastava and lead author Tharun Medini, both of Rice University, visited Amazon Search in Palo Alto, California.
In tests on an Amazon search dataset that included some 70 million queries and more than 49 million products, the researchers showed their approach, “merged-average classifiers via hashing” (MACH), required a fraction of the training resources of some state-of-the-art commercial systems.
“Our training times are about 7-10 times faster, and our memory footprints are 2-4 times smaller than the best baseline performances of previously reported large-scale, distributed deep-learning systems,” says Shrivastava, an assistant professor of computer science.
Machine learning for better search
Medini, a PhD student, says product search is challenging, in part, because of the sheer number of products. “There are about 1 million English words, for example, but there are easily more than 100 million products online.”
There are also millions of people shopping for those products, each in their own way. Some type a question. Others use keywords. And many aren’t sure what they’re looking for when they start. But because millions of online searches are performed every day, tech companies like Amazon, Google, and Microsoft have a lot of data on successful and unsuccessful searches. And using this data for a type of machine learning called deep learning is one of the most effective ways to give better results to users.
Deep learning systems, or neural network models, are vast collections of mathematical equations that take a set of numbers called an input vector and transform them into a different set of numbers called an output vector. The networks are composed of matrices of parameters, and state-of-the-art distributed deep learning systems contain billions of parameters divided into multiple layers. During training, data is fed to the first layer, vectors are transformed, and the outputs are fed to the next layer, and so on.
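As a purely illustrative sketch of that pipeline (toy sizes and random weights, nothing from the paper), in NumPy:

```python
import numpy as np

# Toy two-layer network: each layer is a matrix of parameters that
# transforms an input vector into an output vector. Sizes are arbitrary.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((128, 64)) * 0.1   # first-layer parameters
W2 = rng.standard_normal((64, 10)) * 0.1    # second-layer parameters

def forward(x):
    hidden = np.maximum(x @ W1, 0.0)  # transform, then ReLU nonlinearity
    return hidden @ W2                # outputs feed the next stage

print(forward(rng.standard_normal(128)).shape)  # one input in -> (10,) out
```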
“Extreme classification problems” are ones with many possible outcomes, and thus many parameters. Deep learning models for extreme classification are so large that they typically must be trained on what is effectively a supercomputer: a linked set of graphics processing units (GPUs) across which parameters are distributed and run in parallel, often for several days.
“A neural network that takes search input and predicts from 100 million outputs, or products, will typically end up with about 2,000 parameters per product,” Medini says. “So you multiply those, and the final layer of the neural network is now 200 billion parameters. And I have not done anything sophisticated. I’m talking about a very, very dead simple neural network model.”
“It would take about 500 gigabytes of memory to store those 200 billion parameters,” Medini says. “But if you look at current training algorithms, there’s a famous one called Adam that takes two more parameters for every parameter in the model, because it needs statistics from those parameters to monitor the training process. So, now we are at 200 billion times three, and I will need 1.5 terabytes of working memory just to store the model. I haven’t even gotten to the training data. The best GPUs out there have only 32 gigabytes of memory, so training such a model is prohibitive due to massive inter-GPU communication.”
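Medini’s arithmetic is easy to reproduce. Here is a sketch using his quoted figures (note the 500-gigabyte estimate implies roughly 2.5 bytes per parameter; plain 32-bit floats would need about 800 gigabytes):

```python
# Back-of-the-envelope check of the numbers quoted above.
products = 100_000_000
params_per_product = 2_000
params = products * params_per_product    # 200 billion parameters
model_gb = 500                            # quoted storage estimate
adam_multiplier = 3                       # Adam keeps 2 extra values per parameter
working_memory_tb = model_gb * adam_multiplier / 1000
print(f"{params:,} parameters -> ~{working_memory_tb} TB with Adam state")
# 200,000,000,000 parameters -> ~1.5 TB with Adam state
```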
A better way to tackle extreme classification problems
MACH takes a very different approach. Shrivastava describes it with a thought experiment that randomly divides the 100 million products into three classes, which take the form of buckets. “I’m mixing, let’s say, iPhones with chargers and T-shirts all in the same bucket,” he says. “It’s a drastic reduction from 100 million to three.”
In the thought experiment, the 100 million products are randomly sorted into three buckets in two different worlds, which means that products can wind up in different buckets in each world. A classifier is trained to assign searches to the buckets rather than the products inside them, meaning the classifier only needs to map a search to one of three classes of product.
“Now I feed a search to the classifier in world one, and it says bucket three, and I feed it to the classifier in world two, and it says bucket one,” he says. “What is this person thinking about? The most probable class is something that is common between these two buckets. If you look at the possible intersection of the buckets there are three in world one times three in world two, or nine possibilities,” he says. “So I have reduced my search space to one over nine, and I have only paid the cost of creating six classes.”
Adding a third world, and three more buckets, increases the number of possible intersections by a factor of three. “There are now 27 possibilities for what this person is thinking,” he says. “So I have reduced my search space by one over 27, but I’ve only paid the cost for nine classes. I am paying a cost linearly, and I am getting an exponential improvement.”
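A minimal sketch of that scheme (hypothetical sizes and random stand-in predictions; the actual system trains one neural classifier per “world,” as described next):

```python
import numpy as np

# MACH-style sketch: classes are randomly hashed into buckets in R
# independent "worlds"; per-class scores are recovered by averaging
# each world's predicted probability for the bucket a class fell into.
K = 100_000   # classes (products); illustrative, the experiments used ~49M
B = 1_000     # buckets per world
R = 16        # independent worlds

rng = np.random.default_rng(0)
bucket_of = [rng.integers(0, B, size=K) for _ in range(R)]  # random hashes

def mach_scores(bucket_probs):
    """bucket_probs: R x B array, world r's predicted bucket distribution
    for one query. Returns a length-K array of per-class scores."""
    scores = np.zeros(K)
    for r in range(R):
        scores += bucket_probs[r][bucket_of[r]]
    return scores / R  # argmax over this is the predicted product

# Stand-in for R trained classifiers; each could run on its own GPU
# with no communication between them.
fake_preds = rng.random((R, B))
fake_preds /= fake_preds.sum(axis=1, keepdims=True)
print(int(np.argmax(mach_scores(fake_preds))))
```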
In their experiments with Amazon’s training database, the researchers randomly divided the 49 million products into 10,000 classes, or buckets, and repeated the process 32 times. That reduced the number of parameters in the model from around 100 billion to 6.4 billion. And training the model took less time and less memory than some of the best reported training times on models with comparable parameters, including Google’s Sparsely-Gated Mixture-of-Experts (MoE) model, Medini says.
He says MACH’s most significant feature is that it requires no communication between parallel processors. In the thought experiment, that is what the separate, independent worlds represent.
“They don’t even have to talk to each other,” Medini says. “In principle, you could train each of the 32 on one GPU, which is something you could never do with a nonindependent approach.”
“In general, training has required communication across parameters, which means that all the processors that are running in parallel have to share information,” says Shrivastava.
“Looking forward, communication is a huge issue in distributed deep learning. Google has expressed aspirations of training a 1 trillion parameter network, for example. MACH, currently, cannot be applied to use cases with small number of classes, but for extreme classification, it achieves the holy grail of zero communication.”
Support for the research came from the National Science Foundation, the Air Force Office of Scientific Research, Amazon Research, and the Office of Naval Research.
care.ai, an artificial intelligence company based in Florida, has partnered with Google to create an autonomous patient monitoring system. By combining multiple sensors in a patient’s room with neural network data analysis, the system can identify and predict accidents and clinical events, in some cases warning healthcare staff before an incident happens.
Preventable accidents and medical issues in healthcare facilities result in thousands of patient deaths and significant patient suffering every year. These include falls, infections, and pressure ulcers. While such issues are theoretically avoidable, in many cases it is difficult or impossible for healthcare staff to identify and anticipate every such instance, and in many cases, they can only hope to react to such circumstances once they arise.
To address this, the patient monitoring system developed by care.ai allows patient rooms to be “self-aware”: patients are automatically monitored 24 hours a day through advanced sensors, and AI identifies and anticipates mishaps and issues, providing healthcare staff with advance warning.
The company claims that the system allows healthcare staff to have more time to focus on their patients’ specific needs, rather than constantly keeping an eye on them or reacting to unforeseen events. Moreover, it should also allow healthcare staff to be much more proactive, and lead to lower overall levels of avoidable mishaps in healthcare facilities.
Medgadget had the opportunity to talk to Chakri Toleti, Founder and CEO of care.ai, about the company’s technology.
Conn Hastings, Medgadget: What inspired you to develop a patient monitoring system?
Chakri Toleti, care.ai: In early 2018, I received a call that my mother in India had fallen and remained on the bathroom floor for half an hour before her caregiver found her. Even the best medical professionals simply can’t be everywhere at once, so they are often delayed in responding to patient issues. This was the catalyst for care.ai, fueled by the idea that patients should be able to maintain independence and privacy while still being kept safe.

Other industries, like transportation and aviation, have really transformed because of AI. Healthcare, however, has been slower to adopt it. I considered autonomous driving – how self-driving technology constantly scans and monitors its environment, responding to pedestrians, roadblocks, debris, etc. I thought, “what if we could bring the autonomous monitoring of a self-driving car to a hospital?” I created care.ai to turn every room into a Self Aware Room™.
Medgadget: Please give us some background on the types of incident the system is designed to anticipate.
Chakri Toleti: These are a few of the use cases we have deployed: staff efficacy, fall prevention, pressure ulcer prevention, and hand sanitization monitoring.

In phase 2 we will be deploying other use cases such as patient elopement prevention (wandering patients), security violations, and visitor management.
Medgadget: What type of sensors are included?
Chakri Toleti: care.ai’s sensors use the most advanced technology of any solution in a healthcare setting. We use a wide range of proprietary sensors within our patented hardware and software framework. We are leveraging NVIDIA’s Jetson platform as a core compute engine and further accelerating the inferencing of the sensor data using Coral’s Edge TPU.
Medgadget: Please give us a basic overview of how the AI system learns to anticipate incidents in a patient’s room.
Chakri Toleti: care.ai’s purposefully architected deep neural networks are trained on our proprietary library of behavioral data – in fact, it’s the world’s largest library of human behavior data in a healthcare setting. Using an edge-computing framework, care.ai’s deep neural networks deliver predictive results within nanoseconds. Drawing on this proprietary library, the sensors identify recognized behaviors and immediately send relevant alerts to the appropriate care team members. The alerts are sent through a mobile app, SMS, or desktop app, or integrated into existing HIS solutions using our SDK/APIs.
Medgadget: How has the collaboration with Google helped the system?
Chakri Toleti: We chose to work with Google because their software and hardware frameworks for AI – and now their capability to bring it to the edge – meet care.ai’s needs for the scale, accuracy and performance necessary to build an enterprise-class platform. Coral’s edge TPU has been instrumental for us to scale, allowing us to preserve patients’ privacy while still conducting constant monitoring and processing.
Medgadget: Is the system in use at present? How do you deal with patient confidentiality and data security?
Chakri Toleti: Consulate Health Care, a leading provider of long-term healthcare services, is currently piloting care.ai. care.ai’s platform is the most scalable and secure AI solution in healthcare: we process all of our data on the edge, on a highly secure, custom-built operating system, and publish the de-identified inference data back to the server within a secure, HIPAA-compliant framework.