The University of Maryland, Baltimore (UMB) and the University of Maryland, Baltimore County (UMBC) recently signed an agreement to leverage UMBC’s AI, machine learning, and cybersecurity expertise to protect medical devices and data from cyberattacks.
The two campuses will also partner on furthering data-driven medical research. According to Stephen Davis, UMB vice president of clinical and translational research, cybersecurity must be part of all clinical and research projects.
While UMB’s expertise lies in medicine, UMBC is more focused on technology, explained Bruce Jarrell, MD, UMB’s executive vice president, provost, and dean of the Graduate School. By partnering, the campuses will strengthen both institutions and other agencies across the state.
“It allows us to use the very broad data that we gather in delivering healthcare to ask research questions that perhaps we might not be able to ask in the past that would allow us to improve patient safety and advance our progress in cures,” Jarrell said in a statement.
“The work that we’re about to do together is a very beautiful example of interdisciplinarity,” Philip Rous, provost and senior vice president for academic affairs at UMBC, said in a statement. “It is centered around bringing together experts, faculty, students with deep knowledge in different areas or perhaps different disciplines essentially to address, solve a problem, advance, innovate.”
UMBC will provide critical capabilities through core resources to UMB’s Institute for Clinical and Translational Research (ICTR), led by Davis.
The partnership will also create a Cybersecurity and Artificial Intelligence Core, which will enable the research team to design machine learning models that analyze large data sets and determine whether additional data could be collected to improve analysis, while helping to uncover and overcome possible cybersecurity risks related to devices and systems.
Notably, the UMB-UMBC partnership will also lend its support to the Baltimore hub of the NIH-funded Clinical and Translational Science Award (CTSA). Officials said UMB joined Johns Hopkins University in the spring on a five-year grant meant to “improve the translational process, getting more treatments to patients more quickly.”
“It’s broader than cybersecurity,” Karl Steiner, vice president for research at UMBC, said in a statement. “Part of it is defense and part of it is scientific offense.”
Security leaders have long stressed that the healthcare sector should lean on outside resources and collaborate to fill cybersecurity gaps.
The Institute for Critical Infrastructure Technology recently told Sen. Mark Warner, D-Virginia: “Meaningful collaboration has proven one of the most under-utilized, cost-effective, and impactful strategies organizations can engage to mitigate hyper-evolving cyber threats. Threat sharing initiatives allow for stronger data protection and more importantly, for proactive deterrence options instead of reactive remediation efforts.”
The UMB-UMBC partnership should create a frame of reference for how to successfully accomplish common security goals, while fueling medical research and patient care.
In diabetes, Medtronic’s efforts in machine learning and artificial intelligence have been well documented, with its joint launch with IBM of the virtual diabetes assistant Sugar.IQ. The company’s MiniMed 670G hybrid closed-loop insulin pump also comes loaded with the Guardian 3 sensor, which uses artificial intelligence to help diabetes patients avoid high and low blood glucose-related events.
Now, the Dublin-based medical device giant is aiming to leverage AI in another business: stroke care. Last week, Medtronic announced that it has entered into a global distribution agreement with Viz.ai, whose artificial intelligence-powered imaging software is aimed at quickly treating patients suspected of having ischemic strokes. Viz.ai, based in San Francisco, received FDA’s de novo clearance for its clinical decision support software for stroke back in February 2018. The company is pursuing regulatory go-aheads in other countries.
In an interview this week, Stacey Pugh, vice president of Medtronic Neurovascular, explained that Viz.ai’s software can mean the difference between a good and a bad medical outcome for a stroke patient with a large-vessel occlusion (LVO). Here’s what the software platform, which connects to CT scanners, can do: the AI quickly determines from a CT scan of the patient’s brain whether the patient has suffered a large-vessel occlusion, flags where it believes the occlusion occurred, and notifies doctors.
“When you look at a scan of perfusion in the brain, there will be a certain amount of areas of perfusion you should expect to see, and this software through AI shows a segment of perfusion is missing, and that’s because the software has flagged it,” Pugh said in a phone interview. “It does this automatically before the scan can be read by a PACS [Picture Archiving and Communication System] and a radiologist looks at it.”
This saves valuable time in a health event where time is of the essence. Per Medtronic’s announcement, a Viz.ai study in “two centers showed that in 95.5 percent of true positive cases, its technology alerted the stroke specialist earlier than the standard of care, saving an average of 52 minutes.”
“We know from all of the research that’s been done that even by moving care up by half an hour in a large-vessel occlusion, you can meaningfully impact outcomes,” Pugh said. “So gaining minutes is a very meaningful outcome.”
Consider the standard protocol today, as Pugh describes it: the patient arrives at a hospital that may not be a comprehensive stroke center, and the doctor orders CT scans of the brain. The radiologist reads the scan and sends the information to the physician who ordered it. That physician then reviews the scan and radiological reports and forwards the information to the receiving physician at the comprehensive stroke center. This process can take some time, and meanwhile the patient is “losing about 1.9 million neurons per minute that you have an LVO in the brain,” Pugh said.
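Taken together, the figures quoted in the article imply a striking back-of-envelope estimate. A minimal sketch (an illustration only; the constants come from the Pugh quote and the two-center study cited above, and the result assumes the loss rate stays constant over the interval):

```python
# Figures quoted in the article:
NEURONS_LOST_PER_MINUTE = 1.9e6   # ~1.9 million neurons per minute during an untreated LVO
MINUTES_SAVED = 52                # average time saved by the AI alert in the two-center study

# Rough estimate of neurons potentially preserved by the earlier alert
neurons_preserved = NEURONS_LOST_PER_MINUTE * MINUTES_SAVED
print(f"~{neurons_preserved / 1e6:.0f} million neurons")  # → ~99 million neurons
```

On these numbers, shaving 52 minutes off the alert-to-treatment window corresponds to roughly 99 million neurons spared, which is why Pugh calls even half-hour gains meaningful.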
Viz.ai’s system cuts down these steps, improving the chances of a better outcome, she said. But there is another advantage: the software is especially valuable at hospitals that may lack stroke expertise.
“These stroke cases get messed up a lot of times at these smaller hospitals,” Pugh said. “You don’t have physicians who are looking at these kinds of scans all the time and so it’s not just about speed, it’s about detecting cases which otherwise wouldn’t be detected.”
The Viz.ai app is able to alert physicians and display images of suspected large-vessel occlusions.
Perhaps equally importantly, the software platform can be preprogrammed to alert doctors about the potential LVO at both the smaller, local hospital where the patient first arrives and the comprehensive stroke care center where the patient will be ultimately treated.
So you have speed and accuracy built into the system and the ability to get everyone on the stroke care team on the same page simultaneously, Pugh said.
The other capability of the Viz.ai system is communication. The HIPAA-compliant, cloud-based application allows doctors on the Viz.ai system not only to view the brain perfusion images of a patient suspected of having a large-vessel occlusion on their smartphones, but also to communicate with each other through the app, thereby streamlining care.
Even comprehensive stroke centers can benefit from the Viz.ai system because of its ability to flag potential trouble areas, Pugh said.
While the terms of Medtronic’s global distribution agreement with Viz.ai were not disclosed, Pugh said that the software can be sold both standalone and as part of a bundle with Medtronic’s stroke hardware products. These include the Solitaire stent retriever, guidewires, and other products that allow surgeons to remove the clot in the brain.
Viz.ai was founded in 2016, according to Crunchbase, by Dr. Chris Mansi, David Golan and Manoj Ramchandran. Mansi is a neurosurgeon and the CEO of the company. The company raised $21 million in a Series A funding round in July 2018 that Kleiner Perkins led and in which GV (formerly Google Ventures) participated.
At the time, a Kleiner investor who joined Viz.ai’s board commented:
“We were attracted not only to the technology behind Viz.ai and its impact on patient outcomes, but also its adoption model. Many new health-tech solutions struggle to gain traction because they are an outside-in sale to medical teams, requiring changes to procedures and workflows. In contrast, physicians and their teams are driving adoption of the Viz.ai platform because it is not disruptive to emergency room procedures and fits naturally into existing systems,” said Mamoon Hamid, General Partner, Kleiner Perkins.
In other words, disruptive technologies can be more easily adopted as long as they do not cause disruption in the general, English sense of the term.
The partnership with Medtronic is further proof that the largest pure-play medical device company believes more players in the stroke market are likely to buy into Viz.ai’s product and vision.
AI and healthcare are a tempting combination for any forward-looking company. Google, one of the biggest corporations on the planet, wants to be in the front row of innovation at the intersection of these fields.
Google and its sister companies, parts of the holding company Alphabet, are investing heavily in AI-powered healthcare solutions. This has potentially huge implications for Google’s more than one billion users.
It is the second try for Google, and the company is not alone
Google made an attempt to invest in this field 10 years ago, but the venture it was involved in, Google Health, failed to work as planned. However, Google has now refocused its efforts on healthcare.
Hundreds of employees are working on these health projects, often partnering with other companies and academics.
The company knows the value of being in the healthcare sphere. “It’s pretty hard to ignore a market that represents about 20 percent of [U.S.] GDP,” says John Moore, an industry analyst at Chilmark Research. “So whether it’s Google or it’s Microsoft or it’s IBM or it’s Apple, everyone is taking a look at what they can do in the healthcare space.”
Google doesn’t disclose the size of its investment, but Moore says it’s likely in the billions of dollars.
The push into AI and health is a natural evolution for a company that has developed algorithms that reach deep into our lives through the Web.
Google is not the only big player to take an interest in healthcare. IBM Watson Health announced on February 13 that it plans to make a 10-year, $50 million investment in research collaborations with two separate academic centers – Brigham and Women’s Hospital and Vanderbilt University Medical Center – to advance the science of artificial intelligence (AI) and its applications to major public health issues.
Both companies understand that AI and machine learning can be put to work in healthcare just as well as in any other field.
“The fundamental underlying technologies of machine learning and artificial intelligence are applicable to all manner of tasks,” said Greg Corrado, a neuroscientist at Google. This is true, he says, “whether those are tasks in your daily life, like getting directions or sorting through email, or the kinds of tasks that doctors, nurses, clinicians and patients face every day.”
Software to help diagnose diabetic retinopathy
Things are moving along pretty fast. Google’s sister company Verily got a billion-dollar boost this year for its already considerable efforts. Among other projects, software that can diagnose diabetic retinopathy is now used in India.
The new research is published in the April edition of Ophthalmology, the Journal of the American Academy of Ophthalmology.
This new study, derived from previous work by Google AI, shows that its algorithm works roughly as well as human experts in screening patients for diabetic retinopathy. More than 29 million Americans have diabetes and are at risk for diabetic retinopathy, a disease that causes blindness. In the disease’s early stages, people typically don’t notice changes in their vision, since the eyes and brain adapt to gradual vision loss. This is why diabetic retinopathy can go undetected and cause irreversible vision loss. People with diabetes must undergo yearly screenings, but even these sometimes prove inaccurate: one study found a 49 percent error rate among internists, diabetologists, and medical residents.
Recent AI advances could improve access to more accurate diabetic retinopathy screening.
To assess the AI’s utility, ten ophthalmologists (four general ophthalmologists, one trained outside the US, four retina specialists, and one retina specialist in training) were asked to read each image once under one of three conditions: unassisted, grades only, and grades + heatmap.
Both of the latter types of assistance improved physicians’ diagnostic accuracy, with the amount of improvement depending on the physician’s level of expertise.
When receiving no assistance, general ophthalmologists were significantly less accurate than the algorithm, and retina specialists were not significantly more accurate than the algorithm. When assisted by the algorithm, general ophthalmologists were as accurate as the AI, but retina specialists exceeded the model’s performance.
“What we found is that AI can do more than simply automate eye screening, it can assist physicians in more accurately diagnosing diabetic retinopathy,” said lead researcher Rory Sayres, PhD. “AI and physicians working together can be more accurate than either alone.”
In another part of the project, Verily is working on tools to monitor blood sugar of diabetic patients. The company is also working to perfect surgical robots that learn from each surgery.
How to collect human data needed to improve AI solutions
It is important to retain medical data that are not usually collected for AI research purposes. To accumulate more useful data, Verily has partnered with Duke University and Stanford University for Project Baseline, which aims to find 10,000 volunteers willing to give necessary data to the company.
But even simple search engine queries can provide useful data about users. Rediet Abebe has studied how search engine queries and social media data can provide information useful to AI-powered solutions in healthcare.
Some of the healthcare specific problems researchers like Abebe are trying to solve through AI are those related to U.S. public health emergencies — like the nation’s disproportionately high maternal mortality rate. Abebe is currently on a 12-member body advising the National Institutes of Health on how AI can better serve biomedical and clinical research. Among the members are Google AI senior research scientist Greg Corrado, Intel principal engineer Michael McManus, Verily engineering director David Glazer, and AI Now Institute cofounder Kate Crawford, as well as professors from Stanford University, MIT, and other universities.
The group is expected to share some intermediary findings in June, while its final advisory thoughts will be delivered to NIH director Francis Collins in December.
“They want us to envision what kind of stuff we’d do to create real bridges between AI and biomedical and public health research,” Abebe said. “I’m really excited about the broad set of techniques we have and the unique style of doing research that the AI community has and using that to help address problems that impact underserved and marginalized communities.”