Concentric Emerges from Stealth with AI Document Classification Product and $7.5 Million Seed Funding
Unstructured documents — especially those that have been given wrong or no sensitivity classification — are among the most difficult assets for any enterprise to track and secure. Problems come from staff inappropriately sharing and insecurely storing documents. Ensuing threats go beyond the compliance concern of leaking personal data, and include the danger of sensitive commercial data falling into the wrong hands.
San Jose, California-based Concentric has emerged from stealth with a new deep learning solution called Semantic Intelligence, which uses language analysis to determine the sensitivity of individual documents and so helps solve this problem. At the same time, Concentric has raised $7.5 million in seed funding from Clear Ventures, Engineering Capital, Homebrew and Core Ventures. The company was founded in 2018.
In a separate report (PDF) published January 29, 2020, Concentric provides the results of analyzing 26 million unstructured documents from companies in the technology, financial and healthcare sectors. It found that each company held just short of 10 million unstructured documents, and each employee owned almost 2,000 of them. Of those, 253 per employee were business critical, and 38 per employee were at risk. Over 627,000 source code files and over 1 million trading files were also found.
But Concentric did not simply find files that were at risk; it found files that were actually exposed. Per employee, five business-critical documents were erroneously shared with an external party. Twenty-one were improperly shared with other groups. Nine were erroneously shared with internal users. And three business-critical documents were wrongly classified.
Manual classification of this volume of documents requires extensive staff training and is prone to error. Manual classification done in arrears is so costly and time-consuming that it is a project often delayed, sometimes indefinitely. Existing automated rule-based methods of searching documents for key words or phrases lead to large numbers of false positives, causing many documents to be over-classified and reducing the general availability of data to the company.
Concentric brings deep learning language analysis that can analyze context. It can tell the difference, for example, between a personal email quoting the dollar-value of a home, and the dollar-figure quoted in sales or M&A documents.
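The weakness of the rule-based approach described above can be shown with a toy example. Everything below (the rule, the document names, and the text) is invented for illustration: a regex that flags any dollar figure cannot tell a personal email apart from an M&A memo, so both are classified as sensitive.

```python
import re

# Toy rule-based classifier of the kind the article criticizes:
# any document mentioning a dollar figure is flagged "sensitive".
DOLLAR_RULE = re.compile(r"\$\s?\d[\d,]*")

docs = {
    "personal_email": "Congrats on the new house -- $450,000 is a steal!",
    "ma_memo": "Proposed acquisition price: $450,000,000 subject to diligence.",
}

# Both documents match the rule: the personal email is a false positive.
flagged = sorted(name for name, text in docs.items() if DOLLAR_RULE.search(text))
print(flagged)  # ['ma_memo', 'personal_email']
```

A context-aware model, by contrast, would weigh the surrounding language rather than the presence of a dollar sign alone.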
“Discovering and protecting unstructured data is a huge problem,” Concentric CEO and founder Karthik Krishnan told SecurityWeek. “The challenge is that this data is complex: contracts, NDAs, source code, design documents, and so on. Traditional methods of discovery have relied on using word patterns, but this lacks the context to be able to accurately classify the document. The result is that most companies don’t know where their high value assets are.”
Meanwhile, he continued, “deep learning has progressed to the point where it can both solve problems at scale and do it with a degree of precision. What we have built is a system that uses a deep learning language model to develop a semantic level of understanding of the context. We can look at both the words and how they are used within the broader context of a document to understand the meaning. This allows us, in a completely unsupervised manner, to build thematic groups, putting contracts, design documents, NDAs into their own groups.”
By then analyzing and comparing documents within their groups, he explained, the Semantic Intelligence product can understand “how the data has been identified or classified or shared across the business units to provide a risk-based view over that data. The idea is that business-critical data combined with how it has been shared, whether it has been shared with the right sets of people, provides a view into the risk. We could compare a design document with another design document and look for signs of risky sharing where a document might have been shared inappropriately. This is all autonomously derived without a single rule or regular expression or a policy function that needs to be defined up front. It’s all driven by the thematic groupings that we build using our deep learning models. The goal is to help companies discover and protect their unstructured data.”
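Concentric has not disclosed implementation details, but the unsupervised thematic grouping Krishnan describes can be sketched with stand-ins: bag-of-words count vectors in place of deep learning embeddings, and a greedy cosine-similarity pass in place of their model. All names, documents, and the similarity threshold below are illustrative.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector.
    (Concentric uses a deep learning language model; this is a stand-in.)"""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def thematic_groups(docs, threshold=0.3):
    """Greedy unsupervised grouping: each document joins the most
    similar existing group, or starts a new one. No rules or
    regular expressions are defined up front."""
    groups = []  # list of (centroid Counter, [doc indices])
    for i, doc in enumerate(docs):
        vec = embed(doc)
        best, best_sim = None, threshold
        for group in groups:
            sim = cosine(vec, group[0])
            if sim > best_sim:
                best, best_sim = group, sim
        if best is None:
            groups.append((vec, [i]))
        else:
            best[0].update(vec)  # fold the document into the centroid
            best[1].append(i)
    return [members for _, members in groups]

docs = [
    "this non disclosure agreement nda is made between the parties",
    "nda confidential agreement between company and contractor parties",
    "design document for the payment service architecture",
    "architecture design document for the search service",
]
print(thematic_groups(docs))  # [[0, 1], [2, 3]]
```

The sketch lands the two NDAs in one group and the two design documents in another, mirroring the thematic buckets the article describes; the real product would then compare sharing patterns within each group to surface risk.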
Semantic Intelligence uncovers, categorizes and classifies the documents, and allows IT and security teams to monitor data security with timely information and risk visualizations that drill down into the at-risk documents. The solution also integrates with major third-party security and data stores to help customers leverage the security investments they already have in place.
“Businesses understand the importance of protecting their critical assets, and yet, despite their best efforts, an extreme amount of data is left unsecured, unidentified, misclassified and at risk,” said Krishnan. “Unstructured data is currently copious and dispersed, and it includes an alarming amount of business-critical information. It’s a target for cybercriminals and can be a pitfall for regulatory compliance, but securing it is incredibly difficult. It’s the data challenge of our digital generation that we’re laser-focused on solving.”
A large Swiss drugmaker and a technology giant will work together to use data and artificial intelligence to speed drug discovery and development.
Basel, Switzerland-based Novartis and Redmond, Washington-based Microsoft said Tuesday that the alliance between the two companies would enable employees at Novartis to find insights in large amounts of data. It would also allow data scientists from Microsoft Research and Novartis’ own research teams to use AI to pursue new approaches to personalized medicine for macular degeneration, improve the manufacture of cell and gene therapies, and shorten the time it takes to design new drugs. The companies are referring to the two objectives as AI Empowerment and AI Exploration.
“As Novartis continues evolving into a focused medicines company powered by advanced therapy platforms and data science, alliances like this will help us deliver on our purpose to reimagine medicine to improve and extend patients’ lives,” Novartis CEO Vas Narasimhan said in a statement. “Pairing our deep knowledge of human biology and medicine with Microsoft’s leading expertise in AI could transform the way we discover and develop medicines for the world.”
Novartis has an existing partnership with chipmaker Intel that also involves using AI for drug discovery.
The pairing of the two fields has also been on the minds of some attendees at the ongoing CB Insights Future of Health conference in New York. One attendee, Atomwise CEO Abraham Heifets, addressed the skepticism that some people have regarding the application of AI to drug discovery and highlighted the importance of using it to make significant advances rather than small variations on existing science that end up feeding that skepticism.
“Computational chemistry has a long history of over-promising and under-delivering,” Heifets said in the interview. “What you really want is real discoveries where nobody knew what the answer was.”
Based in San Francisco, Atomwise has partnered with several companies – including the contract research organization Charles River Laboratories and Chinese drugmaker Hansoh Pharma – to use AI in drug discovery.
Another challenge to AI in drug discovery and development is cleanliness of data. Even something as simple as a typo can mean the difference between a molecule having a binding affinity of 8.241 nanomolar and 8.241 millimolar, due to the proximity of “n” and “m” on a keyboard, Heifets said. Indeed, when he looked at PubChem, the National Institutes of Health’s public database of chemical molecules and their activities against biological assays, he found that 98 percent of 240 million data points did not pass Atomwise’s quality control.
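A hypothetical unit-sanity check of the kind such quality control might include shows how the nM/mM typo would be caught. The plausible range, the conversion table, and the function name are all invented for this sketch; Atomwise's actual QC has not been published.

```python
# Flag binding-affinity records whose units place them far outside a
# plausible range for the assay. Range chosen for illustration only:
# picomolar to millimolar, expressed in molar.
PLAUSIBLE_RANGE_M = (1e-12, 1e-3)

UNIT_TO_MOLAR = {"pM": 1e-12, "nM": 1e-9, "uM": 1e-6, "mM": 1e-3}

def affinity_is_plausible(value, unit):
    """Convert the affinity to molar and check it against the range."""
    molar = value * UNIT_TO_MOLAR[unit]
    lo, hi = PLAUSIBLE_RANGE_M
    return lo <= molar <= hi

# The correct record passes; the "m"-for-"n" typo is flagged.
print(affinity_is_plausible(8.241, "nM"))  # True
print(affinity_is_plausible(8.241, "mM"))  # False
```

Simple range checks like this cannot catch every data error, which is one reason large public datasets still need careful curation before training.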
“I’m deeply skeptical of people who say, throw it in a neural network and let it figure everything out,” he said.
The University of Maryland, Baltimore (UMB) and the University of Maryland, Baltimore County (UMBC) recently signed an agreement to leverage UMBC’s AI, machine learning, and cybersecurity experience to protect medical devices and data from cyberattacks.
The two campuses will also partner on furthering data-based medical research. According to UMB Vice President of Clinical and Translational Research Stephen Davis, cybersecurity must be part of all clinical and research projects.
While UMB’s expertise is in medicine, UMBC is more focused on technology, explained Bruce Jarrell, MD, executive vice president, provost, and dean of the Graduate School. The partnership will strengthen both campuses and other agencies across the state.
“It allows us to use the very broad data that we gather in delivering healthcare to ask research questions that perhaps we might not be able to ask in the past that would allow us to improve patient safety and advance our progress in cures,” Jarrell said in a statement.
“The work that we’re about to do together is a very beautiful example of interdisciplinarity,” Philip Rous, provost and senior vice president for academic affairs at UMBC, said in a statement. “It is centered around bringing together experts, faculty, students with deep knowledge in different areas or perhaps different disciplines essentially to address, solve a problem, advance, innovate.”
UMBC will provide critical capabilities through core resources to UMB’s Institute for Clinical and Translational Research (ICTR), led by Davis.
The partnership will also create a Cybersecurity and Artificial Intelligence Core, which will enable the research team to design machine learning models to analyze large data sets and determine whether any data could be collected to improve analysis, while helping to uncover and overcome possible cybersecurity risks related to devices and or systems.
Notably, the UMB-UMBC partnership will also lend its support to the Baltimore hub of the NIH-funded Clinical and Translational Science Award (CTSA). Officials said UMB joined Johns Hopkins University in the spring on a five-year grant meant to “improve the translational process, getting more treatments to patients more quickly.”
“It’s broader than cybersecurity,” Karl Steiner, vice president for research at UMBC, said in a statement. “Part of it is defense and part of it is scientific offense.”
Security leaders have long stressed that the healthcare sector should lean on outside resources and collaborate to fill cybersecurity gaps.
The Institute for Critical Infrastructure Technology recently told Sen. Mark Warner, D-Virginia: “Meaningful collaboration has proven one of the most under-utilized, cost-effective, and impactful strategies organizations can engage to mitigate hyper-evolving cyber threats. Threat sharing initiatives allow for stronger data protection and more importantly, for proactive deterrence options instead of reactive remediation efforts.”
The UMB-UMBC partnership should create a frame of reference for how to successfully accomplish common security goals, while fueling medical research and patient care.
In diabetes, Medtronic’s efforts in machine learning and artificial intelligence have been well documented, including its joint launch with IBM of the virtual diabetes assistant Sugar.IQ. The company’s MiniMed 670G hybrid closed-loop insulin pump also comes loaded with the Guardian 3 sensor, which uses artificial intelligence to help diabetes patients beat high and low blood glucose-related events.
Now, the Dublin-based medical device giant is aiming to leverage AI in another business: stroke care. Last week, Medtronic announced that it has entered into a global distribution agreement with Viz.ai, whose artificial intelligence-powered imaging software is aimed at quickly treating patients suspected of having ischemic strokes. Viz.ai, based in San Francisco, received FDA’s de novo clearance for its clinical decision support software for stroke back in February 2018. The company is pursuing regulatory go-aheads in other countries.
In an interview this week, Stacey Pugh, vice president of Medtronic Neurovascular, explained that Viz.ai’s software can mean all the difference between a good and a bad medical outcome for a stroke patient with a large-vessel occlusion (LVO). Here’s what the software platform, connected to CT scanners, is capable of doing: the AI can quickly determine, based on a CT scan of the patient’s brain, whether the patient has suffered a large-vessel occlusion, flag where it believes that occlusion has occurred, and notify doctors.
“When you look at a scan of perfusion in the brain, there will be a certain amount of areas of perfusion you should expect to see, and this software through AI shows a segment of perfusion is missing, and that’s because the software has flagged it,” Pugh said in a phone interview. “It does this automatically before the scan can be read by a PACS [Picture Archiving and Communication System] and a radiologist looks at it.”
This saves valuable time in a health event where time is of the essence. Per Medtronic’s announcement, a Viz.ai study in “two centers showed that in 95.5 percent of true positive cases, its technology alerted the stroke specialist earlier than the standard of care, saving an average of 52 minutes.”
“We know from all of the research that’s been done that even by moving care up by half an hour in a large-vessel occlusion, you can meaningfully impact outcomes,” Pugh said. “So gaining minutes is a very meaningful outcome.”
Consider the standard protocol today, per Pugh: the patient arrives at a hospital that may not be a comprehensive stroke center, and the doctor orders CT scans of the brain. Then the radiologist reads the scan and sends the information to the physician who ordered it. That physician reviews the scan and radiology reports and sends the information to the receiving physician at the comprehensive stroke center. This process can take some time, and meanwhile the patient is “losing about 1.9 million neurons per minute that you have an LVO in the brain,” Pugh said.
Viz.ai’s system cuts down these steps, improving the chances of a better outcome, she said. But there is another advantage: the software is especially valuable at hospitals that may lack stroke expertise.
“These stroke cases get messed up a lot of times at these smaller hospitals,” Pugh said. “You don’t have physicians who are looking at these kinds of scans all the time and so it’s not just about speed, it’s about detecting cases which otherwise wouldn’t be detected.”
The Viz.ai app is able to alert physicians and display images of suspected large-vessel occlusions.
Perhaps equally importantly, the software platform can be preprogrammed to alert doctors about the potential LVO at both the smaller, local hospital where the patient first arrives and the comprehensive stroke care center where the patient will be ultimately treated.
So you have speed and accuracy built into the system and the ability to get everyone on the stroke care team on the same page simultaneously, Pugh said.
The other capability of the Viz.ai system is communication. The HIPAA-compliant, cloud-based application allows doctors on the Viz.ai system not only to view the brain perfusion images of a patient suspected of having a large-vessel occlusion on their smartphones, but also to communicate with each other through the app, thereby streamlining care.
Even comprehensive stroke centers can benefit from the Viz.ai system because of its ability to flag potential trouble areas, Pugh said.
While the terms of Medtronic’s global distribution agreement with Viz.ai were not disclosed, Pugh said that the software can be sold both standalone and as part of a bundle with Medtronic’s stroke hardware products. These include the Solitaire stent retriever, guidewires and other products that allow surgeons to remove the clot in the brain.
Viz.ai was founded in 2016, according to Crunchbase, by Dr. Chris Mansi, David Golan and Manoj Ramchandran. Mansi is a neurosurgeon and the CEO of the company. The company raised $21 million in a Series A funding round in July 2018 that Kleiner Perkins led and in which GV (formerly Google Ventures) participated.
At the time, a Kleiner investor who joined Viz.ai’s board commented:
“We were attracted not only to the technology behind Viz.ai and its impact on patient outcomes, but also its adoption model. Many new health-tech solutions struggle to gain traction because they are an outside-in sale to medical teams, requiring changes to procedures and workflows. In contrast, physicians and their teams are driving adoption of the Viz.ai platform because it is not disruptive to emergency room procedures and fits naturally into existing systems,” said Mamoon Hamid, General Partner, Kleiner Perkins.
In other words, disruptive technologies can be more easily adopted as long as they do not cause disruption in the general, English sense of the term.
The partnership with Medtronic is further proof that the largest pure-play medical device company believes that more in the stroke market are likely to buy into Viz.ai’s product and vision.
AI and healthcare represent a very tempting combination for any company with an outlook on the future. Google, one of the biggest corporations on the planet, wants to be right there in the front row of innovation, when it comes to the intersection of these fields.
Google and its sister companies, parts of the holding company Alphabet, are heavily investing in AI-powered healthcare solutions. This has potentially huge implications for Google’s more than one billion users.
It is the second try for Google, and the company is not alone
Google made an attempt to invest in this field 10 years ago, but the venture it was involved in, Google Health, failed to work as planned. However, Google has now renewed its focus on healthcare.
Hundreds of employees are working on these health projects, often partnering with other companies and academics.
The company knows the value of being in the healthcare sphere. “It’s pretty hard to ignore a market that represents about 20 percent of [U.S.] GDP,” says John Moore, an industry analyst at Chilmark Research. “So whether it’s Google or it’s Microsoft or it’s IBM or it’s Apple, everyone is taking a look at what they can do in the healthcare space.”
Google doesn’t disclose the size of its investment, but Moore says it’s likely in the billions of dollars.
The push into AI and health is a natural evolution for a company that has developed algorithms that reach deep into our lives through the Web.
Google is not the only big player to take an interest in healthcare. IBM Watson Health announced February 13th it plans to make a 10-year, $50 million investment in research collaborations with two separate academic centers – Brigham and Women’s Hospital and Vanderbilt University Medical Center – to advance the science of artificial intelligence (AI) and its applications to major public health issues.
Both companies understand that AI and machine learning can be put to work in healthcare just as well as in any other field.
“The fundamental underlying technologies of machine learning and artificial intelligence are applicable to all manner of tasks,” said Greg Corrado, a neuroscientist at Google. This is true, he says, “whether those are tasks in your daily life, like getting directions or sorting through email, or the kinds of tasks that doctors, nurses, clinicians and patients face every day.”
A software to help diagnose diabetic retinopathy
Things are moving fast. Google’s sister company Verily got a billion-dollar boost this year for its already considerable efforts. Among other projects, software that can diagnose diabetic retinopathy is now used in India.
The new research is published in the April edition of Ophthalmology, the Journal of the American Academy of Ophthalmology.
This new study, derived from previous work from Google AI, shows that its algorithm works roughly as well as human experts in screening patients for diabetic retinopathy. More than 29 million Americans have diabetes and are at risk for diabetic retinopathy, a disease that causes blindness. In the disease’s early stages, people typically don’t notice changes in their vision, since the eyes and brain adapt to gradual vision loss. This is why diabetic retinopathy can go undetected and cause irreversible vision loss. People with diabetes must undergo yearly screenings, but sometimes even these prove to be inaccurate. A study found a 49 percent error rate among internists, diabetologists, and medical residents.
Recent AI advances could improve access to more accurate diabetic retinopathy screening.
The study tested AI’s utility in this setting directly. Ten ophthalmologists (four general ophthalmologists, one trained outside the US, four retina specialists, and one retina specialist in training) were asked to read each image once under one of three conditions: unassisted, grades only, and grades + heatmap.
Both of the latter types of assistance improved physicians’ diagnostic accuracy, with the amount of improvement depending on the physician’s level of expertise.
When receiving no assistance, general ophthalmologists were significantly less accurate than the algorithm, and retina specialists were not significantly more accurate than the algorithm. When assisted by the algorithm, general ophthalmologists were as accurate as the AI, but retina specialists exceeded the model’s performance.
“What we found is that AI can do more than simply automate eye screening, it can assist physicians in more accurately diagnosing diabetic retinopathy,” said lead researcher Rory Sayres, PhD. “AI and physicians working together can be more accurate than either alone.”
In another part of the project, Verily is working on tools to monitor blood sugar of diabetic patients. The company is also working to perfect surgical robots that learn from each surgery.
How to collect human data needed to improve AI solutions
It is important to retain medical data that are not usually collected for AI research purposes. To accumulate more useful data, Verily has partnered with Duke University and Stanford University for Project Baseline, which aims to find 10,000 volunteers willing to give necessary data to the company.
But even simple search engine queries can provide useful data about users. Rediet Abebe has explored how search engine queries and social media data can provide information useful to AI-powered solutions in healthcare.
Some of the healthcare specific problems researchers like Abebe are trying to solve through AI are those related to U.S. public health emergencies — like the nation’s disproportionately high maternal mortality rate. Abebe is currently on a 12-member body advising the National Institutes of Health on how AI can better serve biomedical and clinical research. Among the members are Google AI senior research scientist Greg Corrado, Intel principal engineer Michael McManus, Verily engineering director David Glazer, and AI Now Institute cofounder Kate Crawford, as well as professors from Stanford University, MIT, and other universities.
The group is expected to share some intermediary findings in June, while its final advisory thoughts will be delivered to NIH director Francis Collins in December.
“They want us to envision what kind of stuff we’d do to create real bridges between AI and biomedical and public health research,” Abebe said. “I’m really excited about the broad set of techniques we have and the unique style of doing research that the AI community has and using that to help address problems that impact underserved and marginalized communities.”