‘Do I have COVID-19?’ Free online tool does triage

Do I have COVID-19? A new online tool lets people everywhere assess how likely it is they’ve contracted the novel coronavirus.

C19check.com makes it easy for the general public to self-triage and is designed, in part, to prevent a surge of patients at hospitals and healthcare facilities.

The free tool comes from the software company Vital, with guidance from the Emory Department of Emergency Medicine's Health DesignED Center and the Emory Office of Critical Event Preparedness and Response.

The site is for educational purposes and not a replacement for a healthcare provider evaluation.

“We’re all fighting, in ways big and small, to keep our loved ones out of harm’s reach. But the anxiety and uncertainty around the best way to do that can result in crowded emergency departments that will have difficulty managing the surge,” says Justin Schrager, emergency medicine physician at Emory University Hospital and co-founder of Vital. “Our goal with C19check.com is to prevent that from happening, while also making it super simple for people to understand and follow CDC guidelines.”

C19check.com acts as an easy way to digest expert information and choose the best plan of action. Based on the answers to questions about signs and symptoms, age, and other medical problems, a person is directed to guidance based on CDC guidelines and is placed into one of three categories:

high risk (needs immediate medical attention),
intermediate risk (can contact their doctor for guidance about how to best manage their illness),
low risk (can most likely administer self-care or recover at home).

In any case, the person is never dissuaded from seeking professional medical advice or contacting their healthcare provider for more guidance.
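The article does not publish the underlying decision logic, but a minimal Python sketch of this style of symptom-and-risk-factor triage might look like the following. The questions, thresholds, and wording here are illustrative assumptions, not the actual C19check.com rules, and of course not medical advice:

def triage(severe_symptoms: bool, has_symptoms: bool, age: int, chronic_conditions: bool) -> str:
    # Illustrative sketch only: NOT the actual C19check.com logic and not medical advice.
    # The questions, thresholds, and categories are assumptions for demonstration.
    if severe_symptoms:  # e.g., trouble breathing
        return "high risk: seek immediate medical attention"
    if has_symptoms and (age >= 65 or chronic_conditions):
        return "intermediate risk: contact your doctor for guidance"
    return "low risk: most likely self-care and recovery at home"

# Example: an older adult with mild symptoms is pointed toward their doctor.
print(triage(severe_symptoms=False, has_symptoms=True, age=70, chronic_conditions=False))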

“Doctors know that crowded waiting rooms could make the problem worse because people sick with COVID-19 could infect others, speeding the overall rate of infection,” says Alex Isakov, executive director of Emory University Office of Critical Event Preparedness and Response, and coauthor of the SORT algorithm. “Keeping stress off the system and limiting exposure for at-risk populations is going to be key to managing the community spread of COVID-19.”

The site is live and will be available for the duration of the COVID-19 public health emergency. It was built as a public service and is completely free. It is available on any computer, and both lay people and medical professionals can use it. It collects no personal information. It makes the company no money. Users can opt to share a zip code to contribute to research tracking the geographic spread and eventual recovery from the pandemic.

“We designed this tool as a way for the public to have something user friendly and evidence based to assess their risk and help guide them to the necessary next steps,” says Anna Quay Yaffee, assistant professor of emergency medicine and director of Global Health in Emergency Medicine Section at Emory University School of Medicine. “We want people who are low risk to have some cautious reassurance, and those who are at higher risk to know how to seek care and get more information.”

“The goal of this tool is to empower individuals, to better understand CDC guidance, and help to inform them about whether they should stay at home, seek medical care or go to the hospital,” says David Wright, chair of the emergency medicine department.

“We understand the public is concerned about the pandemic, about their signs and symptoms. They want guidance and we built this as a resource to help guide their actions, with easy to use, accessible information.”

Source: ‘Do I have COVID-19?’ Free online tool does triage

How to Address the Surging Need for Secure Remote Access to OT Networks

Strategies for Evaluating Secure Remote Access Solutions for OT/ICS Networks

Over the past decade, the number of employees in the U.S. working from home half-time or more has risen to an estimated five million, according to Global Workplace Analytics. However, those numbers now pale in comparison to today’s reality of businesses everywhere encouraging as many workers as possible to work from home.

As the size of the remote workforce surges, network administrators of operational technology (OT) networks find themselves on the front lines of enablement. They need to provide online connectivity to users who typically access industrial control systems in person, while remaining confident that security isn't compromised. The task is significant, as every company in the world relies on these networks. For nearly half of the Fortune 2000 – in industries including oil and gas, energy, utilities, manufacturing, pharmaceuticals, and food and beverage – these networks are critical components of the business. The rest rely on OT networks to run their office infrastructure – lights, elevators, and datacenter equipment.

Who are the users who need remote access to OT environments and why? They generally fall into the following categories:

Equipment manufacturers – In most cases, the industrial control systems that make up these networks are purchased with a contract for remote maintenance by the manufacturers themselves. Network administrators are accustomed to supporting these users as they service existing machinery, including providing updates, fixing errors, and taking performance readings, so this is not a new requirement.

Remote workers – The challenge escalates with this group of users. In today's business climate, it could mean providing online access to any employee who previously worked onsite but is now working outside the facility, so they can continue to do their jobs – for example, making changes to production lines and manufacturing processes.

Third-party contractors – Finally, many businesses outsource services to companies that specialize in specific operational areas, such as production optimization. Contractors who previously provided these services on site now need remote access to the relevant equipment to fulfill their contracts and keep production lines running smoothly. These services can become even more mission-critical during times of disruption, depending on the industry and the products and services provided.

Allowing for various types of users, systems, access levels, and functions is a complex connectivity challenge. Yet, standard access paths provided by the IT department often don’t match the specific use cases we see in the OT environment.

In times like these, when every organization is reducing staff on site, the need for secure remote access only grows. Whether your company is assessing its existing capability to provide secure connectivity to your OT environment and assets, or considering new solutions, these three questions can help guide your evaluation:

1. Do you have granular privileged access control? A maintenance engineer from a control-system manufacturer, for example, likely needs access only to a specific controller, for a specific task, for a limited time. To mitigate risk, you need to be able to extend access for that specific user, only to the necessary assets, for a set time window, with a few simple clicks (a minimal sketch of such a time-boxed grant follows this list).

2. Can you proactively monitor, prevent, and audit access? You need visibility and control over third-party and employee access before, during, and after a remote session takes place. This includes the ability to observe activity in real time and terminate the session if needed, as well as view recordings in retrospect for auditing and forensic purposes.

3. Are workflows and processes secure? Rather than relying on third parties for password hygiene (many of whom share passwords among multiple individuals), you need the ability to centrally manage user credentials with a password vault and to validate each user with multi-factor authentication. In addition, the work often involves installing a new file, so to ensure file integrity you also need to provide secure file transfer.
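As a rough illustration of what granular, time-boxed access can mean in practice, the Python sketch below models a grant that ties one named user to one asset for a fixed window. The class and field names are hypothetical and do not correspond to any particular vendor's product:

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    # Hypothetical sketch of a time-boxed, asset-scoped access grant.
    user: str             # the specific remote user, e.g. a vendor engineer
    asset: str            # the single controller or device they may reach
    expires_at: datetime  # hard cutoff for the access window

    def allows(self, user: str, asset: str) -> bool:
        # Permit access only for the named user, the named asset, and before expiry.
        return (
            user == self.user
            and asset == self.asset
            and datetime.now(timezone.utc) < self.expires_at
        )

grant = AccessGrant(
    user="vendor.engineer@example.com",
    asset="plc-line-3",
    expires_at=datetime.now(timezone.utc) + timedelta(hours=2),
)
print(grant.allows("vendor.engineer@example.com", "plc-line-3"))  # True while the window is open
print(grant.allows("vendor.engineer@example.com", "plc-line-4"))  # False: a different asset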

Remote access can increase your level of exposure and jeopardize maintenance and production. Thankfully, by ensuring you have granularity of control, the ability to audit access, and additional layers of security, such as password vaulting and secure file transfer, you can mitigate that risk. And, importantly, you can give those on the front lines – network administrators of OT networks – confidence in their ability to address the surge in requests for greater connectivity to these critical environments without compromising security.

Source: How to Address the Surging Need for Secure Remote Access to OT Networks

Unprotected Database Exposed 5 Billion Previously Leaked Records

An Elasticsearch instance containing over 5 billion records of data leaked in previous cybersecurity incidents was found exposed to anyone with an Internet connection, Security Discovery reports.

The database was identified as belonging to UK-based security company Keepnet Labs, which focuses on keeping organizations safe from email-based cyber-attacks. It contained data leaked in security incidents that occurred between 2012 and 2019.

The Elasticsearch instance, Security Discovery’s Bob Diachenko reveals, had two collections in it: one containing 5,088,635,374 records, and another with over 15 million records. This second collection was being constantly updated.

According to the security researcher, the data was well structured and included the hash type, leak year, password (hashed, encrypted, or in plaintext, depending on the leak), email address, email domain, and source of the leak.
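Based on that description, an individual record presumably looked something like the following; the field names are an assumption for illustration, not the database's actual schema:

# Hypothetical shape of one record, inferred from the report's description.
record = {
    "hash_type": "SHA-1",               # how the password was stored, if hashed
    "leak_year": 2012,
    "password": "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8",  # hashed, encrypted, or plaintext
    "email": "user@example.com",
    "email_domain": "example.com",
    "source": "example-breach",         # which earlier leak the record came from
}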

Diachenko said he was able to confirm leaks originating from Adobe, Last.fm, Twitter, LinkedIn, Tumblr, VK and others.

The researcher immediately alerted Keepnet Labs, which took the database offline within an hour.

Most of the data, Diachenko says, appears to have been collected from previously known sources, but unrestricted access to such a collection would still represent a boon for cybercriminals, providing them with a great resource for phishing and identity theft.

“This massive collection of over five billion records delivers email addresses that can be used by criminals to send socially engineered phishing email scams. The criminals can craft the email with information relating to the breach it was associated with,” James McQuiggan, security awareness advocate at KnowBe4, told SecurityWeek in an emailed comment.

Responding to a SecurityWeek inquiry, Keepnet Labs confirmed that the database only contained publicly available data that can also be accessed through various online services.

The company also said that the data was “collected and correlated” for its customers only, to inform them if their accounts were part of previous breaches, and that customers can perform only searches related to their domains.

“No confidential customer data has been breached,” the company underlined.

The unprotected Elasticsearch cluster was identified by Diachenko on March 16, after being indexed by the BinaryEdge search engine on March 15. However, it's not clear how long the database stayed exposed to the Internet or whether third parties accessed it during that time.

According to Keepnet Labs, the database became exposed while its supplier was moving the index to a different Elasticsearch server. During the operation, the company says, the firewall was disabled for roughly 10 minutes, enough for an online service to index the database.

“There is a certain irony in an exposed database of previously compromised data. The fact that this data was previously compromised doesn’t mean this incident is meaningless. The sheer volume of this collection makes it a valuable target for criminals. Sometimes the data itself is made more valuable by the ease of access or aggregation. It would be important to know for how long this data has been exposed, and of course, whether anyone has actually accessed it,” Tim Erlin, VP of product management and strategy at Tripwire, told SecurityWeek in an emailed comment.

“While the data exposed in this breach appears to be collected from previously known sources, the fact that it was all readily available, indexed, and publicly exposed makes it a big concern. Criminals can use the data contained to formulate attacks against organisations, and in particular use the information for spear-phishing attacks,” Javvad Malik, security awareness advocate at KnowBe4, commented.

Source: Unprotected Database Exposed 5 Billion Previously Leaked Records

How we know the new coronavirus comes from nature

The coronavirus behind the global COVID-19 pandemic evolved in nature and did not come from a lab, according to a new genetic study.

Researchers analyzed the genome sequence of the novel SARS-CoV-2 coronavirus that emerged in the city of Wuhan, China, last year and found no evidence that the virus was made in a laboratory or otherwise engineered.

“We determined that SARS-CoV-2 originated through natural processes by comparing the genetic sequences and protein structures of other coronaviruses to those of the new virus that causes COVID-19,” says Robert F. Garry, professor of microbiology and immunology at Tulane University School of Medicine and senior author of the paper in Nature Medicine.

“It is very close to a bat virus. The adaptations that the virus has made to affect humans are actually very different than what you would expect if you were designing it using computational models in biological engineering.”

Coronavirus timeline

Coronaviruses are a large family of viruses that can cause illnesses ranging widely in severity. The first known severe illness a coronavirus caused emerged with the 2003 Severe Acute Respiratory Syndrome (SARS) outbreak in China. A second outbreak of severe illness began in 2012 in Saudi Arabia with the Middle East Respiratory Syndrome (MERS).

Last year, Chinese authorities alerted the World Health Organization of an outbreak of a novel strain of coronavirus causing severe illness, subsequently named SARS-CoV-2. As of March 17, 2020, over 179,000 cases of COVID-19 have been documented, although many more mild cases have likely gone undiagnosed. The virus has killed over 7,400 people.

Shortly after the outbreak began, Chinese scientists sequenced the genome of the novel coronavirus and made the data available to researchers worldwide. The resulting data show that the epidemic has expanded because of human-to-human transmission after an initial introduction into the human population.

The ‘backbone’ of the new coronavirus

Researchers used this sequencing data to explore the origins and evolution of SARS-CoV-2, focusing on several telltale features of the virus.

They analyzed the genetic template for spike proteins, armatures on the outside of the virus that it uses to grab and penetrate the outer walls of human and animal cells.

More specifically, they focused on two important features of the spike protein: the receptor-binding domain (RBD), a kind of grappling hook that grips onto host cells, and the cleavage site, a molecular can opener that allows the virus to crack open and enter host cells.

The scientists found that the RBD portion of the SARS-CoV-2 spike proteins evolved to effectively target a molecular feature on the outside of human cells called ACE2, a receptor involved in regulating blood pressure. The SARS-CoV-2 spike protein was so effective at binding the human cells that the scientists concluded it was the result of natural selection and not the product of genetic engineering.

Data on SARS-CoV-2’s backbone—its overall molecular structure—support this evidence for natural evolution, researchers say.

If someone wanted to engineer a new coronavirus as a pathogen, they would have constructed it from the backbone of a virus known to cause illness. But the scientists found that the SARS-CoV-2 backbone differed substantially from those of already known coronaviruses and mostly resembled related viruses found in bats and pangolins.

“These two features of the virus, the mutations in the RBD portion of the spike protein and its distinct backbone, rule out genetic engineering as a potential origin for SARS-CoV-2,” says coauthor Kristian Andersen, an associate professor of immunology and microbiology at Scripps Research.

The first of two scenarios

Based on their genomic sequencing analysis, Garry and his colleagues conclude that the most likely origins for SARS-CoV-2 followed one of two possible scenarios.

In one scenario, the virus evolved to its current pathogenic state through natural selection in a non-human host and then jumped to humans—how previous coronavirus outbreaks have emerged, with humans contracting the virus after direct exposure to civets (SARS) and camels (MERS).

The researchers proposed bats as the most likely reservoir for SARS-CoV-2, as it is very similar to a bat coronavirus. There are no documented cases of direct bat-to-human transmission, however, suggesting that an intermediate host was likely involved between bats and humans.

In this scenario, both of the distinctive features of SARS-CoV-2’s spike protein—the RBD portion that binds to cells and the cleavage site that opens the virus up—would have evolved to their current state prior to entering humans. In this case, the current epidemic would probably have emerged rapidly as soon as humans became infected, as the virus would have already evolved the features that make it pathogenic and able to spread between people.

And the second

In the second proposed scenario, a non-pathogenic version of the virus jumped from an animal host into humans and then evolved to its current pathogenic state within the human population. For instance, some coronaviruses from pangolins, anteater-like mammals found in Asia and Africa, have an RBD structure similar to that of SARS-CoV-2.

A coronavirus from a pangolin could possibly have been transmitted to a human, either directly or through an intermediary host such as civets or ferrets. Then the other distinct spike protein characteristic of SARS-CoV-2, the cleavage site, could have evolved within a human host, possibly via limited undetected circulation in the human population prior to the beginning of the epidemic.

The researchers found that the SARS-CoV-2 cleavage site appears similar to the cleavage sites of strains of bird flu that have been shown to transmit easily between people. SARS-CoV-2 could have evolved such a virulent cleavage site in human cells, and once it did, the current epidemic would soon have kicked off, as the coronavirus would have become far more capable of spreading between people.

“It is pretty well-adapted to humans. That’s one of the puzzles we’re trying to understand as we examine the virus. It could have been circulating in humans for a while now,” Garry says.

Additional coauthors are from Columbia University, the University of Sydney, and the University of Edinburgh. The National Institutes of Health and the Pew Charitable Trusts funded the work.

Source: How we know the new coronavirus comes from nature

Machine learning pushes quantum computing forward

Researchers have created a machine learning framework to precisely locate atom-sized quantum bits in silicon.

It’s a crucial step for building a large-scale silicon quantum computer, the researchers report.

Here, Muhammad Usman and Lloyd Hollenberg of the University of Melbourne explain their research and what it means for the future of quantum computers:

Quantum computers are expected to offer tremendous computational power for complex problems—currently intractable even on supercomputers—in the areas of drug design, data science, astronomy, and materials chemistry among others.

The high technological and strategic stakes mean major technology companies as well as ambitious start-ups and government-funded research centers are all in the race to build the world’s first universal quantum computer.

Qubits and quantum computers

In contrast to today’s classical computers, where information is encoded in bits (0 or 1), quantum computers process information stored in quantum bits (qubits). These are hosted by quantum mechanical objects like electrons, the negatively charged particles of an atom.

Quantum states can also be binary, taking one of two possibilities, or effectively both at the same time—known as quantum superposition—offering an exponentially larger computational space as the number of qubits grows.

This unique data-crunching power is further boosted by entanglement, another magical property of quantum mechanics where the state of one qubit is able to dictate the state of another qubit without any physical connection, making them all 1’s, for example. Einstein called it “spooky action at a distance.”

Different research groups in the world are pursuing different kinds of qubits, each having its own benefits and limitations. Some qubits offer potential for scalability, while others come with very long coherence times, that is, the time for which quantum information can be robustly stored.

Qubits in silicon are highly promising as they offer both. Therefore, these qubits are one of the front-runner candidates for the design and implementation of a large-scale quantum computer architecture.

One way to implement large-scale quantum computer architecture in silicon is by placing individual phosphorus atoms on a two-dimensional grid.

The single and two qubit logical operations are controlled by a grid of nanoelectronic wires, bearing some resemblance to classical logic gates for conventional microelectronic circuits. However, key to this scheme is ultra-precise placement of phosphorus atoms on the silicon grid.

What’s holding things back?

However, even with state-of-the-art fabrication technologies, placing phosphorus atoms at precise locations in the silicon lattice is a very challenging task. Small variations in their positions, of the order of one atomic lattice site, are often observed and may have a huge impact on the efficiency of two-qubit operations.

The problem arises from the ultra-sensitive dependence of the exchange interaction between electron qubits on the positions of their phosphorus atoms in silicon. Exchange interaction is a fundamental quantum mechanical property in which two subatomic particles, such as electrons, can interact in real space when their wave functions overlap and make interference patterns, much like two traveling waves interfering on a water surface.

Exchange interaction between electrons on phosphorus atom qubits can be exploited to implement fast two-qubit gates, but any unknown variation can be detrimental to the accuracy of a quantum gate. Like logic gates in a conventional computer, quantum gates are the building blocks of a quantum circuit.

For phosphorus qubits in silicon, even an uncertainty of the order of one atomic lattice site in the location of a qubit atom can alter the corresponding exchange interaction by orders of magnitude, leading to errors in two-qubit gate operations.

Such errors, accumulated over a large-scale architecture, may severely impede the efficiency of the quantum computer, diminishing any quantum advantage expected from the quantum mechanical properties of qubits.

Pinpointing qubit atoms

So in 2016, we worked with researchers at the Center for Quantum Computation & Communication Technology at the University of New South Wales to develop a technique that could pinpoint the exact locations of phosphorus atoms in silicon.

The technique, reported in Nature Nanotechnology, was the first to use computed scanning tunneling microscope (STM) images of phosphorus atom wave functions to pinpoint their spatial locations in silicon.

The images were calculated using a computational framework which allowed electronic calculations to be performed on millions of atoms utilizing Australia’s national supercomputer facilities at the Pawsey supercomputing center.

These calculations produced maps of electron wave function patterns, in which the symmetry, brightness, and size of features were directly related to the position in the silicon lattice of the phosphorus atom around which the electron was bound.

Because each donor atom position led to a distinct map, the pinpointing of qubit atom locations, known as spatial metrology, was achieved with single lattice-site precision.

The technique worked very well at the individual qubit level. However, the next big challenge was to build a framework that could perform this exact spatial pinpointing of atoms with high speed and minimal human interaction, to cope with the requirements of a universal fault-tolerant quantum computer.

Machine learning to the rescue

Machine learning is an emerging area of research that is revolutionizing almost every field, from medical science to image processing, robotics, and materials design.

A carefully trained machine learning algorithm can process very large data sets with enormous efficiency.

One branch of machine learning is the convolutional neural network (CNN), an extremely powerful tool for image recognition and classification problems. When a CNN is trained on thousands of sample images, it can precisely recognize unknown images (including noise) and perform classifications.

Recognizing that the principle underpinning the established spatial metrology of qubit atoms is basically recognizing and classifying feature maps of STM images, we decided to train a CNN on the computed STM images. The work is published in the journal npj Computational Materials.

The training involved 100,000 STM images, and the CNN achieved a remarkable learning accuracy of above 99%. We then tested the trained CNN on 17,600 test images, including the blurring and asymmetry noise typically present in realistic environments.

The CNN classified the test images with an accuracy of above 98%, confirming that this machine learning-based technique could process qubit measurement data with high throughput, high precision, and minimal human interaction.
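The article does not describe the network architecture itself, so the Python (TensorFlow/Keras) sketch below is only a schematic of this kind of setup: a small CNN trained to classify simulated STM images into candidate donor-position classes. The image size, layer sizes, and number of position classes are placeholder assumptions, not the published model:

# Illustrative sketch only: a small CNN that classifies simulated STM images
# into candidate donor-position classes. Image size, layer sizes, and the
# number of classes are placeholder assumptions, not the published model.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 16          # assumed number of distinct lattice-site configurations
IMG_SHAPE = (64, 64, 1)   # assumed size of a computed STM image

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=IMG_SHAPE),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one class per candidate position
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy stand-ins for the computed STM training images and their position labels.
x_train = np.random.rand(256, *IMG_SHAPE).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)

# Classify an unseen (possibly noisy) image into a donor-position class.
pred = model.predict(x_train[:1], verbose=0).argmax(axis=1)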

This technique also has the potential to scale up to qubits consisting of more than one phosphorus atom, where the number of possible image configurations would increase exponentially. However, a machine learning-based framework could readily include any number of possible configurations.

In the coming years, as the number of qubits increases and the size of quantum devices grows, qubit characterization via manual measurements is likely to be highly challenging and onerous.

This work shows how machine learning techniques such as the one developed here could play a crucial role in this aspect of realizing a full-scale fault-tolerant universal quantum computer—the ultimate goal of the global research effort.

Source: University of Melbourne

Source: Machine learning pushes quantum computing forward