Computer Vision Projects Help Support the Fight Against COVID
It’s October 2020 and, as the seasons change, the world is waiting to see whether recent upticks in cases of COVID-19 in many communities will signal a new phase in the pandemic’s expansion or a trend that can be controlled and reversed. Meanwhile, data scientists have been working with disease control centers, hospitals, and other experts to apply technology to reduce the disease’s impact. In this post, Innotescus spotlights several computer vision/ML projects that support the fight against COVID.
Triaging COVID patient X-rays with an ML model
At the start of October, the University of Minnesota announced a machine learning model that interprets chest X-rays to judge whether a patient entering a hospital or care center emergency department may have COVID-19. The evaluation is part of standard intake protocols and completes within seconds of the X-ray being taken; patients who score positive are routed to isolated care areas. Training the ML model involved 100,000 X-rays of virus-free patients and 18,000 X-rays of patients with COVID. The researchers integrated the findings engine directly into the Epic patient record system and have made the X-ray-scoring model freely available to 450 centers using Epic and its App Orchard.
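The article doesn't describe the model's architecture, but the training counts it gives (100,000 negative vs. 18,000 positive X-rays) imply a roughly 5.5:1 class imbalance that any such classifier must handle. The sketch below illustrates one standard approach, inverse-frequency class weighting in a binary cross-entropy loss; the weighting scheme and the loss function are our assumptions, not details from the Minnesota team.

```python
import numpy as np

# Counts from the article: ~100,000 COVID-negative and ~18,000 COVID-positive
# X-rays. Everything else here is an illustrative assumption.
n_negative, n_positive = 100_000, 18_000
n_total = n_negative + n_positive

# Inverse-frequency class weights: the rarer positive class gets a larger
# weight so the loss is not dominated by the majority (virus-free) class.
w_negative = n_total / (2 * n_negative)   # 0.59
w_positive = n_total / (2 * n_positive)   # ~3.28

def weighted_bce(y_true, p_pred):
    """Class-weighted binary cross-entropy over a batch of predictions."""
    p_pred = np.clip(p_pred, 1e-7, 1 - 1e-7)       # avoid log(0)
    weights = np.where(y_true == 1, w_positive, w_negative)
    losses = -(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))
    return float(np.mean(weights * losses))

# Toy batch: two positives, two negatives, imperfect predictions.
y = np.array([1, 1, 0, 0])
p = np.array([0.9, 0.6, 0.2, 0.1])
loss = weighted_bce(y, p)
```

With these counts, a misclassified positive costs about 5.5 times as much as a misclassified negative, which matters in triage settings where missing a COVID-positive patient is the costlier error.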
Thermal scanning of individuals in motion, using edge CV computing
The ability to screen individuals for COVID symptoms, at high volume and at a distance, will be a key enabler for restoring long-distance and cross-border travel. To support this approach, machine learning models can be trained to flag possible COVID cases based on thermal imaging and/or labored breathing patterns. Such is the case at airports in Taiwan, where edge-based image/video capture and processing is performed on industrial GPU computers from Premio. Captured pixels are processed by on-chip Intel or NVIDIA graphics engines, while multicore CPUs execute the computer vision algorithms. This technology is said to have contributed to Taiwan's low COVID incidence, with new cases typically numbering between one and five per day.
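Premio's actual pipeline isn't public, so as a minimal sketch, the core screening step in such a system reduces to flagging elevated skin temperature in a per-pixel temperature map from a radiometric thermal camera. The face-region input, threshold value, and percentile trick below are all assumptions for illustration.

```python
import numpy as np

FEVER_THRESHOLD_C = 37.5  # a common screening cutoff; deployments tune this

def screen_frame(temps_c, face_box):
    """Return (face temperature, fever flag) for one thermal frame.

    temps_c:  2-D array of per-pixel temperatures in Celsius.
    face_box: (row0, row1, col0, col1) from an upstream face detector
              (assumed here; detection itself is a separate CV model).
    """
    r0, r1, c0, c1 = face_box
    face = temps_c[r0:r1, c0:c1]
    # Use a high percentile rather than the max to reject hot-pixel noise.
    face_temp = float(np.percentile(face, 95))
    return face_temp, face_temp >= FEVER_THRESHOLD_C

# Toy frame: 33 C background with a warmer synthetic "face" region.
frame = np.full((120, 160), 33.0)
frame[40:80, 60:100] = 38.0
temp, flagged = screen_frame(frame, (40, 80, 60, 100))
```

In a deployed system this per-frame check would run continuously on the edge device, with flagged individuals pulled aside for a confirmatory contact measurement; keeping the computation on-device avoids streaming video of travelers to a remote server.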
Leveraging 164 million Google images to forecast the local COVID impact
While it’s generally accepted that a community’s “built” environment (man-made urban, suburban, and rural surroundings) has a measurable correlation with residents’ risk of a COVID outbreak, in-depth studies of different environmental characteristics and their risk correlation have been limited. To address this, researchers in public health, sociology, and computing across several U.S. universities have collaborated to weigh contributory factors for community COVID illnesses. The research analyzed 164 million Google Street View images using computer vision, demographic data, and Poisson regression models. The influencing factors extracted include the presence of multi-family homes and apartments, multi-lane local roads, higher-density sidewalk use, and dilapidated infrastructure. The findings could enable public policy makers at the federal, state, and local levels to more systematically forecast potential hotspots and apply greater monitoring or risk-reduction policies.
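Poisson regression is a natural fit here because the outcome is a count (cases per community) and the model's coefficients translate directly into rate ratios. The sketch below fits a minimal Poisson regression by gradient ascent on synthetic data; the feature names are hypothetical stand-ins for the built-environment factors the article lists, not the study's actual variables or specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic "communities"

# Illustrative features per community: [intercept, multi-family housing
# share, sidewalk-density index] — hypothetical, not the study's variables.
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n), rng.uniform(0, 1, n)])
true_beta = np.array([1.0, 0.8, 0.5])
cases = rng.poisson(np.exp(X @ true_beta))  # simulated case counts

# Fit counts ~ Poisson(exp(X @ beta)) by gradient ascent on the log-likelihood.
beta = np.zeros(3)
for _ in range(2000):
    mu = np.exp(X @ beta)               # expected counts under current model
    gradient = X.T @ (cases - mu) / n   # score of the Poisson log-likelihood
    beta += 0.05 * gradient

# exp(coefficient) is the rate ratio: the multiplicative change in expected
# cases per unit increase in that feature, holding the others fixed.
rate_ratio = float(np.exp(beta[1]))
```

The interpretability of these rate ratios is what makes the approach useful for policy: a coefficient maps directly to "communities with X see roughly Y times the expected case count," which is the kind of statement a monitoring or risk-reduction policy can act on.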
Note from the author: We are not health professionals, and the information in this article should not be interpreted as professional advice.
We are a group of scientists, engineers, and entrepreneurs with a vision for better AI. With backgrounds primarily in Machine Learning and Computer Vision, the Innotescus team understands the importance of having full control over and insight into data used to train Machine Learning models.