News 2020

April 2020

New AI Enables Teachers To Rapidly Develop Intelligent Tutoring Systems

"Teaching Computers To Teach" Is Key, Say Carnegie Mellon Researchers

Byron Spice

Intelligent tutoring systems have been shown to be effective in helping to teach certain subjects, such as algebra or grammar, but creating these computerized systems is difficult and laborious. Now, researchers at Carnegie Mellon University have shown they can rapidly build them by, in effect, teaching the computer to teach.

Using a new method that employs artificial intelligence, a teacher can teach the computer by demonstrating several ways to solve problems in a topic, such as multicolumn addition, and correcting the computer if it responds incorrectly.

Notably, the computer system learns to not only solve the problems in the ways it was taught, but also to generalize to solve all other problems in the topic, and do so in ways that might differ from those of the teacher, said Daniel Weitekamp III, a Ph.D. student in CMU's Human-Computer Interaction Institute (HCII).

"A student might learn one way to do a problem and that would be sufficient," Weitekamp explained. "But a tutoring system needs to learn every kind of way to solve a problem." It needs to learn how to teach problem solving, not just how to solve problems.

That challenge has been a continuing problem for developers creating AI-based tutoring systems, said Ken Koedinger, professor of human-computer interaction and psychology. Intelligent tutoring systems are designed to continuously track student progress, provide next-step hints and pick practice problems that help students learn new skills.

When Koedinger and others began building the first intelligent tutors, they programmed production rules by hand — a process, he said, that took about 200 hours of development for each hour of tutored instruction. Later, they developed a shortcut in which they attempted to demonstrate all possible ways of solving a problem. That cut development time to 40 or 50 hours, he noted, but for many topics it is practically impossible to demonstrate all possible solution paths for all possible problems, which reduces the shortcut's applicability.

The new method may enable a teacher to create a 30-minute lesson in about 30 minutes, which Koedinger termed "a grand vision" among developers of intelligent tutors.

"The only way to get to the full intelligent tutor up to now has been to write these AI rules," Koedinger said. "But now the system is writing those rules."

A paper describing the method, authored by Weitekamp, Koedinger and HCII System Scientist Erik Harpstead, was accepted by the Conference on Human Factors in Computing Systems (CHI 2020), which was scheduled for this month but canceled due to the COVID-19 pandemic. The paper has now been published in the conference proceedings in the Association for Computing Machinery's Digital Library.

The new method makes use of a machine learning program that simulates how students learn. Weitekamp developed a teaching interface for this machine learning engine that is user friendly and employs a "show-and-correct" process that's much easier than programming.

For the CHI paper, the authors demonstrated their method on the topic of multicolumn addition, but the underlying machine learning engine has been shown to work for a variety of subjects, including equation solving, fraction addition, chemistry, English grammar and science experiment environments.

The method not only speeds the development of intelligent tutors, but promises to make it possible for teachers, rather than AI programmers, to build their own computerized lessons. Some teachers, for instance, have their own preferences on how addition is taught, or which form of notation to use in chemistry. The new interface could increase the adoption of intelligent tutors by enabling teachers to create the homework assignments they prefer for the AI tutor, Koedinger said.

Enabling teachers to build their own systems also could lead to deeper insights into learning, he added. The authoring process may help them recognize trouble spots for students that, as experts, they don't themselves encounter.

"The machine learning system often stumbles in the same places that students do," Koedinger explained. "As you're teaching the computer, we can imagine a teacher may get new insights about what's hard to learn because the machine has trouble learning it."

This research was supported in part by the Institute of Education Sciences and Google.
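To make the "show-and-correct" idea concrete, here is a minimal Python sketch of a simulated learner for one column of multicolumn addition. It is an illustration of the general approach, not the CMU system's code; the primitive set, class names and the first-match induction strategy are all invented for this example.

```python
# Hypothetical sketch of a "show-and-correct" authoring loop: a simulated
# learner induces a rule for one column of multicolumn addition by searching
# for a primitive operation that explains the teacher's demonstration.

# Primitive functions the simulated learner may consider (assumed set).
PRIMITIVES = {
    "sum": lambda a, b, c: a + b + c,          # column digits plus carry-in
    "ones": lambda a, b, c: (a + b + c) % 10,  # digit written in the column
    "tens": lambda a, b, c: (a + b + c) // 10, # carry passed to next column
}

def explain(demo):
    """Find every primitive whose output matches the demonstrated value."""
    a, b, carry, shown = demo
    return [name for name, f in PRIMITIVES.items() if f(a, b, carry) == shown]

class SimulatedLearner:
    def __init__(self):
        self.rules = {}  # slot name -> primitive name

    def demonstrate(self, slot, demo):
        """Teacher shows a worked step; learner induces a candidate rule."""
        candidates = explain(demo)
        if candidates:
            self.rules[slot] = candidates[0]  # commit to first explanation

    def attempt(self, slot, a, b, carry):
        f = PRIMITIVES.get(self.rules.get(slot, ""))
        return None if f is None else f(a, b, carry)

    def correct(self, slot, demo):
        """Teacher flags a wrong answer; learner re-induces from the fix."""
        for name in explain(demo):
            if name != self.rules.get(slot):  # avoid the rule that failed
                self.rules[slot] = name
                return

learner = SimulatedLearner()
learner.demonstrate("answer_digit", (7, 5, 0, 12))  # ambiguous: "sum" fits too
print(learner.attempt("answer_digit", 8, 6, 0))     # answers 14 (wrong)
learner.correct("answer_digit", (8, 6, 0, 4))       # teacher corrects to 4
print(learner.attempt("answer_digit", 9, 9, 1))     # now answers 9 (correct)
```

The toy run shows why corrections matter: the first demonstration is consistent with more than one rule, and the teacher's correction is what disambiguates them, which is the dynamic the article describes.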

FitByte Uses Sensors on Eyeglasses To Automatically Monitor Diet

CMU Researchers Propose Multimodal System To Track Foods, Liquid Intake

Virginia Alvino Young

Food plays a big role in our health, and for that reason many people trying to improve their diet often track what they eat. A new wearable from researchers in Carnegie Mellon University's School of Computer Science helps wearers track their food habits with high fidelity.

FitByte, a noninvasive, wearable sensing system, combines the detection of sound, vibration and movement to increase accuracy and decrease false positives. It could help users reach their health goals by tracking behavioral patterns, and gives practitioners a tool to understand the relationship between diet and disease and to monitor the efficacy of treatment.

The device tracks all stages of food intake. It detects chewing, swallowing, hand-to-mouth gestures and visuals of intake, and can be attached to any pair of consumer eyeglasses.

"The primary sensors on the device are accelerometers and gyroscopes, which are in almost every device at this point, like your phones and your watches," said Mayank Goel, an assistant professor in the Institute for Software Research and Human-Computer Interaction Institute.

An infrared proximity sensor detects hand-to-mouth gestures. To identify chewing, the system monitors jaw motion using four gyroscopes around the wearer's ears. The sensors look behind the ear to track the flexing of the temporal muscle as the user moves their jaw. High-speed accelerometers placed near the glasses' earpiece perceive throat vibrations during swallowing. This technology addresses the longstanding challenge of accurately detecting drinking and the intake of soft foods like yogurt and ice cream.

A small camera at the front of the glasses points downward to capture just the area around the mouth, and only turns on when the model detects the user eating or drinking. "To address issues of privacy, we're currently processing everything offline," said Abdelkareem Bedri, an HCII doctoral student. "The captured images are not shared anywhere except the user's phone."

At this point, the system relies on users to identify the food and drink in photos. But the research team has plans for a larger test deployment, which will supply the data deep learning models need to automatically discern food type.

FitByte was tested in five unconstrained situations, including a lunch meeting, watching TV, having a quick snack, exercising in a gym and hiking outdoors. Modeling across such noisy data allows the algorithm to generalize across conditions.

"Our team can take sensor data and find behavior patterns. In what situations do people consume the most? Are they binge eating? Do they eat more when they're alone or with other people? We are also working with clinicians and practitioners on the problems they'd like to address," Goel said.

The team will continue developing the system by adding more noninvasive sensors that will allow the model to detect blood glucose levels and other important physiological measures. The researchers are also creating an interface for a mobile app that could share data with users in real time.

Other contributing researchers include CMU students Diana Li, Rushil Khurana and Kunal Bhuwalka. The paper was accepted by the Conference on Human Factors in Computing Systems (CHI 2020), which was scheduled for this month but canceled due to the COVID-19 pandemic. It's available in the ACM Digital Library.
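The multimodal design is the key to fewer false positives. Below is a hedged Python sketch of that fusion idea: a jaw-motion detector alone would also fire on talking, so it is gated on a recent hand-to-mouth event. The sample rate, thresholds and signal model are invented for illustration and are not FitByte's actual pipeline.

```python
# Illustrative sensor fusion: flag "eating" only when jaw-motion energy is
# high AND the proximity sensor saw a hand-to-mouth gesture recently.

import numpy as np

FS = 50  # sample rate in Hz (assumed)

def moving_energy(signal, win=FS):
    """Smoothed short-time energy of a gyroscope magnitude signal."""
    kernel = np.ones(win) / win
    return np.convolve(signal**2, kernel, mode="same")

def detect_intake(gyro_mag, proximity_events, energy_thresh=0.3, gate_s=10):
    """Combine jaw motion with a time-limited gate after each gesture."""
    chewing = moving_energy(gyro_mag) > energy_thresh
    gate = np.zeros_like(chewing)
    for t in proximity_events:            # sample indices of gestures
        gate[t : t + gate_s * FS] = True  # gesture opens a short window
    return chewing & gate

# Toy data: 60 s of noise, chewing-like motion at 20-30 s, talking at 45-50 s.
rng = np.random.default_rng(0)
gyro = 0.1 * rng.standard_normal(60 * FS)
gyro[20*FS:30*FS] += np.sin(2 * np.pi * 1.5 * np.arange(10 * FS) / FS)  # chew
gyro[45*FS:50*FS] += np.sin(2 * np.pi * 4.0 * np.arange(5 * FS) / FS)   # talk

eating = detect_intake(gyro, proximity_events=[19 * FS])  # gesture at t=19 s
print("eating detected at seconds:", np.unique(np.where(eating)[0] // FS))
```

In the toy run, the talking segment has just as much jaw-motion energy as the chewing segment, but only the gated window is reported, which is the false-positive reduction the article attributes to combining modalities.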

New Device Simulates Feel of Walls, Solid Objects in Virtual Reality

Strings Attached to Hand, Fingers Create More Realistic Haptic Feedback

Byron Spice

Today's virtual reality systems can create immersive visual experiences, but seldom do they enable users to feel anything — particularly walls, appliances and furniture. A new device developed at Carnegie Mellon University, however, uses multiple strings attached to the hand and fingers to simulate the feel of obstacles and heavy objects.

By locking the strings when the user's hand is near a virtual wall, for instance, the device simulates the sense of touching the wall. Similarly, the string mechanism enables people to feel the contours of a virtual sculpture, sense resistance when they push on a piece of furniture or even give a high five to a virtual character.

Cathy Fang, who will graduate from CMU next month with a joint degree in mechanical engineering and human-computer interaction, said the shoulder-mounted device takes advantage of spring-loaded strings to reduce weight, consume less battery power and keep costs low.

"Elements such as walls, furniture and virtual characters are key to building immersive virtual worlds, and yet contemporary VR systems do little more than vibrate hand controllers," said Chris Harrison, assistant professor in CMU's Human-Computer Interaction Institute (HCII). User evaluation of the multistring device, as reported by co-authors Harrison, Fang, Robotics Institute engineer Matthew Dworman and HCII doctoral student Yang Zhang, found it was more realistic than other haptic techniques.

"I think the experience creates surprises, such as when you interact with a railing and can wrap your fingers around it," Fang said. "It's also fun to explore the feel of irregular objects, such as a statue."

The team's research paper was named a best paper by the Conference on Human Factors in Computing Systems (CHI 2020), which was scheduled for this month but canceled due to the COVID-19 pandemic. The paper has now been published in the conference proceedings in the Association for Computing Machinery's Digital Library.

Other researchers have used strings to create haptic feedback in virtual worlds, but typically they use motors to control the strings. Motors wouldn't work for the CMU researchers, who envisioned a system both light enough to be worn by the user and affordable for consumers.

"The downside to motors is they consume a lot of power," Fang said. "They also are heavy."

Instead of motors, the team used spring-loaded retractors, similar to those seen in key chains or ID badges. They added a ratchet mechanism that can be rapidly locked with an electrically controlled latch. The springs, not motors, keep the strings taut. Only a small amount of electrical power is needed to engage the latch, so the system is energy efficient and can be operated on battery power.

The researchers experimented with a number of different strings and string placements, eventually concluding that attaching one string to each fingertip, one to the palm and one to the wrist provided the best experience. A Leap Motion sensor, which tracks hand and finger motions, is attached to the VR headset. When it senses that a user's hand is in proximity to a virtual wall or other obstacle, the ratchets are engaged in a sequence suited to those virtual objects. The latches disengage when the person withdraws their hand.

The entire device weighs less than 10 ounces. The researchers estimate that a mass-produced version would cost less than $50.

Fang said the system would be suitable for VR games and experiences that involve interacting with physical obstacles and objects, such as a maze. It might also be used for visits to virtual museums. And, in a time when physically visiting retail stores is not always possible, "you might also use it to shop in a furniture store," she added.
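For readers curious what the per-frame locking logic might look like, here is a bare-bones Python sketch. The scene, distance thresholds and attachment names are all assumptions for illustration; this is not the authors' firmware.

```python
# Hypothetical latch controller: each string's ratchet engages when its
# tracked attachment point nears a virtual surface, and releases on
# withdrawal. Hysteresis prevents chattering at the boundary.

import numpy as np

ENGAGE_MM = 5.0    # lock when closer than this to a surface (assumed)
RELEASE_MM = 15.0  # release only once the hand is clearly withdrawn

def distance_to_wall(p, wall_x=0.0):
    """Toy scene: a single virtual wall at x = wall_x, approached from -x."""
    return wall_x - p[0]

def update_latches(points, latched):
    """points: dict name -> xyz position (mm) from the hand tracker.
    latched: dict name -> bool, updated in place each frame."""
    for name, p in points.items():
        d = distance_to_wall(p)
        if not latched[name] and d < ENGAGE_MM:
            latched[name] = True     # fire electric latch: string locks
        elif latched[name] and d > RELEASE_MM:
            latched[name] = False    # release latch: retractor takes over
    return latched

attachments = ["thumb", "index", "middle", "ring", "pinky", "palm", "wrist"]
latched = {a: False for a in attachments}

# Simulate the index finger approaching the wall while the rest stay back.
for x in [-40.0, -20.0, -4.0, -2.0, -30.0]:
    pts = {a: np.array([-60.0, 0.0, 0.0]) for a in attachments}
    pts["index"] = np.array([x, 0.0, 0.0])
    update_latches(pts, latched)
    print(f"index at x={x:6.1f} mm -> locked: {latched['index']}")
```

The two-threshold (engage/release) design mirrors the article's description of a latch that stays locked while the hand presses against a virtual obstacle and disengages only when the hand withdraws.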

Carnegie Mellon Unveils Five Interactive COVID-19 Maps

County-Level Data Will Be Used To Forecast Disease Activity

Byron Spice

Carnegie Mellon University today unveiled five interactive maps displaying real-time information on symptoms, doctor visits, medical tests and browser searches related to COVID-19 in the United States, including estimated disease activity at the county level.

The maps on CMU's COVIDcast website include data developed with the help of partners including Google, Facebook, Quidel Corp. and a national health system. The data, which is updated daily, will provide the general public and decision makers with a new and unique means of monitoring the ebb and flow of the disease across the country.

"COVIDcast leverages Carnegie Mellon's leadership in machine learning, statistics and data science, and builds upon our partnership with the Centers for Disease Control and Prevention (CDC) in epidemic forecasting at a time when policy makers and health care providers are eager for more insights into the spread of COVID-19," said CMU President Farnam Jahanian. "Our multidisciplinary team of researchers has worked tirelessly to bring together a variety of data sources to support informed decision-making throughout our global society."

Visitors to COVIDcast can use tabs to select which data source is visualized on the U.S. map; to display the data at the level of states, metropolitan areas or counties; and to show either intensity of activity or whether activity is rising or falling.

COVIDcast was created by Carnegie Mellon's Delphi Research Group and its COVID-19 Response Team, which has drawn many volunteer faculty, students and staff in recent weeks. The team has collected, analyzed and, in some cases, created these data sources to paint a more complete, detailed and up-to-date picture of current COVID-19 activity.

The group plans to use this enhanced information in forecasting disease activity. These forecasts will provide up to four weeks of advance warning to hospitals in a given locale that they likely will see increases in the number of people requiring hospital care.

"This is not the finish line for us — it's just the beginning," said Ryan Tibshirani, co-leader of the Delphi group and an associate professor of statistics and machine learning.

The forecasts, as well as "nowcasts" that attempt to provide a combined, integrated view of current conditions, promise to provide important guidance as government and health care officials plan next steps in addressing the pandemic. Jodi Forlizzi, professor and director of CMU's Human-Computer Interaction Institute, led the team that created the visualizations of the data sources.

The data sources include responses to CMU surveys by users of Google Surveys and by users of Facebook. The surveys, hosted by CMU, ask people if they know someone who is experiencing fever, cough and other symptoms related to COVID-19. About 600,000 Google users reply each day to the one-question survey: "Do you know someone in your community who is sick (fever, along with cough, or shortness of breath, or difficulty breathing) right now?" These self-reported symptoms enable the CMU Delphi group to make estimates of disease activity, which they have shown are strongly correlated with test-confirmed cases of COVID-19.

Delphi began reporting the survey results from Facebook users earlier this week. About a million Facebook users a week have responded to CMU's multiple-question surveys regarding symptoms; Facebook is displaying the estimates generated by Delphi based on those results.

Another data source for the maps is Google Health Trends, which has provided data for the Delphi group's influenza forecasts for the past five years. For the latest forecasting project, Delphi uses the Google Health Trends interface to estimate how often people in a given location and on a given day search Google for topics related to COVID-19.

A national health system also is providing statistics on patient visits to doctors and telemedicine visits. This enables the CMU Delphi researchers to estimate the percentage of visits for COVID-19 related symptoms in any given location for a given day.

Quidel Corp., a medical test maker, provides the group with statistics on influenza tests. Flu tests are routinely ordered for people suffering COVID-19 symptoms as a means of excluding flu as a diagnosis; thus, requests for flu tests are indicators of possible COVID-19 activity.

"All of these signals are just rough indicators of COVID-19," emphasized Roni Rosenfeld, co-leader of Delphi and head of CMU's Machine Learning Department. "Any one data source may not be conclusive, but if multiple sources indicate the same thing, people can have greater confidence about what is happening or will soon happen in various locales."

Tibshirani and Rosenfeld said they expect to add several additional data sources in the weeks ahead as they prepare to begin forecasting. The Delphi group is continuously providing all of its estimates in a computer-accessible way to anyone. Details are available on GitHub.

The Delphi Research Group has been performing epidemic forecasts for the past eight years, most notably for each influenza season. Last year, the CDC named Delphi one of two National Centers of Excellence for Influenza Forecasting. At the CDC's request, the group this spring extended and adapted its flu forecasting efforts to encompass COVID-19.
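Rosenfeld's point about combining rough indicators can be illustrated with a few lines of code. The sketch below standardizes each signal and averages whatever is available per county into a composite index. This is a toy illustration of the general principle, not Delphi's actual estimation method, and the numbers are invented.

```python
# Illustrative composite of several noisy indicators: standardize each
# signal across counties, then average the signals present for each county.

import numpy as np

def composite_index(indicators):
    """indicators: 2-D array, rows = signals (survey %, search volume,
    doctor visits, ...), columns = counties; NaN where a signal is missing."""
    mean = np.nanmean(indicators, axis=1, keepdims=True)
    std = np.nanstd(indicators, axis=1, keepdims=True)
    z = (indicators - mean) / std   # per-signal standardization
    return np.nanmean(z, axis=0)    # average whatever signals exist

signals = np.array([
    [1.2, 2.5, np.nan, 4.0],    # % of survey respondents reporting symptoms
    [80., 150., 90., 300.],     # COVID-related search volume
    [3.0, 6.0, 2.5, np.nan],    # % of doctor visits with related symptoms
])
print(np.round(composite_index(signals), 2))  # one activity score per county
```

Standardizing first puts survey percentages, search counts and visit rates on a common scale, so a county that looks elevated on several independent signals stands out even when any single signal is inconclusive.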

Hammer Earns NSF CAREER Award

Virginia Alvino Young

Jessica Hammer, the Thomas and Lydia Moran Assistant Professor of Learning Science in the School of Computer Science's Human-Computer Interaction Institute, has received a National Science Foundation Faculty Early Career Development (CAREER) Award, the organization's most prestigious award for young faculty members. The $550,000 award will support her work on creating learning-supportive game-streaming interfaces.

Hammer's proposed project will apply her research interests in games and learning theory to the game streaming website Twitch.tv. Many viewers already use Twitch to learn about everything from crafting to coding. To make the platform a more effective learning environment, Hammer will use learning theory to inform the design of a more interactive viewer interface and will create new educational games that take advantage of viewer participation.

Hammer, who holds a joint appointment with CMU's Entertainment Technology Center, will study user needs and how the system affects viewer learning, and will work with classroom instructors to better understand how these systems can be deployed. The project will also create new research tools and support stream-based embedded assessments.

"Making game-streaming platforms effective learning environments can increase access to existing educational games, for example, by reducing financial barriers to entry," said Hammer, who added that the project will encourage graduate and undergraduate students from underrepresented minority groups to participate in computing. "Our research will enable educational game companies to extend their games for streaming environments, and to incorporate our stream-based data collection tools into their development processes."

All systems from this project will be released as open source so other researchers can build on the team's progress. The curriculum materials and software tools used by Hammer's classes will also be made available to other instructors who wish to teach courses in this area.

Hammer earned her B.A. at Harvard University, her M.S. from the NYU Interactive Telecommunications Program and her Ph.D. in cognitive studies at Columbia University. She is also an award-winning game designer.

Self-Reported COVID-19 Symptoms Show Promise for Disease Forecasts

Carnegie Mellon Will Soon Forecast Coronavirus Activity Several Weeks Ahead

Byron Spice

Self-reported descriptions of COVID-19-related symptoms, which Carnegie Mellon University researchers are gathering nationwide with the help of Facebook and Google, correlate well with test-confirmed cases of the disease, suggesting self-reports might soon help the researchers in forecasting COVID-19 activity.

Ryan Tibshirani, co-leader of Carnegie Mellon's Delphi COVID-19 Response Team, said millions of responses to CMU surveys by Facebook and Google users are providing the team with real-time estimates of disease activity at the county level for much of the United States.

"I'm very happy with both the Facebook and Google survey results," said Tibshirani, associate professor of statistics and machine learning. "They both have exceeded my expectations."

The survey results, combined with data from additional sources, provide real-time indications of COVID-19 activity not previously available from any other source. This information will be made publicly available at CMU's COVIDcast website, and Facebook has made the aggregated survey information from its users available.

CMU launched its COVIDcast site today, featuring estimates of coronavirus activity based on those same surveys from Facebook users. Later this week, the COVIDcast site will debut interactive heat maps of the United States, displaying survey estimates from not only Facebook, but also Google users. The maps also will include anonymized data provided by other partners, including Quidel Corp. and a national health care provider.

Tibshirani said the survey responses, combined with other data such as medical claims and medical testing, will enable the CMU team to generate estimates of disease activity that are more reflective of reality than what is now available from positive coronavirus tests alone. Most of the data sources are available on a county level, and the researchers say they have good coverage of the 601 U.S. counties with at least 100,000 people.

Within a few weeks, they expect to use these estimates to provide forecasts that will help hospitals, first responders and other health officials anticipate the number of COVID-19 hospitalizations and ICU admissions likely to occur in their locales several weeks in advance.

Thus far, CMU is seeing about one million responses per week from Facebook users. Last week, almost 600,000 users of the Google Opinion Rewards and AdMob apps were answering another CMU survey each day.

Using these and other unique data sources, the CMU researchers will monitor changes over time, enabling them to forecast COVID-19 activity several weeks into the future. They also plan to use this information to produce "nowcasts," integrated estimates of current disease activity that they expect will be more reflective of reality than daily compilations of test-confirmed COVID-19 cases.

Roni Rosenfeld, co-leader of the CMU Delphi research group and head of the Machine Learning Department, said relying only on positive test results may not provide a complete picture of disease activity because of limited test capacity, reporting delays and other factors.

For this COVID-19 project, Carnegie Mellon's Delphi research group, which has now grown to include about 30 faculty members, students and other volunteers, is leveraging years of expertise as the preeminent academic center for forecasting influenza activity nationwide. Last year, the U.S. Centers for Disease Control and Prevention designated the Delphi group as one of two National Centers of Excellence for Influenza Forecasting. At the CDC's request, the group has extended and adapted its flu forecasting efforts to encompass COVID-19.

Delphi uses two main approaches to forecasting, both of which have proven effective for the flu. One, called Crowdcast, is a "wisdom of the crowds" approach that bases its predictions on the aggregate judgments of human volunteers who submit weekly estimates. The other uses statistical machine learning to recognize patterns in health care data that relate to past experience.

"This forecasting problem is so complicated that we believe that a diversity of data and approaches is our best weapon," Tibshirani said.

To aid in COVID-19 forecasting, Facebook each day invites some of its U.S. users to voluntarily answer a CMU survey about any COVID-19 symptoms they might be experiencing. CMU controls the survey, and individual responses are not shared with Facebook. Likewise, Google is helping CMU distribute one-question surveys to its users; results also are not shared with Google. Since 2016, Google Health Trends has been providing CMU with information about searches that its users perform each day for flu and, more recently, for COVID-19-related terms.

A major health care provider is sharing anonymized inpatient and outpatient COVID-related counts, and Quidel, a diagnostic test provider, is sharing anonymized national lab test statistics. Rosenfeld said they hope to bolster their forecasting efforts by adding another five data sources in the next several weeks.

"We're deeply appreciative of the help we are receiving from Facebook, Google and our other partners," Rosenfeld said. "The data they provide is priceless and will give us greater confidence once we are able to begin our forecasts for this deadly disease."
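The forecasting value of self-reports rests on a simple statistical idea: if symptom reports lead hospitalizations by some number of days, the lead time can be found by checking which lag maximizes the correlation between the two series. Here is a toy Python sketch of that check, with synthetic data; it is not the Delphi team's analysis code.

```python
# Toy lead-time analysis: find the lag at which a symptom signal best
# predicts a downstream series such as hospitalizations.

import numpy as np

def lagged_corr(signal, target, lag):
    """Correlation between today's signal and the target `lag` days later."""
    if lag == 0:
        return np.corrcoef(signal, target)[0, 1]
    return np.corrcoef(signal[:-lag], target[lag:])[0, 1]

days = np.arange(60)
rng = np.random.default_rng(1)
symptoms = np.sin(days / 9.0) + 0.1 * rng.standard_normal(60)
hospitalizations = np.roll(symptoms, 7)   # synthetic: trails by ~a week
hospitalizations[:7] = symptoms[0]        # pad the wrapped-around start

best = max(range(15), key=lambda k: lagged_corr(symptoms, hospitalizations, k))
print("most predictive lead time:", best, "days")
```

With real data the lag structure is noisier, but the same logic underlies the claim that a spike in self-reported symptoms gives hospitals advance warning of a spike in admissions.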

Fragkiadaki Earns NSF CAREER Award

Virginia Alvino Young

Katerina Fragkiadaki, an assistant professor in the School of Computer Science's Machine Learning Department, has received a National Science Foundation Faculty Early Career Development (CAREER) Award, the organization's most prestigious award for young faculty members. The five-year, $546,000 award will support her work on computer vision.

Fragkiadaki's research interests include computer vision, robot behavior learning and visual language grounding. Her NSF-supported project will help her develop neural network architectures that take video inputs and not only learn to differentiate between camera motion and the scene, but also capture that scene and translate it into 3D maps. The agents are trained to predict the future rather than labels of objects and actions, greatly reducing the need for human supervision in learning.

The proposed research could impact the control of any vision-enabled mobile agents, such as ground vehicles and drones. It could also be instrumental in reducing the cost of programming robots and other technology, such as personal assistants, and in bringing AI systems closer to human-level performance in visual reasoning.

Fragkiadaki earned her bachelor's degree at the National Technical University of Athens and her Ph.D. at the University of Pennsylvania. Before joining CMU, she was a post-doctoral researcher at the University of California, Berkeley, and at Google Research.

Jian Ma Wins Guggenheim Fellowship

Byron Spice

Jian Ma, an associate professor in the Computational Biology Department, is one of 175 scientists, writers, artists and other scholars awarded 2020 Guggenheim Fellowships by the John Simon Guggenheim Memorial Foundation. The latest class of fellows was selected from almost 3,000 applicants based on their prior achievement and exceptional promise, the foundation said.

Ma, who joined Carnegie Mellon University in 2016, will receive funding for his work in developing algorithms to compare genome structure and function in different biological contexts. This subject has been a major body of work in his lab in recent years. For instance, in a paper published in the journal Genome Research in February, Ma and his colleagues described how they took an algorithm used to study social networks and adapted it to identify how DNA and proteins are interconnected into communities within the cell nucleus.

Ma earned his Ph.D. in computer science at Penn State University in 2006. He received his post-doctoral training in the laboratory of David Haussler at the University of California, Santa Cruz, before joining the faculty of the University of Illinois at Urbana-Champaign in 2009. His previous recognitions include a National Science Foundation CAREER Award in 2011. In addition to his appointment in the Computational Biology Department, he is an affiliated faculty member in the Machine Learning Department.

Since its establishment in 1925, the Guggenheim Foundation has granted more than $375 million in fellowships. Scores of fellows have gone on to win such major awards as the Nobel Prize, the Turing Award and the Fields Medal.

Balcan Receives ACM Grace Murray Hopper Award

Virginia Alvino Young

Maria Florina "Nina" Balcan, an associate professor in the School of Computer Science's Machine Learning and Computer Science Departments, has received the 2019 Association for Computing Machinery (ACM) Grace Murray Hopper Award for her significant innovations in machine learning and minimally supervised learning. The award is given to the outstanding young computer professional of the year and includes a $35,000 prize.

Balcan's research interests include learning theory, machine learning, artificial intelligence, theory of computing, algorithmic economics and algorithmic game theory, and optimization.

The ACM is the world's largest educational and scientific computing society. Its president, Cherri M. Pancake, lauded Balcan for accomplishing so much before age 35. "Although she is still in the early stages of her career, she has already established herself as the world leader in the theory of how AI systems can learn with limited supervision," Pancake said. "More broadly, her work has realigned the foundations of machine learning, and consequently ushered in many new applications that have brought about leapfrog advances in this exciting area of artificial intelligence."

Balcan introduced the first theoretical framework for semi-supervised learning — a technique used to increase training data in machine learning and improve predictive accuracy. Her work advanced the tool and enabled the subsequent work of many other researchers. Balcan has also made significant contributions in the techniques of active learning and clustering.

Balcan received bachelor's and master's degrees from the University of Bucharest in 2000 and 2002, respectively, and earned a Ph.D. in computer science from Carnegie Mellon in 2008. She was honored with a National Science Foundation CAREER Award in 2009, a Microsoft Faculty Fellowship in 2011 and a Sloan Research Fellowship in 2014. She has served as program committee co-chair for all three of the major machine learning conferences: the Conference on Neural Information Processing Systems (NeurIPS), the International Conference on Machine Learning (ICML) and the Conference on Learning Theory (COLT).
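For readers unfamiliar with semi-supervised learning, the toy Python sketch below conveys the core idea: a handful of labeled points plus many unlabeled ones can recover structure that the labels alone cannot. It uses simple self-training with nearest centroids, a deliberately basic method chosen for illustration; it does not represent Balcan's theoretical framework.

```python
# Toy semi-supervised learning: one labeled point per class, many unlabeled
# points, and a self-training loop that pseudo-labels and refits centroids.

import numpy as np

rng = np.random.default_rng(42)
X0 = rng.normal([-2, 0], 1.0, size=(100, 2))   # class 0 cluster
X1 = rng.normal([+2, 0], 1.0, size=(100, 2))   # class 1 cluster
labeled_X = np.array([X0[0], X1[0]])           # just one label per class
labeled_y = np.array([0, 1])
unlabeled = np.vstack([X0[1:], X1[1:]])

centroids = labeled_X.copy()
for _ in range(5):  # self-training rounds
    d = np.linalg.norm(unlabeled[:, None] - centroids[None], axis=2)
    pseudo = d.argmin(axis=1)                  # pseudo-label every point
    for c in (0, 1):                           # refit centroid per class
        pts = np.vstack([labeled_X[labeled_y == c], unlabeled[pseudo == c]])
        centroids[c] = pts.mean(axis=0)

truth = np.array([0] * 99 + [1] * 99)
print("accuracy on unlabeled points:", (pseudo == truth).mean())
```

The unlabeled points effectively expand the training data, which is the sense in which the article says the technique is "used to increase training data ... and improve predictive accuracy."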

Admoni Earns NSF CAREER Award

Virginia Alvino Young

Henny Admoni, an assistant professor in the School of Computer Science's Robotics Institute, has received a National Science Foundation Faculty Early Career Development (CAREER) Award, the organization's most prestigious award for young faculty members. The five-year, $550,000 award will support her work on robotic assistive technologies.

Admoni's research combines her expertise in human-robot interaction (HRI) and cognitive psychology to enable those with severe motor impairments to independently navigate daily tasks such as preparing food and eating. Many existing assistive robots are reactive, but Admoni sees that changing. "The next major advance in HRI will involve robots that can proactively anticipate and respond to people's needs, just as an experienced caregiver does," she said.

The NSF award will support Admoni's project investigating how human eye gaze can reveal when and how people need assistance in daily activities. She'll develop assistance algorithms that monitor eye gaze and respond with robot actions, and will perform studies to evaluate those algorithms with users who have upper motor impairments. HRI courses and texts will also be developed.

Admoni holds a Ph.D. in computer science from Yale University, and a joint B.A./M.A. in computer science from Wesleyan University.
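One simple way gaze can reveal intent, often used in the HRI literature, is dwell time: a sustained fixation on an object suggests the user wants it. The Python sketch below illustrates that idea only; the threshold, frame rate and object labels are invented, and this is not Admoni's algorithm.

```python
# Hypothetical dwell-based intent detector: report the object the user has
# fixated continuously for at least DWELL_S seconds.

DWELL_S = 1.5   # assumed dwell threshold before the robot offers help
FS = 30         # assumed gaze-tracker frame rate (Hz)

def detect_intent(gaze_targets):
    """gaze_targets: per-frame label of the object under the user's gaze
    (None when gaze is not on any object)."""
    need = int(DWELL_S * FS)
    run_obj, run_len = None, 0
    for obj in gaze_targets:
        run_len = run_len + 1 if obj == run_obj else 1
        run_obj = obj
        if obj is not None and run_len >= need:
            return obj
    return None

# Half a second each on the fork and on nothing, then 2 s on the glass.
frames = ["fork"] * 15 + [None] * 15 + ["glass"] * 60
print("robot should assist with:", detect_intent(frames))
```

A proactive assistive robot would pair a signal like this with task context, which is the harder problem the CAREER project targets.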

Facebook and Carnegie Mellon Team To Gather COVID-19 Symptom Data

Researchers Seek County-Level Statistics To Forecast Pandemic in U.S.

Byron Spice

Beginning today, Facebook is assisting Carnegie Mellon University in gathering data about U.S. residents who are experiencing symptoms consistent with COVID-19 — information that is available from no other source and could help researchers in forecasting the spread of the pandemic.

Some Facebook users will now see a link at the top of their news feed that will lead them to an optional survey operated by Carnegie Mellon. Information from the survey will be used by CMU for its pandemic forecasting efforts and also will be shared with other collaborating universities. Aggregate information from the survey will be shared publicly.

"We're hoping for millions of people to take the survey each week," said Ryan Tibshirani, associate professor of statistics and machine learning. Obtaining the help of a company such as Facebook is crucial for this endeavor, he added. The goal is to gather data at the county level on people who have COVID-19 symptoms.

"We don't have good data at this point regarding symptomatic infections," explained Tibshirani, co-leader of CMU's Delphi Research Group, one of two Influenza Forecasting Centers of Excellence designated last year by the U.S. Centers for Disease Control and Prevention. "People have been discouraged from visiting physician offices and hospitals," he noted. "The only way to get this is with the survey.

"This data has the potential to be extremely valuable for forecasts, because a spike in symptomatic infections might be indicative of a spike in hospitalizations to come," he added.

If county-level data isn't possible, the researchers at least would like to get data for hospital referral regions, which could encompass multiple counties.

Tibshirani said he hopes to enlist other companies that have large online platforms in the effort. CMU's Delphi group is actively working with Google, which ran an initial survey last week, and is reaching out to additional companies.

"Facebook is providing us with users, but they are not involved in conducting the survey," Tibshirani said. Facebook will share a random ID number with CMU for each participant. Once that participant completes the survey, CMU will send the ID number back to Facebook — but none of the replies. Facebook will then provide a statistic known as a weight value that will help CMU correct for any sample bias.

Getting lots of people to answer the survey is important, Tibshirani emphasized. "We are trying to measure something for which there is no ground truth yet," he explained. No other source is available to verify the survey findings, "so the only way for us to feel confident about the results is to gather data from as many sources as possible, for example, from Google and other companies. The reason we need to blast this to as many people as possible is to get enough data at the county level."

The survey includes several questions particularly pertinent to CMU's forecasting efforts. Additional questions have been added for other COVID-19 research efforts by universities that are in talks to join the collaboration, Tibshirani said.
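The weight values mentioned above correct for the fact that survey takers are not a random sample of the population. The Python sketch below shows the basic mechanics of such a correction; the IDs, responses and weights are invented toy data, not the actual CMU/Facebook exchange.

```python
# Illustrative bias correction with platform-supplied weights: CMU sees only
# (random ID, survey answers); weight values arrive keyed by the same IDs.

import numpy as np

# Survey responses keyed by random ID (1 = reports knowing someone ill).
responses = {101: 1, 102: 0, 103: 0, 104: 1, 105: 0}

# Weight values for those IDs: under-represented groups get weight > 1.
weights = {101: 0.6, 102: 1.4, 103: 1.1, 104: 0.5, 105: 1.4}

ids = list(responses)
y = np.array([responses[i] for i in ids], dtype=float)
w = np.array([weights[i] for i in ids])

raw = y.mean()                        # naive estimate, biased by who responds
adjusted = np.average(y, weights=w)   # weighted estimate of the symptom rate
print(f"raw estimate: {raw:.1%}, bias-corrected estimate: {adjusted:.1%}")
```

In this toy example the respondents who report symptoms happen to come from over-represented groups, so the weighted estimate (22%) is well below the raw one (40%), showing why the weight exchange matters even though no survey replies are shared with Facebook.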

CMU Spinoff Offers AI Platform To Help Governments Address COVID-19

Byron Spice

People who manage public facilities and spaces during the COVID-19 pandemic have lots of new questions that artificial intelligence and computer vision technology could help answer, such as: Are people maintaining safe social distances? What surfaces are people touching that may need cleaning? How many people are wearing face masks?

Zensors, a two-year-old spinoff of Carnegie Mellon University's Human-Computer Interaction Institute, is responding to this need by making its computer vision platform available for free to governments, airports and essential businesses, and is further inviting machine learning researchers to collaborate on using the data toward better disease management.

Anuraag Jain, an HCII alumnus and inventor of the Zensors technology, said airports and other potential clients contacted the company as the COVID-19 threat grew, seeking assistance in using computer vision to help them manage their facilities. "Rather than profiting off them, we thought we would give our help for free," Jain said, at least through June 1.

The company, based in Pittsburgh and San Francisco, provides a platform that can use a variety of cameras, including existing security cameras, as sensing devices. The technology was developed by the Future Interfaces Group, led by Chris Harrison, an assistant professor in the HCII. Machine learning technology in the platform enables the cameras to extract data from images, such as the number of people occupying a space, or the density and number of people standing in line.

"We track activity levels, not individuals," Jain said. "If the police used Zensors, they could not track individuals because it's just not possible with our platform."

Prior to the rise of the novel coronavirus, Zensors had a number of large clients, such as airports and city governments, Jain said. "In the past, a lot of people looked at the product as something that might help improve their business operations," he added. "Now, the lens is somewhat different. We look at the COVID-19 pandemic as the biggest challenge facing governments today: how do you close a city? How do you enforce a closure when your personnel are already stretched? We think our platform can be part of the answer."
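To show the kind of aggregate question such a platform answers, here is a short Python sketch that checks social-distancing compliance from person detections in a single frame. The pixel-to-feet calibration and detection coordinates are assumptions for illustration; Zensors' actual models and interfaces are not described in detail here.

```python
# Illustrative distancing check: given person centroids detected in a frame,
# count pairs closer than six feet. Tracks activity levels, not identities.

from itertools import combinations
import math

FEET_PER_PIXEL = 0.05  # assumed calibration for a fixed overhead camera

def too_close(detections, min_feet=6.0):
    """detections: list of (x, y) person centroids in pixels.
    Returns (i, j, distance_ft) for each pair under the threshold."""
    violations = []
    for (i, a), (j, b) in combinations(enumerate(detections), 2):
        dist = math.hypot(a[0] - b[0], a[1] - b[1]) * FEET_PER_PIXEL
        if dist < min_feet:
            violations.append((i, j, round(dist, 1)))
    return violations

people = [(100, 240), (160, 250), (420, 300), (440, 310)]
print("pairs under 6 ft:", too_close(people))
```

Note that the output is a count of violating pairs per frame, a space-level statistic; nothing about the computation requires identifying who the individuals are, consistent with Jain's description.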

Jessica Lee Wins Prestigious Goldwater Scholarship

Byron Spice

Jessica Lee, a junior majoring in computer science, is one of four Carnegie Mellon University students selected to receive a 2020 Barry Goldwater Scholarship, which is awarded to sophomores and juniors who show promise as leaders in the natural sciences, engineering and mathematics.

Lee, along with CMU biological sciences major Cassandra Bishop, neuroscience major Shiv Sethi and math major Noah Stevenson (all juniors), is among 396 recipients this year from across the United States. Each university is allowed to nominate no more than four students to the Goldwater program; it is unusual for any university to see all four nominees selected. Given by the federally endowed Barry Goldwater Scholarship and Excellence in Education Foundation, the award provides up to $7,500 per year for tuition, fees, books, and room and board.

Lee belongs to the School of Computer Science's Student Advisory Council and plans to pursue a Ph.D. in computer vision and machine learning. Her goal is to develop techniques that make machine learning algorithms more efficient, scalable and explainable — similar to how a human brain is able to learn quickly. Obtaining a Goldwater scholarship, the most prestigious STEM scholarship for undergraduates, promises to be a big help in attaining those goals, she said.

"It's definitely hard to convince your parents to be on board with pursuing a Ph.D. instead of finding a job after graduation that pays a lot more," Lee explained. "However, the Goldwater enables student researchers to both be recognized for their work and to encourage them to pursue a research career in the future by getting involved in a nationwide community of researchers."

She is leaning toward a career in industry, but hasn't ruled out an academic career. Lee is one of 50 math and computer science majors and among 203 women to receive 2020 Goldwater scholarships.

Smartphone Videos Produce Highly Realistic 3D Face Reconstructions

Carnegie Mellon Method Foregoes Expensive Scanners, Camera Setups, Studios

Byron Spice

Normally, it takes pricey equipment and expertise to create an accurate 3D reconstruction of someone's face that's realistic and doesn't look creepy. Now, Carnegie Mellon University researchers have pulled off the feat using video recorded on an ordinary smartphone.

Using a smartphone to shoot a continuous video of the front and sides of the face generates a dense cloud of data. A two-step process developed by CMU's Robotics Institute uses that data, with some help from deep learning algorithms, to build a digital reconstruction of the face. The team's experiments show that their method can achieve sub-millimeter accuracy, outperforming other camera-based processes.

A digital face might be used to build an avatar for gaming or for virtual or augmented reality, and could also be used in animation, biometric identification and even medical procedures. An accurate 3D rendering of the face might also be useful in building customized surgical masks or respirators.

"Building a 3D reconstruction of the face has been an open problem in computer vision and graphics because people are very sensitive to the look of facial features," said Simon Lucey, an associate research professor in the Robotics Institute. "Even slight anomalies in the reconstructions can make the end result look unrealistic."

Laser scanners, structured light and multicamera studio setups can produce highly accurate scans of the face, but these specialized sensors are prohibitively expensive for most applications. CMU's newly developed method, however, requires only a smartphone.

The method, which Lucey developed with master's students Shubham Agrawal and Anuj Pahuja, was presented in early March at the IEEE Winter Conference on Applications of Computer Vision (WACV) in Snowmass, Colorado. It begins with shooting 15-20 seconds of video. In this case, the researchers used an iPhone X in the slow-motion setting.

"The high frame rate of slow motion is one of the key things for our method because it generates a dense point cloud," Lucey said.

The researchers then employ a commonly used technique called visual simultaneous localization and mapping (SLAM). Visual SLAM triangulates points on a surface to calculate its shape, while at the same time using that information to determine the position of the camera. This creates an initial geometry of the face, but missing data leave gaps in the model.

In the second step of this process, the researchers work to fill in those gaps, first by using deep learning algorithms. Deep learning is used in a limited way, however: it identifies the person's profile and landmarks such as ears, eyes and nose. Classical computer vision techniques are then used to fill in the gaps.

"Deep learning is a powerful tool that we use every day," Lucey said. "But deep learning has a tendency to memorize solutions," which works against efforts to include distinguishing details of the face. "If you use these algorithms just to find the landmarks, you can use classical methods to fill in the gaps much more easily."

The method isn't necessarily quick; it took 30-40 minutes of processing time. But the entire process can be performed on a smartphone.

In addition to face reconstructions, the CMU team's methods might also be employed to capture the geometry of almost any object, Lucey said. Digital reconstructions of those objects can then be incorporated into animations or perhaps transmitted across the internet to sites where the objects could be duplicated with 3D printers.
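The division of labor in the second step, where learned landmarks localize the face and classical methods do the filling, can be sketched in a few lines. The example below fills holes in a depth map with plain interpolation restricted to a landmark-derived mask. It is a simplified illustration under assumed data shapes, not the paper's implementation.

```python
# Simplified gap filling: deep learning is assumed to have produced a face
# mask; classical interpolation (no neural net) fills the SLAM depth holes.

import numpy as np
from scipy.interpolate import griddata

def fill_depth_gaps(depth, face_mask):
    """depth: 2-D array with NaN where SLAM produced no points.
    face_mask: bool array from learned landmarks; fill only inside it."""
    ys, xs = np.nonzero(~np.isnan(depth))          # known samples
    hy, hx = np.nonzero(np.isnan(depth) & face_mask)  # holes to fill
    filled = depth.copy()
    filled[hy, hx] = griddata(
        np.column_stack([ys, xs]), depth[ys, xs],
        np.column_stack([hy, hx]), method="linear")
    return filled

# Toy 5x5 depth patch with a two-pixel hole inside the face region.
depth = np.array([[10., 10, 10, 10, 10],
                  [10, 11, np.nan, 11, 10],
                  [10, 11, np.nan, 11, 10],
                  [10, 10, 10, 10, 10],
                  [10, 10, 10, 10, 10]])
mask = np.ones_like(depth, dtype=bool)
print(fill_depth_gaps(depth, mask)[1:3, 2])  # interpolated hole values
```

Because interpolation only ever reproduces what the surrounding measurements support, it cannot "memorize" a generic face the way a neural network might, which matches Lucey's rationale for limiting deep learning to landmark finding.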