News 2021

January 2021

Black and Hispanic Americans Report Low Rates of COVID-19 Vaccinations

Byron Spice

Daily national surveys by Carnegie Mellon University show Black and Hispanic Americans are far less likely than whites to report that they have received COVID-19 vaccinations. Just 6.4% of African Americans and 6.8% of Hispanic Americans say they have received the vaccines, compared to 9.3% of whites. American Indians/Alaska Natives and people of Asian descent have the highest self-reported rates of vaccinations, at 12.9% and 12.3%, respectively. The surveys of Facebook users are conducted daily by members of CMU's Delphi Research Group, with the support of Facebook's Data for Good program. The percentage of respondents who say they have been vaccinated is based on 300,000 survey responses from Jan. 9 to Jan. 15. An analysis of those survey findings by Alex Reinhart, assistant teaching professor in CMU's Department of Statistics and Data Science, and Facebook research scientists Esther Kim, Andy Garcia and Sarah LaRocca noted that the high rate of vaccinations among American Indians and Alaska Natives corroborates the efficient vaccine rollout by the Indian Health Service. The researchers note that the wide racial and ethnic disparity among those who say they have received COVID-19 shots likely results from many factors, such as minority groups being less likely to have access to affordable healthcare and having reduced trust in medicine because of decades of discrimination. "These disparities highlight long-standing gaps in Americans' access to and trust in medicine, and show that much work remains to be done to ensure everyone has access to healthcare they trust," Reinhart said. Asian Americans reported the highest level of vaccine acceptance, at 88%. White and Hispanic people are close behind, at 76% and 73%, respectively. Just 58% of Black Americans, however, said they would get a shot if it were offered. In some states, the survey responses indicate Black Americans are more than twice as likely to worry about vaccine side effects as white Americans. Among healthcare workers, who were one of the first groups given access to the vaccines, more men reported receiving the vaccine (59%) than women (51%). "By running this survey daily and releasing aggregate data publicly, we hope to help health officials and policymakers extend vaccine access to those who need it most," Reinhart said. CMU's Delphi Research Group began daily surveys related to COVID-19 last April, initially focusing on self-reported symptoms. The survey was later expanded to include factors such as mask use and vaccine acceptance. The findings are updated daily and made available to the public on CMU's COVIDcast website. Delphi researchers use the data to produce forecasts of COVID-19 activity at state and county levels, which are reported to the U.S. Centers for Disease Control and Prevention. Facebook distributes the surveys to a portion of its users each day as part of its Data for Good program. Facebook does not receive any individual survey information from users; CMU conducts the surveys off Facebook and manages all the findings. The University of Maryland likewise works with Facebook to gather international data on the pandemic.

CMU Team Uses Machine Learning To Predict Peatland Fires

A Major Source of Carbon Dioxide Emissions, Peat Wildfires Can Persist For Years

Byron Spice

A team of Carnegie Mellon University seniors has for the first time used deep learning techniques to predict fires and their spread in peatlands, which have become a major source of global carbon dioxide emissions. The machine learning algorithms outperformed previous computer models for peat fires. The team also assembled the first dataset, PeatSet, designed for the problem of peatland fire prediction. Other researchers can now use this dataset to train their own machine learning programs and to extend the CMU team's work on predicting the severity of peat fires. "There's not a lot of existing research on fire prediction for peatlands," said Sydney Zheng, a senior computer science major from Virginia. "When you think about fires, you think of forests." But peatland fires are growing in frequency and, because peat is carbon-rich, they emit far more carbon dioxide than forest fires. As Zheng and her fellow students demonstrated, additional research is badly needed. The team created their neural networks and dataset for a three-month machine learning research competition hosted by the University of Toronto, called ProjectX, in which researchers addressed climate-related issues. The team finished second in the contest's weather and natural disaster prediction category. Peatlands are wetlands where plant material decomposes to form peat. Normally, fire isn't a problem in these bogs and marshes, but natural droughts cause some peatlands to dry out, while humans drain other peatlands for development. Droughts related to climate change exacerbate this trend. Though they represent only 3% of the planet's land area, the carbon-rich peatlands store twice as much carbon as the world's forests. In 2015, daily carbon dioxide emissions from Indonesian fires — mainly peat fires — exceeded daily emissions for the entire United States. Shreya Bali, a senior computer science major from India, said peatland fires can be enigmatic. They can travel underground, cropping up in distant spots, and can smolder beneath snow cover. They tend to be small and burn at relatively low temperatures and so can escape detection by satellites. Peatlands tend to be sparsely inhabited, increasing the chances a fire can go unnoticed. "Yet once a peat fire erupts, it can be huge," she added. Machine learning typically requires large datasets, which are used to train the neural nets. But data was scarce for such an understudied area. One of the first tasks for the students was to reach out to researchers with expertise relevant to peatlands, such as emissions experts, remote sensing experts, fire and peatland fire experts, and data organizers, said Justin Khim, a post-doctoral researcher in the Machine Learning Department. Reid Simmons, the director of the School of Computer Science's AI major and the team's faculty advisor, recruited Khim to mentor the team. Zheng said even the peatland experts didn't have a coherent view of peatland fires. But picking their brains proved to be crucial to the team's success. "However much machine learning can contribute to solving a problem, it is only part of the issue," she said. "You need a good working relationship with other people with other expertise. It's about being open-minded." The strength of machine learning algorithms, Bali noted, is that they can analyze an immense amount of data, finding patterns and clues that an overwhelmed human would likely miss. One class of algorithms that proved helpful with peatland fires was graph-based models.
Because so much of a peatland fire can be underground, these models could treat each hotspot like a node on a graph and then analyze how those nodes might be connected. But the algorithms that proved most effective were those that they borrowed from the field of natural language processing, or NLP, Bali said. In NLP, these models identify the most important elements of a sentence or paragraph and reason about their relationships. For the peatland work, the models likewise reasoned about the most important data available and the relationships between them. In addition to Zheng and Bali, the team included Blair Chen, Akshina Gupta and Yue Wu, all seniors in the School of Computer Science, and Anirban Chowdhury, a statistics and machine learning major in the Dietrich College of Humanities and Social Sciences. Though the students found that their deep learning approach outperformed other computational efforts to predict peatland fire behavior, they also acknowledge that much work remains to be done to make such predictions more precise and reliable. For Bali, the competition was rewarding because it was her first opportunity to use her computer science skills to address problems associated with global climate change, an issue that has long concerned her. "We definitely need science to conquer this problem," she added.
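The team's code and models are not reproduced here, but the NLP-style idea the students describe can be sketched briefly. The following Python snippet is an illustrative sketch only, not the team's PeatSet pipeline; the hotspot feature names and random projection weights are stand-ins. It shows scaled dot-product self-attention, in which each hotspot is compared against every other hotspot so the model can weigh their relationships.

```python
# Illustrative sketch only -- not the CMU team's actual model or the PeatSet
# data format. It shows, in plain NumPy, the scaled dot-product self-attention
# idea borrowed from NLP: each hotspot "attends" to every other hotspot, so the
# model can weigh relationships among them (e.g., nearby or connected burns).
import numpy as np

def self_attention(hotspot_features: np.ndarray) -> np.ndarray:
    """hotspot_features: (n_hotspots, n_features) array of per-hotspot inputs
    such as temperature anomaly, soil moisture and location (all hypothetical).
    Returns an array of the same shape in which each row is a weighted mix of
    all hotspots, with weights derived from pairwise similarity."""
    n, d = hotspot_features.shape
    rng = np.random.default_rng(0)
    # Random projection matrices stand in for weights a real model would learn.
    w_q, w_k, w_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = hotspot_features @ w_q, hotspot_features @ w_k, hotspot_features @ w_v
    scores = q @ k.T / np.sqrt(d)                      # pairwise similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # softmax over hotspots
    return weights @ v                                 # context-aware features

# Example: five hotspots, each described by four (made-up) features.
features = np.random.default_rng(1).standard_normal((5, 4))
print(self_attention(features).shape)  # (5, 4)
```

In a trained system the projection matrices would be learned from data such as PeatSet rather than drawn at random.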

Xinyu Wu Wins Ada Lovelace Fellowship

Byron Spice

Xinyu Wu, a Ph.D. student in the Computer Science Department who studies the theoretical foundations of quantum computing, is one of five recipients of 2021 Ada Lovelace Fellowships, presented by Microsoft Research. The three-year fellowships are awarded to second-year Ph.D. students who are pursuing research aligned with Microsoft Research. To increase diversity, the company also seeks students who are underrepresented in the field of computing. Wu said support from the fellowship will enable her to continue her work exploring what's possible with quantum computers. "I work on using different mathematical tools to develop the theory behind quantum computers," she said. "I especially like working in this area because it has connections across all of math, physics and computer science." Her current research in this area includes improved constructions of pseudorandom quantum states and channels, as well as analyzing the average-case algorithmic complexity of quantum problems.

SCS, CyLab Grad Student Safeguards Digital Transactions

Daniel Tkacik

In 2013, a Pennsylvania man became the richest person on Earth … for about two minutes. PayPal had accidentally credited his account with $92 quadrillion. That's a 92 with 15 zeros behind it. But within minutes, PayPal realized their mistake and took it all back. Too bad. Mistakes like this — big, small and humongous — happen all too often and typically come down to bugs in "smart contracts," computer programs that facilitate digital transactions online. In the case of the PayPal incident, $92 quadrillion is roughly the largest value a signed 64-bit integer can represent when an account balance is counted in cents. A bug in the code initiated a transfer of funds representing that gigantic number. As more and more of our finances and purchasing behaviors move online, the importance of bug-free smart contracts has never been greater. Ankush Das, a Ph.D. student in the Computer Science Department (CSD) affiliated with CMU's CyLab Security and Privacy Institute, agonizes over it every day. "If there is a way for a smart contract to accidentally pay you money — if that error exists — somebody will exploit it to pay themselves money. And this happens all the time, all over the place," said Das, who is advised by CyLab's Jan Hoffmann, an associate professor in CSD. "It's very, very important that these smart contracts are free of errors." Das is the lead designer and developer of a new programming language, which he's named Nomos, aimed at reducing such errors in smart contracts. "All smart contracts — just like real contracts — have a predefined protocol," he said. "Nomos has a way for a programmer to specify what that protocol is. Then, when you're writing the actual program, the language will actually enforce that you satisfy your predefined protocol. If you make an error, it will say, 'No, no, no. This is not correct. There's a protocol mismatch.'" Another feature of Nomos relates to transaction fees. In most scenarios, people don't pay transaction fees themselves, passing the buck to the credit card companies or the vendors. But on a blockchain — the decentralized network of computers around the world facilitating and recording cryptocurrency transactions — users pay the transaction fee themselves. "A cool and unique feature of Nomos is that whenever you write a smart contract, the language will automatically tell you how much the transaction fee will be," Das said. "There's a guarantee, a mathematical theorem running in the background, that says, 'If the language says the fee will be $5, then it will be exactly $5.' Nothing more, nothing less." Das said that every transaction in the virtual world faces these potential challenges. Blockchains are just the most recent transparent application of smart contracts, exposing these issues to the world. Thus, the research ideas that power Nomos, like ensuring funds are not lost and that protocols are enforced, can be applied in any digital financial realm. "People who are skeptical of transacting on certain websites or paying money in certain portals … the kind of work we are doing can help build people's trust in these systems," Das said. Nomos is available as a web interface and its code is open-source on GitHub.
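Nomos itself enforces contract protocols in the language, catching mismatches before the contract runs; its syntax is not shown here. The Python sketch below is only a loose illustration of the underlying idea, using a hypothetical escrow contract whose declared protocol rejects out-of-order actions, here checked at runtime rather than by a type system.

```python
# Hypothetical sketch of the idea behind Nomos-style protocol enforcement.
# This is NOT Nomos code; it only illustrates, in ordinary Python, how declaring
# a protocol up front lets mismatched actions be rejected with a "protocol
# mismatch" instead of silently moving money the wrong way.
class ProtocolError(Exception):
    pass

class EscrowContract:
    # Declared protocol: from each state, only these actions are legal.
    PROTOCOL = {
        "open":   {"deposit": "funded"},
        "funded": {"release": "closed", "refund": "closed"},
        "closed": {},
    }

    def __init__(self):
        self.state = "open"
        self.balance = 0

    def _step(self, action: str) -> None:
        allowed = self.PROTOCOL[self.state]
        if action not in allowed:
            raise ProtocolError(
                f"protocol mismatch: '{action}' not allowed in state '{self.state}'")
        self.state = allowed[action]

    def deposit(self, amount: int) -> None:
        self._step("deposit")
        self.balance += amount

    def release(self) -> int:
        self._step("release")
        paid, self.balance = self.balance, 0
        return paid

c = EscrowContract()
c.deposit(500)
print(c.release())        # 500
try:
    c.release()           # a second payout violates the declared protocol
except ProtocolError as e:
    print(e)              # protocol mismatch: 'release' not allowed in state 'closed'
```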

NASA Mission To Test Technology for Satellite Swarms

Carnegie Mellon's Zac Manchester Leads Three-Satellite Experiment

Byron Spice

A NASA mission slated for launch on Friday will place three tiny satellites into low-Earth orbit, where they will demonstrate how satellites might track and communicate with each other, setting the stage for swarms of thousands of small satellites that can work cooperatively and autonomously. Zac Manchester, an assistant professor in Carnegie Mellon University's Robotics Institute and the mission's principal investigator, said small satellites have grown in popularity over the last 10 years, as some companies are already launching hundreds into orbit to perform tasks such as Earth imaging and weather forecasting. These satellites are now individually controlled from the ground. As swarms grow bigger and more sophisticated, Manchester noted, they will need to respond to commands almost as a single entity. The new mission, dubbed V-R3x, will test technologies that might make that possible. "This mission is a precursor to more advanced swarming capabilities and autonomous formation flying," Manchester said. NASA also is interested in using swarms of small satellites beyond Earth. Swarms of satellites around the moon, for instance, could provide communications and navigation aid for lunar exploration, including NASA's Artemis program. It will be essential that extraterrestrial swarms operate autonomously, Manchester said. V-R3x, a technology demonstration mission funded by NASA's Small Spacecraft Technology program, is implemented by a small, dedicated group of engineers known as the Payload Accelerator for CubeSat Endeavors (PACE) at NASA's Ames Research Center in Silicon Valley. The group aims to design, develop and fly space experiments rapidly and more cost effectively. V-R3x will deploy three so-called CubeSats into low-Earth orbit. These standardized 10-centimeter cubes each weigh about a kilogram and, once deployed, will form a mesh network, exchanging radio signals as they slowly drift apart over a three- to four-month period. The satellites also will be equipped with special S-band radios capable of time-of-flight ranging. That is, they can measure how long it takes a radio signal to travel to another satellite and bounce back. That signal's time of flight can then be used to calculate the distance between the two satellites to within half a meter. The three satellites will be launched aboard a SpaceX Falcon 9 from Cape Canaveral, Florida. In a testament to the popularity of small satellites, this flight will be a "rideshare" that will carry dozens of microsatellites and nanosatellites for a variety of commercial and government customers. Manchester, who joined CMU's Robotics Institute this past September, conceived the mission while he was an assistant professor of aeronautics and astronautics at Stanford University. His graduate student at Stanford, Max Holliday, did much of the CubeSat construction in his kitchen because of the COVID-19 pandemic. A CMU Ph.D. student in robotics, Kevin Tracy, has been developing software for the experiment. "It looks like there's a bright future here for this kind of stuff," Manchester said of CMU, referring to two CMU lunar rovers now awaiting launch and other space-related research underway at the university and in Pittsburgh. Though his own training is in aeronautics — V-R3x is the third space mission for which he has served as principal investigator — Manchester emphasized that joining the Robotics Institute makes perfect sense because of the overlap with robotics. "Spacecraft are robots, too," he said.
The V-R3x CubeSats will be placed in a polar orbit, which means they will pass over Pittsburgh about twice a day, 12 hours apart. Manchester said he hopes to set up a ground station at CMU to communicate with the satellites, though he acknowledged that none of the ground stations being used for the mission have that much to do. "The satellites will wake up and do their thing autonomously," he explained. "We mainly need to make sure that we get their data downloaded."
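The time-of-flight ranging described above reduces to a short calculation: the inter-satellite distance is the round-trip travel time multiplied by the speed of light, divided by two. The snippet below is a minimal sketch of that arithmetic, not V-R3x flight software, and it ignores the hardware processing delays a real system must calibrate out.

```python
# Minimal sketch of time-of-flight ranging as described above -- not the
# V-R3x flight software. A radio ping travels to the other satellite and back,
# so the one-way distance is (round-trip time x speed of light) / 2.
# Hardware processing delays, which real systems must calibrate out, are ignored.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Estimate inter-satellite distance in meters from a round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# Example: a ping that returns after ~6.67 microseconds implies ~1 km of separation.
print(range_from_round_trip(6.67e-6))   # about 999.8 m
```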

"Smiling Eyes" May Not Signify True Happiness After All

Carnegie Mellon Study Questions Influential Duchenne Smile Hypothesis

Byron Spice

A smile that lifts the cheeks and crinkles the eyes is thought by many to be truly genuine. But new research at Carnegie Mellon University casts doubt on whether this joyful facial expression necessarily tells others how a person really feels inside. In fact, these "smiling eye" smiles, called Duchenne smiles, seem to be related to smile intensity, rather than acting as an indicator of whether a person is happy or not, said Jeffrey Girard, a former post-doctoral researcher at CMU's Language Technologies Institute. "I do think it's possible that we might be able to detect how strongly somebody feels positive emotions based on their smile," said Girard, who joined the psychology faculty at the University of Kansas this past fall. "But it's going to be a bit more complicated than just asking, 'Did their eyes move?'" Whether it's possible to gauge a person's emotions based on their behavior is a topic of some debate within the disciplines of psychology and computer science, particularly as researchers develop automated systems for monitoring facial movements, gestures, voice inflections and word choice. Duchenne smiles might not be as popularly known as Mona Lisa smiles or Bette Davis eyes, but there is a camp within psychology that believes they are a useful rule of thumb for gauging happiness. Another camp, however, is skeptical. Girard, who studies facial behavior and worked with CMU's Louis-Philippe Morency to develop a multimodal approach for monitoring behavior, said that some research seems to support the Duchenne smile hypothesis, while other studies demonstrate how it fails. So Girard and Morency, along with Jeffrey Cohn of the University of Pittsburgh and Lijun Yin of Binghamton University, set out to better understand the phenomenon. They enlisted 136 volunteers who agreed to have their facial expressions recorded as they completed lab tasks designed to make them feel amusement, embarrassment, fear or physical pain. After each task, the volunteers rated how strongly they felt various emotions. Finally, the team made videos of the smiles occurring during these tasks and showed them to new participants (i.e., judges), who tried to guess how much positive emotion the volunteers felt while smiling. A report on their findings has been published online by the journal Affective Science. Unlike most previous studies of Duchenne smiles, this work examined spontaneous expressions rather than posed smiles, and the researchers recorded videos of the facial expressions from beginning to end rather than taking still photos. They also took painstaking measurements of smile intensity and other facial behaviors. Although Duchenne smiles made up 90% of those that occurred when positive emotion was reported, they also made up 80% of the smiles that occurred when no positive emotion was reported. Concluding that a Duchenne smile must mean positive emotion would thus often be a mistake. On the other hand, the human judges found smiling eyes compelling and tended to guess that volunteers showing Duchenne smiles felt more positive emotion. "It is really important to look at how people actually move their faces in addition to how people rate images and videos of faces, because sometimes our intuitions are wrong," Girard said. "These results emphasize the need to model the subtleties of human emotions and facial expressions," said Morency, associate professor in the LTI and director of the MultiComp Lab.
"We need to go beyond prototypical expression and take into account the context in which the expression happened." It's possible, for instance, for someone to display the same behavior at a wedding as at a funeral, yet the person's emotions would be very different. Automated methods for monitoring facial expression make it possible to examine behavior in much finer detail. Just two facial muscles are involved in Duchenne smiles, but new systems make it possible to look at 30 different muscle movements simultaneously. Multimodal systems such as the ones being developed in Morency's lab hold the promise of giving physicians a new tool for assessing mental disorders, and for monitoring and quantifying the results of psychological therapy over time. "Could we ever have an algorithm or a computer that is as good as humans at gauging emotions? I think so," Girard said. "I don't think people have any extrasensory stuff that a computer couldn't be given somewhere down the road. We're just not there yet. It's also important to remember that humans aren't always so good at this either!" The National Science Foundation and the National Institutes of Health supported this research.

Surveys Show Northeast, West Coast Most Likely To Accept COVID-19 Vaccine

Carnegie Mellon Researchers' Daily Question: Would You Get the Shot?

Byron Spice

Carnegie Mellon University researchers have begun daily nationwide surveys to determine U.S. acceptance of COVID-19 vaccines and pinpoint those states and localities where people are most skeptical of the shots. The CMU surveys, which Facebook distributes daily to a portion of its users, thus far have shown that the greatest resistance to the vaccine is in the Deep South, while people in the Northeast and on the West Coast would be the most inclined to receive the vaccine if it were available to them. Among over one million respondents who have taken the survey nationally since late December, 71% indicate that they would definitely or probably receive the vaccine if it were offered to them. The rate exceeds 80% in states such as Vermont, Massachusetts, Connecticut and Washington, and is more than 88% in the District of Columbia. Some states, including Mississippi, Louisiana and Alabama, have rates under 65%, suggesting much lower acceptance of the vaccine in their populations. The deep disparities in the findings are consistent with previous public surveys, said Alex Reinhart, assistant teaching professor in the Department of Statistics and Data Science and a member of the Delphi Research Group that conducts the surveys. The power of the CMU surveys, he added, is that they are distributed to a wide swath of Facebook users each day, enabling researchers to track changes in attitudes in real time as the pandemic progresses and vaccine distribution continues. "County, state and federal public health officials can use this information to better guide their public relations and vaccine distribution efforts," Reinhart explained. The aggregated results of this and other COVID-19 survey questions are publicly available on CMU's COVIDcast website. CMU began the daily surveys last April, asking people if they had COVID-19 symptoms, and more than 15 million people have answered the surveys since then. The results provide county-level information about the coronavirus pandemic that is updated continuously and available from no other source. Delphi researchers use that information to forecast changes in COVID-19 activity across the country, helping to guide responses by healthcare providers. The survey was expanded Dec. 19 to include a question on whether the respondent would get a COVID-19 shot if it were available; a second question asking whether they had received a shot was added Jan. 6. In collaboration with the U.S. Centers for Disease Control and Prevention, the researchers soon plan to ask respondents who don't want the vaccine why not. Facebook distributes the surveys as part of its Data for Good program, and the data are made available to support public health responses to the pandemic. Facebook does not receive any individual survey information from users; CMU conducts the surveys off Facebook and manages all the findings. The University of Maryland likewise works with Facebook to gather international data on the pandemic. In its COVID-19 data-gathering and forecasting efforts, the Delphi group leverages years of expertise as the preeminent academic center for forecasting influenza activity nationwide. The CDC has designated Delphi as one of two National Centers of Excellence for Influenza Forecasting. At the CDC's request this past spring, the group extended and adapted its flu forecasting efforts to encompass COVID-19.
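For readers who want to explore the aggregated data themselves, the sketch below shows one way to query Delphi's public COVIDcast Epidata API for the Facebook-survey vaccine-acceptance signal at the state level. The endpoint, parameter names and signal name are assumptions based on Delphi's public documentation and may have changed, so consult the current COVIDcast API docs before relying on them.

```python
# Hedged sketch: querying Delphi's public COVIDcast Epidata API for the
# Facebook-survey vaccine-acceptance signal at the state level. The endpoint,
# parameter names and signal name below are assumptions based on Delphi's
# public documentation and may have changed -- check the current API docs.
import requests

resp = requests.get(
    "https://api.delphi.cmu.edu/epidata/covidcast/",
    params={
        "data_source": "fb-survey",
        "signal": "smoothed_accept_covid_vaccine",  # assumed signal name
        "time_type": "day",
        "geo_type": "state",
        "time_values": "20210115",
        "geo_value": "*",
    },
    timeout=30,
)
resp.raise_for_status()
for row in resp.json().get("epidata", []):
    # Each row is expected to carry a geo_value (state) and a value (percent).
    print(row.get("geo_value"), row.get("value"))
```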

Blum, Forlizzi Named ACM Fellows

Byron Spice

School of Computer Science faculty members Manuel Blum and Jodi Forlizzi are among 95 distinguished computer scientists named 2020 fellows by the Association for Computing Machinery (ACM). The ACM Fellows program recognizes the top 1% of the association's membership for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community. Blum, University Professor Emeritus in the Computer Science Department (CSD), was recognized "for contributions to the foundations of computational complexity theory and its application to cryptography and program checking." That's the same citation that the ACM used in 1995, when it awarded Blum its highest honor, the A.M. Turing Award. Blum developed methods for measuring the intrinsic complexity of problems, and his Speedup theorem is an important proposition about complexity of computable functions. Much of his research focused on finding positive, practical aspects of the fact that all computational devices are resource-bounded. That work produced such innovations as pseudo-random number generation and the development of CAPTCHAs for detecting online bots. Most recently, he and his wife, Lenore Blum, have explored architectures that might demonstrate machine consciousness. Forlizzi, the Geschke Director of the Human-Computer Interaction Institute, was recognized "for contributions to design research in human-computer interaction." A faculty member since 2000, Forlizzi has studied designing and analyzing systems ranging from peripheral displays to social and assistive robots. Her work has included designing educational games that are engaging and effective, designing services that adapt to people's needs, and designing for healthcare. "This year our task in selecting the 2020 fellows was a little more challenging, as we had a record number of nominations from around the world," said ACM President Gabriele Kotsis. The contributions of this class run the gamut of the computing field, including algorithms, networks, computer architecture, robotics, distributed systems, software development, wireless systems and web science. "These men and women have made pivotal contributions to technologies that are transforming whole industries, as well as our personal lives," she added. Other fellows named this year include five CSD alumni. Peter Stone, the David Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin, was recognized "for contributions to automated planning, learning, and multiagent systems with applications in robotics and ecommerce." Sven Koenig, professor of computer science at the University of Southern California, was recognized "for contributions to artificial intelligence, including heuristic search and multiagent coordination." David Maltz, distinguished engineer at Microsoft, was recognized "for contributions to networking infrastructure, including data center networking, network operating systems and cloud networking." Andrew Tomkins, engineering director at Google, was recognized "for contributions to face recognition, computer vision and multimodal interaction." And Sanjit Arunkumar Seshia, professor of electrical engineering and computer science at the University of California, Berkeley, was cited "for contributions to formal verification, inductive synthesis and cyber-physical systems." 
Dieter Fox, a former post-doctoral researcher in the Robotics Institute now starting a robotics research lab for Nvidia, was recognized "for contributions to probabilistic state estimation, RGB-D perception, and learning for robotics and computer vision."

New Study, Browser Extension Help Users Understand Opt-Out Options

Daniel Tkacik

Many websites offer users choices to opt out of some of their data collection and use practices. But most of these choices are buried deep in the text of long, jargon-filled privacy policies and users never see them. Recent work from Carnegie Mellon University CyLab researchers has shown that machine learning techniques can automatically extract and classify some of these opt-out choices. The study, presented at last year's Web Conference, also introduces Opt-Out Easy, a browser plug-in that automatically extracts opt-out choices from privacy policies and presents them to users in a way that's easy to use. The plug-in is available for free download. "Different privacy regulations grant users the right to revoke how their data can be used by companies," said CyLab's Norman Sadeh, a professor in the School of Computer Science and the principal investigator on the study. "But as it stands, most websites don't offer users easy and practical access to these choices, effectively depriving them of these rights." In their study, Sadeh's team trained a machine learning algorithm to scan privacy policies and identify language and links related to opt-out choices. They ran their algorithm on 7,000 of the most popular websites and found that more than 3,600 of them (~51%) offer zero opt-out choices. A little over 800 (~11%) provide just one opt-out hyperlink. "Our study aimed to provide an in-depth overview of whether popular websites allowed users the ability to opt out of some data collection and use practices," Sadeh said. "In addition, we also wanted to develop a practical solution to help users access opt-out choices made available to them when such choices are present." The team developed Opt-Out Easy in collaboration with the University of Michigan School of Information. By clicking on the plugin's icon, users receive a list of opt-out links found in the privacy policy of the website they are visiting, allowing them to opt out of analytics, for example, or limit marketing emails. The researchers also conducted a usability evaluation of Opt-Out Easy, focusing on its effectiveness, efficiency and overall user satisfaction. The users who participated in the evaluation generally found the browser extension easy to use, and strongly agreed that the various types of opt-outs provided by the plugin were useful. "Our team put in hard work to come up with a browser extension that makes the most of opt-out choices available on a given website," Sadeh said. "We believe this extension is an important first step toward empowering web users to regain control of their privacy online." This work was conducted through a collaboration between CMU, the University of Michigan, Penn State University and Stanford University under the Usable Privacy Policy Project. Other members of the team include graduate students Vinayshekhar Bannihatti Kumar, Roger Iyengar, Namita Nisal, Hana Habib, Peter Story, Siddhant Arora, post-doctoral fellow Yuanyuan Feng, former undergraduate student Sushain Cherivirala, and faculty collaborators Margaret Hagan, Lorrie Faith Cranor, Shomir Wilson, and Florian Schaub.
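Opt-Out Easy's trained classifier is not reproduced here, but the basic task of finding links in a privacy policy whose wording suggests an opt-out can be sketched with a simple keyword heuristic. The snippet below uses only Python's standard-library HTML parser; the phrase list and sample policy text are made up for illustration and are far cruder than the machine learning approach the researchers describe.

```python
# Illustrative sketch only -- not the trained classifier behind Opt-Out Easy.
# It shows the simpler idea of scanning a privacy policy's hyperlinks for
# opt-out-related wording using the standard library's HTML parser.
from html.parser import HTMLParser

OPT_OUT_PHRASES = ("opt out", "opt-out", "unsubscribe", "do not sell")

class OptOutLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self._current_href = None
        self.links = []          # (href, anchor text) pairs that look like opt-outs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")

    def handle_data(self, data):
        text = data.strip().lower()
        if self._current_href and any(p in text for p in OPT_OUT_PHRASES):
            self.links.append((self._current_href, data.strip()))

    def handle_endtag(self, tag):
        if tag == "a":
            self._current_href = None

policy_html = '<p>See our <a href="/privacy/choices">opt-out choices</a> page.</p>'
finder = OptOutLinkFinder()
finder.feed(policy_html)
print(finder.links)   # [('/privacy/choices', 'opt-out choices')]
```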

HCII Team Named Semifinalist in USDOT Inclusive Design Challenge

Researchers Will Develop Prototype Smartphone Interface for Accessible Self-Driving Cars

Byron Spice

A team led by the Human-Computer Interaction Institute (HCII) is one of 10 semifinalists in the U.S. Department of Transportation's Inclusive Design Challenge, which seeks to make self-driving vehicles more accessible to people with disabilities. The 14-member student/faculty team, headed by assistant professors Nikolas Martelaro, Patrick Carrington and Sarah Fox, received $300,000 for its proposal. Their design focuses on using the accessible features of smartphones to enable communication between a self-driving car and a rider. This functionality would allow riders with differing abilities to control aspects of the vehicle and the trip, such as unlocking, opening and closing doors. Since August, the team members have been working with a community engagement group recruited by Lisa Kay Schweyer, program manager of CMU's Traffic21 Institute and Mobility21, a USDOT University Transportation Center. That group includes people with disabilities, disability rights advocates, transportation officials, automotive suppliers and automotive user experience designers. Working with collaborators at Propel, a transportation technology and data company, the researchers now have 18 months to develop a prototype interface. The first-place team will win $1 million.

Gupta Wins Aggarwal Prize for Self-Supervised Learning

Byron Spice

Abhinav Gupta, associate professor in the Robotics Institute, is the winner of the 2020 J.K. Aggarwal Prize, which is presented every two years by the International Association for Pattern Recognition (IAPR) to a scientist under age 40 who has had a major impact on computer vision, pattern recognition and image processing. The IAPR is honoring Gupta "for pioneering contributions to unsupervised and self-supervised learning in computer vision and robotics." The award's namesake, Aggarwal, is an emeritus professor at the University of Texas who has made substantial contributions to pattern recognition and to the IAPR. Gupta, a CMU faculty member since 2011, is also a research manager at Facebook AI Research. His work focuses on scaling up learning by building self-supervised, lifelong and interactive learning systems. Specifically, he is interested in how self-supervised systems can effectively use data to learn visual representations, common sense and representations for actions in robots. He will receive the Aggarwal Prize at the IAPR's premier conference, the International Conference on Pattern Recognition, which will be held virtually Jan. 10-15. He will present an award lecture, "Towards Self-supervised Curious Robots," regarding efforts by his lab and others to enable robots to learn on their own as they explore their environments. Gupta is the recipient of numerous awards, including the Office of Naval Research's Young Investigator Award, a Sloan Research Fellowship, a Bosch Young Faculty Fellowship, the IEEE Pattern Analysis and Machine Intelligence (PAMI) Young Researcher Award and the Okawa Foundation Research Grant.