News 2019

November 2019

CMU Algorithm Rapidly Finds Anomalies in Gene Expression Data

Algorithm Also Works to Identify and Correct Mistakes It Might Have Made

Byron Spice

Computational biologists at Carnegie Mellon University have devised an algorithm to rapidly sort through mountains of gene expression data to find unexpected phenomena that might merit further study. What's more, the algorithm then re-examines its own output, looking for mistakes it has made and then correcting them.

This work by Carl Kingsford, a professor in CMU's Computational Biology Department, and Cong Ma, a Ph.D. student in computational biology, is the first attempt at automating the search for these anomalies in gene expression inferred by RNA sequencing, or RNA-seq, the leading method for measuring the activity level of genes.

As they report today in the journal Cell Systems, the researchers already have detected 88 anomalies — unexpectedly high or low levels of expression of regions within genes — that are both common and not previously known, in two widely used RNA-seq libraries. "We don't yet know why we're seeing those 88 weird patterns," Kingsford said, noting that they could be a subject of further investigation.

Though an organism's genetic makeup is static, the activity level, or expression, of genes varies greatly over time. Gene expression analysis has thus become a major tool for biological research, as well as for diagnosing and monitoring cancers.

Anomalies can be important clues for researchers, but until now finding them has been a painstaking, manual process, sometimes called "sequence gazing." Finding one anomaly might require examining 200,000 transcript sequences — sequences of RNA that encode information from the gene's DNA, Kingsford said. Most researchers therefore zero in on regions of genes that they think are important, largely ignoring the vast majority of potential anomalies.

The algorithm developed by Ma and Kingsford automates the search for anomalies, enabling researchers to consider all of the transcript sequences, not just those regions where they expect to see anomalies. This technology could uncover many new phenomena, such as the 88 previously unknown common anomalies found in the multi-tissue RNA-seq libraries.

But Ma noted that identifying anomalies is often not clear cut. Some RNA-seq "reads," for instance, are common to multiple genes and transcripts and sometimes get mapped to the wrong one. If that occurs, a genetic region might appear more or less active than expected. So the algorithm re-examines any anomalies it detects and checks whether they disappear when the RNA-seq reads are redistributed between the genes. "By correcting anomalies when possible, we reduce the number of falsely predicted instances of differential expression," Ma said.

The Gordon and Betty Moore Foundation, the National Science Foundation, the National Institutes of Health, the Shurl and Kay Curci Foundation, and the Pennsylvania Department of Health supported this research.
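
The correction step can be pictured with a small amount of code. Below is a minimal sketch, in Python, of how reads that map to multiple transcripts might be redistributed in proportion to each transcript's estimated abundance to test whether an apparent anomaly survives. The names, numbers and the proportional rule are illustrative assumptions, not the method published in Cell Systems.

```python
# Toy sketch: redistributing multi-mapped RNA-seq reads to see whether an
# apparent expression anomaly disappears. The proportional (EM-style)
# reassignment rule and all values here are illustrative assumptions.
from collections import defaultdict

def redistribute(ambiguous_reads, abundance):
    """Split each multi-mapped read across its candidate transcripts
    in proportion to the current abundance estimates."""
    coverage = defaultdict(float)
    for candidates in ambiguous_reads:
        total = sum(abundance[t] for t in candidates)
        for t in candidates:
            coverage[t] += abundance[t] / total if total else 1 / len(candidates)
    return coverage

# Transcripts A and B share 50 reads; B looks anomalously active if every
# shared read is naively assigned to it.
abundance = {"A": 90.0, "B": 10.0}   # evidence from uniquely mapped reads
shared = [["A", "B"]] * 50           # reads compatible with both transcripts

cov = redistribute(shared, abundance)
print(f"A gets {cov['A']:.1f} shared reads, B gets {cov['B']:.1f}")
# B's apparent spike largely vanishes under proportional reassignment,
# flagging the "anomaly" as a likely mapping artifact.
```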

New Technology Makes Internet Memes Accessible for People With Visual Impairments

CMU Researchers Develop System to Identify and Translate Memes

Virginia Alvino Young

People with visual impairments use social media like everyone else, often with the help of screen reader software. But that technology falls short when it encounters memes, which don't include alternate text, or alt text, to describe what's depicted in the image. To counter this, researchers at Carnegie Mellon University have developed a method to automatically identify memes and apply prewritten templates to add descriptive alt text, making them intelligible via existing assistive technologies.

Memes are images that are copied and then overlaid with slight variations of text. They are often humorous and convey a shared experience, but "if you're blind, you miss that part of the conversation," said Cole Gleason, a Ph.D. student in CMU's Human-Computer Interaction Institute (HCII).

"Memes may not seem like the most important problem, but a vital part of accessibility is not choosing for people what deserves their attention," said Jeff Bigham, an associate professor in the HCII. "Many people use memes, and so they should be made accessible."

Memes largely live within social media platforms that have barriers to adding alt text. Twitter, for example, allows people to add alt text to their images, but that feature isn't always easy to find. Of 9 million tweets the CMU researchers examined, one million included images and, of those, just 0.1 percent included alt text.

Gleason said basic computer vision techniques make it possible to describe the images underlying each meme, whether it be a celebrity, a crying baby, a cartoon character or a scene such as a bus upended in a sinkhole. Optical character recognition techniques are used to decipher the overlaid text, which can change with each iteration of the meme. For each meme type, it's only necessary to make one template describing the image, and the overlaid text can be added for each iteration of that meme.

But writing out what the meme is intended to convey proved difficult. "It depended on the meme if the humor translated. Some of the visuals are more nuanced," Gleason said. "And sometimes it's explicit and you can just describe it." For example, the complete alt text for the so-called "success kid" meme states "Toddler clenching fist in front of smug face. Overlaid text on top: Was a bad boy all year. Overlaid text on bottom: Still got awesome presents from Santa."

The team also created a platform to translate memes into sound rather than text. Users search through a sound library and drag and drop elements into a template. This system was made to translate existing memes and convey the sentiment through music and sound effects. "One of the reasons we tried the audio memes was because we thought alt text would kill the joke, but people still preferred the text because they're so used to it," Gleason said.

Deploying the technology will be a challenge. Even if it were integrated into a meme generator website, the alt text wouldn't be automatically copied when the image was shared on social media. "We'd have to convince Twitter to add a new feature," Gleason said. It could be something added to a personal smartphone, but he noted that would put the burden on the user. CMU researchers are currently working on related projects, including a browser extension for Twitter that attempts to add alt text for every image and could include a meme system. Another project seeks to integrate alt text into the metadata of images so that it stays with the image wherever it is posted.
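
To make the template idea concrete, here is a minimal sketch of how a prewritten image description could be combined with overlay text recovered by optical character recognition. The template dictionary and function are hypothetical, though the output mirrors the "success kid" alt text quoted above.

```python
# Toy sketch of the template idea: one hand-written description per known
# meme image, plus per-instance overlaid text from OCR. The template names
# and this function are illustrative assumptions, not the CMU system's code.

TEMPLATES = {
    "success_kid": "Toddler clenching fist in front of smug face.",
}

def build_alt_text(meme_id, top_text=None, bottom_text=None):
    """Combine the prewritten image description with OCR'd overlay text."""
    parts = [TEMPLATES[meme_id]]
    if top_text:
        parts.append(f"Overlaid text on top: {top_text}")
    if bottom_text:
        parts.append(f"Overlaid text on bottom: {bottom_text}")
    return " ".join(parts)

# In the real pipeline, the meme type would come from an image classifier
# and the overlay strings from optical character recognition.
print(build_alt_text("success_kid",
                     "Was a bad boy all year",
                     "Still got awesome presents from Santa"))
```
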
This work was presented earlier this year at ASSETS, the ACM SIGACCESS Conference on Computers and Accessibility, in Pittsburgh. Other researchers involved in the project include HCII postdoctoral fellow Amy Pavel, CMU undergraduate Xingyu Liu, HCII assistant professor Patrick Carrington, and Lydia Chilton of Columbia University.

Carnegie Mellon System Locates Shooters Using Smartphone Video

New Analytical Tool Could Aid Human Rights, Public Safety Workers

Byron Spice

Researchers at Carnegie Mellon University have developed a system that can accurately locate a shooter based on video recordings from as few as three smartphones.

When demonstrated using three video recordings from the 2017 mass shooting in Las Vegas that left 58 people dead and hundreds wounded, the system correctly estimated the shooter’s actual location — the north wing of the Mandalay Bay hotel. The estimate was based on three gunshots fired within the first minute of what would be a prolonged massacre.

Alexander Hauptmann, research professor in CMU’s Language Technologies Institute, said the system, called Video Event Reconstruction and Analysis (VERA), won’t necessarily replace the commercial microphone arrays for locating shooters that public safety officials already use, although it may be a useful supplement for public safety when commercial arrays aren’t available.

One key motivation for assembling VERA was to create a tool that could be used by human rights workers and journalists who investigate war crimes, terrorist acts and human rights violations, Hauptmann said.

“Military and intelligence agencies are already developing these types of technologies,” said fellow researcher Jay D. Aronson, a professor of history at CMU and director of the Center for Human Rights Science. “We think it’s crucial for the human rights community to have the same types of tools. It provides a necessary check on state power.”

The researchers presented VERA and released it as open-source code last month at the Association for Computing Machinery’s International Conference on Multimedia in Nice, France.

Hauptmann said he has used his expertise in video analysis to help investigators analyze events such as the 2014 Maidan massacre in Ukraine, which left at least 50 antigovernment protesters dead. Inspired by that work — and the insight of ballistics experts and architecture colleagues from the firm SITU Research — Hauptmann, Aronson and Junwei Liang, a Ph.D. student in language and information technology, have pulled together several technologies for processing video, while automating their use as much as possible.

VERA uses machine learning techniques to synchronize the video feeds and calculate the position of each camera based on what that camera is seeing. But it’s the audio from the video feeds that's pivotal in localizing the source of the gunshots, Hauptmann said. Specifically, the system looks at the time delay between the crack caused by a supersonic bullet’s shock wave and the muzzle blast, which travels at the speed of sound. It also uses audio to identify the type of gun used, which determines bullet speed. VERA can then calculate the shooter's distance from the smartphone.

“When we began, we didn’t think you could detect the crack with a smartphone because it’s really short,” Hauptmann said. “But it turns out today’s cell phone microphones are pretty good.”

By using video from three or more smartphones, the direction from which the shots were fired — and the shooter’s location — can be calculated based on the differences in how long it takes the muzzle blast to reach each camera.

With the proliferation of mass protests occurring in places such as Hong Kong, Egypt and Iraq, identifying where a shot originated can be critical to determining whether protesters, police or other groups might be responsible when a shooting takes place, Aronson said.

But VERA is not limited to detecting gunshots. It is an event analysis system that can be used to locate a variety of other sounds relevant to human rights and war crimes investigations, he said. He and Hauptmann hope that other groups will add functionalities to the open-source software. “Once it’s open source, the journalism and human rights communities can build on it in ways we don’t have the imagination for or time to do,” Aronson added.

The National Institute of Standards and Technology provided partial support for this work. The MacArthur Foundation and the Oak Foundation also have supported this work.
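
The crack-versus-blast timing lends itself to a back-of-the-envelope calculation. The sketch below assumes a deliberately simplified geometry (a shooter firing roughly toward the microphone at constant bullet speed), so it is not VERA's actual multi-camera solver, and all numbers are illustrative.

```python
# Simplified single-microphone version of the timing idea described above.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def shooter_distance(delta_t, bullet_speed):
    """Distance (m) from the microphone, given the delay delta_t (s) between
    the supersonic crack and the muzzle blast, and the bullet speed (m/s)."""
    if bullet_speed <= SPEED_OF_SOUND:
        raise ValueError("no shock-wave crack for subsonic rounds")
    # Crack arrives at ~d / bullet_speed, muzzle blast at d / SPEED_OF_SOUND:
    # delta_t = d / SPEED_OF_SOUND - d / bullet_speed, solved for d below.
    return delta_t * SPEED_OF_SOUND * bullet_speed / (bullet_speed - SPEED_OF_SOUND)

# A rifle round at ~900 m/s with 0.8 s between crack and blast:
print(f"{shooter_distance(0.8, 900.0):.0f} m")  # about 443 m
```

With three or more phones, the differences in muzzle-blast arrival times additionally constrain the bearing, which is how the full system recovers a location rather than just a range.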

Trash Talk Hurts, Even When It Comes From a Robot

Discouraging Words From Machines Impair Human Game Play

Byron Spice

Trash talking has a long and colorful history of flustering game opponents, and now researchers at Carnegie Mellon University have demonstrated that discouraging words can be perturbing even when uttered by a robot. The trash talk in the study was decidedly mild, with utterances such as "I have to say you are a terrible player," and "Over the course of the game your playing has become confused." Even so, people who played a game with the robot — a commercially available humanoid robot known as Pepper — performed worse when the robot discouraged them and better when the robot encouraged them.

Lead author Aaron M. Roth said some of the 40 study participants were technically sophisticated and fully understood that a machine was the source of their discomfort. "One participant said, 'I don't like what the robot is saying, but that's the way it was programmed so I can't blame it,'" said Roth, who conducted the study while he was a master's student in the CMU Robotics Institute. But the researchers found that, overall, human performance ebbed regardless of technical sophistication.

The study, presented last month at the IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) in New Delhi, India, is a departure from typical human-robot interaction studies, which tend to focus on how humans and robots can best work together. "This is one of the first studies of human-robot interaction in an environment where they are not cooperating," said co-author Fei Fang, an assistant professor in the Institute for Software Research. The work has enormous implications for a world where the number of robots and internet of things (IoT) devices with artificial intelligence capabilities is expected to grow exponentially. "We can expect home assistants to be cooperative," she said, "but in situations such as online shopping, they may not have the same goals as we do."

The study was an outgrowth of a student project in AI Methods for Social Good, a course that Fang teaches. The students wanted to explore the uses of game theory and bounded rationality in the context of robots, so they designed a study in which humans would compete against a robot in a game called "Guards and Treasures." It is a so-called Stackelberg game, which researchers use to study rationality; games of this type are commonly used to study defender-attacker interaction in research on security games, an area in which Fang has done extensive work.

Each participant played the game 35 times with the robot, while either soaking in encouraging words from the robot or getting their ears singed with dismissive remarks. Although the human players' rationality improved as the number of games played increased, those who were criticized by the robot didn't score as well as those who were praised.

It's well established that an individual's performance is affected by what other people say, but the study shows that humans also respond to what machines say, said Afsaneh Doryab, a systems scientist at CMU's Human-Computer Interaction Institute (HCII) during the study and now an assistant professor in Engineering Systems and Environment at the University of Virginia. A machine's ability to prompt responses could have implications for automated learning, mental health treatment and even the use of robots as companions, she said.

Future work might focus on nonverbal expression between robot and humans, said Roth, now a Ph.D. student at the University of Maryland. Fang suggests that more needs to be learned about how different types of machines — say, a humanoid robot as compared to a computer box — might invoke different responses in humans.

In addition to Roth, Fang and Doryab, the research team included Manuela Veloso, professor of computer science; Samantha Reig, a Ph.D. student in the HCII; Umang Bhatt, who recently completed a joint bachelor's-master's degree program in electrical and computer engineering; Jonathan Shulgach, a master's student in biomedical engineering; and Tamara Amin, who recently finished her master's degree in civil and environmental engineering. The National Science Foundation provided some support for this work.
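
For readers unfamiliar with the game structure mentioned above, the toy sketch below shows a "Guards and Treasures"-style Stackelberg security game: the defender commits to a randomized guarding strategy, and the attacker observes it and best-responds. The payoff numbers and brute-force search are illustrative assumptions, not the study's actual game.

```python
# Toy Stackelberg security game: a defender splits one guard between two
# treasures; the attacker observes the coverage and attacks the best target.
# Payoffs give defender/attacker utility when the attacked target is covered
# versus uncovered. All values are illustrative assumptions.
TARGETS = {
    "gold":   {"def_cov": 2, "def_unc": -5, "atk_cov": -3, "atk_unc": 5},
    "silver": {"def_cov": 1, "def_unc": -2, "atk_cov": -1, "atk_unc": 2},
}

def expected(util_cov, util_unc, p_covered):
    return p_covered * util_cov + (1 - p_covered) * util_unc

def best_defense(steps=100):
    """Grid-search the defender's commitment against a best-responding attacker."""
    best = None
    for i in range(steps + 1):
        cover = {"gold": i / steps, "silver": 1 - i / steps}
        attacked = max(TARGETS, key=lambda t: expected(
            TARGETS[t]["atk_cov"], TARGETS[t]["atk_unc"], cover[t]))
        d_util = expected(TARGETS[attacked]["def_cov"],
                          TARGETS[attacked]["def_unc"], cover[attacked])
        if best is None or d_util > best[0]:
            best = (d_util, cover, attacked)
    return best

value, coverage, attacked = best_defense()
print(f"defender utility {value:.2f} with coverage {coverage}; attacker hits {attacked}")
```

A player's "rationality" in such a game can then be scored by how close their target choices come to the observed-coverage best response.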

CMU Researchers Propose New Rules for Internet Fairness

Daniel Tkacik

Just weeks after a team of Carnegie Mellon researchers demonstrated that Google's new congestion control algorithm (CCA) gives an unfair advantage to its own traffic, the same team has proposed new guidelines for how future algorithms should be developed.

"Our work shows that it is not always the case that new CCAs will be fair to the old ones," said Justine Sherry, an assistant professor in CMU's Computer Science Department (CSD) and a co-author of the proposal. "Google is not the only company deploying new algorithms. Moving forward, we need guidelines."

Those guidelines, offered in their study, "Beyond Jain's Fairness Index: Setting the Bar for the Deployment of Congestion Control Algorithms," were presented last week at the 18th ACM Workshop on Hot Topics in Networks (HotNets-2019) in Princeton, New Jersey.

Despite the team's focus on internet fairness, their proposed guidelines don't focus on fairness itself. That's because perfect fairness, the authors argue, is actually difficult to achieve, and few (if any) existing CCAs today are perfectly fair. "We need to stop making excuses for why our new algorithms are not meeting an unrealistic goal," said Ranysha Ware, a CSD Ph.D. student and lead author on the study.

So instead of focusing on developing CCAs that are fair, Ware and her co-authors say that developers need to ensure that new CCAs would not inflict harm on the existing ecosystem of CCAs. Put simply: if a new CCA is more unfair than existing CCAs, it is not okay to deploy.

"What makes Google's new algorithm special is not that it's unfair, it's that it is more unfair and causes more harm to the internet than existing CCAs," said Sherry, who is also a member of the university's CyLab Security and Privacy Institute. "You can only be as unfair as things already are. You can't be more unfair than things already are."

Sherry likens the issue of CCA fairness to splitting a cookie between two children. "Ideally, we would cut the cookie perfectly in half, but no one can ever perfectly cut a cookie in half. One side always ends up uneven," Sherry said. "The trick is doing something that is reasonable, even if it's not perfectly fair: having one child split the cookie, and the other child choose which half they get." In the case of CCAs, the trick is ensuring that the status quo is left unperturbed.

Other authors on the study included CSD Department Head Srinivasan Seshan and Nefeli Networks software engineer and CSD alumnus Matthew Mukerjee.
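
The study's title refers to Jain's fairness index, the classical measure of how evenly competing flows share a link. The sketch below computes that index and contrasts it with a simplified reading of the harm-based test described above; the flow names and throughput numbers are illustrative assumptions.

```python
def jains_index(throughputs):
    """Jain's fairness index: 1.0 = perfectly fair, 1/n = one flow hogs all."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

print(f"old CCA vs old CCA: {jains_index([4.0, 6.0]):.3f}")  # imperfect but typical
print(f"old CCA vs new CCA: {jains_index([1.0, 9.0]):.3f}")  # much worse

# Simplified deployment bar: an incumbent flow should fare no worse against
# the new CCA than it already does against another incumbent (assumed Mbps).
incumbent_vs_incumbent = 4.0
incumbent_vs_new = 1.0
print("ok to deploy" if incumbent_vs_new >= incumbent_vs_incumbent
      else "causes more harm than the status quo")
```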

Two Endowed Professorships Created for Computer Science and Electrical and Computer Engineering Faculty

Gifts Totaling $6 Million by Cadence Design Systems and CEO Lip-Bu Tan Will Advance Research and Teaching

Brian Thornton

Cadence Design Systems Inc. and its CEO, Lip-Bu Tan, have made significant gifts of $3 million each to support Carnegie Mellon University faculty members working in computer-related fields.

Cadence, a leading multinational company in the electronic design automation industry, has created the Cadence Design Systems Endowed Chair in Computer Science. Tan and his wife, Ysa Loo, have created the Tan Family Endowed Chair in Electrical and Computer Engineering (ECE). Together, the gifts total $6 million, which will provide funding to advance faculty members' activities, including research and teaching.

"Exceptional people with pioneering ideas have fueled Carnegie Mellon’s game-changing research and education from the very beginning, so investing in human capital development is one of the most important ways that we can retain our global leadership," CMU President Farnam Jahanian said. "Endowed professorships provide a singularly powerful tool to support these bright minds, and we are grateful to Cadence, Lip-Bu and Ysa for their exceptional generosity toward this critical priority."

Cadence's products are used by electronic systems and semiconductor companies to create innovative and transformational end products. Cadence's Academic Network Program, of which CMU is a member, promotes the proliferation of technology expertise among selected universities, research institutes and industry advisors in the area of microelectronic systems development.

"Cadence is privileged to institute an endowed chair in the School of Computer Science," said John Shoven, chairman of the Board of Directors of Cadence Design Systems. "We are fortunate to have many CMU CS graduates on our Cadence team and look forward to enabling the advancement of faculty members' research priorities."

"The connection between Cadence's work and computer science cannot be overstated," said Martial Hebert, dean of the School of Computer Science. "This new professorship is another indication of the deepening connections among computer science, electronic design automation and related areas."

Tan has been the CEO of Cadence since 2009 and joined the company's board of directors in 2004. He is also the founder and chairman of Walden International, a venture capital firm that he launched in 1987. He is a member of The Business Council and serves on the board of directors of Hewlett Packard Enterprise Company and Schneider Electric SE. Tan also serves on CMU's Board of Trustees and is a member of the College of Engineering's Dean's Advisory Council. The couple's two sons, Andrew and Elliott, both received their master's degrees from CMU's College of Engineering.

"Carnegie Mellon's ECE department has provided world-class education and an incredible learning experience to our two sons," Tan said. "Ysa and I are delighted to support the ECE department as it continues pushing the frontiers of cutting-edge, innovative research." Tan and Loo previously endowed a graduate student fellowship in the Department of Electrical and Computer Engineering.

"I'm excited by the opportunity to recognize and support one of our star faculty as the Tan Family Professor in ECE," said Jon Cagan, interim dean of the College of Engineering. "We deeply value the additional support of research in the college by Lip-Bu Tan and his family."

CMU Women Prominent Among Rising Stars 2019

Annual Workshop Boosts Women in Computer Science, Electrical and Computer Engineering

Byron Spice

Women from Carnegie Mellon University outnumbered those from every other institution at Rising Stars 2019, an annual workshop for early career women in computer science and electrical and computer engineering. They also won two of the four prizes in the workshop's Research Pitch Competition.

The intensive workshop, designed for women pursuing academic careers, was hosted this year by the University of Illinois at Urbana-Champaign Oct. 29-Nov. 1. It included the largest class of participants to date, with 90 participants from almost 40 institutions. Twelve CMU women attended the workshop. The University of California, Berkeley, with nine participants, was the only other institution that came close to that total. Participants were selected from about 300 applicants.

Pardis Emami Naeini, a Ph.D. student in CMU's CyLab and the School of Computer Science's Institute for Software Research (ISR), and Elahe Soltanaghaei, a post-doctoral researcher who joined CyLab last month, won the Research Pitch Competition. They and the other two winners will be invited back to Illinois to present their talks. Emami Naeini's talk was "Privacy and Security Label for IoT Devices," and Soltanaghaei discussed "Sensing the Physical World Using Pervasive Wireless Infrastructure."

Rising Stars was launched at MIT in 2012 and has been hosted at different campuses each year since, including CMU. This year's workshop included opportunities for one-on-one mentoring and feedback on the first eight minutes of each participant's job talk.

In addition to Emami Naeini and Soltanaghaei, the CMU contingent included Forough Arabshahi, a post-doctoral associate in the Machine Learning Department (MLD); Naama Ben-David, a Ph.D. student in the Computer Science Department; Maria De-Arteaga, a Ph.D. student in MLD and the Heinz College; Hana Habib, a Ph.D. student in the ISR and CyLab; and Guyue Liu, a post-doctoral researcher in CyLab. Other members of the contingent were Soo-Jin Moon, a Ph.D. student in the Electrical and Computer Engineering Department and CyLab; Swabha Swayamdipta, a recent Ph.D. graduate of the Language Technologies Institute and now a post-doctoral researcher at the Allen Institute for Artificial Intelligence; Hsia-Yu Tung, a Ph.D. student in MLD; Xu Wang, a Ph.D. student in the Human-Computer Interaction Institute; and Yang Yang, a Ph.D. student in the Computational Biology Department.

Neural Network Fills In Data Gaps for Spatial Analysis of Chromosomes

Machine Learning Enhances Study of 3D Genome Structure in Cell Nucleus

Byron Spice

Computational methods used to fill in missing pixels in low-quality images or video also can help scientists provide missing information about how DNA is organized in the cell, computational biologists at Carnegie Mellon University have shown.

Filling in this missing information will make it possible to more readily study the 3D structure of chromosomes and, in particular, subcompartments that may play a crucial role in both disease formation and determining cell functions, said Jian Ma, associate professor in CMU's Computational Biology Department.

In a research paper published today by the journal Nature Communications, Ma and Kyle Xiong, a student in the CMU-University of Pittsburgh Joint Ph.D. Program in Computational Biology, report that they successfully applied their machine learning method to nine cell lines. This enabled them, for the first time, to study differences in spatial organization related to subcompartments across those lines. Previously, subcompartments could be revealed only in a single lymphoblastoid cell line, known as GM12878, that has been exhaustively sequenced at great expense using Hi-C technology, which measures spatial interactivity among all regions of the genome.

"We now know a lot about the linear composition of DNA in chromosomes, but in the nuclei of human cells, DNA isn't linear," Xiong said. "Chromosomes in the cell nucleus are folded and packaged into 3D shapes. That 3D structure is critical to understanding the cellular functions in development and diseases."

Subcompartments are of particular interest because they reflect spatial segregation of chromosome regions with high interactivity. Scientists are eager to learn more about the juxtaposition of subcompartments and how it affects cell function, Ma said. But until now researchers could calculate the patterns of subcompartments only if they had an extremely high coverage Hi-C dataset — that is, one in which the DNA had been sequenced in great detail to capture more interactions. That level of detail is missing in the datasets for cell lines other than GM12878.

Working with Ma, Xiong used an artificial neural network called a denoising autoencoder to help fill in the gaps in less-than-complete Hi-C datasets. In computer vision applications, the autoencoder can supply missing pixels by learning what types of pixels typically are found together and making its best guess. Xiong adapted the autoencoder to high-throughput genomics, using the dataset for GM12878 to train it to recognize which sequences of DNA pairs from different chromosomes typically might be interacting with each other in 3D space in the cell nucleus.

This computational method, which Ma and Xiong have dubbed SNIPER, proved successful in identifying subcompartments in eight cell lines whose interchromosomal interactions based on Hi-C data were only partially known. They also applied SNIPER to the GM12878 data as a control. But Xiong noted that it is not yet known how widely the tool can be applied to other cell types. He and Ma are continuing to enhance the method so it can be used in a variety of cellular conditions and even in different organisms.

"We need to understand how subcompartment patterns are involved in the basic functions of cells, as well as how mutations can affect these 3D structures," Ma said. "Thus far, in the few cell lines we've been able to study, we see that some subcompartments are consistent across cell types, while others vary. Much remains to be learned."
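
A denoising autoencoder of the kind described can be sketched in a few lines of PyTorch: the network learns to reconstruct a dense signal from artificially corrupted copies, and so learns to impute missing entries. The architecture, sizes and masking-style corruption below are toy assumptions; the published SNIPER model for Hi-C data is substantially larger.

```python
# Minimal denoising autoencoder: train on masked (sparse) inputs against
# dense targets so the network fills in missing values. Toy scale only.
import torch
import torch.nn as nn

torch.manual_seed(0)
dense = torch.rand(512, 64)                # stand-in for high-coverage rows

model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),          # encoder
    nn.Linear(32, 64), nn.Sigmoid(),       # decoder (targets lie in [0, 1])
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    mask = (torch.rand_like(dense) > 0.7).float()  # keep ~30% of entries
    loss = nn.functional.mse_loss(model(dense * mask), dense)
    opt.zero_grad()
    loss.backward()
    opt.step()

# A trained model can now propose values for the masked entries of new
# low-coverage rows.
print(f"final reconstruction loss: {loss.item():.4f}")
```
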
The National Institutes of Health and the National Science Foundation supported this work.

EduSense: Like a FitBit for Your Teaching Skills

CMU Researchers Develop Comprehensive Classroom Sensing System

Virginia Alvino Young

While training and feedback opportunities abound for K-12 educators, the same can't be said for instructors in higher education. Currently, the most effective mechanism for professional development is for an expert to observe a lecture and provide personalized feedback. But a new system developed by Carnegie Mellon University researchers offers comprehensive, real-time sensing that is inexpensive and scalable, creating a continuous feedback loop for the instructor.

The system, called EduSense, analyzes a variety of visual and audio features that correlate with effective instruction. "Today, the teacher acts as the sensor in the classroom, but that's not scalable," said Chris Harrison, assistant professor in CMU's Human-Computer Interaction Institute (HCII). Harrison said classroom sizes have ballooned in recent decades, and it's difficult to lecture and be effective in large or auditorium-style classes.

EduSense is minimally obtrusive. It uses two wall-mounted cameras — one facing students and one facing the instructor. It senses things such as students' posture to determine their engagement, and how much time instructors pause before calling on a student. "These are codified things that educational practitioners have known as best practices for decades," Harrison said.

A single off-the-shelf camera can view everyone in the classroom and automatically identify information such as where students are looking, how often they're raising their hands and whether the instructor moves through the space instead of staying behind a podium. The system uses OpenPose, another CMU project, to determine body position.

With advances in computer vision and machine learning, it’s possible to provide insights that would take days if not months to get with manual observation, said the HCII's Karan Ahuja and Dohyun Kim of CMU’s Institute for Software Research, the two lead Ph.D. students working on the EduSense project.

Harrison said learning scientists are interested in the instructional data. "Because we can track the body, it's like wearing a suit of accelerometers. We know how much you're turning your head and moving your hands. It's like you're wearing a virtual motion-capture system while you're teaching."

Using high-resolution cameras streaming 4K video for many classes at once is a "computational nightmare," Harrison said. To keep up, computing resources are elastically assigned to provide the best possible frame rate for real-time data.

The project also has a strong focus on privacy protection, guided by Yuvraj Agarwal, an associate professor in the university's Institute for Software Research (ISR). The team didn't want to identify individual students, and EduSense can't. No names or identifying information are used, and since camera data is processed in real time, the information is discarded quickly.

Now that the team has demonstrated that it can capture the data, HCII faculty member Amy Ogan said the current challenge is wrapping it up and presenting it in a way that's educationally effective. The team will continue working on instructor-facing apps to see if professors can integrate the feedback into practice. "We have been focused on understanding how, when and where to best present feedback based on this data so that it is meaningful and useful to instructors to help them improve their practice," she said.
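
As one concrete example of turning body keypoints into classroom signals, the sketch below checks for a raised hand by comparing wrist and shoulder positions in OpenPose-style image coordinates (where y grows downward). The joint names, indices and logic are hypothetical illustrations, not EduSense's actual code.

```python
# Toy hand-raise detector over body keypoints: a wrist above the shoulder
# (smaller y in image coordinates) counts as a raised hand. Illustrative only.
def hand_raised(keypoints):
    """keypoints: dict mapping joint name to (x, y) pixel coordinates."""
    for wrist, shoulder in (("right_wrist", "right_shoulder"),
                            ("left_wrist", "left_shoulder")):
        if wrist in keypoints and shoulder in keypoints:
            if keypoints[wrist][1] < keypoints[shoulder][1]:
                return True
    return False

student = {"right_shoulder": (420, 310), "right_wrist": (432, 240)}
print(hand_raised(student))  # True: the wrist sits above the shoulder
```
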
This research has been presented at UbiComp and the International Conference of the Learning Sciences, and will be presented this coming April at the American Educational Research Association annual meeting. Other researchers involved in EduSense include HCII Ph.D. student Franceska Xhakaj; Annie Xie, a project manager in the HCII; Jay Eric Townsend, former senior engineer in the HCII; Stanley Zhang, a student in CMU’s Electrical and Computer Engineering Department; and Virag Varga, from ETH Zurich.

ACM Names Tom Cortina as Distinguished Member

Byron Spice

The Association for Computing Machinery (ACM) has named Thomas Cortina, assistant dean for undergraduate education in the School of Computer Science, one of 62 computer scientists worldwide to be recognized this year as Distinguished Members for their outstanding contributions.

All 2019 inductees are longstanding ACM members and were selected as Distinguished Members by their peers for a range of accomplishments that have contributed to technologies that underpin how we live, work and play. Cortina is one of nine members selected for their educational contributions to computing.

A faculty member since 2004, Cortina became assistant dean in 2012, overseeing a rapid expansion of the undergraduate program. He helped launch the popular CS4HS workshop for high school computer science teachers, and ACTIVATE workshops for science, technology, engineering and math teachers in the Pittsburgh region.

Prior to joining CMU, Cortina taught for a combined 16 years at Polytechnic University in Brooklyn, New York, and at Stony Brook University. He has been active in ACM's Special Interest Group on Computer Science Education (SIGCSE) and currently serves on the ACM's Education Advisory Committee. He served on the National Science Foundation's Computer and Information Science and Engineering advisory committee for four years, and was on the advisory board of a joint NSF-College Board project to develop the latest Advanced Placement Computer Science Principles course.

"Each year it is our honor to select a new class of Distinguished Members," said ACM President Cherri M. Pancake. "Our overarching goal is to build a community wherein computing professionals can grow professionally and, in turn, contribute to the field and the broader society. We are delighted to recognize these individuals for their contributions to computing, and we hope that the careers of the 2019 ACM Distinguished Members will continue to prosper through their participation with ACM."