News 2019

May 2019

Carley Awarded Honorary Doctorate by University of Zurich

Josh Quicksall

Kathleen M. Carley, a professor in the Institute for Software Research and the Engineering and Public Policy Department, has received an honorary doctorate from the Faculty of Business, Economics and Informatics of the University of Zurich.

In their statement regarding Carley's degree, the University of Zurich noted that her work constitutes "pioneering contributions to our understanding of social systems by means of computational methods. Through the development of new methods to study social networks, she shaped the development of data science and computational social science and provided important stimuli for the study of digital societies."

Carley, who founded and directs the Center for Computational Analysis of Social and Organizational Systems (CASOS), has made seminal contributions to the computational and data-driven study of social organizations, and is a pioneer of data science and computational social science. She has led the development of software tools for network analysis, agent-based modeling and epidemiological modeling used in academia and industry. She is also the founder and leader of the newly emerging field of social cybersecurity.

The University of Zurich, founded in 1833, is widely regarded as one of the leading research universities in Europe. More than 12 Nobel Prize laureates are associated with the institution, which was the first in Europe to be founded by the state rather than a monarch or church.

For more on Carley's work, read the full story on the ISR website.

CMU, Yixue Education Inc. Announce AI Research Project in Adaptive K-12 Education

Byron Spice

Yixue Education Inc. and Carnegie Mellon University have announced a new multiyear partnership highlighted by a comprehensive artificial intelligence research lab. The CMU-Squirrel AI Research Lab on Personalized Education at Scale will develop new ways for AI, machine learning, cognitive science and human-computer interface technologies to improve the adaptive learning experiences of K–12 students around the world.

Yixue has used the Squirrel AI Learning brand to launch more than 1,900 learning centers in more than 300 cities across China, offering AI-driven adaptive tutoring services in a blended classroom learning model. In these after-school centers, Yixue offers an advanced adaptive learning system and personalized content in multiple subjects, with the options of in-person and online tutor support. In many parts of China, personalized learning is out of reach for most students, and qualified teachers are in short supply for tutoring services.

"The CMU-Squirrel AI Research Lab on Personalized Education at Scale provides unique new opportunities for CMU faculty and students to partner with Yixue and its scientists and engineers to extend the frontiers of adaptive learning theories, technologies and practices," said Tom Mitchell, interim dean of CMU's School of Computer Science. The new research lab will be directed at CMU by Mitchell and Ken Koedinger, the Hillman Professor of Computer Science and Human-Computer Interaction.

"This partnership is a clear demonstration, in the tradition of CMU, of how advanced scientific research, combined with industry perspectives, can advance the education industry and have a global social impact," Mitchell added. "The long-term commitment to this new partnership is a testament to our shared desire to advance AI and machine learning for personalized education."

"We're thrilled to be partnering with the exceptional faculty and students at Carnegie Mellon, which has established itself as the leading institution for artificial intelligence, machine learning and adaptive learning technologies," said Derek Haoyang Li, chairman, Yixue Education Inc. "By building upon CMU's proven record of research and development in the science of learning, we will be able to expand the frontier of adaptive learning theories and practices to improve student engagement, efficiency and learning outcomes for all kids around the world."

Hoffmann Receives NSF CAREER Award

Byron Spice

Jan Hoffmann, an assistant professor in the Computer Science Department, has received a five-year, $519,000 Faculty Early Career Development (CAREER) Award, the National Science Foundation's most prestigious award for young faculty members.

Hoffmann's research specialties are programming languages and verification. The NSF award will support his work on extending formal verification techniques to quantitative properties, such as available memory and execution time.

For instance, Hoffmann said it would be insufficient to verify the correctness of software for a self-driving vehicle without considering whether the system had sufficient memory to execute the program or could do so quickly enough to allow the vehicle to respond to live traffic situations. Likewise, formal methods could determine how long it takes to run a program on the cloud and, thus, justify charges by cloud computing providers.

Hoffmann said quantitative properties will be critical to providing probabilistic guarantees for the safety of a software system. Quantitative properties can also help cybersecurity researchers reason about side channels. In the case of password prompts, for instance, the time it takes to verify whether an input matches the password can provide help in cracking passwords, and the size of data packets transmitted while filling out an online medical form can let an observer determine which links on the form have been clicked.

Hoffmann earned his Ph.D. in computer science at Ludwig Maximilian University of Munich and the Technical University of Munich. He joined the CMU faculty in 2015, after serving as a post-doctoral associate and an associate research scientist at Yale University.
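
The password-prompt side channel Hoffmann mentions is easy to see in code. Here is a minimal sketch of the general phenomenon (an illustration, not an example from Hoffmann's work): a naive comparison returns at the first mismatch, so its running time grows with the length of the correctly guessed prefix, while Python's hmac.compare_digest is designed to take the same time regardless of where the strings differ.

```python
# Minimal sketch of a timing side channel in password checking.
# The secret and the timing loop are illustrative assumptions.
import hmac
import time

SECRET = "hunter2!"

def naive_check(guess: str) -> bool:
    # Returns at the first mismatching character, so running time
    # grows with the length of the correct prefix -- the side channel.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True

def constant_time_check(guess: str) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # closing the timing channel.
    return hmac.compare_digest(guess.encode(), SECRET.encode())

def measure(check, guess: str, trials: int = 200_000) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        check(guess)
    return time.perf_counter() - start

# "huntex2!" shares a six-character correct prefix with the secret, so
# naive_check takes measurably longer on it than on "xunter2!".
for guess in ["xunter2!", "huntex2!"]:
    print(f"{guess}: naive={measure(naive_check, guess):.3f}s "
          f"constant={measure(constant_time_check, guess):.3f}s")
```

Verifying that such timing differences are absent requires exactly the kind of quantitative reasoning about execution time that Hoffmann's project targets.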

Pitt and CMU To Create Autonomous Robotic Trauma Care System

Byron Spice (CMU), Allison Hydzik (Pitt)

The University of Pittsburgh School of Medicine and Carnegie Mellon University each have been awarded four-year contracts totaling more than $7.2 million from the U.S. Department of Defense to create an autonomous trauma care system that fits in a backpack and can treat and stabilize soldiers injured in remote locations.

The goal of TRAuma Care In a Rucksack (TRACIR) is to develop artificial intelligence (AI) technologies enabling medical interventions that extend the "golden hour" for treating combat casualties and ensure an injured person's survival for long medical evacuations.

A multidisciplinary team of Pitt researchers and clinicians from emergency medicine, surgery, critical care and pulmonary fields will provide a wealth of real-world trauma data and medical algorithms that CMU roboticists and computer scientists will incorporate in the creation of a hard and soft robotic suit, into which an injured person can be placed. Monitors embedded in the suit will assess the injury, and AI algorithms will guide the appropriate critical care interventions and robotically apply stabilizing treatments, such as intravenous fluids and medications.

Ron Poropatich, M.D., retired U.S. Army colonel, director of Pitt's Center for Military Medicine Research and a professor in Pitt's Division of Pulmonary, Allergy and Critical Care Medicine, is overall principal investigator on the $3.71 million Pitt contract, with Michael R. Pinsky, M.D., professor in Pitt's Department of Critical Care Medicine, as its scientific principal investigator. Artur Dubrawski, a research professor in CMU's Robotics Institute, is principal investigator on the $3.5 million CMU contract.

"Battlefields are becoming increasingly remote, making medical evacuations more difficult," Poropatich said. "By fusing data captured from multiple sensors and applying machine learning, we are developing more predictive cardio-pulmonary resuscitation opportunities, which hopefully will conserve an injured soldier's strength. Our goal with TRACIR is to treat and stabilize soldiers in the battlefield, even during periods of prolonged field care, when evacuation is not possible."

Much technology still needs to be developed to enable robots to reliably and safely perform tasks, such as inserting IV needles or placing a chest tube in the field, Dubrawski said. Initially, the research will be a series of baby steps demonstrating the practicality of individual components the system will eventually require.

"Everybody has a slightly different vision of what the final system will look like," Dubrawski added. "But we see this as being an autonomous or nearly autonomous system — a backpack containing an inflatable vest or perhaps a collapsed stretcher that you might toss toward a wounded soldier. It would then open up, inflate, position itself and begin stabilizing the patient. Whatever human assistance it might need could be provided by someone without medical training."

With a digital library of detailed physiologic data collected from over 5,000 UPMC trauma patients, Pinsky and Dubrawski previously created algorithms that could allow a computer program to learn the signals that an injured patient's health is deteriorating before damage is irreversible and tell the robotic system to administer the best treatments and therapies to save that person's life.

"Pittsburgh has the three components you need for a project like this — world-class expertise in critical care medicine, artificial intelligence and robotics," Dubrawski said. "That's why Pittsburgh is unique and is the one place for this project."

While the project's immediate goal is to carry forward the U.S. military's principle of "leave no man behind" and treat soldiers on the battlefield, there are numerous potential civilian applications, said Poropatich. "TRACIR could be deployed by drone to hikers or mountain climbers injured in the wilderness; it could be used by people in submarines or boats; it could give trauma care capabilities to rural health clinics or be used by aid workers responding to natural disasters," he said. "And, someday, it could even be used by astronauts on Mars."

In addition to Dubrawski, CMU researchers on this project include robotics faculty members Howie Choset, Chris Atkeson, John Galeotti and Herman Herman, director of CMU's National Robotics Engineering Center.

Bacteria Change Behavior To Tackle Tiny Obstacle Course

E. coli Behavior in Obstacle Courses Has Implications for Robotic Search and Rescue

Byron Spice

It's not exactly the set of TV's "American Ninja Warrior," but a tiny obstacle course for bacteria has shown researchers how E. coli changes its behavior to rapidly clear obstructions to food. Their work holds implications for not only biology and medicine, but also robotic search-and-rescue tactics.

Scientists at Carnegie Mellon University, the University of Pittsburgh and the Salk Institute for Biological Studies report today in the Proceedings of the National Academy of Sciences that the well-known "swim and tumble" behavior that bacteria use to move toward food or away from poisons changes when bacteria encounter obstacles.

"In the real world, they always encounter lots of obstacles," said Ziv Bar-Joseph, a professor in CMU's Computational Biology and Machine Learning Departments. E. coli, for instance, inhabits the complicated terrain of the gastrointestinal tract. Yet previous studies of chemotaxis — the way bacteria move toward a higher concentration of food or away from concentrations of poisons — generally have been done in unobstructed chambers. Existing models of chemotaxis predict that obstacles will slow the progress of bacteria.

So the researchers designed microfluidic chambers — just 10 micrometers high, one millimeter wide and one millimeter long — and placed evenly distributed square and round obstacles in them. When they tested E. coli inside these tiny obstacle courses, they were surprised at the speed at which the bacteria found a food source.

"Almost regardless of the obstacles, they got to the food almost as quickly as they did without obstacles," said Sabrina Rashid, a CMU Ph.D. student in computational biology and the lead author on the study. "The obstacles were not affecting the time they needed to reach food, as the previous models predicted." Bacteria are known to communicate with each other by secreting chemicals, and this sort of communication no doubt informs bacteria as they try to get around an obstacle, she said.

But a closer look at the bacteria also showed a change in behavior. Normally, bacteria swim a bit, then perform a circular dance, called tumbling, to reorient themselves regarding food concentrations. The tumbling slows progress toward food, but importantly enables bacteria to make course corrections. The researchers suspected that a key reason for the improvement in speed when facing obstacles is the bacteria's ability to tumble less and swim more until they are in the clear. So they designed additional experiments that tracked individual bacteria cells and confirmed these predictions.

Given the importance of cell movement in biology, the new findings could have implications for how malignant cells spread through the body or how infections might be treated, Bar-Joseph said.

Based on these findings, the researchers have developed their own chemotaxis model to account for this new behavior and better predict the performance of bacteria. Applying the model to simulations of teams, or swarms, of robots performing searches for trapped victims in emergencies has shown that this approach can reduce their search time as well. "Any type of insight we can get from biology to improve computation is important to us," Bar-Joseph added.

In addition to Bar-Joseph and Rashid, the research team at CMU included Shashank Singh, a Ph.D. student in machine learning and statistics. At Pitt, Hanna Salman, associate professor of physics and astronomy, and Zoltan Oltvai, associate professor of pathology, were joined by Zicheng Long, a post-doctoral researcher in Salman's lab, and Maryam Kohram and Harsh Vashistha, both Ph.D. students in physics. Saket Navlakha, an associate professor at the Salk Institute, completes the team.

The National Science Foundation supported this research.
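
The behavioral change the team observed translates naturally into an agent-based simulation. The toy run-and-tumble model below (an illustration of the idea, not the authors' published model) tumbles frequently when the food gradient worsens but suppresses full tumbles just after hitting an obstacle, so the agent keeps swimming until it is in the clear.

```python
# Toy run-and-tumble chemotaxis with obstacles. All parameters
# (grid, tumble rates, step size) are illustrative assumptions.
import math
import random

random.seed(1)

FOOD = (50.0, 50.0)
# Evenly spaced round obstacles, echoing the microfluidic chambers.
OBSTACLES = [(ox, oy) for ox in range(10, 50, 8) for oy in range(10, 50, 8)]

def concentration(x, y):
    return -math.hypot(x - FOOD[0], y - FOOD[1])  # higher nearer the food

def blocked(x, y):
    return any(math.hypot(x - ox, y - oy) < 1.5 for ox, oy in OBSTACLES)

x = y = 0.0
heading = random.uniform(0, 2 * math.pi)
last_c = concentration(x, y)
hit_obstacle = False

for step in range(1, 10001):
    if hit_obstacle:
        # Just hit an obstacle: per the paper's observation, suppress
        # full tumbles; nudge the heading and keep swimming to clear it.
        heading += random.uniform(-0.5, 0.5)
    elif random.random() < (0.8 if concentration(x, y) < last_c else 0.1):
        # Ordinary chemotaxis: tumble (reorient) when things get worse.
        heading = random.uniform(0, 2 * math.pi)

    last_c = concentration(x, y)
    nx, ny = x + math.cos(heading), y + math.sin(heading)
    hit_obstacle = blocked(nx, ny)
    if not hit_obstacle:
        x, y = nx, ny

    if math.hypot(x - FOOD[0], y - FOOD[1]) < 2.0:
        print(f"reached food in {step} steps")
        break
else:
    print("did not reach food within 10,000 steps")
```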

Carnegie Mellon Educational Software Slated for Pilot Project in Zambia

Finalist in Global Learning XPRIZE Lets Kids Teach Themselves Reading, Writing, Math

Byron Spice

RoboTutor LLC, a team based at Carnegie Mellon University that was a finalist in the $15 million Global Learning XPRIZE, has announced that its educational apps will be used to teach 10,000 children basic reading, writing and mathematical skills in the Republic of Zambia.

Two other finalists — Kitkit School from South Korea and the United States, and onebillion from Kenya and the United Kingdom — on Wednesday were named the winners in the Global Learning XPRIZE, sharing the $10 million grand prize donated by entrepreneur Elon Musk. Each team in the competition developed educational apps to run on Android tablets that would enable children ages 7–10 to teach themselves basic literacy and numeracy without the aid of an adult.

The winner was chosen based on a 15-month field test in Tanzania of the software developed by five finalists, involving more than 2,700 children in 170 villages. Before the field test, 74% of the participating children were reported as never attending school, 80% reported as never being read to at home and more than 90% could not read a single word in Swahili. Following the trial, the share of children who could not read a single word was cut in half.

"We salute the Kitkit School and onebillion teams for their achievement. But we also are incredibly proud and delighted by the performance of RoboTutor, and I can't thank enough the more than 180 people around the world who contributed to its development," said Jack Mostow, leader of the RoboTutor team and an emeritus research professor at CMU's Robotics Institute.

"Though we didn't win the big prize, it was an honor to participate in this grand educational effort," Mostow said. "At Carnegie Mellon, we like to say that we work on solutions to real-world problems; I can't think of a bigger educational challenge than the 250 million children on this planet who lack basic literacy and numeracy."

Just as RoboTutor will move forward with plans for its pilot project in Zambia, Mostow said he hoped all of the finalists will continue to develop and deploy their educational software. "If RoboTutor — and the other finalists in this competition — can help compensate for the shortage of teachers in so many areas of this world, we will all have accomplished something great. Today marks only the beginning," he added.

In Zambia, Carnegie Mellon will partner with Anchor of Hope Charities, which provides 50,000 children in Zambia with food and shoes. The pilot study for a national education program for Zambian children has been endorsed by Zambia's Ministry of General Education. Though 10,000 children initially would be involved, the hope is to expand the program to include 8 million students in Zambia.

Mostow established RoboTutor LLC as a spinoff of Carnegie Mellon for the purpose of pursuing the Global Learning XPRIZE and licensed some learning technologies from the university. Going forward, RoboTutor will be under the umbrella of the Simon Initiative, CMU's campus-wide effort devoted to learning engineering and an early RoboTutor sponsor. "We really believe in this work and are committed to seeing it continue," said Norman Bier, executive director of the Simon Initiative. About half of the $4 million cost of the pilot project has been raised thus far, and CMU is seeking additional donors for the effort.

RoboTutor software already is part of the OpenSimon Toolkit, a suite of open-source tools, educational resources and underlying codebase that the Simon Initiative has made available to catalyze a revolution in learning and teaching.

RoboTutor is based on decades of research on human learning, including an automated Reading Tutor, developed by Mostow's Project LISTEN team, that helped children learn to read. RoboTutor employs artificial intelligence to recognize children's speech and handwriting, and includes a number of activities that children can choose that help develop reading, comprehension and numeracy skills. It assesses each child's performance, providing help when needed and adjusting the activities to match the student's skill level.

Each of the finalists prepared software in both English and Swahili. The Swahili version was used for the field test in Tanzania. The English version of RoboTutor will be used in the pilot program in Zambia.

"Carnegie Mellon is all about using knowledge to solve big problems, and RoboTutor fits squarely within that CMU tradition," said Tom Mitchell, interim dean of CMU's School of Computer Science. "We are optimistic that the software developed by the RoboTutor team and the other finalists will make headway against illiteracy in an age where education grows ever more important. Congratulations to the entire RoboTutor team and best of luck in implementing the software more broadly."

In addition to Mostow, leaders of the RoboTutor team included Amy Ogan, the Thomas and Lydia Moran Assistant Professor of Learning Science in the Human-Computer Interaction Institute (HCII), who spearheaded efforts to adapt RoboTutor to the Tanzanian culture; and Judith Uchidiuno, the project manager and a Ph.D. student in HCII.

Other participants included Leonora Anyango-Kivuva, a consultant for the National Foreign Language Center, a former Swahili instructor at Pitt and the voice of RoboTutor; Judy Kendall, director of Anchor of Hope Charities; Kevin DeLand, software architect and developer; and Janet Mostow, enterprise architect and RoboTutor LLC board member.

New Technology Improves Cloud Computing

Daniel Tkacik

Cloud computing has enabled huge triumphs in big data, from searching the web in a millisecond to decoding the human genome. But to keep cloud servers running smoothly, developers have applied different techniques to minimize disrupting their central processing units (CPUs) — techniques that don't often work together. Thanks to a team of computer science researchers, that's all changed.

Historically, developers have relied on containerization or remote direct memory access (RDMA) to keep cloud applications running smoothly. The first technique creates an isolated computing environment where applications can run without disrupting a machine's CPU. RDMA allows developers to access memory on a remote server without interrupting the server's CPU. Traditionally, the two couldn't be used together.

Enter FreeFlow, open-source software that unites these two once-incompatible techniques. "Before FreeFlow, no system could use RDMA for containerized applications," said Daehyeok Kim, a Ph.D. student in Carnegie Mellon's Computer Science Department (CSD). "FreeFlow makes this possible." Kim presented FreeFlow earlier this year at the 16th USENIX Symposium on Networked Systems Design and Implementation in Boston. (You can watch Kim's presentation on YouTube.)

To understand how FreeFlow works, consider TensorFlow, a popular machine learning framework that companies such as Google use for various tasks, including image and speech recognition. If a developer installs the software on containers in the cloud, a TensorFlow instance running inside one container can't remotely access data inside another container without invoking the host server's CPU. Nor could developers use RDMA as a communication method in their applications, so performance was limited.

"This boils down to companies like Intel running deep learning applications at much faster speeds with FreeFlow than they were able to before," said Kim. "That's because RDMA is 15 times faster than traditional networking."

Other researchers on the study included CSD Ph.D. student Tianlong Tu, Hongqiang Harry Liu from Alibaba, Jitu Padhye and Shachar Raindel from Microsoft Research, Chuanxiong Guo and Yibo Zhu from Bytedance, CyLab and Electrical and Computer Engineering professor Vyas Sekar, and CSD Department Head Srinivasan Seshan.

RoboTutor Team Awaits Global Learning XPRIZE Results

Winning Team Gets $10 Million, but Ultimate Goal Is Boosting Literacy for Millions Worldwide

Byron Spice

Team members of RoboTutor LLC — who have spent years developing open-source software that children can use to teach themselves basic reading, writing and mathematics — are anxiously awaiting the results of the Global Learning XPRIZE competition.

RoboTutor, based at Carnegie Mellon University, is one of five finalists in the international competition. Following a 15-month, large-scale field trial in Tanzania that involved thousands of children, XPRIZE will announce the trial results and the winner of the competition's $10 million prize on Wednesday evening, May 15, during a ceremony in Los Angeles.

"Our goal was always to win the competition, but win or lose, this has been a tremendous experience for all of us," said Jack Mostow, research professor emeritus in the Robotics Institute and leader of the RoboTutor team. "We believe the work by all five teams will have enormous benefits for millions of children around the world who otherwise face a future dimmed by illiteracy."

Each team was tasked with creating Android tablet apps in both Swahili and English that could be used by children ages 7–10. The idea was that children presented with a tablet could learn basic literacy and numeracy skills without need of adult supervision. That's essential in areas of the world where few teachers, if any, exist.

"We know that kids who used RoboTutor benefited from it," Mostow said, based on the app's internal processes for assessing student progress. "But we also know that there was attrition over the 15 months, as the novelty wore off and some kids stopped using it. The real answer to how effective these apps are will come from the post-test that XPRIZE administered following the field test. Obviously, we can't wait to hear those results."

Those findings and the grand prize winner of the Global Learning XPRIZE will be announced at 9:30 p.m. EDT Wednesday, May 15, and will be streamed on YouTube Live.

RoboTutor LLC is a CMU spinoff, created by Mostow for the competition and based on decades of research on human learning. It received early support from CMU's Simon Initiative and licensed some of its technology from the university. More than 180 people from CMU, the University of Pittsburgh and other institutions around the world contributed to the team.

Norman Bier, executive director of the Simon Initiative, said RoboTutor development proceeded according to the learning engineering process that CMU has pioneered: instrumented capture of learning data, analysis of that data and then using the analytics to improve learning. RoboTutor software is part of the OpenSimon Toolkit, a suite of tools, educational resources and underlying codebase that the Simon Initiative is making freely available in hopes of catalyzing a revolution in learning and teaching for the world's educational institutions.

Almost 200 teams from 40 countries entered the competition. XPRIZE selected RoboTutor and the other four finalists in September 2017 and gave each finalist $1 million to further develop their apps. The RoboTutor team worked with Swahili-speaking children and staff in the Tanzanian towns of Bagamoyo and Mugeta to prepare for the field test. RoboTutor is the only team among the five finalists based at a university. Two other U.S. teams were fielded by educational technology companies, while an Indian team and a British team both were formed by nonprofit educational organizations.

RoboTutor is based on decades of research on human learning, including Reading Tutor, an automated system developed by Mostow's Project LISTEN team that helped children learn to read. RoboTutor employs artificial intelligence to recognize children's speech and handwriting. It includes a number of activities that children can choose that help develop reading, comprehension and numeracy skills. It assesses each child's performance, providing help when needed and adjusting the activities to match the student's skill level.

"It was up to us to make software engaging enough that the children would use it and effective enough that if they used it they would learn," Mostow said.

The team also adapted the software to Tanzanian culture, an effort spearheaded by Amy Ogan, the Thomas and Lydia Moran Assistant Professor of Learning Science in the Human-Computer Interaction Institute (HCII). For instance, during the initial beta trials conducted by Ogan and Judith Uchidiuno, the project manager and a Ph.D. student in HCII, the children were handed RoboTutor-equipped tablets. But the children wouldn't touch the tablets. Only after 45 minutes, when Ogan and Uchidiuno told them to tap on the tablets, did the children feel they had permission to do so. "I can't imagine an American kid sitting for 45 seconds without tapping on a tablet," Mostow said.

In addition to Mostow, Ogan and Uchidiuno, key members of the RoboTutor team include Leonora Anyango-Kivuva, a consultant for the National Foreign Language Center, a former Swahili instructor at Pitt and the voice of RoboTutor; Judy Kendall, director of Anchor of Hope Charities in Indianapolis, Ind.; Kevin DeLand, software architect and developer; and Janet Mostow, enterprise architect and RoboTutor LLC board member.
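
The adaptive loop described here (assess each child's performance, then adjust the activities) is the core of intelligent tutoring systems. One standard technique from the tutoring-systems literature is Bayesian Knowledge Tracing, sketched below; whether RoboTutor uses this exact model is an assumption made for illustration, not a claim from the article.

```python
# Bayesian Knowledge Tracing: a common way tutors estimate mastery.
# The parameter values are illustrative, not RoboTutor's.
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the probability the child has mastered a skill after one
    observed answer, then account for learning on this practice step."""
    if correct:
        evidence = p_know * (1 - p_slip)          # knew it, didn't slip
        total = evidence + (1 - p_know) * p_guess  # or guessed correctly
    else:
        evidence = p_know * p_slip                 # knew it but slipped
        total = evidence + (1 - p_know) * (1 - p_guess)
    posterior = evidence / total
    return posterior + (1 - posterior) * p_learn   # chance of learning now

p = 0.3  # prior probability of mastery
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
    print(f"P(mastered) = {p:.2f}")
# An activity selector might advance the child once P(mastered) > 0.95,
# or offer help when the estimate stays low.
```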

Lenore Blum Receives Inaugural Dean's Professorship in Tech Entrepreneurship

Byron Spice

Lenore Blum, Distinguished Career Professor of Computer Science, received the inaugural Dean's Professorship in Technology Entrepreneurship at a May 3 ceremony and celebration.

Blum is the founding director of Project Olympus, an incubator that helps Carnegie Mellon University students and faculty assess the commercial prospects of their ideas and research findings, and begin the process of establishing a startup. She also is the faculty director of the campus-wide Swartz Center for Entrepreneurship.

"Lenore has driven several careers' worth of impact on the School of Computer Science," said Tom Mitchell, interim dean of SCS. "We are profoundly grateful for all that she has done for SCS and CMU."

Blum is internationally recognized for her work in increasing the participation of girls and women in Science, Technology, Engineering and Math (STEM) fields. She was a founder of the Association for Women in Mathematics and the Expanding Your Horizons Network. At Carnegie Mellon, she founded the Women@SCS program. In 2004, she received the U.S. Presidential Award for Excellence in Science, Mathematics and Engineering Mentoring. In 2009, she received the Carnegie Science Catalyst Award in recognition of her work with Project Olympus and of her efforts to increase the participation of women in computer science.

Her research, founding a theory of computation and complexity over continuous domains, forms a theoretical basis for scientific computation. She is a trustee of the Jewish Healthcare Foundation, on the board of hackNY, on the advisory board of WorldQuantU, and faculty advisor to the CMU student organization ScottyLabs.

Bajpai Wins 2019 K&L Gates Prize

Graduating Senior Discovered Her Passion for Teaching at SCS

Byron Spice

Tanvi Bajpai, who came to Carnegie Mellon University to become a software engineer and discovered a passion for teaching in the process, will receive the 2019 K&L Gates Prize. The $5,000 prize, supported by the K&L Gates Endowment for Ethics and Computational Technologies, recognizes a graduating senior who has best inspired fellow students at the university to love learning through a combination of intellect, high scholarly achievement, engagement with others and character.

"I was very surprised to get the award," said Bajpai, who is heading to the University of Illinois Urbana-Champaign this fall to pursue a Ph.D. in theoretical computer science. "Honestly, it just seemed that I did what I had a passion to do. If that made an impact, that's great."

"It has never been about grades for Tanvi," said Anil Ada, assistant teaching professor in the Computer Science Department and Bajpai's academic advisor. "She cares deeply about research as well as her community. And she is the rare talent who will positively influence the culture of the environment she is in," Ada wrote in nominating her for the prize.

Notably, Bajpai is the only undergraduate to serve on the search committee for the new SCS dean. She also won the Mark Stehlik Introductory and Service Teaching Award, a 2018 Mark Stehlik Alumni Undergraduate Impact Scholarship, and a Carnegie Mellon Women's Association scholarship in recognition of her commitment to the advancement of women in SCS.

Growing up in Princeton Junction, New Jersey, Bajpai developed a love of math and, through a summer program for high school students, a fascination with algorithmic and combinatorial thinking. Her college search gravitated to Carnegie Mellon. "CMU was one of the few places where discrete math and theory were taught early on," she recalled. Between that, CMU's strength in machine learning and SCS's relatively high female enrollment, "CMU sold itself very easily."

Her initial plans to become a software engineer in industry changed after becoming a teaching assistant for Teaching Professor John Mackey's course on mathematical foundations of computer science. She realized she loved teaching and, again, learned to appreciate the uniqueness of SCS — in this case, its use of undergraduate TAs. "I don't think I would have discovered my love for teaching as soon as I did anywhere else," she said.

She initially was the only female TA for Mackey's class, which is taken by all computer science majors. After becoming head TA, however, she worked to erase the gender gap; by last fall, half of the course's TAs were women.

"Tanvi is a very talented teacher," Ada said. "She has an amazing power to tell stories and get people to listen to her." As a TA, Bajpai organized "conceptual" office hours, in which students are not allowed to ask any homework questions. "She has like 50 students come to these conceptual office hours, which is quite extraordinary," Ada noted.

She also has been active in research since her sophomore year, approaching Ramamoorthi Ravi, Zoltners Professor of Operations Research in the Tepper School of Business, after auditing his doctoral course in combinatorial optimization. He started her off on defining a new measure of diversity of recommendations, which she was quick to grasp, and she extended the work of a doctoral student. Ravi and Bajpai have submitted a paper for publication on the topic.

"Her research prowess, intellectual curiosity and inspiration, as well as her socially conscious engagement, make her an ideal candidate for the K&L Gates Prize," Ravi wrote in his nomination. And she clearly inspires her fellow students, he added, noting that several computer science students came to know his course materials just from her audit of his course.

Carol Frieze, director of Women@SCS and SCS4ALL, said Bajpai has been a leader in making women an integral part of the computer science student culture. "She has designed and created programs aimed at improving inclusion and diversity, she has sustained these efforts, and, what is so important, she has trained younger students to keep up the good work after she graduates from Carnegie Mellon," Frieze said. "The fact that she has sustained her extracurricular efforts as she prepares to head to graduate school is truly impressive."

Bajpai said the large number of women students at SCS, compared with other computer science programs, was important to her in selecting the school, and she grew to appreciate the atmosphere even more after a summer internship with a leading tech company. When she realized she wasn't getting as many assignments as the male interns, she complained to the recruiter, who suggested that her experience at Carnegie Mellon had given her unreasonable expectations. "She said, 'Well, Carnegie Mellon is abnormal,'" Bajpai remembered. "SCS is a very special place."

SCS News From the Conference on Human Factors in Computing Systems

CMU Researchers Unveil Latest Human-Computer Interaction Advances at CHI 2019

Byron Spice

SCS had a strong showing this week at CHI 2019, the Association for Computing Machinery's Conference on Human Factors in Computing Systems. Here are highlights from the stories we wrote on some of the CMU research the conference featured.

Show Your Hands: Smartwatches Sense Hand Activity

We're used to smartwatches and smartphones that sense what our bodies are doing, but what about our hands? It turns out that smartwatches, with a few tweaks, can detect a surprising number of things your hands are doing, including typing on a keyboard, washing dishes, petting a dog, playing the piano or using scissors.

Knit 1, Purl 2: Assembly Instructions for a Robot?

CMU researchers have used computationally controlled knitting machines to create plush toys and other knitted objects actuated by tendons. It's an approach they say might someday be used to cost-effectively make soft robots and wearable technologies.

Suitcase, Wayfinding App Help Blind People Navigate Airports

Robotics Institute researchers have teamed up with the Pittsburgh International Airport to develop two tools that help people with visual disabilities navigate airport terminals safely and independently. The first, a smart rolling suitcase, sounds alarms when users are headed for a collision. The second tool is a navigation app that provides turn-by-turn audio instructions for how to reach a departure gate — or a restroom or a restaurant.

CMU Researchers Make Transformational AI Seem "Unremarkable"

Physicians making life-and-death decisions don't give much thought to how artificial intelligence might help them. And that's how CMU researchers say clinical AI tools should be designed — so doctors don't need to think about them. They call this "Unremarkable AI."

CMU Researchers Make Transformational AI Seem "Unremarkable"

AI Must Be Unobtrusive To Be Accepted as Part of Clinical Decision Making

Byron Spice

Physicians making life-and-death decisions about organ transplants, cancer treatments or heart surgeries typically don't give much thought to how artificial intelligence might help them. And that's how researchers at Carnegie Mellon University say clinical AI tools should be designed — so doctors don't need to think about them.

A surgeon might never feel the need to ask an AI for advice, much less allow it to make a clinical decision for them, said John Zimmerman, the Tang Family Professor of Artificial Intelligence and Human-Computer Interaction in CMU's Human-Computer Interaction Institute (HCII). But an AI might guide decisions if it were embedded in the decision-making routines already used by the clinical team, providing AI-generated predictions and evaluations as part of the overall mix of information.

Zimmerman and his colleagues call this approach "Unremarkable AI." "The idea is that AI should be unremarkable in the sense that you don't have to think about it and it doesn't get in the way," Zimmerman said. "Electricity is completely unremarkable until you don't have it."

Qian Yang, a Ph.D. student in the HCII, will address how the Unremarkable AI approach guided the design of a clinical decision support tool (DST) at CHI 2019, the Association for Computing Machinery's Conference on Human Factors in Computing Systems, May 4–9 in Glasgow, Scotland.

Yang, along with Zimmerman and Aaron Steinfeld, associate research professor in the HCII and the Robotics Institute, is working with biomedical researchers at Cornell University and CMU's Language Technologies Institute on a DST to help physicians evaluate heart patients for treatment with a ventricular assist device (VAD). This implantable pump aids diseased hearts in patients who can't receive heart transplants, but many recipients die shortly after the implant. The DST under development uses machine learning methods to analyze thousands of cases and calculate a probability of whether an individual might benefit.

DSTs have been developed to help diagnose or plan treatment for a number of medical conditions and surgical procedures, but most fail to make the transition from lab to clinical practice and fall into disuse. "They all assume you know you need help," Zimmerman said. They often face resistance from physicians, many of whom don't think they need help, or see the DST as technology designed to replace them.

Yang used the Unremarkable AI principles to design how the clinical team would interact with the DST for VADs. These teams include mid-level clinicians, such as nurse practitioners, social workers and VAD coordinators, who routinely use computers; and surgeons and cardiologists, who value their colleagues' advice over computational support.

The natural time to incorporate the DST's prognostications is during multidisciplinary patient evaluation meetings, Yang said. Though physicians make the ultimate decision about when or if to implant a VAD, the entire team is often present at these meetings, where computers already are in use. Her design automatically incorporates the DST prognostications into the slides prepared for each patient. In most cases, the DST information won't be significant, Steinfeld suggested, but for certain patients, or at certain critical points for each patient, the DST might provide information that demands attention.

Though the DST itself is still under development, the researchers tested this interaction design at three hospitals that perform VAD surgery, with DST-enhanced slides presented for simulated patients. "The mid-levels — the support staff — loved this," Yang said, because it enhanced their input and helped them be more active in the discussion. Physician reaction was less enthusiastic, reflecting skepticism about DSTs and the conviction that it was impossible to totally evaluate the interaction without a fully functioning system and real patients. But Yang said physicians didn't display the same defensiveness and feelings about being replaced by technology typically associated with DSTs. They also acknowledged that the DST might inform their decisions.

"Prior systems were all about telling you what to do," Zimmerman said. "We're not replacing human judgment. We're trying to give humans inhuman abilities." "And to do that we need to maintain the human decision-making process," Steinfeld added.

The National Heart, Lung and Blood Institute and the CMU Center for Machine Learning and Health supported this research.

Collision-Detecting Suitcase, Wayfinding App Help Blind People Navigate Airports

Researchers, Pittsburgh International Airport Seek To Increase Independence of Travelers With Vision Impairments

Byron Spice

Carnegie Mellon University researchers say a smart suitcase that warns blind users of impending collisions and a wayfinding smartphone app can help people with visual disabilities navigate airport terminals safely and independently. The rolling suitcase sounds alarms when users are headed for a collision with a pedestrian, and the navigation app provides turn-by-turn audio instructions to users on how to reach a departure gate — or a restroom or a restaurant. Both proved effective in a pair of user studies conducted at Pittsburgh International Airport.

The researchers will present their findings at CHI 2019, the Association for Computing Machinery's Conference on Human Factors in Computing Systems, May 4–9 in Glasgow, Scotland. CMU and Pittsburgh International Airport are partners in developing new systems and technologies for enhancing traveler experiences and airport operations.

"Despite recent efforts to improve accessibility, airport terminals remain challenging for people with visual impairments to navigate independently," said Chieko Asakawa, IBM Distinguished Service Professor in CMU's Robotics Institute and an IBM Fellow at IBM Research. "Airport and airline personnel are available to help them get to departure gates, but they usually can't explore and use the terminal amenities as sighted people can."

"When you get a five- or six-hour layover and you need to get something to eat or use the restrooms, that is a major hassle," said one legally blind traveler who participated in a focus group as part of the research. "It would be lovely to be able to get up and move around and do things that you need to do and maybe want to do."

An increasing number of airports have been installing Bluetooth beacons, which can be used for indoor navigation, but often they are deployed to enhance services for sighted travelers, not to help blind people, said Kris Kitani, assistant research professor in the Robotics Institute.

He and his colleagues deployed NavCog, a smartphone-based app that employs Bluetooth beacons, at Pittsburgh International Airport. The app, developed by CMU and IBM to help blind people navigate independently, previously has been deployed on campuses, including CMU, and in shopping malls. They modified it for use at the airport, where extremely wide corridors make users vulnerable to veering, and for use with moving walkways. As part of the project, the airport installed hundreds of Bluetooth beacons throughout the facility.

"Part of our commitment to the public includes making sure our airport works for everyone, particularly as we modernize our facility for the future," said Pittsburgh International Airport CEO Christina Cassotis. "We're proud to partner with such great researchers through Carnegie Mellon University. Having that world-class ingenuity reflected at our airport is emblematic of Pittsburgh's transformation."

The app gives audio directions to users. It relies on a map of the terminal that has been annotated with the locations of restrooms, restaurants, gates, entrances and ticketing counters. Ten legally blind people tested the app using an iPhone 8 with good results, traversing the terminal's large open spaces, escalators and moving walkways with few errors. Most users were able to reach the ticketing counter in three minutes, traverse the terminal in about six minutes, go from the gate to a restroom in a minute and go from the gate to a restaurant in about four minutes.

The NavCog app for iPhone is available for free from the App Store and can be used at Pittsburgh International in the ticketing area of the landside terminal and in the concourses and center core of the airside terminal.

Another team, including researchers from the University of Tokyo and Waseda University in Tokyo, developed the smart suitcase, called BBeep, to help with another problem encountered in airports — navigating through crowds. The assistive system has a camera for tracking pedestrians in the user's path and can calculate when there is a potential for collision.

"Sighted people will usually clear a path if they are aware of a blind person," said Asakawa, who has been blind since age 14. "This is not always the case, as sighted people may be looking at their smartphone, talking with others or facing another direction. That's when collisions occur."

BBeep helps clear a path. A rolling suitcase itself can help clear the way and can serve as an extended sensing mechanism for identifying changes in floor texture. BBeep, however, can also sound an alarm when collisions are imminent — both warning the user and alerting people in the area, enabling them to make room. A series of beeps begins five seconds before collision. The frequency of the beeps increases at 2.5 seconds. When collision is imminent, BBeep issues a stop sound, prompting the blind user to halt immediately.

In tests at the airport, six blind participants each wheeled BBeep with one hand and used a white cane in the other as they maneuvered through crowded areas. They were asked to walk five similar routes in three modes — one where the suitcase gave no warnings, another in which the warnings could only be heard by the user through a headset and another in which warnings were played through a speaker. A researcher followed each participant to make sure no one was injured. The researchers said the speaker mode proved most effective, both in reducing the number of pedestrians at risk of imminent collision and in reducing the number of pedestrians in the user's path. "People were noticing that I was approaching and people were moving away … giving me a path," one user observed.

In addition to Kitani and Asakawa, the authors of the BBeep report are Seita Kayukawa and Shigeo Morishima of Waseda University, Keita Higuchi and Yoichi Sato of the University of Tokyo, and João Guerreiro, project scientist in the Robotics Institute. Guerreiro, Asakawa and Kitani are joined in the NavCog report by Daisuke Sato, a CMU project scientist, and Dragan Ahmetovic of the University of Turin.

The National Science Foundation; the National Institute on Disability, Independent Living and Rehabilitation Research; the Allegheny County Airport Authority; and Shimizu Corp. sponsored both studies. The Japan Science and Technology Agency and Uptake provided additional support for BBeep.
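
The article gives enough detail about BBeep's warning logic to sketch it directly: beeps begin five seconds before a predicted collision, speed up at 2.5 seconds, and become a stop sound when collision is imminent. The sketch below maps an estimated time to collision onto those stages; the 0.5-second cutoff for "imminent" and the constant-velocity estimate are assumptions, not values from the paper.

```python
# Sketch of BBeep-style alarm staging from a predicted time to collision.
def alarm_mode(time_to_collision_s: float) -> str:
    if time_to_collision_s <= 0.5:     # assumed "imminent" cutoff
        return "STOP_SOUND"            # prompts the user to halt
    if time_to_collision_s <= 2.5:
        return "FAST_BEEPS"            # beep frequency increases
    if time_to_collision_s <= 5.0:
        return "SLOW_BEEPS"            # warning begins
    return "SILENT"

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    # Simplest possible estimate: distance to the tracked pedestrian
    # divided by the rate at which the gap is closing.
    if closing_speed_mps <= 0:
        return float("inf")            # paths are diverging
    return distance_m / closing_speed_mps

for d in [8.0, 4.0, 2.0, 0.4]:
    print(f"{d:4.1f} m -> {alarm_mode(time_to_collision(d, 1.0))}")
```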

Show Your Hands: Smartwatches Sense Hand Activity

Devices That Know What Your Hands Are Doing Could Unlock New Apps

Byron Spice

We've become accustomed to our smartwatches and smartphones sensing what our bodies are doing, be it walking, driving or sleeping. But what about our hands? It turns out that smartwatches, with a few tweaks, can detect a surprising number of things your hands are doing.

Researchers at Carnegie Mellon University's Human-Computer Interaction Institute (HCII) have used a standard smartwatch to figure out when a wearer was typing on a keyboard, washing dishes, petting a dog, pouring from a pitcher or cutting with scissors. By making a few changes to the watch's operating system, they were able to use its accelerometer to recognize hand motions and, in some cases, bio-acoustic sounds associated with 25 different hand activities at around 95 percent accuracy. And those 25 activities are just the beginning of what might be possible to detect.

"We envision smartwatches as a unique beachhead on the body for capturing rich, everyday activities," said Chris Harrison, assistant professor in the HCII and director of the Future Interfaces Group. "A wide variety of apps could be made smarter and more context-sensitive if our devices knew the activity of our bodies and hands."

Harrison and HCII Ph.D. student Gierad Laput will present their findings on this new sensing capability at CHI 2019, the Association for Computing Machinery's Conference on Human Factors in Computing Systems, May 4–9 in Glasgow, Scotland.

Just as smartphones now can block text messages while a user is driving, future devices that sense hand activity might learn not to interrupt someone while they are doing certain work with their hands, such as chopping vegetables or operating power equipment, Laput said. Sensing hand activity also lends itself to health-related apps — monitoring activities such as brushing teeth, washing hands or smoking a cigarette. Hand-sensing also might be used by apps that provide feedback to users who are learning a new skill, such as playing a musical instrument, or undergoing physical rehabilitation. Apps might alert users to typing habits that could lead to repetitive strain injury (RSI), or assess the onset of motor impairments such as those associated with Parkinson's disease.

Laput and Harrison began their exploration of hand activity detection by recruiting 50 people to wear specially programmed smartwatches for almost 1,000 hours while going about their daily activities. Periodically, the watches would record hand motion, hand orientation and bio-acoustic information, and then prompt the wearer to describe the hand activity — shaving, clapping, scratching, putting on lipstick, etc. More than 80 hand activities were labeled in this way, providing a unique dataset.

For now, users must wear the smartwatch on their active arm, rather than the passive (non-dominant) arm where people typically wear wristwatches, for the system to work. Future experiments will explore what events can be detected using the passive arm.

"The 25 hand activities we evaluated are a small fraction of the ways we engage our arms and hands in the real world," Laput said. Future work likely will focus on classes of activities — those associated with specific applications such as smoking cessation, elder care, or typing and RSI.

The Packard Foundation, Sloan Foundation and the Google Ph.D. Fellowship supported this research.
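
As a rough picture of how such a pipeline works, the sketch below windows a high-rate accelerometer signal, reduces each window to simple time- and frequency-domain features and trains an off-the-shelf classifier. The synthetic data, sampling rate, feature set and model are illustrative assumptions; this is not the authors' implementation.

```python
# Toy hand-activity classifier over accelerometer windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
FS = 4000        # Hz (assumed); fast sampling can capture bio-acoustics
WINDOW = FS      # one-second windows

def featurize(window):
    # Per axis: mean, std, plus coarse FFT band energies.
    feats = []
    for axis in window.T:
        spectrum = np.abs(np.fft.rfft(axis))
        bands = np.array_split(spectrum, 8)
        feats += [axis.mean(), axis.std(), *[b.mean() for b in bands]]
    return np.array(feats)

def synth(freq, n=40):
    """Fake 3-axis windows dominated by one vibration frequency."""
    t = np.arange(WINDOW) / FS
    return [np.column_stack([np.sin(2 * np.pi * freq * t) +
                             0.3 * rng.normal(size=WINDOW)
                             for _ in range(3)]) for _ in range(n)]

# Pretend "typing" and "cutting with scissors" shake the wrist at
# different characteristic frequencies.
X = np.array([featurize(w) for w in synth(300) + synth(45)])
y = ["typing"] * 40 + ["scissors"] * 40

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([featurize(synth(300, 1)[0])]))  # -> ['typing']
```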

Dataset Bridges Human Vision and Machine Learning

Neuroscience, Computer Vision Collaborate To Better Understand Visual Information Processing

Byron Spice

Neuroscientists and computer vision scientists say a new dataset of unprecedented size — comprising brain scans of four volunteers who each viewed 5,000 images — will help researchers better understand how the brain processes images.

Researchers at Carnegie Mellon University and Fordham University, reporting today in the journal Scientific Data, said acquiring functional magnetic resonance imaging (fMRI) scans at this scale presented unique challenges. Each volunteer participated in 20 or more hours of MRI scanning, challenging both their perseverance and the experimenters' ability to coordinate across scanning sessions. The extreme design decision to run the same individuals over so many sessions was necessary for disentangling the neural responses associated with individual images.

The resulting dataset, dubbed BOLD5000, allows cognitive neuroscientists to better leverage the deep learning models that have dramatically improved artificial vision systems. Originally inspired by the architecture of the human visual system, deep learning may be further improved by pursuing new insights into how human vision works and by having studies of human vision better reflect modern computer vision methods. To that end, BOLD5000 measured neural activity arising from viewing images taken from two popular computer vision datasets: ImageNet and COCO.

"The intertwining of brain science and computer science means that scientific discoveries can flow in both directions," said co-author Michael J. Tarr, the Kavčić-Moura Professor of Cognitive and Brain Science and head of CMU's Department of Psychology. "Future studies of vision that employ the BOLD5000 dataset should help neuroscientists better understand the organization of knowledge in the human brain. As we learn more about the neural basis of visual recognition, we will also be better positioned to contribute to advances in artificial vision."

Lead author Nadine Chang, a Ph.D. student in CMU's Robotics Institute who specializes in computer vision, suggested that computer vision scientists are looking to neuroscience to help innovate in the rapidly advancing area of artificial vision — reinforcing the two-way nature of this research. "Computer-vision scientists and visual neuroscientists essentially have the same end goal: to understand how to process and interpret visual information," Chang said.

Improving computer vision was an important part of the BOLD5000 project from the outset. Senior author Elissa Aminoff, then a post-doctoral fellow in CMU's Psychology Department and now an assistant professor of psychology at Fordham, initiated this research direction with co-author Abhinav Gupta, an associate professor in the Robotics Institute.

Among the challenges faced in connecting biological and computer vision is that the majority of human neuroimaging studies include very few stimulus images — often 100 or fewer — which typically are simplified to depict only single objects against a neutral background. In contrast, BOLD5000 includes more than 5,000 real-world, complex images of scenes, single objects and interacting objects.

The group views BOLD5000 as only the first step toward leveraging modern computer vision models to study biological vision.
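
One analysis a dataset like BOLD5000 enables is testing how well features from a vision model predict voxel responses to the same images. The sketch below uses random stand-ins for both the network features and the fMRI data (the article does not describe the dataset's file layout), but the recipe shown, ridge regression scored by held-out correlation, is a common encoding-model approach.

```python
# Encoding-model sketch: predict voxel responses from image features.
# Both arrays are random stand-ins; only the analysis shape is real.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_images, n_features, n_voxels = 5000, 512, 100
features = rng.normal(size=(n_images, n_features))  # stand-in CNN features
true_map = 0.1 * rng.normal(size=(n_features, n_voxels))
voxels = features @ true_map + rng.normal(size=(n_images, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(features, voxels, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)

# Per-voxel quality: correlate predicted and held-out responses.
pred = model.predict(X_te)
r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel correlation: {np.median(r):.2f}")
```
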
"Frankly, the BOLD5000 dataset is still way too small," Tarr said, suggesting that a reasonable fMRI dataset would require at least 50,000 stimulus images and many more volunteers to make headway in light of the fact that the class of deep neural nets used to analyze visual imagery are trained on millions of images. To this end, the research team hopes their ability to generate a dataset of 5,000 brain scans will pave the way for larger collaborative efforts between human vision and computer vision scientists. So far, the field's response has been positive. The publicly available BOLD5000 dataset has already been downloaded more than 2,500 times. The dataset is also housed on the Carnegie Mellon University Libraries' KiltHub repository. In addition to Chang, Tarr, Gupta, and Aminoff, the research team included John A. Pyles, senior research scientist and scientific operations director of the CMU-Pitt BRIDGE Center, and Austin Marcus, a research assistant in Tarr's lab. The National Science Foundation, U.S. Office of Naval Research, the Alfred P. Sloan Foundation and the Okawa Foundation for Information and Telecommunications sponsored this research.

Knit 1, Purl 2: Assembly Instructions for a Robot?

Researchers Make Soft, Actuated Objects Using Commercial Knitting Machines

Byron Spice

Carnegie Mellon University researchers have used computationally controlled knitting machines to create plush toys and other knitted objects that are actuated by tendons. It's an approach they say might someday be used to cost-effectively make soft robots and wearable technologies. Software developed by researchers from CMU's Morphing Matter Lab and Dev Lab in the Human-Computer Interaction Institute makes it possible for the objects to emerge from the knitting machines in their desired shapes and with tendons already embedded. They can then be stuffed and the tendons attached to motors, as necessary. Lea Albaugh, a Ph.D. student who led the research effort, developed the tendon-embedding technique and explored this design space to make lampshades that change shape, stuffed figures that give hugs when poked in the stomach and even a sweater with a sleeve that moves on its own. Although largely fanciful, these objects demonstrate capabilities that could eventually have serious applications, such as soft robots. "Soft robotics is a growing field," Albaugh noted. "The idea is to build robots from materials that are inherently safe for people to be near, so it would be very hard to hurt someone. Actuated soft components would be cheap to produce on commercial knitting machines." "We have so many soft objects in our lives and many of them could be made interactive with this technology," she added. "A garment could be part of your personal information system. Your sweater, for example, might tap you on your shoulder to get your attention. The fabric of a chair might serve as a haptic interface. Backpacks might open themselves." Albaugh and her co-investigators, HCII faculty members Scott Hudson and Lining Yao, will present their research at CHI 2019, the Association for Computing Machinery's Conference on Human Factors in Computing Systems, May 4–9 in Glasgow, Scotland. Commercial knitting machines are well developed and widely used, but generally require painstaking programming for each garment. This new research builds on previous CMU work to automate the process, making it easier to use these mass-production machines to produce customized and one-off designs. "It's a pretty convenient pipeline to use for producing actuated knitted objects," said Yao, an assistant professor in the HCII. Other researchers have experimented with actuated textile objects, she noted, but have been faced with the time-consuming task of adding tendons to completed items. Embedding tendons in the materials as they are created saves time and effort, and adds precision to the actuation. The researchers developed methods for embedding tendon paths horizontally, vertically and diagonally in fabric sheets and tubes. They showed that the shape of the fabric, combined with the orientation of the tendon path, can produce a variety of motion effects, including asymmetric bends, S-shaped bends and twists. The stiffness of the objects can be adjusted by stuffing them with various materials, such as those available to hobbyists. A number of tendon materials can be used, including polyester-wrapped quilting thread, pure silk yarn and nylon monofilament. In addition to actuating the objects, these techniques can also add sensing capabilities. By attaching sensors to each tendon, for instance, it's possible to sense the direction in which the object is being bent or twisted. By knitting with conductive yarn, the researchers showed they could create both contact pads for capacitive touch sensing and strain sensors to detect if a swatch is stretched. 3D printing already is being used to make customized, actuated objects and robotic components, Albaugh said, although the materials often are hard. Computationally controlled knitting has the potential for expanding the possibilities and making the results more people-friendly. "I think there's enormous power in using materials that people already associate with comfort," she said. The National Science Foundation supported part of this research project.
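To make the idea of computationally specified tendons concrete, here is a hypothetical Python sketch of how such an object might be described in software; the class names, fields and the geometry-to-motion mapping are illustrative assumptions, not the actual format used by the Morphing Matter Lab and Dev Lab pipeline.

# Illustrative sketch only: models the finding that fabric shape plus tendon
# orientation determines the motion effect. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TendonPath:
    orientation: str   # "horizontal", "vertical" or "diagonal"
    start: tuple       # (course, wale) stitch coordinate where the tendon enters
    end: tuple         # (course, wale) stitch coordinate where it exits
    material: str = "polyester-wrapped quilting thread"

@dataclass
class KnitObject:
    shape: str                                  # "sheet" or "tube"
    tendons: list = field(default_factory=list)

    def expected_motion(self):
        # Rough, assumed mapping for illustration (real behavior depends on the
        # full geometry): a diagonal tendon in a tube tends to twist, a diagonal
        # tendon in a sheet gives an S-shaped bend, others an asymmetric bend.
        motions = []
        for t in self.tendons:
            if t.orientation == "diagonal":
                motions.append("twist" if self.shape == "tube" else "S-shaped bend")
            else:
                motions.append("asymmetric bend")
        return motions

sleeve = KnitObject(shape="tube")
sleeve.tendons.append(TendonPath("diagonal", start=(0, 0), end=(40, 20)))
print(sleeve.expected_motion())  # ['twist']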

Computer Science Idea Triggers First Kidney-Liver Transplant Swap

Sandholm Says Multi-Organ Exchanges Could Boost Number of Transplants

Byron Spice

Aliana Deveza was desperate. Her mother's health was failing after years of fighting a hereditary kidney disease. Aliana wasn't a good donor candidate for her mother because she eventually might face the same disease herself. But what if she donated part of her liver instead? Specifically, what if she donated part of her liver to a patient who needed it and then a loved one of that patient donated a kidney to her mother? It wasn't Aliana's idea, but one she gleaned from a research paper by Tuomas Sandholm, the Angel Jordan Professor of Computer Science, and one of his former students, John Dickerson. She embraced the idea, however, and set in motion events that culminated in what is believed to be the world's first kidney-liver swap. In July 2017 at UCSF Medical Center in San Francisco, Aliana, of Gilroy, California, donated a little more than half of her liver to Connie Saragoza de Salinas of Sacramento, California. Saragoza's sister, Annie Simmons, of Boise, Idaho, donated one of her kidneys to Aliana's mother, Erosalyn Deveza. "Everyone's doing well now," said Aliana, now 23 and a psychology major at the University of California, Santa Cruz, though her mother subsequently was treated for breast cancer. "Things were a little rough for her for a while," Aliana acknowledged. In her own case, her liver rapidly regenerated and she considers herself in good health. Neither Sandholm nor Dickerson, now an assistant professor of computer science at the University of Maryland, had any idea the kidney-liver swap they inspired had taken place until April. That's when a case report by the UCSF surgeons describing the historic transplant was published in the American Journal of Transplantation. "Multi-organ exchange is something I thought would be really cool," Sandholm said of the research paper he and Dickerson wrote, which explored the potential for kidney-liver swaps to increase U.S. organ transplants overall. "I didn't anticipate it would be performed anytime soon because there are so few live donors for livers." Sandholm already has played a key role in the expansion of kidney paired-donation (KPD) transplants. In these cases, mismatched donor-recipient pairs (a donor who is willing to donate a kidney but is biologically incompatible with the recipient) are matched with other pairs in the same situation. The first donor donates to the second recipient, while the second donor donates to the first recipient, thereby enabling two transplants. Sandholm, his students and collaborators devised computer algorithms that make KPD matches using a national pool of candidates, which makes matches more likely, and that enable chains of kidney swaps involving multiple donor-recipient pairs. The first kidney swaps resulting from his algorithms took place in 2006. The United Network for Organ Sharing (UNOS), the nonprofit that manages the U.S. organ transplant system, adopted his algorithms for a national kidney exchange that began in 2010. Thousands of donor-recipient pairs have been matched, resulting in hundreds of kidney transplants, Sandholm noted. About 70 percent of U.S. transplant centers now participate in the UNOS national kidney exchange. In their paper, published in the Journal of Artificial Intelligence Research, Dickerson and Sandholm calculated that combining the kidney exchange with a liver lobe exchange could match 20 to 30 more candidates a month than would be possible with separate liver and kidney exchanges. That would be an increase of about 10 percent.
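The matching idea at the heart of that work can be illustrated with a toy sketch (this is not Sandholm's actual algorithm, which optimizes over longer cycles and chains using integer programming): model each incompatible donor-recipient pair as a node in a directed graph, draw an edge from pair i to pair j when pair i's donor is compatible with pair j's recipient, and look for two-way swaps, that is, 2-cycles.

# Toy illustration of paired-donation matching; the pair names are hypothetical.
# edges[i] lists the pairs whose recipient could accept pair i's donated organ.
edges = {
    "pair1": ["pair2", "pair3"],
    "pair2": ["pair1"],
    "pair3": ["pair4"],
    "pair4": ["pair3"],
}

def two_way_swaps(edges):
    """Greedily select disjoint two-way exchanges (2-cycles)."""
    matched, swaps = set(), []
    for i, targets in edges.items():
        if i in matched:
            continue
        for j in targets:
            if j not in matched and i in edges.get(j, []):
                swaps.append((i, j))   # i's donor gives to j's recipient, and vice versa
                matched.update((i, j))
                break
    return swaps

print(two_way_swaps(edges))  # [('pair1', 'pair2'), ('pair3', 'pair4')] -> four transplants

A production exchange must weigh far more than raw feasibility, which is why real systems rely on optimization rather than a greedy pass like this one.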
Importantly, an increase in liver transplants would translate into lives saved. Though kidney dialysis can keep people with failing kidneys alive, no such life-saving treatment is available for someone with a failing liver. When Aliana read the paper in 2015, she thought the computer scientists were describing an existing scheme for kidney-liver swaps. "I didn't realize that it was just theoretical," she said. But it met her needs, so she began calling hospitals in California, trying to learn how she and her mother could join such an exchange. Most of the people she called had no idea what she was talking about, much less where to transfer her call. At UCSF Medical Center, however, Dr. John Roberts, a liver transplant surgeon, returned her call. "He said it was an interesting thought." He referred her to a transplant coordinator, and she and her mother were approved by the transplant program in January 2016. But finding a suitable exchange took 18 months, in part because Aliana's small physique meant they would need to find a recipient of similar size. Saragoza, who had primary biliary cirrhosis, and Simmons ultimately were matched. Simmons had originally planned to donate part of her liver to her sister, but her liver wasn't of sufficient size. According to the case study recently published by Roberts and kidney transplant surgeon Dr. Nancy Ascher, every indication was that the transplant recipients would have normal outcomes. The major concern prior to surgery was ethical: the risk Aliana was assuming as a liver donor was far greater than Simmons' risk as a kidney donor, and the life-enhancing benefits to her mother were less than the life-saving benefits to Simmons' sister. "My parents and family were a little hesitant," Aliana said. "But I really wanted to push for the transplant because it was my mom. It was getting to the point where her condition was really painful. I wasn't too worried about myself; I was in good hands with the doctors at UCSF." Sandholm said he's not sure whether multi-organ exchanges will catch on the way KPD transplants have. At this point, for instance, no system exists for pooling donor-recipient pairs and making matches. But he said it is heartening to see that an idea born in a computer science lab resulted in a first-of-its-kind operation that saved lives. "Computer scientists have a lot of wild ideas, and this one just seemed so out there," Sandholm said. "It's just very cool that this turned out so well. We're happy for all of the patients."

Roeder Elected to National Academy of Sciences

Jocelyn Duffy and Abby Simmons

Carnegie Mellon University's Kathryn Roeder has been elected to the National Academy of Sciences in recognition of her distinguished and continuing achievements in original research. She and CMU's Krzysztof Matyjaszewski are among 100 new members and 25 foreign associates elected to the academy in 2019. NAS membership is a widely accepted mark of excellence in science and is considered one of the highest honors a scientist can receive. CMU has been home to 20 NAS members. Roeder, the UPMC Professor of Statistics and Life Sciences, serves as CMU's vice provost for faculty in addition to her faculty appointments in the Statistics & Data Science and Computational Biology departments. Her research focuses on developing statistical tools for finding associations between patterns of genetic variation and complex disease. Roeder's research group uses modern statistical methods such as high-dimensional statistics, statistical machine learning, nonparametric methods and networks to solve biologically relevant problems. An elected fellow of the American Statistical Association and the Institute of Mathematical Statistics, Roeder has received the Committee of Presidents of Statistical Societies' Presidents' Award and George W. Snedecor Award. The University of Alabama at Birmingham also presented her with the Janet L. Norwood Award for outstanding achievement by a woman in statistical sciences. Matyjaszewski, the J.C. Warner University Professor of the Natural Sciences in the Mellon College of Science's Department of Chemistry, is world renowned for his discovery of atom transfer radical polymerization (ATRP), one of the most effective and widely used methods of controlled radical polymerization. ATRP has allowed for the creation of a wide range of materials with highly specific, tailored functionalities, including "smart" materials. NAS is a private, nonprofit institution established under a congressional charter signed by President Abraham Lincoln in 1863. It recognizes achievement in science by election to membership and, together with the National Academy of Engineering and the National Academy of Medicine, provides science, engineering, and health policy advice to the federal government and other organizations.