News 2018

November 2018

PopSci Recognizes Wheel-Track With "Best of What's New" Award

Byron Spice

A wheel that can transform into a triangular track, developed by Carnegie Mellon University's National Robotics Engineering Center (NREC) with funding from the Defense Advanced Research Projects Agency (DARPA), has won a Popular Science "Best of What's New" Award for 2018.

The reconfigurable wheel-track can transform from one mode to the other in less than two seconds while the vehicle is in motion, enabling a vehicle in wheel mode to operate at high speeds on roads and switch rapidly to track mode to negotiate challenging off-road terrain.

The device was recognized by Popular Science with a Best of What's New Award in the security category. The magazine presents the awards annually to 100 new products and technologies in 10 categories, including aerospace, entertainment and health.

"The Best of What's New Awards allow us the chance to examine and honor the best innovations of the year," said Joe Brown, editor-in-chief of Popular Science. "This collection shapes our future, helps us be more efficient, keeps us healthy and safe, and lets us have some fun along the way."

The innovative wheel-track was one of the technologies developed in DARPA's Ground X-Vehicle Technologies (GXV-T) program, which aimed to reduce the need for armor by making combat vehicles faster, more maneuverable and capable of operating in a wide variety of environments.

Dimi Apostolopoulos, a CMU Robotics Institute senior systems scientist and principal investigator for the wheel-track project, said the shape-shifting wheel-track has a number of potential civilian applications as well, including uses in agriculture, mining, construction, forestry and transportation. It can also be used in vehicles ranging in size from heavy equipment to recreational vehicles.

"Creating a reconfigurable wheel-track system that works on a moving vehicle and at high speeds was an exceptional challenge, but our NREC team came up with a design that works and has the potential to transform ground mobility," Apostolopoulos said.
"We are appreciative of DARPA's GXV-T program and we thank the editors of Popular Science for this recognition."

The wheel-track has a rubberized tread that sits atop a frame that can change shape. The spinning wheel is transformed into a track by extending a Y-shaped support, which pushes the frame into a triangular shape. Simultaneously, application of a brake to stop the wheel from spinning causes the transmission to automatically shift from turning the wheel to turning a set of gears that drives the track.

Though other research groups have built devices similar to NREC's reconfigurable wheel-track, those previous designs have required halting the vehicle to transform from one mode to the other, Apostolopoulos noted. The ability to make these transformations on the fly, he added, is a critical requirement for vehicles that must handle changing terrain at high speed.

In testing to date, vehicles have been able to achieve 50 miles an hour in wheel mode and almost 30 mph in track mode. The device has been able to transform from wheel mode to track mode at speeds as high as 25 mph and from track mode to wheel mode at speeds of around 12 mph.

Previous winners of Best of What's New Awards from Carnegie Mellon include Tartan Racing's Boss self-driving SUV, a self-landing helicopter, a snake-like robotic neck surgery tool, an automated method for editing video, a panoramic video camera and a photo editing tool that can manipulate objects in a photo as if they were three-dimensional.

NREC is a part of Carnegie Mellon's Robotics Institute that performs contract research and development for a variety of governmental and industrial clients.
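The on-the-fly mode switch described above amounts to a small piece of supervisory control logic. The following is a deliberately simplified sketch: the function names and structure are invented for illustration, and only the demonstrated switching speeds (25 mph and 12 mph) come from the article's reported tests.

```python
# Toy supervisory logic for the reconfigurable wheel-track (illustrative only).
WHEEL, TRACK = "wheel", "track"

# Highest vehicle speed (mph) at which each transformation has been
# demonstrated, per the article's reported tests.
MAX_SWITCH_SPEED = {(WHEEL, TRACK): 25, (TRACK, WHEEL): 12}

def request_switch(mode, target, speed_mph):
    """Return the resulting mode, refusing a switch above demonstrated limits."""
    if mode == target:
        return mode
    if speed_mph > MAX_SWITCH_SPEED[(mode, target)]:
        return mode  # moving too fast to transform; stay in the current mode
    # Wheel -> track: extend the Y-shaped support into a triangle and brake
    # the wheel so the transmission shifts to the track-drive gears.
    return target
```

A real controller would sequence actuators and verify that the transformation completed; this only captures the decision rule.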

Farber Elected 2018 AAAS Fellow

Byron Spice

David Farber of the Institute for Software Research is one of two Carnegie Mellon University faculty members named 2018 fellows of the American Association for the Advancement of Science (AAAS). The AAAS honor recognizes Farber, sometimes called the "Grandfather of the Internet," for distinguished contributions to programming languages and computer networking.

Farber joined CMU in 2002. He served as Distinguished Career Professor of Computer Science and Public Policy, and is now an adjunct professor. Earlier this year, Farber became a Distinguished Professor at Keio University in Tokyo, where he is co-director of the Cyber Civilization Research Center.

This year, 416 members have been named AAAS fellows because of their scientifically or socially distinguished efforts to advance science or its applications. In addition to Farber, they include Gregory V. Lowry, the Walter J. Blenko Sr. Professor of Civil and Environmental Engineering, who is cited for his contributions to safe and sustainable use of nanomaterials, remediation methods for contaminated sediments and brines, and mitigation of fossil fuel use impacts.

Farber's distinguished career spans more than 50 years, including a stint as chief technologist for the Federal Communications Commission. He was the Alfred Fitler Moore Professor of Telecommunication Systems at the University of Pennsylvania's Wharton School before joining CMU. Farber has made foundational contributions to electronics, programming languages and distributed computing. He also moderates the long-running Interesting People email list, which focuses on internet governance, infrastructure and other topics he favors. And he is known for his way with words, including his trademark "Farberisms," such as "another day, a different dollar" and "don't look for a gift in the horse's mouth."
His work has earned him numerous awards and honors, including fellowships in the IEEE and ACM, the 1995 SIGCOMM Award for lifelong contributions to computer communications, and a spot in the Pioneers Circle of the Internet Hall of Fame. The new AAAS fellows will be inducted on Saturday, Feb. 16, at the AAAS Fellows Forum during the AAAS Annual Meeting in Washington, D.C.

Bajpai, Wang Earn Stehlik Scholarships

Aisha Rashid (DC 2019)

The School of Computer Science has named seniors Tanvi Bajpai and Serena Wang the recipients of its 2018 Mark Stehlik SCS Alumni Undergraduate Impact Scholarship. The award, now in its fourth year, recognizes undergraduate students for their commitment and dedication both in and beyond the classroom. Bajpai and Wang have made noteworthy contributions both to SCS and to the computer science field in general, and they both plan to continue doing so after graduation.

Bajpai, who hails from West Windsor, N.J., said that she felt out of place in high school, surrounded by students who were less passionate about learning and more preoccupied with padding their resumes. She cultivated her interest in computer science by participating in programming competitions at the University of Pennsylvania, and attended a summer program at Princeton called the Program in Algorithmic and Combinatorial Thinking (PACT). Her exposure to discrete math and algorithm design fueled a desire to pursue computer science at CMU, where she was pleased to finally be surrounded by peers, faculty and mentors who were all just as passionate about the field as she was.

"I didn't want to get my hopes up about anything when I arrived at CMU," Bajpai said. "I just wanted to learn as much as I could."

During her time at CMU, Bajpai has performed research with Ramamoorthi Ravi, the Andris A. Zoltners Professor of Business and Rohet Tolani Distinguished Professor in SCS and the Tepper School of Business. In the summer of 2017, she interned at Microsoft, and this past summer she traveled to the University of Maryland to work on research with Samir Khuller, the Distinguished Scholar Teacher and Professor of Computer Science.
Despite her many accomplishments, Bajpai believes that her biggest achievement at CMU was being a teaching assistant for a series of computer science and discrete math classes, including 15-451: Algorithms, 15-151: Mathematical Foundations of Computer Science, and 21-128: Mathematical Concepts and Proofs.

"My outreach has been primarily toward encouraging diversity in the undergraduate computer science program, because although we have a 50/50 male to female ratio, we still need to push diversity at the teaching assistant and research level," Bajpai said. "I've been very passionate about addressing the imposter phenomenon that goes on at CMU, and I've planned events with Women @ SCS to address both of these topics."

Wang, a Bay Area native, wasn't interested in computer science until her junior year of high school, even though she grew up in Silicon Valley. After a field trip to Google and Facebook's offices, and after joining the National Center for Women and Information Technology (NCWIT) Facebook group, she was inspired by the initiatives proposed by young women there.

"It made me realize that just like any field, computer science had many diverse topics," Wang said. "There were many other young women just like me who were pursuing the field."

Wang has been a teaching assistant every semester since the fall of her sophomore year, because of the positive impact her own teaching assistants had on her education at CMU. Beyond that, she has been involved with ScottyLabs and Women @ SCS since her freshman year, holding executive positions in ScottyLabs, including director of finance and director. She has also performed research on provable security and privacy with SCS Assistant Professor Jean Yang, and developed a passion for entrepreneurship while participating in the Kleiner Perkins Engineering Fellows Program. Wang believes the most incredible opportunity she's had at CMU was organizing TartanHacks, a CMU-wide hackathon.
"Organizing a large event like TartanHacks takes a lot of preparation and teamwork," Wang said. "But in the end, the rest of the ScottyLabs executive board members and I felt so accomplished and satisfied when we finished successfully hosting the event."

With their senior years nearly half completed, both students are focusing on their post-graduation goals. Bajpai hopes to pursue a Ph.D. in theoretical computer science, and Wang will join an enterprise data infrastructure startup called Akita. Both students are incredibly grateful for the resources and opportunities that were theirs for the taking in the School of Computer Science.

"Receiving the Stehlik Scholarship has made me look back at what I've accomplished during my time at CMU, and as a freshman, I never would have expected to be able to do everything I've achieved," said Wang.

Bajpai added, "I don't think I'd be where I am today had I not had the support from some of my professors and advisors here, and I will always be grateful for that."

Carnegie Mellon University, Microsoft Join Forces to Advance Edge Computing Research

Byron Spice

Carnegie Mellon University today announced it will collaborate with Microsoft on a joint effort to innovate in edge computing, an exciting field of research for intensive computing applications that require rapid response times in remote and low-connectivity environments. By bringing artificial intelligence to the "edge," devices such as connected vehicles, drones or factory equipment can quickly learn and respond to their environments, which is critical to scenarios like search and rescue, disaster recovery, and safety.

To enable discovery in these areas and more, Microsoft will contribute edge computing products to Carnegie Mellon for use in its Living Edge Laboratory, a testbed for exploring applications that generate large data volumes and require intense processing with near-instantaneous response times. Intel, which is already associated with the lab, is also contributing technology.

Edge computing is a growing field that, in contrast to cloud computing, pushes computing resources closer to where data is generated — particularly mobile users — so that a host of new interactive and augmented reality applications become possible. It's the focus of intense commercial interest among network providers and tech companies, even as researchers continue to investigate its possibilities. Carnegie Mellon is at the forefront of this major shift in computing paradigms.

Under a two-year agreement, Microsoft will provide edge computing products to the Living Edge Lab, including Azure Data Box Edge, Azure Stack (with hardware partner Intel) and Microsoft Azure credits, which provide access to cloud services including artificial intelligence, internet of things, storage and more.
The new hardware is powered by Intel Xeon Scalable processors to support the most demanding applications and actionable insights.

The lab, run by edge computing pioneer and Carnegie Group Professor of Computer Science Mahadev Satyanarayanan, now operates on the CMU campus, as well as in shopping districts and parks in Pittsburgh's Oakland and Shadyside neighborhoods.

"It's easy to talk about edge computing, but it's hard to get crucial hands-on experience," said Satyanarayanan. "That's why a number of major telecommunications and tech companies have joined our Open Edge Computing Initiative and helped us establish the lab. We validate ideas and provide unbiased, critical thinking about what works and what doesn't."

With the addition of Microsoft products and Intel technology, faculty and students will be able to develop new applications and compare their performance with other components already in the lab. Microsoft partners also will be able to use the lab.

"The intelligent edge, with the power of the intelligent cloud, can and is already driving real-world impact. By moving AI models and compute closer to the source, we can surface real-time insights in scenarios where milliseconds make a critical difference, and in remote areas where 'real time' has not been possible," said Tad Brockway, general manager of Azure Storage and Azure Stack. "Microsoft offers the most comprehensive spectrum of intelligent edge technologies across hardware, software and devices, bringing the power of the cloud to the edge. We are excited to see what Carnegie Mellon researchers create."

Speed — both of computation and communication — is a driving force for edge computing. By placing compute nodes, or "cloudlets," near where people are, edge computing makes it possible both to perform intensive computation and to communicate the results to users in near real time.
This enables solutions better suited to latency-sensitive workloads, where every millisecond matters.

"Intel is at the heart of solutions needed to run the most demanding AI applications on the edge," said Renu Navale, senior director of Edge Services and Industry Enabling in Intel's Network Communications Division. "We are excited to extend our existing networking edge collaboration with the Open Edge Computing Initiative to include Microsoft solutions like Azure Data Box Edge and Azure Stack, powered by Intel Xeon processors."

One example class of applications is wearable cognitive assistance based on the Gabriel platform, a National Science Foundation-sponsored project led by Satyanarayanan. A Gabriel application is intended as an angel on your shoulder, observing a user and providing advice on a task. This technology might guide a user who is assembling furniture or troubleshooting a complex piece of machinery, or help someone use an AED device in an emergency.

A second example of the value edge computing brings to applications is OpenRTiST, which allows users to see the world around them in real time through the eyes of an artist. The video feed from the camera of a mobile device is transmitted to a cloudlet, transformed there by a deep neural network trained offline to learn the artistic features of a famous painting, and returned to the user's device as a video feed. The entire round trip is fast enough to preserve the illusion that the artist is continuously repainting the user's world as displayed on the device.

Another class of applications envisioned for the Living Edge Laboratory is real-time assistive tools that help visually impaired people detect objects or people nearby. The video feeds of a stereoscopic camera worn by a user are transmitted to a nearby cloudlet, and real-time video analytics are used to detect obstacles.
This information is transmitted back to the user and communicated via vibro-tactile feedback.

"The Living Edge Laboratory can help determine not only what types of applications are possible, but also what kind of equipment or software works best for a given application," Satyanarayanan said.

The lab was established through the Open Edge Computing Initiative, a group of leading companies, including Intel, Deutsche Telekom, Vodafone and Crown Castle, that have provided equipment, software and expertise.

"We welcome Microsoft as a new member of the Open Edge Computing Initiative and we very much look forward to exploring Microsoft technologies in our Living Edge Laboratory," said Rolf Schuster, director of the Open Edge Computing Initiative. "This is a great opportunity to drive attractive new business opportunities around edge computing for both the telecom and cloud industries."
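The round-trip latency that drives applications like OpenRTiST lends itself to a quick back-of-envelope model. If each camera frame must wait for its processed result before display, the sustainable frame rate is bounded by network round-trip time plus processing time. The millisecond figures below are assumptions for illustration, not measurements from the Living Edge Lab.

```python
# Back-of-envelope model of why proximity to a cloudlet matters for
# latency-sensitive video applications.

def frames_per_second(rtt_ms, processing_ms):
    """Maximum frame rate when every frame waits for its full round trip."""
    return 1000.0 / (rtt_ms + processing_ms)

cloudlet_fps = frames_per_second(rtt_ms=5, processing_ms=20)  # nearby cloudlet
cloud_fps = frames_per_second(rtt_ms=80, processing_ms=20)    # distant datacenter
```

With these assumed numbers, the nearby cloudlet sustains 40 frames per second versus 10 for the distant datacenter: the difference between a smoothly "repainted" world and visible lag.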

Neural Nets Supplant Marker Genes in Analyzing Single Cell RNA Sequencing

CMU's scQuery Web Server Uses New Method To Determine Cell Types, Identify Key Genes

Byron Spice

Computer scientists at Carnegie Mellon University say neural networks and supervised machine learning techniques can efficiently characterize cells that have been studied using single cell RNA-sequencing (scRNA-seq). This finding could help researchers identify new cell subtypes and differentiate between healthy and diseased cells. Rather than rely on marker genes, which are not available for all cell types, this new automated method analyzes all of the scRNA-seq data to select just those parameters that can differentiate one cell from another. This enables the analysis of all cell types and provides a method for comparative analysis of those cells.

Researchers from CMU's Computational Biology Department explain their method today in the online journal Nature Communications. They also describe a web server called scQuery that makes the method usable by all researchers.

Over the past five years, single cell sequencing has become a major tool for cell researchers. In the past, researchers could only obtain DNA or RNA sequence information by processing batches of cells, providing results that only reflected average values of the cells. Analyzing cells one at a time, by contrast, enables researchers to identify subtypes of cells, or to see how a healthy cell differs from a diseased cell, or how a young cell differs from an aged cell.

This type of sequencing will support the National Institutes of Health's new Human BioMolecular Atlas Program (HuBMAP), which is building a 3D map of the human body that shows how tissues differ on a cellular level. Ziv Bar-Joseph, professor of computational biology and machine learning and a co-author of today's paper, leads a CMU-based center contributing computational tools to that project.

"With each experiment yielding hundreds of thousands of data points, this is becoming a Big Data problem," said Amir Alavi, a Ph.D. student in computational biology who was co-lead author of the paper with post-doctoral researcher Matthew Ruffalo.
"Traditional analysis methods are insufficient for such large scales."

Alavi, Ruffalo and their colleagues developed an automated pipeline that attempts to download all public scRNA-seq data available for mice — identifying the genes and proteins expressed in each cell — from the largest data repositories, including the NIH's Gene Expression Omnibus (GEO). The cells were then labeled by type and processed via a neural network, a computer system modeled on the human brain. By comparing all of the cells with each other, the neural net identified the parameters that make each cell distinct.

The researchers tested this model using scRNA-seq data from a mouse study of a disease similar to Alzheimer's. As would be expected, the analysis showed similar levels of brain cells in both healthy and diseased samples, while the diseased samples included substantially more immune cells, such as macrophages, generated in response to the disease.

The researchers used their pipeline and methods to create scQuery, a web server that can speed comparative analysis of new scRNA-seq data. Once a researcher submits a single cell experiment to the server, the group's neural networks and matching methods can quickly identify related cell subtypes and find earlier studies of similar cells.

In addition to Ruffalo, Alavi and Bar-Joseph, authors of the research paper include Aiyappa Parvangada and Zhilin Huang, both graduate students in computational biology. The National Institutes of Health, the National Science Foundation, the Pennsylvania Department of Health and the James S. McDonnell Foundation supported this work.
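The retrieval idea behind scQuery can be sketched in miniature: embed each cell's expression profile into a low-dimensional space, then match a query cell to its nearest neighbors. In the real system a trained neural network produces the embedding; in this toy sketch a fixed random projection stands in for it, so everything here (dimensions, data, names) is an illustrative assumption rather than the paper's method.

```python
import numpy as np

# Toy stand-in for a learned embedding: a fixed random projection from
# gene-expression space (n_genes) down to a compact representation (n_dims).
rng = np.random.default_rng(0)
n_genes, n_dims = 1000, 32
projection = rng.standard_normal((n_genes, n_dims))

def embed(expression):
    """Map a (cells x genes) expression matrix to unit-length embeddings."""
    z = expression @ projection
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def nearest(query_cell, reference, k=3):
    """Indices of the k reference cells most similar to the query cell."""
    sims = embed(reference) @ embed(query_cell[None, :])[0]  # cosine similarity
    return np.argsort(sims)[::-1][:k]

# Synthetic "reference atlas" of 50 cells with Poisson-like counts.
reference = rng.poisson(2.0, size=(50, n_genes)).astype(float)
hits = nearest(reference[7], reference)
```

A query cell retrieves itself first, then its closest relatives; in scQuery those neighbors would point to related cell subtypes and earlier studies of similar cells.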

High Stakes

Embedded Software Engineering Team Wins National Honors for Wearable Device That Detects Opioid Overdose

Josh Quicksall

Opioid drug overdoses kill thousands of Americans each year. A team of software engineering students has developed a wearable device that could help address these unprecedented rates of overdose deaths.

As a capstone project for the Institute for Software Research's professional master's program in embedded software engineering (MSIT-ESE), four students worked with their sponsor, Pinney Associates, to build a prototype wristband that can detect overdose in the wearer. The challenge the client presented to the team was to produce a low-cost wearable device that could accurately detect an opioid overdose and send out an alert — helping rescuers respond in time to administer naloxone, a life-saving opioid antagonist that can reverse the overdose.

The device delighted Pinney Associates, a pharmaceutical consulting firm that sponsored the work. It was also clever enough that the team beat out 97 percent of all submissions to the Robert Wood Johnson Foundation's Opioid Challenge competition, ultimately placing third in the competition finals at the Health 2.0 Conference held in September in Santa Clara, Calif.

"The project was intimidating, not only because it was massive, but also because this wasn't a project where you could simply deliver the code," explained Puneetha Ramachandra, a member of the group, which calls itself Team Hashtag. "There was a burden of real societal responsibility to the project. Lives were on the line. This had to be done properly."

Using pulse oximetry, the device monitors the amount of oxygen in the user's blood by measuring light reflected back from the skin to a sensor. When paired with a mobile phone via Bluetooth, the sensor takes numerous readings on an ongoing basis to establish a baseline. If the user's blood oxygen level drops for more than 30 seconds, the device switches an LED on the display from green to red.
The device also cues the paired mobile phone — via an app the team also developed — to send a message with the user's GPS coordinates to his or her emergency contacts.

"Having naloxone on hand doesn't matter if you overdose and there is nobody nearby to administer it," said Michael Hufford, CEO of Harm Reduction Therapeutics, a nonprofit pharmaceutical company spun out of Pinney Associates with the goal of taking naloxone over-the-counter. "Having a cheap-but-reliable device that can detect overdose could be absolutely central in saving lives."

One of the most significant challenges in developing the system was simply understanding what constitutes an overdose in terms of a specific drop in blood oxygen saturation. "Even if you asked a group of doctors what defines an overdose, they would struggle to give you a concrete answer," team member Rashmi Kalkunte Ramesh said. "They have to physically assess the person for a variety of signals. It was on us to cull those signals and select a method of reliable, accurate assessment. We eventually homed in on a wrist-mounted pulse oximetry device as the best approach."

The team, which also included Yu-Sam Huang and Soham Donwalkar, is excited to see how the device might further evolve. "There are so many ways this product could be even better," Donwalkar said. "I can absolutely see additional sensors being incorporated to give a machine-learning backend a bigger dataset to work with, reducing the number of false positives, for example. Or, once clinical trials are open, assembling a much larger, more diverse corpus for ML training that encompasses a wide range of physical variables — like age, sex, race, etc. — that could affect what an overdose state looks like!"

Their clients couldn't be happier with the progress to date. "I wasn't expecting something that was quite so turnkey," said Pinney senior data manager Steve Pype. "Initially, we were thinking this might be a proof of concept. But here we are: The project is almost finished and they're refining the prototype."
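The detection logic the article describes (a baseline reading, then an alert when blood oxygen stays low for more than 30 seconds) can be sketched roughly as follows. Only the 30-second window comes from the article; the drop threshold, sample rate and all names here are assumptions, not the team's actual parameters.

```python
# Illustrative sketch of sustained-drop detection on SpO2 readings.

def detect_sustained_drop(readings, baseline, drop=0.10,
                          window_s=30, sample_period_s=1):
    """True if SpO2 stays below (baseline - drop) for window_s seconds.

    readings: chronological SpO2 fractions (e.g. 0.97), one per sample_period_s.
    """
    needed = window_s // sample_period_s
    below = 0
    for spo2 in readings:
        below = below + 1 if spo2 < baseline - drop else 0
        if below >= needed:
            # Here the device would turn its LED red, and the paired phone
            # would message GPS coordinates to emergency contacts.
            return True
    return False

normal = [0.97] * 60                   # steady readings: no alert
overdose = [0.97] * 10 + [0.80] * 35   # sustained drop: alert
```

Requiring the drop to persist for the full window is what keeps brief sensor glitches from triggering false alerts, the problem Donwalkar suggests attacking further with more sensors and ML.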

Sandholm, Brown To Receive Minsky Medal

The Duo's Libratus AI Program Was the First To Beat Top No-Limit Poker Professionals

Byron Spice

Computer Science Professor Tuomas Sandholm and Noam Brown, a Ph.D. student in the Computer Science Department, are the second-ever recipients of the prestigious Marvin Minsky Medal, which will be presented by the International Joint Conference on Artificial Intelligence (IJCAI) in recognition of their outstanding achievements in AI.

Sandholm and Brown created Libratus, an AI that became the first computer program to beat top professional poker players at Heads-Up No-Limit Texas Hold'em. During the 20-day "Brains vs. Artificial Intelligence" competition in January 2017, Libratus played 120,000 hands against four poker pros, beating each player individually and collectively amassing more than $1.8 million in chips. The feat has yet to be duplicated.

"Poker is an important challenge for AI because any poker player has to deal with incomplete information," said Michael Wooldridge, a professor of computer science at the University of Oxford and chair of the IJCAI Awards Committee. "Incomplete information makes the computational challenge orders of magnitude harder. Libratus used fundamentally new techniques for dealing with incomplete information which have exciting potential applications far beyond games."

This is just the second time that the IJCAI has awarded the Minsky Medal. The inaugural recipient was the team behind DeepMind's AlphaGo system, which beat a world champion Go player in 2016. The award is named for Marvin Minsky, one of the founders of the field of AI and co-founder of MIT's Computer Science and AI Laboratory. It will be presented at the IJCAI 2019 conference in Macao, China, next August.

"Marvin Minsky was a big, broad thinker and an AI pioneer. We are proud to receive the medal in his name," Sandholm said. "Computational techniques for solving imperfect-information games will have large numbers of applications in the future, since most real-world settings have more than one actor and imperfect information.
I believe that this is a tipping point toward applications now that the best AI has reached a superhuman level, as measured on the main benchmark in the field."

Libratus did not use expert domain knowledge or human data specific to poker. Rather, the AI analyzed the game's rules and devised its own strategy. The technology thus could be applied to any number of imperfect-information games. Such hidden information is ubiquitous in real-world strategic interactions, including business negotiation, cybersecurity, finance, strategic pricing and military applications.

"We appreciate the community's recognition of the difficult challenges that hidden information poses to the field of artificial intelligence and the importance of addressing them," Brown said. "We look forward to applying this technology to a variety of real-world settings in a way that will have a positive impact on people's lives."

Sandholm said he believes so strongly in the potential of this technology that he has founded two companies, Strategic Machine Inc. and Strategy Robot Inc., which have exclusively licensed Libratus' technology and other technologies from Sandholm's lab to create a variety of commercial applications.

Sandholm, a leader of the university's CMU AI initiative, is the first recipient of Carnegie Mellon University's Angel Jordan Professorship in Computer Science. He also founded and directs the Electronic Marketplaces Laboratory and Optimized Markets Inc. Sandholm joined CMU in 2001 and works at the convergence of AI, economics and operations research. His algorithms run the nationwide kidney exchange for the United Network for Organ Sharing, autonomously making the kidney exchange transplant plan for 69 percent of U.S. transplant centers each week.
One of his startups, Optimized Markets Inc., is bringing a new optimization-powered paradigm to advertising campaign sales and scheduling in television, streaming video and audio, internet display, mobile, game, radio and cross-media advertising. Through a prior startup he founded, he fielded 800 combinatorial electronic auctions for sourcing, totaling $60 billion.

Sandholm's many honors include a National Science Foundation CAREER Award, the inaugural Association for Computing Machinery (ACM) Autonomous Agents Research Award, a Sloan Fellowship, an Edelman Laureateship, and the IJCAI's Computers and Thought Award. He is a fellow of the Association for Computing Machinery, Association for the Advancement of Artificial Intelligence, and Institute for Operations Research and the Management Sciences. He holds an honorary doctorate from the University of Zurich.

Before undertaking his doctoral studies at CMU, Brown worked at the Federal Reserve Board in the International Financial Markets section, where he researched algorithmic trading in financial markets. Prior to that, he developed algorithmic trading strategies. At CMU, where he is advised by Sandholm, Brown has developed computational game theory techniques to produce AIs capable of strategic reasoning in large imperfect-information interactions. He is a recipient of an Open Philanthropy Project AI Fellowship and a Tencent AI Lab Fellowship.

Brown and Sandholm have shared numerous awards for their research, including a best paper award at the 2017 Neural Information Processing Systems conference, the Allen Newell Award for Research Excellence and multiple supercomputing awards.
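Libratus's actual algorithms are far more elaborate than anything that fits here, but the counterfactual-regret family it builds on rests on a simple primitive called regret matching: play each action in proportion to how much you regret not having played it in the past. This toy self-play sketch for rock-paper-scissors (not the duo's method, and all names are invented for illustration) shows how a sensible average strategy can emerge from a game's rules alone, converging toward the uniform equilibrium.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[a][b]: payoff to a player choosing action a against action b.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Regret matching: play actions in proportion to their positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positive]

def train(iterations=20000, seed=1):
    """Self-play training; returns the average strategy over all iterations."""
    random.seed(seed)
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        me = random.choices(range(ACTIONS), weights=strategy)[0]
        opp = random.choices(range(ACTIONS), weights=strategy)[0]
        # Regret for action a: what a would have earned minus what we earned.
        for a in range(ACTIONS):
            regrets[a] += PAYOFF[a][opp] - PAYOFF[me][opp]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]
```

In rock-paper-scissors the average strategy drifts toward playing each action about a third of the time; scaling ideas like this to the enormous, hidden-information game tree of no-limit poker is where the duo's research lies.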

Sadeh Speaks on Plenary Panel About Data Protection and Privacy

Daniel Tkacik

It's been five months since the General Data Protection Regulation (GDPR) went into effect in the European Union, setting strict rules for how personal data is collected and processed. Last week, CyLab's Norman Sadeh, a professor in the Institute for Software Research and co-director of the Privacy Engineering program, spoke about privacy, artificial intelligence (AI) and the challenges at the intersection of the two at the International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Brussels. He sat down with CyLab to discuss his talk and privacy in general in the Q&A below.

Can you give us a taste of what the International Conference of Data Protection and Privacy Commissioners was all about?

As its name suggests, ICDPPC is the big international conference where, once a year, regulators and other key stakeholders from around the world come together to discuss privacy regulation and broader challenges associated with privacy.

You served as a panelist on one of the plenary panels, titled "Right vs. Wrong." What exactly was discussed?

This panel was aimed at broadening the scope of privacy discussions beyond just regulation and addressing deeper, more complex ethical issues related to the use and collection of data. I discussed how our research has shown that getting the full benefits of existing regulations, whether the GDPR or the recently passed California Consumer Privacy Act, is hampered by complex cognitive and behavioral limitations we people have. I talked about the technologies our group has been developing to assist users in making better informed privacy decisions and overcoming these limitations.

How exactly do you define what's right vs. what's wrong?

When people discuss ethics, they generally refer to a collection of principles that include basic expectations of trustworthiness, transparency, fairness and autonomy. As you can imagine, there is no single definition out there, and this list is not exhaustive.
In my presentation, I discussed the principles and methodologies our group uses to evaluate and fine-tune the technologies we develop, and how we ultimately ask ourselves whether a user is better off with a given configuration of one or more technologies. This often involves running human-subject studies designed to isolate and quantify the effects of those technologies. Examples of privacy technologies we have been developing range from technologies that nudge users to reflect more carefully on the privacy decisions they need to make, to machine learning techniques that model people's privacy preferences and help them configure privacy settings. They also include technologies that automatically answer privacy questions users may have about a given product or service.

Can you talk about the context in which this conference took place? What kinds of privacy regulations have we seen go into effect this year, and what other regulations might we see in the future?

This conference took place in a truly unique context. People's concerns about privacy have steadily increased over the past several years, from the Snowden revelations of a few years ago to the Cambridge Analytica fiasco exposed earlier this year. People have come to realize that privacy is not just about having their data collected for the sake of sending them better-targeted ads; it goes to the core of our democracy and how various actors are using the data they collect to manipulate our opinions and even influence our votes. The widespread use of artificial intelligence, and how it can lead to bias, discrimination and other challenges, is also of increasing concern to many. A keynote presentation at the conference by Apple CEO Tim Cook, as well as messages from Facebook's Mark Zuckerberg and Alphabet's Sundar Pichai, also suggests that big tech may now be in favor of a sweeping U.S. federal privacy law that would share some similarities with the EU's GDPR.
While the devil is in the details, such a development would mark a major shift in the way data collection and use practices are regulated in the U.S., where many technologies remain by and large unregulated today.

How does your research inform some of these types of discussions?

Research is needed on many fronts, from developing a better understanding of how new technologies negatively impact people's expectations of privacy to how we can mitigate the risks associated with undesirable inferences made by data mining algorithms. At the conference, I focused on some of the research we have conducted on modeling people's privacy preferences and expectations, and on how we have been able to develop technologies that can assist users in making better-informed decisions at scale.

How do you address the scale at which data is collected and the complexity of the value chains along which data travels?

Regulatory measures by themselves, such as more transparent privacy policies and offering users more control and more privacy settings, are important but not sufficient to empower users to regain control over their data at scale. I strongly believe that our work on using AI to build privacy assistants can ultimately make a very big difference here. People are simply unable to read the privacy policies and configure the settings associated with the many technologies they interact with on a typical day. There is a need for intelligent assistants that can help them zoom in on the issues they care about, answer the questions they have and help them configure settings.

What do you see as the main challenges for privacy in the age of AI and IoT?

A first issue is the scale at which data is collected and the diverse ways in which it is used. A second challenge has to do with the difficulty of controlling the inferences that can be made by machine learning algorithms.
A third challenge, in the context of IoT, is that we don't have any mechanisms today to even advertise the presence of these technologies (think cameras with computer vision or home assistants), let alone expose privacy settings to the people who come in contact with them. For instance, how is a camera supposed to allow users to opt in to or out of facial expression recognition if the user does not even know the camera is there, doesn't know that facial expression recognition algorithms are processing the footage, and has no user interface through which to opt in or out? If I had to identify one final challenge, I would emphasize the need to help and train developers to do a better job of adopting privacy-by-design practices, from data minimization all the way to more transparent data practice disclosures.
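To make the IoT challenge Sadeh describes concrete, the sketch below imagines what advertising a device's data practices and honoring opt-outs might look like. This is purely illustrative: the descriptor fields, device name and function are hypothetical, not part of any real standard or of the CMU group's actual systems.

```python
import json

# Hypothetical descriptor a camera might broadcast (e.g., over a local
# network) so nearby users' devices could discover it and surface its
# privacy choices. All field names are illustrative assumptions.
CAMERA_DESCRIPTOR = {
    "device": "lobby-camera-3",
    "data_practices": ["video_capture", "facial_expression_recognition"],
    "retention_days": 30,
    "opt_out_supported": ["facial_expression_recognition"],
}

def allowed_practices(descriptor: dict, opt_outs: set) -> list:
    """Return the data practices that remain active after honoring a
    user's opt-outs, ignoring opt-outs the device does not support."""
    honored = opt_outs & set(descriptor["opt_out_supported"])
    return [p for p in descriptor["data_practices"] if p not in honored]

# A user opts out of facial expression recognition; video capture
# continues because it is not an opt-out-supported practice here.
print(json.dumps(allowed_practices(
    CAMERA_DESCRIPTOR, {"facial_expression_recognition"})))
```

A real mechanism would also need a discovery protocol and a trusted way to verify that devices actually honor the advertised opt-outs, which is exactly the gap the interview points to.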