News 2021

February 2021

AI Identifies Social Bias Trends in Bollywood, Hollywood Movies

New Method Can Analyze Decades of Films in a Few Days

Byron Spice

Babies whose births were depicted in Bollywood films from the 1950s and 60s were more often than not boys; in today's films, boy and girl newborns are about evenly split. In the 50s and 60s, dowries were socially acceptable; today, not so much. And Bollywood's conception of beauty has remained consistent through the years: beautiful women have fair skin. Fans and critics of Bollywood — the popular name for a $2.1 billion film industry centered in Mumbai, India — might have some inkling of all this, particularly as movies often reflect changes in the culture. But these insights came via an automated computer analysis designed by Carnegie Mellon University computer scientists. The researchers, led by Kunal Khadilkar and Ashiqur R. KhudaBukhsh of CMU's Language Technologies Institute (LTI), gathered 100 Bollywood movies from each of the past seven decades along with 100 of the top-grossing Hollywood movies from the same periods. They then used statistical language models to analyze subtitles of those 1,400 films for gender and social biases, looking for such factors as which words are closely associated with each other. "Most cultural studies of movies might consider five or 10 movies," said Khadilkar, a master's student in LTI. "Our method can look at 2,000 movies in a matter of days." It's a method that enables people to study cultural issues with much more precision, said Tom Mitchell, Founders University Professor in the School of Computer Science and a co-author of the study. "We're talking about statistical, automated analysis of movies at scale and across time," Mitchell said. "It gives us a finer probe for understanding the cultural themes implicit in these films." The same natural language processing tools might be used to rapidly analyze hundreds or thousands of books, magazine articles, radio transcripts or social media posts, he added. For instance, the researchers assessed beauty conventions in movies by using a so-called cloze test.
Essentially, it's a fill-in-the-blank exercise: "A beautiful woman should have BLANK skin." A language model normally would predict "soft" as the answer, they noted. But when the model was trained with the Bollywood subtitles, the consistent prediction became "fair." The same thing happened when Hollywood subtitles were used, though the bias was less pronounced. To assess the prevalence of male characters, the researchers used a metric called Male Pronoun Ratio (MPR), which compares the occurrence of male pronouns such as "he" and "him" with the total occurrences of male and female pronouns. From 1950 through today, the MPR for Bollywood and Hollywood movies ranged from roughly 60 to 65. By contrast, the MPR for a selection of Google Books dropped from near 75 in the 1950s to parity, about 50, in the 2020s. Dowries — monetary or property gifts from a bride's family to the groom's — were common in India before they were outlawed in the early 1960s. Looking at words associated with dowry over the years, the researchers found such words as "loan," "debt" and "jewelry" in Bollywood films of the 50s, which suggested compliance. By the 1970s, other words, such as "consent" and "responsibility," began to appear. Finally, in the 2000s, the words most closely associated with dowry — including "trouble," "divorce" and "refused" — indicated noncompliance or its consequences. "All of these things we kind of knew," said KhudaBukhsh, an LTI project scientist, "but now we have numbers to quantify them. And we can also see the progress over the last 70 years as these biases have been reduced." A research paper by Khadilkar, KhudaBukhsh and Mitchell was presented at the Association for the Advancement of Artificial Intelligence virtual conference earlier this month.
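The Male Pronoun Ratio described above is simple enough to sketch in a few lines of Python. The pronoun inventories below are an assumption on our part (the study may count a slightly different set), and the toy "subtitle" snippet is invented; it happens to score 60, in the range the researchers report for film subtitles.

```python
import re

def male_pronoun_ratio(text):
    """Male Pronoun Ratio (MPR): male-pronoun occurrences divided by
    combined male and female pronoun occurrences, as a percentage."""
    # Pronoun inventories are our assumption, not the paper's exact list.
    male = {"he", "him", "his", "himself"}
    female = {"she", "her", "hers", "herself"}
    words = re.findall(r"[a-z']+", text.lower())
    m = sum(w in male for w in words)
    f = sum(w in female for w in words)
    if m + f == 0:
        return None  # no gendered pronouns at all
    return 100.0 * m / (m + f)

# Tiny made-up "subtitle" snippet: three male pronouns, two female.
subtitles = "He said he would meet her at the station. She waved as he arrived."
print(male_pronoun_ratio(subtitles))  # 60.0
```

Running the same computation over decades of subtitle files, rather than one snippet, is what lets the trend lines in the study emerge.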

Peet Named Assistant Dean for Undergraduate Experience

Matthew Wein

Veronica Peet has been named the School of Computer Science's first assistant dean for undergraduate experience. Peet joined SCS nearly two years ago as a senior academic advisor to first-year students, working with Tom Cortina, then the assistant dean for undergraduate education. Her new position emerged from departmental restructuring that saw Cortina elevated to associate dean for undergraduate programs. "My role very much focuses on the transition from high school to full-fledged college student, and all of the special things that take place during that transition," Peet said, adding that she will continue advising first-year students. Peet has spent eight years at CMU, beginning her time on campus as an academic advisor in the Mellon College of Science dean's office. There, she worked with first-year students and the school's precollege initiative before shifting her focus to the Science and Humanities Scholars Program. The university phased out that program, and she made the move to SCS. A Detroit native, Peet earned an undergraduate degree in applied mathematics from UCLA, then went to work for a software startup. When the company moved to the other side of Los Angeles, Peet made what she now describes as a fairly easy decision. "Everything in LA is about the commute, and I didn't want to add an hour to my drive every day. So I went back to the department where I got my degree and started advising math majors," she said. Peet finds helping students navigate the transition to college life the most rewarding part of her job, even as the pandemic has presented new challenges. "It's been difficult with the isolation and remoteness that everyone is feeling, but I also make sure that we're celebrating students' victories," she said. "They keep me in the loop about the research they do and the internships they apply for and get. I'm a cheerleader for these students. Just being their advocate is my primary goal, and that part is easy because they're just so good."

CREATE Lab Honored For Monitoring Emissions at Shenango Coke Works

Joint Effort With Grassroots Advocates Highlighted Pollutants, Smells

Byron Spice

Carnegie Mellon University's CREATE Lab and the grassroots advocacy group Allegheny County Clean Air Now (ACCAN) are winners of an inaugural Constellation Prize for their collaboration on the Shenango Channel, an effort to highlight pollutants from a now-defunct coke works near Pittsburgh. The Constellation Prize was created by a group of engineers and social scientists to reimagine the role of engineering in society. The award for the Shenango Channel and three other winners will be presented in a virtual ceremony Feb. 24. The Shenango Channel was an offshoot of Breathe Cam, a network of cameras that the CREATE Lab uses to monitor visible air emissions in Allegheny County. The channel focused on DTE Energy's Shenango Inc. coke works on Neville Island in the Ohio River. CREATE Lab and ACCAN worked together to set up cameras to record high-resolution video of the plant 24 hours a day and collaborated to catalog thousands of fugitive emission events from it. The videos, along with other measurements of pollutants and reports of unusual smells, were shared by the public on the channel's website. The reports of fugitive emissions resulted in citations by the Allegheny County Health Department before DTE Energy shuttered the plant in 2016. The Shenango Channel "exemplifies successful collaboration between engineers and a 'frontline' community and shows how thoughtfully co-designed technology can empower citizens to advocate for social change," said Gwen Ottinger, associate professor in the Department of Politics at Drexel University, who nominated CREATE Lab and ACCAN for the award. The Shenango Channel helped lead to the 2018 expansion of Breathe Cam into the Monongahela Valley south of Pittsburgh, where cameras are trained 24/7 on three plants in U.S. Steel's Mon Valley Works. Breathe Cam is part of the Breathe Project, a program of the Community Foundation for the Alleghenies that receives support from the Heinz Endowments. 
The project serves as a clearinghouse for air quality data for Pittsburgh and southwestern Pennsylvania. The CREATE Lab is part of the Robotics Institute. It explores socially meaningful innovation and deployment of robotic technologies. Randy Sargent is visualization director of the lab and has been a leader in the Breathe Cam and Shenango Channel projects. More information about the Constellation Awards, the other recipients and the virtual award ceremony can be found at www.constellationprize.org.

AI May Mistake Chess Discussions for Racist Talk

References to Chess Piece Colors Can Trigger Alarm

Byron Spice

"The Queen's Gambit," the recent TV miniseries about a chess master, may have stirred increased interest in chess, but a word to the wise: social media talk about game-piece colors could lead to misunderstandings, at least for hate-speech detection software. That's what a pair of Carnegie Mellon University researchers suspect happened to Antonio Radić, or "agadmator," a Croatian chess player who hosts a popular YouTube channel. Last June, his account was blocked for "harmful and dangerous" content. YouTube never provided an explanation and reinstated the channel within 24 hours, said Ashiqur R. KhudaBukhsh, a project scientist in CMU's Language Technologies Institute (LTI). It's nevertheless possible that "black vs. white" talk during Radić's interview with Grandmaster Hikaru Nakamura triggered software that automatically detects racist language, he suggested. "We don't know what tools YouTube uses, but if they rely on artificial intelligence to detect racist language, this kind of accident can happen," KhudaBukhsh said. And if it happened publicly to someone as high-profile as Radić, it may well be happening quietly to lots of other people who are not so well known. To see if this was feasible, KhudaBukhsh and Rupak Sarkar, an LTI course research engineer, tested two state-of-the-art speech classifiers — a type of AI software that can be trained to detect indications of hate speech. They used the classifiers to screen more than 680,000 comments gathered from five popular chess-focused YouTube channels. They then randomly sampled 1,000 comments that at least one of the classifiers had flagged as hate speech. When they manually reviewed those comments, they found that the vast majority — 82% — did not include hate speech. Words such as black, white, attack and threat seemed to be triggers, they said. 
As with other AI programs that depend on machine learning, these classifiers are trained with large numbers of examples and their accuracy can vary depending on the set of examples used. For instance, KhudaBukhsh recalled an exercise he encountered as a student, in which the goal was to identify "lazy dogs" and "active dogs" in a set of photos. Many of the training photos of active dogs showed broad expanses of grass because running dogs often were in the distance. As a result, the program sometimes identified photos containing large amounts of grass as examples of active dogs, even if the photos didn't include any dogs. In the case of chess, many of the training data sets likely include few examples of chess talk, leading to misclassification, he noted. The research paper by KhudaBukhsh and Sarkar, a recent graduate of Kalyani Government Engineering College in India, won the Best Student Abstract Three-Minute Presentation this month at the Association for the Advancement of AI annual conference.  
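The screen-then-review procedure the researchers used can be illustrated with a toy sketch. The keyword rule below is a deliberately crude stand-in for the study's neural classifiers, and the comments and trigger words are invented for illustration; the point is how innocuous chess talk trips a naive filter, and how manual labels of a flagged sample yield a false-positive share.

```python
import re

# Toy stand-in for a trained hate-speech classifier: it simply flags any
# comment containing a trigger word. The study's actual classifiers were
# state-of-the-art models; this keyword rule and the comments below are
# invented purely for illustration.
TRIGGERS = {"black", "white", "attack", "threat"}

def flag(comment):
    words = set(re.findall(r"[a-z]+", comment.lower()))
    return bool(words & TRIGGERS)

comments = [
    "white launches a kingside attack",
    "black's queen is under threat",
    "nice endgame technique",
    "the bishop pair wins again",
]

flagged = [c for c in comments if flag(c)]

# Manual-review step: a human labels each flagged comment. None of these
# chess comments is actually hate speech, so every flag is a false positive.
actually_hateful = {c: False for c in flagged}
false_positive_share = sum(not v for v in actually_hateful.values()) / len(flagged)
print(flagged)
print(false_positive_share)  # 1.0
```

In the study the same review of 1,000 flagged comments, drawn from far stronger classifiers, still found that 82% contained no hate speech.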

Hudson Wins SIGCHI Lifetime Research Award

Byron Spice

Scott Hudson, a professor in the Human-Computer Interaction Institute (HCII), is this year's winner of the Lifetime Achievement Award in Research presented by the Association for Computing Machinery's Special Interest Group on Computer-Human Interaction (SIGCHI). The award recognizes the best, most fundamental and influential research contributions accomplished over a lifetime of innovation and leadership in the field of human-computer interaction. Hudson, who received the SIGCHI Lifetime Service Award in 2017, is best known for creating tools and enabling technologies necessary for building interactive systems. This work began with software tools for implementing graphical user interfaces and later expanded to include the use of sensors in interactive devices and the application of machine learning techniques in HCI. "Scott has always been known for his inventiveness, creating, for example, a pixelated display implemented with air bubbles in water and a 3D printer which prints in needle felted yarn," SIGCHI noted in its award announcement. Hudson earned his Ph.D. in computer science at the University of Colorado. He joined the HCII in 1997 after stints at the University of Arizona and Georgia Institute of Technology. He founded the HCII's Ph.D. program and has served on program committees for a number of leading HCI conferences. He was elected to the CHI Academy in 2006. Hudson joins fellow HCII faculty members Sara Kiesler, Robert Kraut and Brad A. Myers, who are previous recipients of SIGCHI Lifetime Achievement Awards.

CMU Researchers Win NSF-Amazon Fairness in AI Awards

Byron Spice

Three Carnegie Mellon University research teams have received funding through the Program on Fairness in Artificial Intelligence, which the National Science Foundation sponsors in partnership with Amazon. The program supports computational research focused on fairness in AI, with the goal of building trustworthy AI systems that can be deployed to tackle grand challenges facing society. "There have been increasing concerns over biases in AI systems, for example computer vision algorithms working worse for Blacks than for other races, or ads for higher paying jobs only being shown to men," said Jason Hong, a professor in the Human-Computer Interaction Institute (HCII). "Machine learning researchers are developing new tools and techniques to improve fairness from a quantitative perspective, but there are still many blind spots that defy pure quantification." The CMU projects address new methods for detecting bias, translating fairness goals into public policy and increasing the diversity of people able to use systems that recognize human speech. "Understanding how AI systems can be designed on principles of fairness, transparency and trustworthiness will advance the boundaries of AI applications," said Henry Kautz, director of the NSF's Division of Information and Intelligent Systems. "And it will help us build a more equitable society in which all citizens can be designers of these technologies as well as benefit from them." The CMU projects selected as 2021 awardees are:

Organizing Crowd Audits To Detect Bias in Machine Learning. Led by Hong, researchers in the HCII seek to increase the diversity of viewpoints involved in identifying bias and unfairness in AI-enabled systems, in part by developing an audit system that uses crowd workers.

Fair AI in Public Policy — Achieving Fair Societal Outcomes in ML Applications to Education, Criminal Justice, and Health & Human Services. Led by Hoda Heidari, an assistant professor in the Machine Learning Department (MLD) and Institute for Software Research, researchers in MLD and the Heinz College of Information Systems and Public Policy will help translate fairness goals in public policy into computationally tractable measures. They will focus on factors along the development life cycle, from data collection through evaluation of tools, to identify sources of unfair outcomes in systems related to education, child welfare and justice.

Quantifying and Mitigating Disparities in Language Technologies. Led by Graham Neubig, an associate professor in the Language Technologies Institute (LTI), researchers in the LTI, HCII and George Mason University will develop methods to improve the ability of computer systems to understand the language of a wider variety of people. They will address variations in dialect, vocabulary and speech mechanics that bedevil today's smart speakers, conversational agents and similar technologies.

"We are excited to see NSF select an incredibly talented group of researchers whose research efforts are informed by a multiplicity of perspectives," said Prem Natarajan, vice president in Amazon's Alexa unit. "As AI technologies become more prevalent in our daily lives, AI fairness is an increasingly important area of scientific endeavor."

CMU Robotics Alum Leads Development of Critical Landing Technology

Computer Vision System Will Enable Safe Martian Landing for NASA's Perseverance Rover

Byron Spice

"LVS Valid" The message would sound cryptic to most people, but for Andrew Johnson, a principal robotics system engineer at NASA's Jet Propulsion Laboratory, receiving it from Mars on Thursday will mean everything. It will mean that the lander vision system his team developed worked properly and that NASA's Perseverance rover is one step closer to landing safely on the Red Planet. Johnson, who has worked at JPL since earning his Ph.D. from Carnegie Mellon University's Robotics Institute in 1997, has spent more than eight years developing the LVS. It is critical to successfully landing Perseverance within the rugged expanse of Jezero Crater, where it will gather rock and soil samples in a search for microbial life. Computer vision will play an unprecedented role in the landing, ensuring that the rover avoids such obstacles as boulder fields, dunes and crater walls in the final seconds of its seven-month journey to Mars. "The landing will be a huge milestone for me and my team," Johnson said. "All of our development work will culminate in the landing, particularly the last 60 seconds." Johnson studied computer vision at CMU under the tutelage of Martial Hebert, now dean of the School of Computer Science. It's a technology that he helped NASA employ on the Mars Exploration Rovers mission in 2004. In that case, though, computer vision was used to estimate motion, not determine the craft's position on a map. Determining position is much harder, he said, and is essential to the Mars 2020 Rover mission. Mission scientists want the rover to explore what was an ancient river delta, collecting samples that will later be returned to Earth. But that terrain also is treacherous for landing a spacecraft, which necessitated a new system for landing places that previously were inaccessible. 
After Perseverance enters the Martian atmosphere at almost 12,500 miles an hour, it will deploy a parachute to slow its descent and the LVS will begin taking photos, matching the images with orbital maps of Jezero Crater. This Terrain Relative Navigation System will become critical as the craft nears the ground and jettisons the parachute. During the rover's powered descent to the surface, the algorithms and software in the spacecraft will divert the landing as necessary to avoid any hazards. In addition to software, the system required the team to design a special, high-speed computer. Johnson explained that space computers are rugged and built to withstand the harmful effects of radiation, but run slowly relative to a typical PC back on Earth. The new computer vision system, however, requires a computer that can process images in real time. JPL extensively tested the system, including test flights in 2014 on a vertical takeoff and landing rocket in the Mojave Desert. But uncertainty necessarily remains, Johnson said. "We can't fully test these systems until we get to Mars and they have to work perfectly when they do," he added. He will be at Mission Control for the landing, where he will monitor the temperature of a camera. But the landing itself has to be hands off: the descent takes about seven minutes, but it takes more than 10 minutes for a radio signal to reach Earth from Mars. "The rover has essentially landed by the time we get the signal that it has entered the atmosphere," he said. And thus, after so much work and preparation, he can only await the message, "LVS Valid."
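The image-to-map matching described above can be illustrated, very loosely, as a brute-force template search: slide a small descent-camera "patch" over an orbital "map" and pick the offset with the smallest squared difference. The grids, the patch and the scoring below are all invented for illustration; the real LVS matches landmarks in flight imagery against orbital maps under hard real-time constraints, which is why it needed a special high-speed computer.

```python
# Toy map-matching sketch: exhaustively score every placement of a small
# patch against a grid "orbital map" and return the best-fitting offset.
orbital_map = [
    [0, 0, 1, 2, 0],
    [0, 3, 5, 4, 0],
    [1, 4, 9, 6, 1],
    [0, 2, 5, 3, 0],
    [0, 0, 1, 0, 0],
]
patch = [
    [5, 4],
    [9, 6],
]

def match(grid, small):
    ph, pw = len(small), len(small[0])
    best_err, best_pos = float("inf"), None
    for r in range(len(grid) - ph + 1):
        for c in range(len(grid[0]) - pw + 1):
            # Sum of squared differences between the patch and this placement.
            err = sum(
                (grid[r + i][c + j] - small[i][j]) ** 2
                for i in range(ph)
                for j in range(pw)
            )
            if err < best_err:
                best_err, best_pos = err, (r, c)
    return best_pos, best_err

print(match(orbital_map, patch))  # ((1, 2), 0): exact fit at row 1, col 2
```

A real system must also cope with lighting changes, perspective distortion and sensor noise, which is what separates this sketch from flight software.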

Pittsburgh Team Wins Shape of Health Competition

Video Game Aims To Boost Physical Activity of Young Girls

Karen Harlan

A new game developed by Jessica Hammer of the Human-Computer Interaction Institute and Melissa Kalarchian of Duquesne University won first prize in a competition sponsored by the U.S. Department of Health and Human Services to create a video game that promotes physical activity and weight control.

The game, called Frolic, is designed to inspire girls with inclusive and active playtime and involve parents in supporting their daughters as they develop healthy habits. It is now available for free in the Apple App Store.

"Play can serve as a great way to boost physical activity and carries additional benefits for girls, such as socialization," said Kalarchian, associate dean for research and professor in the Duquesne University School of Nursing. "By helping girls ages 7 to 12 to become more active, Frolic can help them to form healthy habits to carry with them into adulthood."

The Frolic app initiates time for play by sending a notification to a parent's phone. If it is a good time to play, the child can then input some basic info about her surroundings, including whether she will be playing indoors or outdoors, the size of the space available, if she has any friends present, and if so, their abilities to move quickly.

Answering questions like "How active do you want to get?" helps the girls to think about the consequences different types of aerobic or strength training exercises can have on their bodies.

The basic situational data gathered each time the girl is able to play helps Frolic to recommend a few game ideas appropriate for her situation. Each game recommendation comes with step-by-step illustrated instructions in order to support girls of all abilities and enable everyone to have a great play experience.

In addition to inviting girls to participate in active playtime, Frolic is also designed to encourage parents to support their girls' healthy habits. Research shows that parents are less likely to encourage their daughters to be physically active when compared to the encouragement they show their sons. The app shows parents their daughter's activity data and encourages them to have productive conversations with their girls about their physical activity.

Frolic was one of ten entries to clear the first round of competition in the Shape of Health competition and advance to an in-person presentation in front of a panel of judges in Washington, D.C.

"We gave a presentation and showed some demo clips of what we made so far. Then we had a really great discussion with the challenge team — we actually went 20 minutes over our allotted slot because the conversation was so vigorous and exciting," said Hammer, the Thomas and Lydia Moran Assistant Professor of Learning Science in the HCII and the Entertainment Technology Center. "They gave us a lot of great suggestions to incorporate into the game."

"We're confident it's unlike anything currently available and excited to share it with girls and parents," Kalarchian said.

Workshop Sparks State Initiatives in AI Education

Byron Spice

A two-day virtual workshop organized by the AI4K12 Initiative involving education leaders from across the country has helped spark new K-12 artificial intelligence efforts in several states, said David Touretzky, research professor in computer science. AI4K12 is developing national guidelines for teaching AI in elementary and secondary schools as a joint project of the Association for the Advancement of Artificial Intelligence (AAAI) and the Computer Science Teachers Association (CSTA), with funding from the National Science Foundation. Touretzky leads the initiative along with Christina Gardner-McCune of the University of Florida and Deborah Seehorn of the CSTA. "Since 2017 there has been a worldwide realization that we should be teaching children about artificial intelligence," Touretzky said. "We need to prepare our youth for the huge societal changes coming from technologies such as intelligent assistants and self-driving cars. At the same time, we should be encouraging students to pursue careers in these areas to help meet national workforce needs. China, the UK and the EU are already implementing AI education plans." Several states are already updating their computing education standards to include AI, creating new AI courses and providing opportunities for teachers to become AI-fluent, he noted. The Jan. 28-29 workshop prompted several states to start working on their AI education plans or strengthen their K-12 AI leadership team, he added.

SCS Celebrates Simon, Alumni Research Professorships

Byron Spice

Artur Dubrawski will receive the Alumni Research Professorship of Computer Science and Carleton Kingsford will receive the Herbert A. Simon Professorship of Computer Science in a virtual ceremony at 5:30 p.m. on Thursday, Feb. 4. The usual ceremonies for these and other new professorships were delayed last year by the pandemic and have now been modified as virtual events. Dubrawski joined the Robotics Institute's Auton Lab in 2003, where he works on a range of applied artificial intelligence endeavors. In 2006, he was named director of the lab, where he had been a Fulbright Scholar in 1995-96. Prior to joining CMU, he was involved in several entrepreneurial efforts, including founding a company for integrating and deploying computerized control systems. He also served as chief technology officer for Aethon, developers of an autonomous hospital delivery robot called Tug. Dubrawski's projects in the Auton Lab have included using artificial intelligence to improve the maintenance of military aircraft, monitoring food safety, developing ad-tracking software used to combat sex trafficking, providing predictive analytics at the bedside of the critically ill, and the ongoing development of automated trauma care in the field.

Carnegie Mellon AI Collaborates With Pentagon To Improve Helicopter Reliability

Machine Learning Identifies Precursors of Engine Failure

Byron Spice

Researchers at Carnegie Mellon University, working with the Pentagon's Joint Artificial Intelligence Center (JAIC), have used artificial intelligence methods to help improve the reliability and availability of helicopters used by the U.S. Army's 160th Special Operations Aviation Regiment (SOAR). The Predictive Maintenance (PMx) project, which was discussed Jan. 26 during an Armed Services Committees staff briefing, employs machine learning to identify events and conditions that indicate an engine could potentially fail within a few flight hours, said Artur Dubrawski, Alumni Research Professor of Computer Science at CMU and director of the Auton Lab. Engine overheating during startup, for instance, is one early indicator of impending failure, Dubrawski said. But the model he, senior project scientist Kyle Miller and other lab members developed factored in a large number of other variables, including engine pressures, temperatures and speed. Machine learning algorithms identified patterns in this mountain of data to find combinations of variables that could be effective as early warning signs. Members of SOAR, also known as the Night Stalkers, support special operations forces, as well as general purpose forces, and conduct missions that demand a high level of performance in often harsh conditions. "When they need to shut down the engines during a mission, they want to be certain they will start again when they flip the switch," Dubrawski said. For the past 10 years, the Auton Lab has developed expertise in predictive maintenance of military aircraft — first with the U.S. Air Force's aging F-16 fighters and later with the U.S. Navy's Osprey tilt-rotor aircraft. By analyzing large amounts of flight data, maintenance records and other information sources, Dubrawski and his colleagues have been able to identify components that are likely to cause trouble. This helps maintenance crews avoid crises that might otherwise ground large numbers of planes. 
It has been estimated that the F-16 program had the potential to save the Air Force more than $100 million a year. For PMx, the haystacks of data that the CMU researchers sifted contained relatively few needles — the aircraft already are meticulously maintained, so the vast bulk of the flight and maintenance data reflected healthy operating conditions. Machine learning techniques generally depend on lots and lots of data, so reliably finding sparse indicators of impending failure was a challenge, Dubrawski said. This required a number of workarounds, such as building models that already incorporated physical principles of the engines. "It was fun to get back to my roots," said Dubrawski, whose original training is in aeronautical engineering. The JAIC project ended in September, but the Auton Lab continues to work on the predictive maintenance problem as part of the Army AI Task Force headquartered at Carnegie Mellon.
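As a purely hypothetical illustration of the kind of combined early-warning rule such models can surface, consider a check that flags an engine only when two startup indicators co-occur. The variable names and threshold values below are invented; the actual PMx models learn combinations of engine pressures, temperatures and speeds from fleet data rather than relying on hand-set limits.

```python
# Hypothetical early-warning rule combining two startup indicators.
# All names and thresholds here are invented for illustration only.

def early_warning(startup_temp_c, oil_pressure_psi):
    overheating = startup_temp_c > 870    # hypothetical temperature limit
    low_pressure = oil_pressure_psi < 40  # hypothetical pressure limit
    return overheating and low_pressure   # flag only when both co-occur

readings = [
    {"engine": "A", "startup_temp_c": 845, "oil_pressure_psi": 55},
    {"engine": "B", "startup_temp_c": 905, "oil_pressure_psi": 35},
]
flagged = [r["engine"] for r in readings
           if early_warning(r["startup_temp_c"], r["oil_pressure_psi"])]
print(flagged)  # ['B']
```

The value of the learned models is precisely that they find such conjunctions among many variables, where no single reading alone predicts failure.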

CMU Students Train AI To Write Book of Limericks

Byron Spice

CMU students all get their kicks
By building apps that attract mass clicks
So they teamed up in class
Built an AI with sass
That wrote them a book full of lim'ricks.

Pardon the doggerel, but what else would be appropriate when Carnegie Mellon University students create an artificial intelligence for writing poetry? Their digital Shakespeare was a project last semester in the School of Computer Science's Introduction to Deep Learning course. The instructor, Rita Singh, associate research professor in the Language Technologies Institute, said she suggested the project as a way for students to explore how AI might capture elements of artistic expression that are hard to quantify. "What makes a few lines of English written by Tennyson 'poetry' and a 'masterpiece' while the same number of lines written by someone else following the same pattern/rule/rhyme turn out to be perfectly mundane and mediocre?" she said. Mitch Fogelson, a Ph.D. student in mechanical engineering, said he and his fellow students — Xinkai Chen, Qifei Dong, Christopher Dare and Tony Qin — opted to focus their AI on limericks because the form has a fixed AABBA rhyming structure. They also had access to a database of 90,000 limericks that they could use to train their AI. The team used an open-source language model called GPT-2, which was developed by OpenAI and had previously been used to produce poetry. "It generated a virtually endless stream of poetry, thousands of poems," Fogelson recalled. "The quality overall wasn't amazing." In fact, the early efforts included some really weird stuff — sort of limerick conversations. The AI also didn't always produce neat, five-line limericks. Some were just single lines. The bulk of the project for the students was developing a computational method for wading through this sea of poetry and plucking out the relative few samples worth reading, Fogelson said.

They created an algorithm that included constraints for rhyme and rhythm and, by monitoring whether words occurred in the vicinity of related words, looked for poems that made some sense. Among those selected by the algorithm:

As a kid, I would know without thinking
That the limericks were just about drinking
But I'd guess it's a sin
To be somewhat akin
To the innocent beverage, I'm thinking...

Another:

When an orchestra plays a soft part
The accord of each music is smart
But the music is slow
And they never can know
All the music is only a start

The students then manually selected the 100 to 200 limericks that merited publication. The result is the first AI-generated book of limericks, "For You, Humans," now available for sale on Amazon. Fogelson said it was a useful exercise, demonstrating how data-driven models might support creative efforts in the future. "The AI does not, to our knowledge, impart meaning whilst generating poems, but the neurons in our brains were nevertheless able to draw connections between the words," he added. Singh said the book is just a beginning. "The project will continue this semester with other student teams, and go on until what the AI produces rivals human creativity in poetry," she said. "We want to see how far we can go." For now, she said, even the AI seems to know that more work is needed:

I'm not guilty, am I? On reflection?
That my knowledge, of meter negition
Is that writing is shoddy
I'll build on a noddy...
My brain barely works in perfection.
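The rhyme constraint the students encoded can be sketched as a simple AABBA check. The suffix-matching test below is a rough stand-in of our own devising: the students' actual algorithm also scored rhythm and word co-occurrence, and real rhyme detection would compare pronunciations rather than spellings.

```python
import re

# Minimal AABBA rhyme-scheme check: two lines "rhyme" here if their
# last words share a three-letter suffix. This is a crude approximation;
# proper rhyme detection would compare phoneme sequences.

def last_word(line):
    return re.findall(r"[a-z']+", line.lower())[-1]

def rhymes(a, b, k=3):
    return last_word(a)[-k:] == last_word(b)[-k:]

def is_aabba(lines):
    if len(lines) != 5:
        return False
    a1, a2, b1, b2, a3 = lines
    return rhymes(a1, a2) and rhymes(a1, a3) and rhymes(b1, b2)

limerick = [
    "CMU students all get their kicks",
    "By building apps that attract mass clicks",
    "So they teamed up in class",
    "Built an AI with sass",
    "That wrote them a book full of lim'ricks",
]
print(is_aabba(limerick))  # True
```

A filter like this discards malformed candidates (single lines, wrong rhyme schemes) so that only plausible limericks reach human review.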