Wednesday, April 24

Campus Queries: What does artificial intelligence do and how does it work?


(Andrea Grigsby/Daily Bruin Illustrations Director)


Campus Queries is a series in which Daily Bruin readers and staff present science-related questions for UCLA professors and experts to answer.

Q: What is artificial intelligence?

A: Artificial intelligence is the use of a computer to emulate tasks that come easily to humans, said Judea Pearl, a researcher and UCLA computer science professor.

Home assistants like Amazon’s Alexa talk to their users the way people talk to each other, while many media portrayals of AI, like Skynet from the “Terminator” franchise, depict it as bent on dominating humanity. Despite the range of intelligence AI displays in the real world and in fiction, the larger question of what makes a computer truly intelligent remains.

Pearl said intelligence is the ability to not only ask and answer questions the way Siri can, but also to establish cause and effect relationships.

“For example, we understand how TV works. It means not only do we know how to turn the knob and get to the right channel but also we can repair the television station if something goes wrong,” Pearl said. “We know how to repair things and we can predict the result of our actions, which gives us the illusion that we are in control.”

One component of intelligence involves interpreting what we see and talking with other people, Pearl said.

Tesla’s self-driving cars are a prominent example of AI, as the software must account for everything a human driver does in order to avoid accidents. Amazon and Netflix use AI to recommend products or shows to their customers based on previous purchases or viewing history.

Song-Chun Zhu, a computer science and statistics professor, said he and his team in the Center for Vision, Cognition, Learning, and Autonomy at UCLA focus on developing robots that can carry out ordinary human tasks, such as opening medicine bottles and talking with humans.

“The other things we do that are more real-world application include trying to use this (software) to have a dialogue with humans,” Zhu said. “(A) chatbot is usually fitting data but doesn’t understand the world. We want to make it understand the world and chat with people.”

The concept of fitting data ties in with machine learning – the practice of training a program to look at a data set and find associations within it. For example, a program might examine students’ sleep schedules alongside their midterm scores. If the two variables are associated, the program will predict that a student’s scores change as their sleep habits change.
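As a rough illustration of that kind of data fitting, the sketch below uses invented numbers – not data from any actual study – to measure the association between sleep and scores and fit a line through it:

```python
import numpy as np

# Invented data: hours of sleep before a midterm and the resulting score.
sleep_hours = np.array([5.0, 6.5, 7.0, 7.5, 8.0, 6.0, 9.0, 4.5])
midterm_scores = np.array([62, 74, 78, 81, 85, 70, 88, 58])

# Pearson correlation: +1 means the variables rise together, -1 means one
# falls as the other rises, 0 means no linear association.
r = np.corrcoef(sleep_hours, midterm_scores)[0, 1]

# A straight-line fit is the simplest way a program "predicts" one
# variable from the other.
slope, intercept = np.polyfit(sleep_hours, midterm_scores, 1)

print(f"correlation r = {r:.2f}")
print(f"predicted score after 7 hours of sleep: {slope * 7 + intercept:.1f}")
```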

Pearl said it is mathematically impossible to infer the causal relationship between two variables from just correlation. However, developing AI tools to infer cause and effect relationships is critical for understanding the world, he added.
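Pearl’s point can be seen in a toy simulation. In the sketch below – again with invented numbers – a hidden common cause, such as stress, drives both sleep and scores, so the two correlate even though changing one would not change the other:

```python
import numpy as np

rng = np.random.default_rng(0)
stress = rng.normal(size=10_000)                  # unobserved common cause

sleep = -0.8 * stress + rng.normal(size=10_000)   # stress reduces sleep
scores = -0.8 * stress + rng.normal(size=10_000)  # stress lowers scores;
                                                  # sleep itself has no
                                                  # effect on scores here

# The observed data still show a clear association between sleep and
# scores, even though, by construction, changing a student's sleep
# would not change their scores.
print(np.corrcoef(sleep, scores)[0, 1])           # roughly +0.4
```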

“Controlling the world is helping prevent undesirable effects,” Pearl said. “This is why it’s so critical to have cause-effect relationships.”

AI has several subfields, including machine learning, Zhu said. Many people think of AI as a combination of special algorithms and powerful computer processors, Zhu added.

Big data – the availability of large, previously established data sets, such as the millions of pictures of dogs on Google Images – has combined with those algorithms and strong computers to fuel excitement about the potential of AI, Zhu added.

Megha Ilango, a third-year computer science student and president of the Association for Computing Machinery’s artificial intelligence division, said she believes students often have the misconception that artificial intelligence and machine learning are the same concept.

“Machine learning is a very hot topic in computer science, which is why they are used synonymously,” Ilango said. “Machine learning is a subset of AI.”

Many are not sure what the limits of AI are, Ilango said.

“(For example), we still don’t have a good way of making up emotional intelligence,” Ilango said.

Software developers constantly update technology to create more lifelike AI for daily use. However personable Siri may be, though, many believe virtual assistants that rely on well-defined algorithms must do more to be considered truly intelligent. Pearl said artificial intelligence, like humans, should be able to predict how the present would look if certain events in the past had been different.

“The computer must be able to predict the effect of actions, as well as answer questions about what things would look like if certain events were undone, like … if Hillary (Clinton) was elected president,” Pearl said.
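In Pearl’s structural causal model framework, such a counterfactual question is answered in three steps: infer the hidden background conditions from what actually happened, alter the event in question, and replay the model. The sketch below walks through those steps with a made-up one-equation model; it is an illustration of the idea, not Pearl’s own code:

```python
# Counterfactual reasoning in a toy structural causal model (the equation
# is invented for illustration). Model: Y = 2*X + U, where U stands for
# background conditions we never observe directly.

x_observed, y_observed = 1.0, 5.0

# Step 1 (abduction): infer the hidden background from what happened.
u = y_observed - 2 * x_observed            # u = 3

# Step 2 (action): "undo" the past by forcing X to a different value.
x_counterfactual = 4.0

# Step 3 (prediction): recompute Y under the altered past, same background.
y_counterfactual = 2 * x_counterfactual + u
print(y_counterfactual)                    # 11.0 -- what Y would have been
```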

The development of AI has raised many concerns in the legal field, said Jennifer Mnookin, dean of the School of Law.

One of the big legal debates is whether algorithms should be transparent and publicly available for monitoring, said Mnookin, who is also the co-director of the Program on Understanding Law, Science & Evidence at UCLA Law, which explores the ways in which new technologies, law and science are linked.

Mnookin added that the challenge of monitoring arises from the ability of AI and machine learning systems to create connections or develop forms of knowledge that may not have been fully understood by their programmers.

“If machine learning algorithms are deciding who should get home loans and who shouldn’t, what do we do if the algorithm proves to have biases or turns out to be racist?” Mnookin said. “And if we don’t have transparency about what the algorithms are doing, how can we really evaluate the legitimacy of the decisions that are being made?”

Mnookin said socioeconomic biases can surface in AI when the correlations a system forms are not carefully vetted.

For example, a machine learning algorithm could observe that people of one racial identity are, on average, poorer than those of another and begin to associate that socioeconomic status with the racial identity itself, Mnookin said.

“It begins to treat that racial identity as if it means the same thing as a particular socioeconomic status when clearly that’s not the case,” Mnookin said. “Really, they may be correlated because of racism in the world, but that doesn’t mean that being of any particular race somehow means that it’s appropriate that you have less economic resources.”

When developing AI, it is important to understand how it works in order to prevent problems with future technologies, Pearl said.

“There should be a concerted effort by scholars involving computer scientists, philosophers, moralists and social scientists to understand what kind of genie we are producing here and how we can possibly understand what it means to control it,” Pearl said. “We don’t have any experience with this kind of adventure so we should worry about it.”

Oruganti was the 2021-2022 city and crime editor. He was previously the 2020-2021 Enterprise editor and a News staff writer in the City & Crime and Science & Health beats in 2020. He was a fourth-year cognitive science student at UCLA.

