Monday, April 29

Deep Dive: ChatGPT, Part 2



How does ChatGPT impact the humanities and higher education as a whole? In the second and final installment of this miniseries, Podcasts contributor Phoebe Brous hands off the questioning to fellow contributors Kaitlyn Esperon and Sonia Wong. The pair sits down with third-year computer science and philosophy student Cole Hume, design media arts and English professor Daniel Snelson, and political science professor and chair Davide Panagia.

Kaitlyn Esperon: Hey everyone, my name is Kaitlyn Esperon, and I’m a Podcast contributor for the Daily Bruin.

Sonia Wong: I’m Sonia Wong, assistant editor for the Quad, News writer and Podcast contributor for the Daily Bruin. Welcome to Deep Dive, a Daily Bruin explanatory podcast that investigates national, UC-wide and campus-wide events affecting Bruins. In the last episode of our two-part series, we discussed the impact of ChatGPT on higher education and its potential societal impacts, with experts explaining the differences between ChatGPT and its predecessors, as well as concerns about ChatGPT’s development and innovation in the AI market. In this second part, we explore the current and future uses of ChatGPT as AI continues to permeate academia and art.

KE: We spoke with Cole Hume, a computer science and philosophy student who is also president of the AI Robotics Ethics Society. Then we spoke with Daniel Snelson, a design media arts and English professor who also serves as faculty in the digital humanities program and with the Game Lab. Lastly, we spoke with Davide Panagia, political science professor and chair. As a note, these interviews were conducted in April of 2023. During the first interview, Cole Hume talked with us about how he personally uses ChatGPT, along with explaining what he believes college professors should understand about the platform. Hope you enjoy.

Cole Hume: My name is Cole Hume, and I’m a third-year philosophy and computer science student here at UCLA. I am the president of the AI Robotics and Ethics Society here at UCLA.

KE: Can you explain more what the AI Robotics and Ethics Society is?

CH: Absolutely. So we go by AIRES. And our goal is to educate both future AI leaders – people who are going to be at the forefront of helping implement these technologies, develop them, figure out the ways that they integrate into particular businesses, or shape policy regarding them – as well as the broader UCLA population, which is sure to be impacted by these technologies, no matter their profession.

KE: So what would you like professors at UCLA to know about AI in general, but also ChatGPT? Because I know there’s been a lot of pushback, especially from professors, and maybe a lot of fear as well about what that is going to mean for their classes.

CH: Yeah, great question. I mean, I think it’s a very, very understandable fear. Ultimately, most professors come in with the objective of optimizing for student learning. And typically, how do we do that? With assignments, with a lot of tasks that can typically be done by any student in a certain amount of time. But they need to get done, and they require the person to actually process and look over the information in some kind of meaningful way. And it seems like if you can just throw a prompt into ChatGPT, it will completely undermine that. So I completely respect the fear. And I think that in the case of programming, with a lot of the lower-division computer science classes, you could just plug your homework into Chegg – you throw it all into Chegg and basically get 100% on that assignment. Yet you kind of do need some foundational syntax to be a good programmer. I think the definition of what’s going to make a great programmer might change over the coming years. But ultimately, understanding what you’re looking at and doing some assignments is helpful. And then in the case of writers, I mean, I think writing is not just honing a craft and honing your capability to form and structure sentences – it’s honing your creativity. And usually, if you’re writing on particularly creative or intellectual topics, you’re honing your understanding of things via the written word. So I think there is a lot of danger in undermining really simple assignments. With that being said, I think it’s really, really hard to regulate in the classroom. I don’t think that professors can just ignore it and try to say, “We’re abolishing ChatGPT, you can’t use it in the classroom,” because ultimately, there’s still going to be a select few that will. And that select few is probably going to be the majority. And you’re basically punishing the people with integrity at that point. Ultimately, I think all of our professions are going to be integrating these technologies in some interesting way. So instead of looking at these technologies as a danger to these little assignments, look at them as an opportunity to let these kids hone an additional skill set that they’re going to be able to leverage to be the best in their profession. In the coming years, I would just encourage all professors to do the research as much as they can and try to recognize the strengths and weaknesses of these technologies. One of the primary weaknesses that a lot of us have recognized is its inability to get the facts right, because it’s a large language model. And it’s kind of a massive bullshitter at times – it just doesn’t really care about truth or falsity. It’s just trying to give you what you want. So if you ask for something – let’s say you ask for a particular article, and maybe you add an additional parameter, you throw in an argument and basically what you want in that prompt – it will try to give that to you even if that’s just flagrantly not a thing, even if it is not grounded in reality. So it doesn’t always present accurate representations of reality. And that’s, I think, where one of the big weaknesses is, and professors should realize that. I mean, that’s a good thing to police for – if you’re having kids write research papers, the facts should be there. And then there’s recognizing the strength, though, of allowing kids to build skeletons really quickly.
As a philosophy student, the more I’ve used ChatGPT, the more I’ve recognized both fallibility and weakness in its writing ability. Ultimately, a philosophy essay has a lot of unique components to it, and ChatGPT usually does not actually get to a deeper meaning in the philosophical texts – it’s more so just summation. So it’s something that can be supportive: for me, using ChatGPT as a skeleton, like for my outlines, can actually save me time in that realm. But it’s not helpful in helping me process things. And if a professor actually were to preface an assignment and tell you, “Oh, this is how you maybe could use a tool like this,” it might lead to the kid actually using it to save time – to not just be doing busy work for the sake of busy work – and then reserving for their own brain the stuff that the tool can’t be used for. And I think professors have to really question, for any assignment they’re giving that can be completely done by ChatGPT, the efficacy of that assignment. Is there really that much meaning here if this tool can easily do it? What’s really the reason why I’m having them do it, and how could it be supplemented in some separate way? Because, yeah, once more, I think it’s really, really difficult to police at this point in time. And ultimately, the way it’s probably going to start getting policed is with other AI that’s good at recognizing AI chat. But right now, professors can’t really recognize whether I wrote the paper or ChatGPT wrote the paper, unless it’s on a particularly dense topic that Chat’s just going to do a really poor job at.

KE: I was hoping if you feel comfortable, you could explain more in depth how you personally use ChatGPT?

CH: Yeah, absolutely. So I use it for virtually all my essays in some way. I think it’s oftentimes for skeletons. So I’d say that if I get a five-page paper assigned on Kantian ethics or something, I might throw the prompt into ChatGPT. And I’ll read that and be like, “Oh, this is miserable.” But it did actually say one thing regarding one prong of the prompt that I disagree with, and then maybe it makes me try to double-click on how I understand that topic. And maybe I actually realize that my understanding of that topic might have been flawed. So it’s almost like a peer to play around with. It’s a collaborator in that sense. The dangerous case, though, is if I have a simple reading response or something on a very, very common text that ChatGPT would be familiar with – I might use ChatGPT in a little bit less thoughtful way in those cases, if I’m on a real time crunch. But I would encourage people not to do that. And I think for a podcast regarding my academic pursuits, I probably shouldn’t say that. I do prefer doing assignments myself, but I just think ultimately, at the end of the day, it’s best used as a collaborator, and I think it’s best used with iteration. If any student is trying to figure out how to best utilize it, I would say, try to think of it as a peer who is stupider than you but is really, really quick with getting shit done. There’s a lot of positives there, but also a lot of dangers if you just take it at its word.

KE: So I guess you’ve already touched on this quite a bit. But do you have any advice outside of what you’ve already said for your peers approaching ChatGPT?

CH: Again, I would just really emphasize iteration and not glorification. Don’t turn to the tool and assume anything more than what you’d assume if you had a random classmate try to tell you about a particular topic. Because with Google search, we oftentimes see the top thing on Google search and think, “Oh, that’s just fact.” I mean, we might meet it with a little bit of doubt. But when we have a random person we don’t know, some complete stranger, tell us something, we might actually greet it with a bit more doubt. I think with ChatGPT, there’s kind of a line to toe, where we should not greet it with the same amount of doubt that we maybe greet a random stranger with, because it is trying to integrate some truth from the internet and what it’s trained on. But we definitely should not greet it with the same amount of confidence that we do with Google. That’s where the real dangers will occur. And in interpreting those outputs, just always be more interested in the structure that it’s presenting rather than the content. For coding, the content can be pretty awesome. But again, it doesn’t get context, so you have to do some deep thinking yourself about the actual application within your own code, typically. And then just one other thing that I wanted to touch on regarding how to best use this tool: use it as a tutor. The best use case I’ve definitely seen for myself has been going down a true ChatGPT rabbit hole, where I might ask it, “Explain linear algebra to me like I’m 11 years old,” and it will give me something that’s actually extremely digestible. And then just continuing down, almost like a drop-down list, learning more and more and allowing myself to branch out my knowledge base. And I think a lot of professors won’t give you that time in office hours to ask those stupid questions. But ChatGPT is really fricking patient. So I think it’s a great tutor in that way. Yeah, I think that’s one of the best things ChatGPT can be used for – something that will just entertain everything that you’re going to ask and keep on going with you.
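(Editor’s note: The tutoring loop Hume describes is easy to sketch programmatically. Below is a minimal, hypothetical example using the OpenAI Python client – the model name and setup are our assumptions, not anything Hume specified – showing how keeping the full conversation history lets each follow-up question build on the last answer.)

```python
# A sketch of the "rabbit hole" tutoring loop, assuming the OpenAI Python
# client (openai>=1.0) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user",
            "content": "Explain linear algebra to me like I'm 11 years old."}]

while True:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would work
        messages=history,     # sending the full history keeps the context
    )
    answer = reply.choices[0].message.content
    print(answer)
    history.append({"role": "assistant", "content": answer})
    follow_up = input("Follow-up question (blank to stop): ")
    if not follow_up:
        break
    history.append({"role": "user", "content": follow_up})
```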

KE: Thank you so much. This has really been invaluable. I appreciate your perspective.

SW: That was Cole Hume, giving us a student perspective about the possibilities and pitfalls of ChatGPT. Then we talked to professor Daniel Snelson, who gave us an overview of applications of AI in literature creation, fears and shortcomings of GPT’s current data collection process, and some hopes about the future of AI in art more broadly.

Daniel Snelson: My name is Daniel Scott Snelson. I’m an assistant professor in the Department of English and the Department of Design Media Arts, where I also serve as faculty in the digital humanities program and with the Game Lab. He/him pronouns, and delighted to be here to talk about ChatGPT – I’ve been playing with it a lot.

SW: Awesome. So could you start by talking a little bit about your experience with ChatGPT, personal as well as professional?

DS: Yeah. So I’ve been tracking AI and generative text for a long time – the precursors to ChatGPT. I was just on a podcast talking with Al Filreis at the Kelly Writers House about this thing called the Dada poem generator. What it does is it takes a very simple Dada exercise, which is to cut up a newspaper, put all the clippings in a hat, and then pull out the words one by one to recompose a poem. Of course, this is a perfect thing for a computer to do. So somebody put this up in the early ’90s. And I think it serves as a kind of good core principle for the kinds of things we do with ChatGPT. On the Dada poetry app, you have an input field and a button that just says “Dada,” which I love – you Dada-ize the text, and then it generates an output text. That’s more or less what ChatGPT does.
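(Editor’s note: The cut-up exercise Snelson describes reduces to a few lines of code. Here is a minimal sketch in Python – an illustration of the method, not the actual ’90s web app.)

```python
import random

def dadaize(text: str) -> str:
    """The Dada newspaper exercise: cut the text into words,
    put them 'in a hat,' and draw them out in random order."""
    words = text.split()       # cut up the newspaper
    random.shuffle(words)      # shake the hat
    return " ".join(words)     # recompose the poem

print(dadaize("to make a dadaist poem take a newspaper and some scissors"))
```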

SW: And as you’ve spoken about before, there have been other forms, or similar forms, of text-generating technology. What’s so special about ChatGPT? Why has it caught the attention of so many people? Or why has it caused so much concern, even in the education sphere?

DS: Well, the real breakthrough is that it’s just so convincing, it’s increasingly hard to tell if some of the texts that ChatGPT and other large language models produce were made, in fact, by machines or by humans. They really tell us more about ourselves. And we’re starting to see ourselves reflected with more clarity than we ever have. And I think that there’s something kind of horrifying about seeing yourself reflected.

SW: And I have two follow-up questions. The first one is about that reflection – how ChatGPT gathers information from humans on the internet. How accurate or how effective do you think ChatGPT is, and, in a general sense, how far do you think we are from attaining that kind of perfect human replication? And the second question: I know there have been recent technologies specifically targeting ChatGPT, like GPT writing tools, and just in general, what do you think about them?

DS: I think the first thing I want to say concerns algorithmic bias. And I think this is one of the most pressing, concerning and troubling issues with these large language models that do a kind of deep learning that we don’t fully understand yet. But we do know where their data comes from, right? All the data used doesn’t come from nowhere. It comes from humans – it comes from humans on the internet, who carry an attendant set of biases and prejudices, of racism and misogyny. So these tools, which we see are increasingly pervasive and increasingly important, will be structuring and restructuring knowledge work and content production. So unless we’re experimenting with them, unless we’re playing with them, we won’t understand how these biases are coming out. On the second part of your question, on the importance and the pervasiveness, I guess, of new generative methods of text and image and sound production: I think it goes back to a certain idea that as these things get better at replicating humans, there’s a concern that the humans are no longer needed. And I think that some of these concerns are a bit misguided, right? Like, you will always need a kind of human to point at things, to decipher things, to do the kind of rich pattern recognition that humans have always been so good at. So we need a critical perspective on these things to get at where their biases and failures are, to find out where the errors are.

SW: You said definitely there still needs to be somebody there. What do you see for the future development for ChatGPT? What are some productive modes of collaboration that we can have as humans from all sorts of disciplines with ChatGPT?

DS: I’m really interested in a wide range of artists and poets that are using these tools to generate art – to think with the algorithm, to think with ChatGPT – both to find out what it can do, but also to find out what the role of human creativity is in a technological milieu that’s increasingly determined by these kinds of generative processes. How do we structure language in a way that we can think about its form alongside its content? And so that’s why I think making poetry right now with ChatGPT is really important, because we don’t know how its forms make meaning – we don’t understand the “how” of what it signifies. And investigating that, exploring that, looking to the very limits of its affordances is, I think, one way to come to understand what it does, what it does well and where it fails.

SW: I was wondering what you think about ChatGPT’s possible future contributions to art, and, as general commentary, how it might affect the humanities field as a whole?

DS: I don’t think we know – I think we’re still figuring it out. For me, as a kind of media historian, I like to look back at earlier moments when people didn’t know what to do with the printing press, very worried about what’s going to happen to our manuscripts now that we are printing these words. Or the development of the internet, and the kind of disruption that had on cultural processes. So I think we’ll be faced with a similar kind of question, right? Once some of these forms are freed up from their original purpose, we can think about their aesthetic dimension a little bit more clearly. And once the technology becomes normalized, once it’s ubiquitous – technology has this very sneaky way of fading into the background. There are so many good examples of people who are already working with these tools. I would start on a cautionary note, which is, you know, the kinds of image generators like Midjourney or DALL-E are often trained on very Western image sets. And so I think it should cause us to pause and think about what kinds of databases we’re creating, what kinds of images we’re feeding them and what kinds of inherent presumptions of quality and aesthetic value these things have.

I would start there. There are a lot of writers who are collaborating with GPT – probably more than we’d like to admit, actually. GPT can be a great tool for bouncing ideas off, for having conversations, for testing out certain kinds of constructions. One of the artists who I quite like is K Allado-McDowell, who’s written – I think, at this point – three books in collaboration with AI. The books use this really nice typographic feature where they modulate between two different typefaces, so you know when K is writing and when the bot is writing. And they bleed into each other. And so you get this seamless narrative that’s really kind of a co-written work by an author and an AI in the same interface. That’s one example, and there are many others. I’d shout out Lillian-Yvonne Bertram, who I really love. I’d also like to shout out Allison Parrish, who I think is a phenomenal writer who’s been working with generative systems for a long time.

SW: If you asked me when I was a kid, I never would have imagined that AI could be something involved in writing. But then you learn more about theory, like romanticism or even formalism, and you come to realize that writing is a much more complicated process – and that AI has so much capacity to learn and even pick up on styles, which it replicates so well. Even in GPT, when you talk to it, you realize it starts to emulate your style and talk back to you. And so I was also wondering, how does ChatGPT fit into the schema of, say, academic writing? And have you seen it be used?

DS: Writing emerges from communities and technologies – typewriters, word processors, a whole community of editors. And I think, you know, when you have something that’s like a mass conglomeration of all of human writing on the internet, it brings some of those qualities of writing, which again have become somewhat invisible, back into the foreground, right? And so its normative mode is one that relies on this huge communal database to produce novel texts. I think there’s something really interesting there that I’m interested in following up on.

SW: How do you think ChatGPT fits into the context of higher education, specifically in say, the humanities?

DS: I think that, you know, the core principle of humanities education is to teach critical thinking – to teach about how meaning is made, how to interrogate the world around us and how to develop a critical perspective on things that are otherwise taken for granted. And here, one of the tools we use to do that is the essay. The essay is a really good tool for thought. It really trains you to organize thoughts, marshal evidence, to think about facts, to think about rhetoric, to think about what’s convincing or not convincing. Now that GPT can do that really well, we might have to invent forms that it’s not so good at so that we can continue that teaching paradigm, right? So I think it’s really about teaching, it’s really about presenting the world – and now we live in a world where GPT exists. Pandora has opened this box; it’s not going away, it’s not going back. And so the question is not how do we stop it from interfacing with our essays, from generating fake papers or other forms of plagiarism, but rather, how can we use it to think critically about the world? And from my perspective, how can we use it to think critically about itself? Right – like, how do we use ChatGPT in a way to understand the technological milieu that we find ourselves imbricated in?

SW: So speaking of the future of ChatGPT, I’ve noticed that they’re coming up with newer versions. I don’t know much about ChatGPT, so I’d like for you to explain a little bit. What are the differences between the versions? I know they’re moving from version three to four. What are some updates? And how does that look for the future trajectory as a whole for ChatGPT?

DS: So far, it’s really been a matter of scale. The early GPT-2 text transformer was much smaller than GPT-3, which was fed, you know, a lot more data, but also had a far greater number of parameters, which are effectively the way that bits of data in the collection relate to each other – they’re vectors of connection. They then built on top of GPT-3, and then came GPT-4. And at that point, I think the sophistication of 4 is what led a number of industry professionals – and others, for a variety of reasons which I won’t get into, but again, I’m pretty pessimistic about most of them – to call for a moratorium, like a short pause, because we’re going too fast.
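(Editor’s note: For a rough sense of the scale Snelson mentions, here is a back-of-the-envelope comparison using widely reported parameter counts. GPT-4’s size was never officially disclosed, so it is omitted.)

```python
# Widely reported parameter counts for OpenAI's earlier models.
params = {
    "GPT-2 (2019)": 1.5e9,   # ~1.5 billion parameters
    "GPT-3 (2020)": 175e9,   # ~175 billion parameters
}
ratio = params["GPT-3 (2020)"] / params["GPT-2 (2019)"]
print(f"GPT-3 is roughly {ratio:.0f}x the size of GPT-2")  # ~117x
```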

SW: Do you think there are other forms similar to ChatGPT that are likely going to emerge? How would you conceive of them coming into being?

DS: Yeah, there are a few different ways to answer this. Yes, I think, you know, we’re just at the very beginning of a lot of developments in generative text, image, sound, movies, right? And insofar as we’re training ChatGPT, it’s also training us. It’s training us to expect a certain kind of responsiveness from these interfaces, and we will probably end up paying like $5 a month to have ChatGPT in our Word document, right, or subscribing so that ChatGPT will help us with our taxes, or who knows what. You can see the use cases are kind of endless, which is one exciting thing, but also one of the reasons why there’s such a gold rush in tech corporations right now to harness these kinds of predictive models. But I think that it’s really a time to pause and think: Do we want it to have all of our data? Do we want it to know everything? And where are the limits to that? And who decides? I don’t think it should just be tech corporations that decide what data it can use and how it can use it. So yeah, I think all of these things are part of an ongoing conversation.

SW: I think the ‘who decides’ is such an important question to ask. It’s like how we decide what value the narrative voice has in literature – the same way we do now as it relates to ChatGPT. It’s really important to think about the power we give it and the power we ascribe to the people who decide, which we now know are large corporations. And that can become really scary, because it becomes very disproportionate for anybody that’s using ChatGPT, purchasing from ChatGPT or trying to even develop it in new ways, too. And so the last question I have is: What are you most excited about for the future of ChatGPT or any related tools?

DS: Oh, I don’t know, I don’t know. I feel like I’m physiologically excited. Like it creates a fight-or-flight response in me. It’s scary and there’s a lot going on and we just don’t understand it yet. I think that, you know, as somebody who studies media, who studies digital culture, I’m excited by the challenge of addressing these systems and thinking through them, thinking through their politics, thinking through their effect on the world, thinking through their effects on culture. And, you know, we live in interesting times, which is a damning statement, right? These interesting times are often scary times like when things haven’t been formulated yet. And so as a scholar, there’s a kind of excitement about engaging in that dialogue, trying to think critically. And ideally, at the end of the day, thinking about how we can harness tools, how we can critically evaluate and shape how these tools will continue to develop. They’re not going to go away. They will continue to develop and so I’m excited to play some small role in critically evaluating them as they develop with the hopes that they can be developed toward a better future.

KE: Professor Snelson introduced possibilities for integrating AI generative text modes with art, education, and even our daily lives. Our next interviewee, Professor Davide Panagia, will share an educator’s view on ChatGPT, as well as counsel against the dangers of assuming its outputs are true representations of the world.

Davide Panagia: My name is Davide Panagia, I am a professor of political science. I teach political theory with an emphasis on modern and contemporary political thought – so from the 1700s to the present. And my research area focuses quite a bit on media aesthetics and political theory. I’m also currently the chair of the political science department.

KE: Awesome. Well, thank you so much for being here and agreeing to be interviewed by us.

DP: It’s my pleasure. It’s very exciting.

KE: Can you explain what an algorithm is for people who don’t really have a concrete idea already?

DP: No, I can’t, because – I mean, part of the answer to that question is that there are so many going definitions of what an algorithm is. The most conventional and most popular definition that you will find from anybody who writes about algorithms, or who studies algorithms at a very basic, rudimentary level, is that it’s kind of a recipe. It’s a set of instructions to execute particular functions in order to produce some kind of a result that we usually call an output. That output could be a JPEG image, it could be a tweet, it could be anything – a recommendation – and that’s why they’re so ubiquitous and so effective. But it’s basically a set of commands that is intended to execute a function. So algorithms don’t just operate on their own. Because we talk about algorithms – you know, it’s this term that’s entered into our common lexicon – we use it as if everybody has an intuitive sense, an intuitive understanding, of what it is. But as you noted earlier in asking me, “What even is an algorithm?” – it’s almost impossible to give an answer to the question.

Most of the types of algorithms that we worry about are the types that are now referred to as machine learning algorithms, which generate predictions. And it is the case that any algorithm of that kind, in order to operate, has to have a huge data set to generate the kinds of results that we associate with, you know, something like a search engine. Take the classic Google search engine. It produces results based on the greatest number of hits. The problem with that is how the greatest number of hits is coded and labeled, because every data set requires the labeling and the coding of the values that are being examined, that are being processed within an algorithm. And there are algorithms that do that kind of coding. But you still nonetheless fall into the problem that is the attribution of a value. “This is better than that” is based on certain kinds of criteria that have to be programmed into the algorithm. The criteria we build in are as biased as our own sort of senses. We all have certain biases because we all have certain preferences, right? Some biases are innocent, like wearing a purple shirt rather than a blue shirt. But some biases are, as we know, a lot less innocent than that. And so, you know, we’re still talking about systems that encode hierarchies of value: This is better than that. This is a better outcome than this outcome. The problem comes when we see it on a computer page – we tend to assume that it is neutral, that it is devoid of bias, that it is innocent, and that’s a bad assumption to make. And so one of the things that is really important to me, and I think to everybody who studies these things, is not just worrying about the normative and ethical stakes of algorithms and the kinds of biases that may or may not be implicit within any algorithm, but also the biases we have as readers of a webpage in assuming that what we read is somehow neutral or innocent of biases because it’s produced by a machine.
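(Editor’s note: Panagia’s ‘recipe’ definition – inputs, a fixed set of instructions, an output – can be made concrete in a few lines. In the hypothetical sketch below, the ranking criterion is invented purely for illustration, and it shows how a hierarchy of value gets programmed in.)

```python
# A toy "search ranking" algorithm. The recipe is explicit, and so is the bias:
# ranking by raw hit count is itself a value judgment baked into the code.
def rank_results(pages: list[dict]) -> list[dict]:
    """Input: pages, each with a 'hits' count.
    Instructions: sort in descending order of hits.
    Output: an ordered list, where 'more hits = better' is the coded criterion."""
    return sorted(pages, key=lambda p: p["hits"], reverse=True)

pages = [{"url": "a.example", "hits": 40}, {"url": "b.example", "hits": 90}]
print(rank_results(pages))  # b.example ranks first solely because of the coded criterion
```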

KE: Do you believe that ChatGPT has similar political effects as a search engine in the way that the algorithm affects its users?

DP: Yeah, of course. I am not a ChatGPT naysayer, you know, just like I am not a spell check or AutoCorrect naysayer. I don’t believe a student is cheating because they use spell check rather than keeping the mistake on the page, because they might not know how to spell, or they may have some form of neurodiversity that doesn’t allow them to spell in the manner in which neurotypical people can spell. So there are a lot of things that need to be unpacked before we can make any decisions about ChatGPT. All of those things that need to be unpacked, I find, are politically relevant. You know, from my perspective, from the types of things that I study, I find the reaction against ChatGPT to be very neurotypical: There’s only one way of learning. It’s this traditional American civil rights kind of liberal paradigm of the individual going and learning on their own and producing a work of genius, etc., etc. I don’t understand how that is sustainable in a university that wants to attract a large number of diverse students who may not have had access to that pre-programming that is imagined in that ideal student, right? Because really, what’s happened – you know, the critique of ChatGPT, as I’ve seen this sort of moral panic around it – is basically saying, “Oh, it’s okay to pre-program students with a built-in algorithm on how to write before coming to college, but it’s wrong for students to have access to tools that may help them to write better.” I mean, think about it – think of your ELA classes in high school, right? They were all about programming you to articulate a formal system for how to write an essay. What ChatGPT is showing us is that, A, we’ve done that really well at some level, and B, we as professors are lazy, expecting the same types of things to be the only formula that’s available to us. We’ve become so habituated to it that now a computer can do it for us. And so the question is – well, first of all, there is an implicit bias in the critique of ChatGPT that says that the neurotypical form of learning is the only one that should be privileged.

KE: Can you explain more how utilizing ChatGPT is a form of learning?

DP: Yeah, I mean, you know, the number one form of learning how to write is through imitation. You can teach people grammar, you can teach people form, but basically, you want them to imitate a good essay. Why not use ChatGPT as an example, and then get the student to grade the ChatGPT essay and decide whether it’s correct or incorrect, etc. etc., so that the student interacts in a critical way with the output rather than saying, ‘thou shalt not look.’ The move towards legislating something and prohibiting something, or at the very least saying that it’s inevitably bad, is to me highly suspect, in part because if you look at the history of technology, especially of learning technology, over the centuries, there’s never been a moment where a moral panic hasn’t accompanied the invention of a new technology.

KE: It’s very true. A lot of professors have included statements in their syllabus saying that the usage of ChatGPT for any of the assignments would violate the code of conduct and the cheating policy. Is your position that you don’t believe that is the case, at least for your classes?

DP: Now, there is a very good answer to that question, which is to say that, you know, there is a fundamental belief that learning is individual, that learning is solitary, that learning is the product of the person’s singular mind, etc., etc. You go into a science lab, and that’s not the case. It’s collective. It’s integrated with machines, with machine learning technologies everywhere – in a science lab, they’re generating constant streams of data and patterns of data. What you’re asking is a question that can’t be answered across the board. If it is the case that some of my colleagues have those types of disclaimers in their syllabus, it’s perfectly legitimate for them to do so within the parameters of the university’s code of conduct. It’s a choice on the part of the professor – which is the nature of academic freedom – to decide on what learning looks like within the classroom. But to me, as somebody who’s interested in getting students to critically engage, whose participation with technical media is part of their sense of political subjectivity within a democratic society, excluding or prohibiting their participation with it is problematic, because it sort of goes against what I’m trying to get them to learn. We have these crutches all over the place that are supposed to, you know, assist us in dealing with how we manage knowledge and information. And this is actually the real problem for me, right? We’re in a situation in which we have to reinvent ourselves as humans in relationship to knowledge and information, because the systems have changed dramatically. We can no longer rely on our eyes and our mind alone, because those aren’t the only tools for the management of knowledge and information. For me, the bigger question is: Why do students feel compelled to have to use ChatGPT?

KE: Do you have advice for students when using AI?

DP: You know, I think the most important thing that needs to be said, from my perspective, from the things that I teach and learn, is to not treat technologies as mere tools and to not assume that tools don’t have biases built into them. And in order to do that, you have to do the hard work of unpacking things: our uses of them, how they were made, what they affect – all this sort of stuff.

KE: Professor Panagia left us with some important words of wisdom that will hopefully guide users in an increasingly AI-saturated world.

SW: From conducting the interviews to putting the episodes together, we gained a few major takeaways. Firstly, while we should maintain a reasonable amount of skepticism toward AI technology, we should simultaneously be excited about the new opportunities it offers in a multitude of academic fields. Secondly, rather than steering clear of all uses of ChatGPT, we should experiment with its boundaries in our own time, so as to make it as accurate a learning assistant as we can by providing constructive user feedback. Lastly, the development of ChatGPT marks a crucial time when we should reevaluate our learning methods and how to incorporate technology into our education without developing an overdependence on these new, exciting tools.

KE: This is the end of this episode. You can listen to the first part of the series and other Daily Bruin podcasts on Spotify, Apple Music and SoundCloud, and the transcript for this show is available at dailybruin.com. Thanks, everyone. See you all next time.

Kaitlyn Esperon
