ChatGPT Tips for the CUNY Classroom, Version 2.0

September 18, 2023

Graduate Center scholars share their latest insights on teaching in an AI-infused world.

A robot in the library (Photo generated by Canva's AI software)

Plagiarism. Hallucinations. Prompt-writing techniques. Critical thinking about large language models and knowledge appropriation.

After a semester in the AI era, professors have experienced the highs and lows of teaching alongside ChatGPT in CUNY college classrooms.

In a follow-up to our advice in January, we invited Graduate Center scholars to share their latest thoughts on designing courses that engage and challenge students with AI, while avoiding the pitfalls it can bring.

 

Luke Waltzer (photo credit: Coralie Carlson)

Luke Waltzer, Director of the Graduate Center’s Teaching and Learning Center

If I were to give faculty three general tips on acknowledging the reality of ChatGPT and other AI chat tools in their courses, the first would be to revisit their assignments to make sure that the intent is as clear as it could possibly be, and that students can see that intent in the prompt and understand how it's contextualized in the course. The more that students see the assignment as authentic and connected to the goals of the class, and the more they understand what faculty are looking for in the work, the lower the incentive may be for students to use AI tools as a shortcut for important and necessary steps.

A second tip is to build in opportunities for students to revise and to build their work over time, so that they're not just handing in what is expected to be a finished product; instead, the exercise becomes a dialogue between the faculty member and the students around ideas or goals that are central to the course. Make the work less about some external standard and more about students' progression toward understanding an idea, being able to enact a methodology, or making an argument within the course.

A third idea is to incorporate in-class writing and sharing to build a social, collective component around assignments, which can help students feel more ownership of and responsibility for their work.

Ultimately, I think all faculty should talk with their students about how and why they should or shouldn't use an AI tool within the context of their class. Some faculty may choose to invite students to use ChatGPT or other AI tools. And if they do so, they should talk to students about citing how and when they've used it, and ask them to reflect upon that use. Other faculty members may encourage students to play with generative AI to potentially deepen their understanding of course material, or to challenge their own assumptions about the work, or for other reasons that make sense within the context of the course. And yet other faculty members may choose to exclude it altogether from the work that they do with students. Each of those approaches is defensible, but they should be contextualized and explained by the faculty members so that students understand that these are tools to be reckoned and reasoned with, and gain experience exercising that criticality. Faculty must develop that criticality, too; it simply won’t be sufficient to ignore or to wish AI away.

Michelle McSweeney (Photo courtesy of Michelle McSweeney)

Michelle McSweeney (Ph.D. ’16, Linguistics); Adjunct Assistant Professor (Data Analysis and Visualization); Data Science Managing Producer at Brilliant.org

Getting ChatGPT and other large language models (LLMs) to produce the outcome you want is hard. In writing prose, these models perform best when provided with a detailed outline along with a stated point of view. In programming, they can comment code beautifully, but writing original code requires very narrowly specifying the problem, the constraints, and the tests that ensure it works. For both tasks, the prompt is central to the output.
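To make that concrete, here is a minimal sketch of a narrowly specified code-generation prompt, assuming the OpenAI Python SDK and an API key in the environment; the model name, function spec, and tests are all illustrative, and the same idea applies to any chat-style LLM API.

```python
# A minimal sketch of a narrowly specified code-generation prompt.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# The prompt pins down the problem, the constraints, and the tests the
# generated code must pass, rather than asking vaguely for "a median function."
prompt = """Write a Python function median(xs: list[float]) -> float.
Constraints:
- Raise ValueError on an empty list.
- Do not mutate the input list.
- Use only the standard library.
It must pass these tests:
assert median([1.0, 3.0, 2.0]) == 2.0
assert median([4.0, 1.0, 3.0, 2.0]) == 2.5
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```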

This semester, I'm teaching a course at the Graduate Center on the social implications and technical underpinnings of large language models in the Data Analysis and Visualization master's program. Students get practice with the technologies behind large language models. The more familiar students become with these tools, the more they realize that large language models are like all other tools: They have a purpose but are no replacement for clear thinking and communication. As for tips for teaching with LLMs: I ask students to include the prompt they used along with their writing.*

*Disclaimer: ChatGPT helped edit this response. The thoughts and opinions are entirely my own.

Zachary Muhlbauer (Photo credit: Alex Irklievski)

Zachary Muhlbauer, English Ph.D. Candidate; Graduate Center Teaching and Learning Center Fellow; Adjunct Lecturer at Baruch College

For fellow instructors, I suggest building structured learning activities into the classroom that help students learn to use ChatGPT responsibly and encourage them to grapple with the political and epistemic ramifications of this technology. Instructors can ask students to compile and vet popular source material to explore whether and how ChatGPT "hallucinates" misinformation on some topics more than others. Instructors may even wish to break students into "AI red teams" tasked with finding glitches in how ChatGPT appropriates knowledge and approximates human subjectivity. Such activities invite students to tinker and experiment with their peers in a guided learning environment, while also fostering more critical approaches to the ecosystem of AI tools at their disposal.
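One way to scaffold that vetting exercise is to have students programmatically check whether the citations ChatGPT produces actually exist. Here is a minimal sketch using only Python's standard library and the public Crossref API; it assumes the citation in question carries a DOI, and the example DOI is a real one included for illustration.

```python
# A minimal sketch for vetting AI-generated citations: ask the public
# Crossref API whether a cited DOI actually exists in its registry.
# Standard library only; the example DOI is illustrative.
import json
import urllib.request
from urllib.error import HTTPError

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp).get("status") == "ok"
    except HTTPError as err:
        if err.code == 404:  # no record: the citation may be hallucinated
            return False
        raise

print(doi_exists("10.1038/nature14539"))  # a real DOI, so this prints True
```

A missing Crossref record is a flag, not a verdict; students still have to judge whether a flagged citation is fabricated or simply not registered there.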

When I asked students in my Baruch College writing class to discuss ChatGPT, their responses were mixed. Some were avoidant or withdrawn, perhaps for fear of its punitive associations in the classroom, while others were excited about the tool and its seemingly limitless applications. As the group settled into discussion, we placed ChatGPT in conversation with spinoff platforms like Perplexity, which uses many of ChatGPT’s natural language functions but with clickable references for claims made in its responses. Here, students developed their skills in comparative analysis and reflected on how information networks are constructed, who’s left in and out, and what this means for knowledge production. In turn, they expressed a desire to learn more about generative AI topics, ranging from the technical discourse of prompt writing to the wholesale flattening of language differences. 

Janelle Poe (Photo courtesy of Poe)

Janelle Poe, English Ph.D. student; Has taught courses at The City College of New York (CCNY) and Lehman College; CUNY Humanities Alliance Fellow; CCNY Open Educational Resources (OER) Fellow

While I haven’t been a classroom instructor since ChatGPT soft-launched last year, I am familiar with the plagiarism fears it provokes. During my time teaching at CUNY from 2017 to 2022, I found that many plagiarism issues are the result of students, particularly non-native standard English speakers and recent immigrants, overcompensating in efforts to sound as “academic” as some peer-reviewed journal articles and course materials might appear to be. This reveals a greater need to address the different languages students (and scholars) are encouraged to employ, the support offered to multilingual students, and grading policies.

When you read students’ work closely, their voice and lexicon emerge and are easily distinguished from the generic, Wikipedia-style prose of a bot. Requiring students to reference library database sources with active links limits the inclusion of false citations. These labor-intensive strategies underscore the value of human intellect and the time, respect, and compensation all educators deserve.
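For instructors who want to spot-check those links in bulk, a minimal sketch along these lines could help (standard library only; the URLs are illustrative). Note that some servers refuse HEAD requests, so a failure here is a prompt for manual review rather than proof of a fake citation.

```python
# A minimal sketch for spot-checking that cited links are live.
# Standard library only; the URLs below are illustrative.
import urllib.request

def link_is_live(url: str) -> bool:
    """Return True if the URL answers a HEAD request without an error."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        urllib.request.urlopen(request, timeout=10)
        return True
    except OSError:  # covers URLError, HTTPError (4xx/5xx), and timeouts
        return False

for url in ("https://www.example.com/", "https://example.invalid/fake-article"):
    status = "live" if link_is_live(url) else "dead or unreachable"
    print(f"{url} -> {status}")
```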

As others have suggested, there are plenty of ways to use ChatGPT: to analyze, challenge, or collaborate with the technology. Students of all ages and professionals alike are actively using and training ChatGPT and other AI chatbots in and far beyond the classroom. Given the immense issues with false and biased information, ChatGPT requires ongoing, critical, and nuanced conversations about the many issues impacting our daily lives and futures. Like fast fashion or fast food, AI is an industry with significant human and environmental exploitation: hidden costs that should be identified and questioned along with the more obvious debates over morality and efficiency.

This technology can help students and instructors to see and love the value of their unique brilliance and process of learning, as we all make way for and contend with the exponentially expanding universe and worlds we inhabit, traverse, and create.

Kristine Riley (Photo courtesy of Riley)

Kristine Riley, Sociology Ph.D. Candidate; Graduate Center Teaching and Learning Center Fellow; Has taught courses at Hunter College, Baruch College, Borough of Manhattan Community College, and New York University

At the Teaching and Learning Center’s 2023 Summer Institute, I co-facilitated a workshop on ChatGPT and academic integrity with the TLC’s director, Luke Waltzer. Our conversation went beyond university and classroom policies and focused on critical explorations of scholastic values, including ChatGPT’s role in perpetuating logics of inequality and discrimination, as well as the fact that the tool draws from works without the original authors’ consent. In the spirit of that workshop, I offer two questions for other instructors to consider when discussing ChatGPT with their students:

What is an approach that aligns with your pedagogical values? I want students to challenge themselves, explore new ideas, and feel proud of their work. I let them know I think they are capable of brilliance, while acknowledging and normalizing feelings of insecurity about academic writing. I emphasize that my assessments of their work are most generous and generative when I’m not concerned about breaches of academic integrity.

What are actions we can take to create conditions where students don’t feel pressured to use these technologies in bad faith? I’ve implemented a 24-hour, no-questions-asked extension policy as well as accommodations for students who meet with writing tutors. I also encourage them to come to office hours, where we can be thought partners in exploring areas where they feel stuck. Finally, I build in collaborative opportunities throughout the course that encourage students to turn to me or each other, hopefully mitigating pressures to use ChatGPT in ways that compromise their integrity.

Ariel Leutheusser (Photo courtesy of Leutheusser) 

Ariel Leutheusser, Comparative Literature Ph.D. Candidate; Adjunct Instructor at Borough of Manhattan Community College; Writing Across the Curriculum Fellow, New York City College of Technology; Learning Experience Designer, Team Open, Borough of Manhattan Community College; Former CUNY Humanities Alliance Fellow

My experience as an educator at CUNY teaching alongside generative AI tools like ChatGPT has been colored by anxiety and frustration, both for myself as a teacher and for the faculty that I support through my work at Borough of Manhattan Community College’s Center for Excellence in Teaching, Learning and Scholarship. Over my tenure as a CUNY Humanities Alliance Fellow, I had the privilege to spend time and effort thinking deeply about pedagogy and how it shapes our classrooms and our students’ whole experience of higher education. Grounded in this experience, I can assert that my approach to teaching alongside generative AI starts from a definitive tenet: teaching in good faith.

Many discussions of generative AI in the classroom involve incorporating the tool into lessons and assignments, experimenting with how it can be used, acknowledging its presence in students’ lives, and even citing the tool when submitting work that emerged from engagement with it.

While these engagements are certainly significant, in my teaching and support of other faculty, I believe in grounding our engagement with our students in a place of good faith. I teach my students expecting that they will enter my classroom eager to hone their knowledge and skills, and I engage with them and their work accordingly. I aim to avoid suspicion when receiving and evaluating their work. Of course, I have received texts that were clearly created by generative AI tools, complete with invented sources and made-up plots. In these instances, I approach the students who submitted the work and ask them to explain what they’ve written; out of these discussions, we can assess what led them to rely on the tool in this way and come to an agreement for resubmission.

Part of teaching in good faith also involves acknowledging students’ whole selves in the classroom and making space for them. Following the tenets of universal design for learning (UDL), I have made sure to incorporate alternative modes for students to communicate their engagement with the class and their knowledge of the subjects of study, especially through audio assignments that let students tap into their facility with talking through their arguments. An asset-based pedagogy seeks to create space for students to use their existing faculties and knowledge in the classroom, and I believe staunchly that cultivating such a classroom and a culture of care is key to teaching in this new environment of generative AI that we now face.

Published by the Office of Communications and Marketing