
Are College Faculty Boycotting Artificial Intelligence Tools Like ChatGPT?

New survey data recently appeared on the adoption of generative artificial intelligence tools like ChatGPT by college students and faculty. Although many of the findings released by Tyton Partners aligned with expectations, some of the data will certainly leave analysts and observers scratching their heads.

Titled “GenAI in Higher Education, Fall 2023 Update,” the new report appears as a supplement to Tyton’s more comprehensive “Time for Class 2023” annual polling results released in June 2023. We covered the earlier report in our July 2023 feature article, “Analysis: Online Student & Faculty Preferences at Odds, Says New Poll.”

Although much of Tyton’s earlier report discussed current college students’ increasing preferences for online education over face-to-face instruction, one section presented limited polling data on faculty and students’ use of artificial intelligence tools. By contrast, Tyton’s update compares the previous AI utilization results with newer pulse survey data collected in September from more than 1,000 higher education faculty and 1,600 students at more than 600 colleges and universities.

The update specifically focuses on these groups’ use of generative AI tools for writing. That newer report also examines how learners and instructors apply large language model (LLM) platforms such as ChatGPT, Microsoft Bing Chat (recently rebranded as Microsoft Copilot), Google Bard, and Meta Llama 2 within course settings.

The AI Disconnect Between Student and Faculty Interests

To be sure, even though students continue to leverage AI platforms to a vastly greater extent than faculty, there’s been a dramatic increase in the use of generative AI tools among both groups since Tyton’s predecessor study. For example, the update reports that half of undergraduates now work with AI tools like ChatGPT at least once a month, an 81 percent increase over the 27 percent reported by the earlier poll.

Instructors, by contrast, still fall about five percentage points short of the adoption rate students reported a full six months earlier. At that time, a mere 9 percent of faculty reported regular AI usage; that proportion has now climbed to 22 percent, a 144 percent gain over the previous report.
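For readers who want to verify the arithmetic, those relative gains follow from the standard percentage-change formula. Note that the implied student adoption figure of roughly 49 percent below is our back-calculation from the stated 81 percent increase, not a number Tyton quotes directly:

\[
\text{relative change} = \frac{r_{\text{new}} - r_{\text{old}}}{r_{\text{old}}} \times 100\%
\]

\[
\text{students: } \frac{49 - 27}{27} \approx 81\% \qquad \text{faculty: } \frac{22 - 9}{9} \approx 144\%
\]

The five-percentage-point shortfall, by contrast, is a simple subtraction: the instructors’ current 22 percent sits five points below the 27 percent that students had already reported six months earlier.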

Nevertheless, that still means almost 80 percent of the faculty across the nation rarely or never use artificial intelligence tools. But among the one-fifth of instructors who use these platforms regularly, 75 percent believe that their students won’t be able to succeed in the labor force unless they know how to use the technology. And among faculty who have tried AI systems only a few times at most, half express the same opinion that student success will partially depend on AI skills.

In other words, this report has identified a remarkable disconnect between student and faculty interests. Fully 70 percent of instructors believe that understanding how to operate artificial intelligence platforms will be vital to their students’ success in the workforce. Yet at the same time, 78 percent of instructors appear to be boycotting AI, reporting that they rarely or never use the technology a full 11 months after ChatGPT’s November 2022 introduction.

This disconnect concerns the researchers more than any of the survey’s other findings. “To resolve these perspectives, institutional leaders and vendors might need to intervene,” say the pollsters. They continue:

While faculty recognize the significance of GenAI on the future of students’ careers, they are not personally experimenting and learning how to use GenAI at the same rate as students, suggesting that institutional leaders and/or the vendor community need to play a strategic role(s) in helping students learn to use GenAI in preparation for work.

One potential mechanism for this strategic support could be for institutional leaders and solution providers to train educators on GenAI writing tools, propose ways to evolve instructional practices to include GenAI tools, and improve AI literacy.

“The Resistance Began Immediately”

Unfortunately, recent anecdotal reports suggest that this sort of training approach is unlikely to work with a large and vocal segment of resistant faculty members. For example, in a November article in the Chronicle of Higher Education, Flower Darby, associate director of the Teaching for Learning Center at the University of Missouri at Columbia, wrote:

The resistance began immediately. After I wrote an essay last summer on preparing to teach with AI tools, the very first comment I received was from an instructional designer casting doubt. Many faculty members, she said, had valid ethical concerns about AI and had no plans to use ChatGPT in their courses any time soon. . .

There is a significant pool of AI resistors. I’m hearing from many of them when I give talks on this issue. In early fall, for example, I gave a virtual presentation on this issue to a group of community colleges. Two strong naysayers insisted that it was unethical of me to even encourage faculty members to bring this “biased” tool into our courses.

A controversial Inside Higher Ed report entitled “Why Faculty Members Are Polarized on AI,” published a few weeks before Darby’s piece, suggests that fierce resistance to the technology might be more common among faculty than previously suspected. In that article, Daniel Stanford, a lecturer with the Jarvis College of Computing and Digital Media at DePaul University in Chicago, told IHE’s Dr. Susan D’Agostino that the vitriolic tone of the debate around AI’s role in teaching and learning bothered him.

Although Stanford said he and other academics had also witnessed this tone in real life, he pointed specifically to a series of comments posted in response to an essay by Dr. Corey Robin. A professor at Brooklyn College and the Graduate Center of the City University of New York, Dr. Robin has taught political science since 1993.

In his essay, Dr. Robin expressed regret that, because of ChatGPT, he was planning in-class, proctored midterms and finals instead of take-home essay exams for the first time in his 30-year teaching career. In response, the first commenter, Middlebury College’s Professor Jason Mittell, asked, “What’s the real harm for students who opt to cheat by using AI to write papers in passing the class?” Dr. Mittell continued:

After 23 years of teaching, I’ve come to realize that my job is neither to police students who don’t want to learn nor to rank students via grades, but to maximize learning for those who want to learn, and try to inspire the others to try to join in the learning.

That question drew fire from several commenters, starting with this harsh response:

You are unfairly giving good grades to students who cheated and very likely giving worse grades to students who didn’t. Teachers who ask questions like yours tend to see school as just a personal development system. Not sure why you were able to get this far in teaching without seeing the catastrophic impact that laissez-faire attitudes towards cheating have on the entire system. Teachers like you kill the system entirely, creating ever more cheaters.

Other instructors talked about feelings like sadness, a sense of loss, and even depression:

The prospect of grading and commenting on work not actually written by the students is just too depressing to bear.

There’s always been a risk here. Many of us put *roughly* the same effort into grading all student work, even though only some of them profit from our efforts, and there are always some who have cheated.

So grading and commenting involves a leap of faith. But the prospect of grading heaps of symbolic output generated by LLMs rather than students makes it hard to find the strength needed to make that leap.

A July 2023 report from the Modern Language Association of America and the Conference on College Composition and Communication presents general reasons for the resistance to artificial intelligence among a segment of the faculty. However, Tyton’s update provides no statistical data about faculty resistance because the pollsters don’t appear to have asked their respondents about this topic. We understand that Tyton plans to conduct more polling in early 2024 that specifically examines faculty opinions about AI.

Differences in AI Use Cases

The disconnect between student and faculty interests with respect to artificial intelligence platform usage isn’t the only surprising finding in Tyton’s update. The poll results also provide a detailed analysis of AI use cases among both faculty and students, and some of the differences between the two groups are striking.

Student Use Cases

Among the students, the use cases vary depending on whether the students are infrequent or frequent users. The poll defines the former category as weekly or monthly users, and the latter category as daily users.

The weekly or monthly users appear to call upon AI tools only for academic, learning-specific use cases: difficult or time-consuming conceptual and writing assignments for which they need special help. Here are the top five use cases listed by the non-daily users:

  1. Understanding difficult concepts (36%)
  2. Summarizing or paraphrasing text (33%)
  3. Assisting with writing assignments (32%)
  4. Answering homework questions (30%)
  5. Analyzing or interpreting data (28%)

By contrast, the daily users appear to apply AI tools to many more activities in their lives and don’t restrict them to academic assignments. Rather, they seem to perceive AI platforms more broadly as tools that can boost their efficiency in a wide variety of day-to-day situations. Here are the top five use cases listed by the daily users:

  1. Summarizing or paraphrasing text (34%)
  2. Organizing my schedule (32%)
  3. Answering homework questions (31%)
  4. Making resumes, cover letters, or applications for internships/jobs (31%)
  5. Assisting with writing assignments (30%)

Note that both groups rank the same three use cases among their top five. For example, both rank summarizing or paraphrasing text highly; it’s the top use case for daily users and the second-ranked use case for non-daily users. That’s not surprising, since conscientious students constantly summarize and paraphrase as they take notes while studying textbook chapters.

Also appearing on both lists are answering homework questions, which ranks third for daily users and fourth for non-daily users, and assisting with writing assignments, which ranks fifth for daily users and third for non-daily users.

Beyond those three academic use cases common to both groups, the surprises start showing up. For example, among the daily users, organizing my schedule placed second, selected by 32 percent of the students and outranked only by summarizing or paraphrasing text.

Making resumes, cover letters, or applications for internships/jobs takes fourth place among the daily users, selected by 31 percent of the students. Both of these options also appear among the non-daily users’ selections, but they fall below that group’s top five, chosen by only 21 percent of those students.

Faculty Use Cases

Faculty appear to use these platforms for much more limited and specific purposes than the students—mainly to understand how students use these tools, and to teach students how to use them.

More than two-fifths of the instructors (43 percent) say that by running prompts through various AI platforms, they monitor what their students see when they use the tools, making this use case the faculty’s top selection. Their second most frequent use case involves teaching students how to use the tools most effectively, a function selected by just over a third of the instructors.

Faculty use AI platforms to work more efficiently far less often than students do. Nevertheless, one-fifth of the instructors said they were using these platforms to create quizzes and exams, and 10 percent reported grading student work with support from AI. Tyton’s report did not elaborate on how instructors might create exams or grade with assistance from artificial intelligence platforms, including the four reviewed for this study.

Douglas Mark

While a partner in a San Francisco marketing and design firm, Douglas Mark spent over 20 years writing online and print content for the world’s biggest brands, including United Airlines, Union Bank, Ziff Davis, Sebastiani, and AT&T.

Since his first magazine article appeared in MacUser in 1995, he’s also written on finance and graduate business education in addition to mobile online devices, apps, and technology. He graduated in the top 1 percent of his class with a business administration degree from the University of Illinois and studied computer science at Stanford University.