Rebecca Garabed is assistant dean for graduate studies in the College of Veterinary Medicine. She holds a VMD in Large Animal Medicine from the University of Pennsylvania and a PhD in Epidemiology from the University of California, Davis.
What made you want to start using AI in your course?
In veterinary medicine, there is an impossible amount of information we expect students to memorize (something people are not as good at as computers), but we don't spend much time teaching students how to use that information and extrapolate it to the different contexts that come up in practice (something people are better at than computers). Back in 2019, a now-defunct veterinary chatbot launched on Amazon Alexa. That got me thinking about what a computer can do versus what a veterinarian can do, and it made clear to me that my students need to know how to manage this technology or risk finding themselves out of a job.
Can you share specifics on how you have students use it?
In the first assignment, students work in groups of four, with each member assigned a different source: general web search, generative AI, a professional reference, or a database of peer-reviewed publications. To encourage discussion, each group creates a combined list of available tests for a specific disease in a specific species and uses its sources to develop a plan for testing an animal population. The process is supported by report-outs and whole-class discussion on effective search strategies, the tests and references found, and the pluses and minuses of each source. Students submit their search strategies, test lists, and reflections individually in a Carmen quiz, which is graded coarsely with general feedback to the class. The groups' final diagnostic plans are graded with a rubric assessing the justification for the chosen tests and how the information is communicated to the owner. On exams, I include scenarios asking which source would be most appropriate or asking students to match sources with their strengths and limitations.
In the second assignment, students read an article about nine ethical principles related to AI use in veterinary medicine. I then provide case studies involving real technologies used in veterinary medicine, along with a choice about whether to use them. Students discuss the scenarios in groups and are given links to explore the technologies further. As a class, we discuss the ethical issues raised in the group discussions and how students weighed them. The formative assessment is a reflection; the summative assessment is recognizing the ethical principles on exams.
What were you hoping students would get out of it?
I hope students will come away with a broader understanding of where they will interact with AI in their future careers and of how to use emerging technology to improve the way they practice medicine.
What’s the most important thing you’ve learned since starting to teach with AI? Have there been any surprises?
When I started using the ethics assignments in 2021, students had little experience with the technologies and treated them largely as a novelty. Last year, most students had used one or more of the technologies in veterinary practices they worked for and had clear opinions and experiences, making the discussion much livelier. The speed of the transition from novelty to everyday practice has been surprising.
For the information-searching assignment, I learned I needed to teach students how to interact effectively with different technologies. I had assumed they knew the difference between asking a chatbot a question and using Boolean logic in a database query. I've since added more didactic content about the differences between sources and how to search each one, though I still let students choose which specific sources to use. A surprising discovery was that Ask Jeeves still exists, and a student used it. It knew absolutely nothing about diagnostic tests for cattle, which makes sense given what it was designed for.
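To make that distinction concrete: a chatbot takes a plain-language question, while a bibliographic database expects structured Boolean terms. Below is a minimal sketch of the database side, assuming PubMed's public E-utilities endpoint as the peer-reviewed source and an illustrative query about cattle; the interview does not name the specific database or tools students used.

```python
import requests

# Chatbot-style prompt (plain language):
#   "What tests can I use to diagnose tuberculosis in cattle?"
# Database-style query (Boolean logic): synonyms joined with OR,
# distinct concepts joined with AND.
# Assumption: PubMed via NCBI's public E-utilities API; the course
# materials do not specify which database students actually search.
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = '(bovine OR cattle) AND tuberculosis AND (diagnosis OR "diagnostic test")'

response = requests.get(
    ESEARCH_URL,
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 5},
    timeout=30,
)
response.raise_for_status()
result = response.json()["esearchresult"]
print(f"{result['count']} matching records; first PMIDs: {result['idlist']}")
```

The same information need phrased both ways makes the contrast students miss easy to see: the chatbot answers in prose of uncertain provenance, while the Boolean query returns a countable, citable set of publications.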
Is there a moment when you realized AI was actually working better than you expected in your class? What happened?
The first time I ran the exercises, I was surprised at how convincingly ChatGPT fabricated references that didn't exist. Beyond that, you can imagine that not many people have asked the general chatbots about obscure animal diseases, so they are still in the early stages of learning about them. Generally, the generative AI answers sound like students who didn't do their homework and are trying to fake it. I expect they will get better over time, but as of September 2024 they still needed a lot of checking. That unreliability encourages students to check the other sources to confirm what they get in the AI responses.
Some people are nervous about AI, thinking it might hold students back from developing critical thinking or creativity. What’s your take on that?
Like any technology, it’s all in how you use it. One ethical principle my students learn about is “skill erosion,” the loss of critical thinking and creativity from simply “reading the screen.” These skills are essential for applying AI in veterinary practice. I want to train students to recognize when to trust the AI and when not to, and how to use it to improve their practice. This technology can enhance veterinary care, not just make veterinarians lazy or replace them. To make that happen, we must teach students about AI now and give them chances to practice not turning off their brains.
Will embedding AI in the curriculum actually help students prepare for their careers after graduation?
If we do it well, it will. That's on us. It doesn't work everywhere, but it is a natural fit for my class.