Tuesday, March 18, 2025 - 09:00 am
Online

DISSERTATION DEFENSE
Department of Computer Science and Engineering

University of South Carolina

Author: Vipula Rawte
Advisor: Dr. Amit Sheth
Date: March 18, 2025
Time: 9:00 am – 11:00 am
Place: Zoom and AI Institute, Seminar Room 529
Meeting Link: https://sc-edu.zoom.us/j/83442966750
Meeting ID: 834 4296 6750

Abstract

Deception is inherent in human interactions, and AI systems increasingly exhibit similar tendencies, most notably through hallucinations: plausible yet incorrect outputs that stem from their design, memory limitations, and statistical nature. As AI enters Wave 2 - Generative AI, as outlined by Mustafa Suleyman in The Coming Wave, models such as GPT and DALL-E are transforming fields like healthcare and education. However, their rapid adoption raises challenges in misinformation, safety, and ethics. Notable cases, such as Air Canada's chatbot providing false information to a customer, highlight the real-world impact of AI hallucinations, a phenomenon so prevalent that "hallucinate" was named Cambridge Dictionary's Word of the Year for 2023.

This dissertation tackles AI hallucinations across six key areas: (i) Characterization - developing a taxonomy and the HILT benchmark; (ii) Quantification - introducing the HVI and HVI_auto evaluation metrics; (iii) Detection - proposing a span-based Factual Entailment method to improve detection accuracy; (iv) Avoidance - creating techniques such as "Sorry, Come Again?" (SCA) and [PAUSE] injection to elicit better responses; (v) Mitigation - developing RADIANT, a retrieval-augmented framework for entity-context alignment; and (vi) Multi-modality - constructing the VHILT and ViBe datasets for hallucination analysis in image-to-text and text-to-video models. By systematically addressing AI hallucinations, this research makes generative AI more reliable and trustworthy.