Beyond LLMs: Neuro-Symbolic Approaches for Language Understanding
Abstract: Deep learning approaches to natural language processing (NLP), such as large language models (LLMs), have achieved tremendous success in recent years. However, these systems often "hallucinate" and can be difficult to understand, control, and maintain as needs evolve. In the first part of the talk, I will review these limitations and discuss how they affect the adoption of LLMs and other AI tools in high-stakes domains such as medicine.
In the second part of the talk, I will introduce several neuro-symbolic approaches developed in our lab that combine the strengths of both paradigms: the generalization power of neural methods and the flexibility of symbolic approaches. For example, in one of these approaches, we show that symbolic systems can guide the reasoning of LLMs in complex biomedical applications by highlighting which parts of the context are most relevant for understanding the underlying causal mechanisms. This simple strategy consistently improves LLM performance, in some cases doubling it.
Bio: Mihai Surdeanu is a Computer Science professor at the University of Arizona with courtesy appointments in Linguistics and Cognitive Science. Previously, he served as a research scientist in Stanford's NLP group and as chief scientist at Lex Machina. His research focuses on systems that process and extract meaning from natural language texts, including question answering and information extraction, with an emphasis on interpretable models that can explain their decisions in human-understandable terms.