Smaranda Muresan: Human-centric Natural Language Processing for Social Good and Responsible Computing

Smaranda Muresan

Abstract:

Large language models (LLMs) constitute a paradigm shift in Natural Language Processing and its applications across all domains. To move towards human-centric NLP designed for social good and responsible computing, I argue we need knowledge-aware NLP systems and human-AI collaboration frameworks. NLP systems that interact with humans need to be knowledge-aware (e.g., of linguistic, commonsense, and sociocultural norms) and context-aware (e.g., of social and perceptual context) so that they communicate with humans more effectively, safely, and responsibly. Moreover, NLP systems should be able to collaborate with humans to create high-quality datasets for training and/or evaluating NLP models, to help humans solve tasks, and ultimately to align better with human values. In this talk, I will give a brief overview of my research agenda in the context of NLP for social good and responsible computing (e.g., misinformation detection, NLP for education and public health, building NLP technologies with language and culture diversity in mind). I will highlight key innovations on theory-guided and knowledge-aware models that allow us to address two important challenges: the lack of training data and the need to model commonsense knowledge. I will also present some of our recent work on human-AI collaboration frameworks for building high-quality datasets for various tasks, such as generating visual metaphors or modeling similarities and differences in cross-cultural norms.

 

Bio:

Smaranda Muresan is a Research Scientist at the Data Science Institute at Columbia University and an Amazon Scholar. Before joining Columbia, she was a faculty member in the School of Communication and Information at Rutgers University, where she co-founded the Laboratory for the Study of Applied Language Technologies and Society. At Rutgers, she was the recipient of the Distinguished Achievements in Research Award. Her research focuses on human-centric Natural Language Processing for social good and responsible computing. She develops theory-guided and knowledge-aware computational models for understanding and generating language in context, with applications to computational social science, education, and public health. Research topics she has worked on over the years include: argument mining and generation, fact-checking and misinformation detection, figurative language understanding and generation (e.g., sarcasm, metaphor, idioms), and multilingual language processing for low-resource and endangered languages. Her recent research interests include explainable models and human-AI collaboration frameworks for high-quality dataset creation. She received best paper awards at SIGDIAL 2017 and ACL 2018 (short paper). She served as a board member of the North American Chapter of the Association for Computational Linguistics (NAACL) in 2020-2021, as a co-founder and co-chair of the New York Academy of Sciences' Annual Symposium on NLP/Dialog/Speech (2019-2020), and as a Program Co-Chair for SIGDIAL 2020 and ACL 2022.

 

Stefan Mathe: Towards Efficient Real-Time Perception in Self-Driving Cars: Methods, Challenges and Open Questions

Stefan Mathe

Abstract:

Arguably the first mass-produced, consumer-oriented intelligent autonomous robots, self-driving cars are subject to stringent and conflicting design and operating constraints. On one hand, they need to accurately sense and understand their surroundings, predict changes in a dynamic and uncertain environment, and formulate safe navigation plans. On the other hand, a self-driving car must react fast, consume little power, and remain cost-effective. A viable product must meet these constraints "in the wild", with safety arguments extending beyond empirical validation cases towards foreseen and, sometimes, even unforeseen scenarios. Given this seemingly daunting task, in this workshop we tackle the more modest -- but still tremendously challenging -- problems of visual sensing and scene understanding. As human beings, we solve these problems effortlessly, in real time and with astonishing accuracy. The true difficulty surfaces when we try to design systems that do the same: a step-by-step procedure (algorithm) eludes us, and we must resort to machine learning techniques to automatically find "good" solutions. But this opens a Pandora's box. How do we define a good solution: should we use our own perception on an (unavoidably limited) set of scenarios as the "gold" standard? Will such a solution work in other scenarios? How can we argue for safety? Can we explain the behavior of the system? Does its reasoning process resemble ours in any way? Finally, how do we reduce computational costs without compromising predictive accuracy? In this workshop, we aim to briefly revisit the currently available methods that can help answer these questions. In our journey we shall touch on the three core elements of machine learning: the task (what does the perception system need to solve?), the experience (how does the learning algorithm interact with the world in order to arrive at a good solution?), and the performance measure (how do we provide feedback on what a good solution is?). By presenting rigorous formulations for these elements, the methods we revisit open the path towards a working practical system and partly answer our questions. Finally, as we analyze the merits and trade-offs of state-of-the-art methods, we use the opportunity to highlight open problems and challenges from both theoretical and purely pragmatic perspectives.
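
The task/experience/performance-measure decomposition above can be made concrete with a small training loop. The following is a minimal, hypothetical sketch (not code from the talk), assuming PyTorch is available; the toy dataset, synthetic data, and model are invented for illustration only:

import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class ToyDriveDataset(Dataset):
    """The *experience*: (image, per-pixel label) pairs the learner observes.
    Here the data is synthetic; a real system would use annotated camera frames."""
    def __init__(self, n=64):
        self.x = torch.randn(n, 3, 32, 32)          # toy camera frames
        self.y = torch.randint(0, 4, (n, 32, 32))   # 4 classes, e.g. road/car/person/other
    def __len__(self):
        return len(self.x)
    def __getitem__(self, i):
        return self.x[i], self.y[i]

# The *task*: semantic segmentation -- map every pixel to a class.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 4, 1),                            # per-pixel class logits
)

# The *performance measure*: per-pixel cross-entropy, the feedback signal
# that defines what counts as a "good" solution during training.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for x, y in DataLoader(ToyDriveDataset(), batch_size=8):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

Each of the open questions in the abstract attacks one of these three elements: the gold-standard debate concerns the labels in the experience, safety argumentation concerns the choice of performance measure, and computational cost constrains how the task itself is formulated.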

Bio:

Stefan Mathe is a Senior Embedded Machine Learning Expert at the Bosch Engineering Center Cluj. He obtained his PhD degree from the Department of Computer Science at the University of Toronto. His work focuses on real-time embedded visual perception systems for assisted and autonomous driving, with particular interest in hardware-aware neural network compression, semi-supervised learning, active learning, and explainable AI.


Raoul de Charette: Scene Understanding: Do We Even Need Labels and Data?

Raoul de Charette

Abstract:

Despite their ever-growing sizes, computer vision datasets are doomed to reflect only a tiny fraction of our world. The induced biases raise ethical issues and show that blind reliance on data can have critical outcomes in applications like autonomous driving. In this talk, we will investigate visual scene understanding in the era of large-scale datasets, self-supervised learning, and LLMs. Following our recently published research, we will question our use of machine learning and data for real, open-world scene understanding by navigating through these questions: Are we making the best use of existing datasets? Can we benefit from more knowledge priors? Can vision algorithms perform in the unknown open world?

Bio:

Raoul de Charette is a researcher in the Astra team at Inria Paris, where he leads the Astra-Vision group focusing on robust 2D/3D scene understanding. His research interests are scene understanding with less supervision and adapting vision algorithms to the open world using additional priors. He obtained his PhD in 2012 from Mines Paris (France) and, prior to Inria, worked at Carnegie Mellon University (USA) and the University of Macedonia (Greece). His research has appeared in the top-tier conferences and journals of the field and has been recognized with two best paper awards and several grants.