Monday, November 9, 2020 - 04:00 pm
Date and time: Monday, Nov 9, 2020; 4:00-5:00 pm

Abstract: Natural Language Inference (NLI) is a fundamental task in natural language processing, particularly because it evaluates the reasoning ability of models. Most approaches to the problem use only the textual content present in the training data; the use of knowledge graphs for NLI remains underexplored. In this presentation, I will detail two novel approaches that harness ConceptNet as a knowledge base for Natural Language Inference. The framework underlying both approaches selects relevant information from the knowledge graph and encodes it to augment text-based models. The first approach selects concepts mentioned in the text and shows how knowledge graph embeddings of these concepts can augment the text-based embeddings. However, it ignores the primary issue of noise from knowledge graphs when selecting relevant information. The second approach builds on the first by alleviating this noise and using graph convolutional networks to consider not only the concepts mentioned in the text but also their neighborhood structure. Overall, we show that knowledge graphs can augment existing text-based NLI models while being more robust than text-only models.

Bio: Pavan Kapanipathi is a Research Staff Member in the AI-foundations reasoning group at IBM Research. He is broadly interested in Knowledge Graphs, the Semantic Web, Reasoning, and Natural Language Processing. He graduated with a PhD from Wright State University in 2016. Pavan Kapanipathi had a winning entry in the open track of the Triplification Challenge at I-Semantics and received a best paper award at the MRQA workshop at ACL 2018. He has served as a Program Committee member of prominent AI, NLP, and Web conferences.

Blackboard link: https://us.bbcollab.com/guest/4bae3374fe194ee0a0fd2ef232d48aec
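The first approach described in the abstract, augmenting a text-based encoding with knowledge graph embeddings of concepts mentioned in the text, can be sketched roughly as follows. This is a minimal illustrative sketch, not the presenter's actual implementation: the concept lookup table, the stand-in text encoder, and all dimensions are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical KG embeddings (e.g., from ConceptNet) for a few concepts, dim 4.
kg_embeddings = {
    "dog": rng.standard_normal(4),
    "animal": rng.standard_normal(4),
    "park": rng.standard_normal(4),
}

def text_encoding(tokens, dim=8):
    """Stand-in for a text encoder (e.g., a sentence vector from an LSTM/Transformer)."""
    seed = abs(hash(" ".join(tokens))) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

def augmented_encoding(tokens):
    """Concatenate the text vector with the mean KG embedding of mentioned concepts."""
    text_vec = text_encoding(tokens)
    concepts = [kg_embeddings[t] for t in tokens if t in kg_embeddings]
    if concepts:
        kg_vec = np.mean(concepts, axis=0)
    else:
        kg_vec = np.zeros(4)  # no concept matched: fall back to zeros
    return np.concatenate([text_vec, kg_vec])

premise = ["a", "dog", "runs", "in", "the", "park"]
vec = augmented_encoding(premise)
print(vec.shape)  # 8 text dims + 4 KG dims
```

The combined vector would then feed a downstream NLI classifier; the talk's second approach would replace the simple mean over matched concepts with a graph convolutional network over the concepts' neighborhood in the graph.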