+abstract: "It is widely assumed in Pragmatics that when understanding and generating language, people analyze and formulate intentions, namely what the speaker aims to do with their words. In this talk, I will present our initial investigation into how to endow LLMs with the same ability. As a first step, we have explored ARR, an intuitive and effective zero-shot prompting method that explicitly incorporates three key steps in answering questions: Analyzing the intent of the question, Retrieving relevant information, and Reasoning step by step. In comprehensive experiments across diverse and challenging question-answering tasks, we demonstrate that ARR consistently outperforms the popular Chain-of-Thought technique, with intent analysis playing a vital role in the process. While ARR is about an LLM paying attention to the intentions behind a question, in a second line of work we introduce the concept of Speaking with Intent (SWI), where the LLM is explicitly prompted to generate the intent behind every sentence it produces. Our hypothesis is that this provides high-level planning to guide subsequent analysis and communication. Empirically, we show that SWI enhances the reasoning capabilities and generation quality of LLMs on both reasoning-intensive question-answering and text-summarization benchmarks. Overall, ARR and SWI are just initial steps toward making LLMs more intentional and therefore more rational, transparent, and safe."