Peter Stone
2024
LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning
Zifan Xu | Haozhu Wang | Dmitriy Bespalov | Xian Wu | Peter Stone | Yanjun Qi
Findings of the Association for Computational Linguistics: EMNLP 2024
Chain-of-thought (CoT) prompting is a popular in-context learning (ICL) approach for large language models (LLMs), especially for complex reasoning tasks. Traditional ICL approaches construct prompts from examples whose questions resemble the input question. CoT prompting, however, includes crucial intermediate reasoning steps (rationales) within its examples, so examples should be selected based on these rationales rather than on the questions themselves. Existing methods rely on human experts or pre-trained LLMs to describe the skill, a high-level abstraction of rationales, that guides selection; these methods are often costly and difficult to scale. Instead, this paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales, with a latent variable called a reasoning skill. Concurrently, LaRS learns a reasoning policy that predicts the reasoning skill a given question requires. ICL examples are then selected by matching the reasoning skills of past examples to that of the question. This approach is theoretically grounded and compute-efficient, eliminating the need for auxiliary LLM inference or manual prompt design. Empirical results demonstrate that LaRS consistently outperforms SOTA skill-based selection methods, processing example banks four times faster, halving LLM inferences during the selection stage, and showing greater robustness to sub-optimal example banks.
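To make the selection pipeline concrete, here is a minimal sketch, not the paper's implementation: `embed` is a hypothetical stand-in for any sentence encoder, PCA over rationale embeddings stands in for the learned latent-variable model, and the reasoning policy is approximated by projecting the question embedding into the same skill space.

```python
# Hypothetical sketch of LaRS-style example selection (illustrative names,
# not from the paper's code release).
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a sentence encoder: deterministic pseudo-embedding."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(dim)

def fit_skill_space(rationale_vecs: np.ndarray, n_skills: int = 8) -> np.ndarray:
    """Unsupervised latent space over rationales; PCA is a cheap proxy for
    the learned latent-variable model described in the abstract."""
    centered = rationale_vecs - rationale_vecs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_skills]                      # top latent directions

def select_examples(question, bank, projection, k=4):
    """Rank bank examples by cosine similarity between each rationale's
    skill vector and the skill predicted for the question."""
    q_skill = projection @ embed(question)    # reasoning-policy proxy
    def score(pair):
        r_skill = projection @ embed(pair[1])
        return float(r_skill @ q_skill /
                     (np.linalg.norm(r_skill) * np.linalg.norm(q_skill)))
    return sorted(bank, key=score, reverse=True)[:k]

bank = [(f"question {i}", f"rationale {i}") for i in range(20)]
proj = fit_skill_space(np.stack([embed(r) for _, r in bank]))
for q, r in select_examples("a new input question", bank, proj, k=3):
    print(q, "->", r)
```

The key design point the sketch preserves is that similarity is computed over rationale-derived skill vectors, not over the questions themselves.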
2020
Learning and Reasoning for Robot Dialog and Navigation Tasks
Keting Lu | Shiqi Zhang | Peter Stone | Xiaoping Chen
Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue
Reinforcement learning and probabilistic reasoning algorithms aim, respectively, at learning from interaction experiences and at reasoning with probabilistic contextual knowledge. In this research, we develop algorithms for robot task completion that exploit the complementary strengths of reinforcement learning and probabilistic reasoning. The robots learn from trial-and-error experiences to augment their declarative knowledge base, and the augmented knowledge can speed up learning in potentially different tasks. We implemented and evaluated the developed algorithms on mobile robots conducting dialog and navigation tasks. The results show that the robot's performance can be improved both by reasoning with human knowledge and by learning from task-completion experience. More interestingly, the robot was able to learn from navigation tasks to improve its dialog strategies.
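The learning-and-reasoning loop can be sketched in a few lines; the corridor task, the (state, action) knowledge-base format, and all constants below are illustrative assumptions, not the paper's robot system.

```python
# Toy sketch of the described loop: a declarative knowledge base seeds the
# Q-table (reasoning speeds learning), and completed episodes write new
# facts back into it (learning augments the knowledge base).
import random

random.seed(0)
N = 6                      # 1-D corridor; goal at state N-1
kb = {(2, +1), (3, +1)}    # prior "facts": (state, recommended action)

Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}
for s, a in kb:
    Q[(s, a)] = 1.0        # prior knowledge biases early exploration

def step(s, a):
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else -0.01)

for episode in range(200):
    s, trace = 0, []
    while s != N - 1:
        a = max((-1, +1), key=lambda x: Q[(s, x)]) if random.random() > 0.2 \
            else random.choice((-1, +1))      # epsilon-greedy
        s2, r = step(s, a)
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, -1)], Q[(s2, +1)]) - Q[(s, a)])
        trace.append((s, a))
        s = s2
    kb.update(trace)       # trial-and-error experience augments the KB

print(sorted(kb))
```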
2018
Learning a Policy for Opportunistic Active Learning
Aishwarya Padmakumar | Peter Stone | Raymond Mooney
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Active learning identifies the data points to label that are expected to be most useful for improving a supervised model. Opportunistic active learning incorporates active learning into interactive tasks that constrain the queries possible during an interaction. Prior work has shown that opportunistic active learning can improve the grounding of natural language descriptions in an interactive object retrieval task. In this work, we use reinforcement learning in such an object retrieval task to learn a policy that effectively trades off task completion against model improvements that would benefit future tasks.
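A minimal sketch of the query-or-guess trade-off follows; the simulated accuracy dynamics, query cost, and reward values are assumptions for illustration, not the paper's experimental setup.

```python
# Q-learned policy choosing per turn between "guess" the target (ends the
# task, rewarded if correct) and "query" a label (small cost now, better
# simulated model accuracy later) -- the trade-off described above.
import random

random.seed(1)
ACTIONS = ("guess", "query")
Q = {(u, a): 0.0 for u in range(11) for a in ACTIONS}  # u = uncertainty bucket

def run_episode(eps=0.1):
    acc = 0.5                                  # simulated model accuracy
    total, turn = 0.0, 0
    while turn < 10:
        u = round((1 - acc) * 10)              # state: discretized uncertainty
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: Q[(u, x)])
        if a == "guess":
            r = 1.0 if random.random() < acc else -1.0   # task completion
            Q[(u, a)] += 0.1 * (r - Q[(u, a)])           # terminal update
            return total + r
        r = -0.1                               # query cost; model improves
        acc = min(0.95, acc + 0.05)
        u2 = round((1 - acc) * 10)
        best_next = max(Q[(u2, x)] for x in ACTIONS)
        Q[(u, a)] += 0.1 * (r + 0.9 * best_next - Q[(u, a)])
        total += r
        turn += 1
    return total

returns = [run_episode() for _ in range(5000)]
print("mean return, last 500 episodes:", sum(returns[-500:]) / 500)
```

The learned policy ends up querying while uncertainty is high and guessing once accuracy makes task completion worth more than further model improvement.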