[NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
An awesome-list repository and a comprehensive survey on the interpretability of LLM attention heads.
A collection of ICLR 2024 papers and open-source projects.
SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights
[EMNLP 2024] Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs.
[EMNLP 2024] This is the official implementation of the paper "A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners" in PyTorch.
We introduce a benchmark for testing how well LLMs can find vulnerabilities in cryptographic protocols. By combining LLMs with symbolic reasoning tools like Tamarin, we aim to improve the efficiency and thoroughness of protocol analysis, paving the way for future AI-powered cybersecurity defenses.
Data and software artifacts for the EMNLP 2024 (Main) paper "What Are the Odds? Language Models Are Capable of Probabilistic Reasoning"
An implementation of Chain-of-Thought (CoT) prompting using guidance-ai.
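The core idea behind a CoT implementation like the one above can be sketched in plain Python, independent of the guidance-ai API: prepend a "think step by step" instruction so the model emits its reasoning before the answer, then parse the final answer out of the completion. The function names and answer-line convention here are illustrative assumptions, not the repository's actual code.

```python
def cot_prompt(question: str) -> str:
    """Build a zero-shot chain-of-thought prompt for an LLM.

    The 'Let's think step by step' cue elicits intermediate reasoning;
    the 'Answer:' convention makes the final answer easy to extract.
    """
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on a line starting with 'Answer:'.\n"
    )


def extract_answer(completion: str) -> str:
    """Pull the final answer out of a CoT-style completion."""
    for line in completion.splitlines():
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    # Fall back to the whole completion if no marker line is found.
    return completion.strip()


# Hypothetical usage: the completion string stands in for a model response.
prompt = cot_prompt("If a train travels 60 km in 45 minutes, what is its speed in km/h?")
reply = "The train covers 60 km in 0.75 h, so 60 / 0.75 = 80.\nAnswer: 80 km/h"
print(extract_answer(reply))
```

Libraries such as guidance-ai add structured generation on top of this pattern (constraining the model's output format), but the prompt-then-parse loop is the same.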