A prompting technique that instructs language models to break down complex problems into intermediate reasoning steps before producing a final answer. Chain-of-thought improves accuracy on reasoning tasks and makes model behavior more interpretable, since the intermediate steps can be inspected.
Chain-of-thought prompting supplies examples or instructions that demonstrate step-by-step reasoning. Instead of directly outputting an answer, the model generates intermediate reasoning steps that lead to the conclusion. This explicit reasoning process helps the model handle multi-step problems, reduces errors from skipped logic, and makes the reasoning process auditable.
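To make this concrete, here is a minimal sketch of few-shot CoT prompt construction in Python. The `build_cot_prompt` helper and the exemplar are illustrative assumptions, not tied to any particular model or API:

```python
# A minimal sketch of few-shot chain-of-thought prompt construction; the
# exemplar question and reasoning are illustrative placeholders.

FEW_SHOT_EXEMPLARS = [
    {
        "question": "A cafeteria had 23 apples. It used 20 and bought 6 more. How many does it have?",
        "reasoning": "The cafeteria started with 23 apples, used 20, leaving 23 - 20 = 3, then bought 6 more, giving 3 + 6 = 9.",
        "answer": "9",
    },
]

def build_cot_prompt(question: str) -> str:
    """Format exemplars with explicit reasoning steps, then append the new question."""
    parts = []
    for ex in FEW_SHOT_EXEMPLARS:
        parts.append(f"Q: {ex['question']}\nA: {ex['reasoning']} The answer is {ex['answer']}.\n")
    parts.append(f"Q: {question}\nA:")  # the model continues with its own reasoning steps
    return "\n".join(parts)

print(build_cot_prompt("Roger has 5 tennis balls and buys 2 cans of 3 balls each. How many balls does he have?"))
```

Because each exemplar ends with a fixed phrase ("The answer is ..."), the final answer can later be extracted from the model's output with a simple pattern match.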
Common approaches include few-shot CoT (providing worked reasoning examples in the prompt), zero-shot CoT (appending a trigger phrase such as "Let's think step by step"), and tree-of-thought (exploring and evaluating multiple reasoning paths). Self-consistency samples multiple reasoning chains and selects the most common final answer, as in the sketch below. CoT significantly improves performance on math, logic, and multi-hop reasoning tasks. The technique works best with larger models (tens of billions of parameters or more) that have sufficient capacity for explicit reasoning.
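As a sketch of self-consistency combined with the zero-shot CoT trigger, the snippet below samples several reasoning chains, extracts each final answer, and takes a majority vote. The `generate` function is a hypothetical stand-in for a real sampling-capable model call, stubbed here with canned chains so the example runs end to end:

```python
import random
import re
from collections import Counter

def generate(prompt: str, temperature: float = 0.7) -> str:
    # Hypothetical stand-in for a model call sampled at non-zero temperature;
    # the canned chains below exist only so the example is runnable.
    return random.choice([
        "5 + 2 * 3 = 5 + 6 = 11. The answer is 11.",
        "First 2 * 3 = 6, then 5 + 6 = 11. The answer is 11.",
        "5 + 2 = 7, and 7 * 3 = 21. The answer is 21.",  # a faulty chain
    ])

def extract_answer(chain: str) -> str | None:
    # Pull the final answer from a chain ending in "The answer is X."
    match = re.search(r"The answer is\s+(.+?)\.?\s*$", chain.strip())
    return match.group(1) if match else None

def self_consistency(question: str, num_samples: int = 10) -> str | None:
    prompt = f"Q: {question}\nA: Let's think step by step."  # zero-shot CoT trigger
    answers = [extract_answer(generate(prompt)) for _ in range(num_samples)]
    answers = [a for a in answers if a is not None]
    # Majority vote over final answers filters out occasional faulty chains.
    return Counter(answers).most_common(1)[0][0] if answers else None

print(self_consistency("What is 5 + 2 * 3?"))  # usually prints "11"
```

The vote is taken over final answers, not over the chains themselves, so different reasoning paths that converge on the same result reinforce each other.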