- A prompting technique in Large Language Models (LLMs) where the model is guided to show intermediate reasoning steps before arriving at the final answer.
- Inspired by how humans solve problems step by step.
- Helps LLMs handle complex reasoning tasks such as math, logic, and multi-step decision-making.
Key Concepts
- Step-by-Step Reasoning: Instead of jumping to an answer, the model explains its thought process.
- Intermediate Steps: Similar to “showing work” in math problems.
- Better Accuracy: Effective in arithmetic, logical reasoning, and multi-hop questions.
- Prompt Example: “Let’s think step by step.” (see the sketch below)
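A rough sketch of the idea in Python (the example question and exact prompt wording are illustrative assumptions, not a fixed recipe): the only change CoT makes is to the prompt itself.

```python
# Illustrative question; any reasoning problem would do.
question = "A shop sells pens for 3 dollars each. How much do 5 pens cost?"

# Direct prompt: asks only for the final answer.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompt: the trigger phrase asks the model to show its work first.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

print(cot_prompt)
```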
Benefits
- Improves reasoning accuracy.
- Makes the model’s output more interpretable.
- Reduces errors in multi-step problems.
- Useful in education, programming, and decision support.
Limitations
- Verbosity: Responses may become too long.
- Overthinking: The model may generate unnecessary steps.
- Hallucinations: Incorrect reasoning may still appear logical.
- Resource Usage: Longer responses mean higher compute cost.
Variants of Chain of Thought
- Zero-Shot CoT
  - Instructs the model to reason step by step without providing examples.
  - Typically uses trigger phrases like “Let’s think step by step.”
  - Example:
    Q: A farmer has 17 sheep. All but nine run away. How many are left?
    Reasoning: 17 − (17 − 9) = 9.
    Answer: 9 sheep remain.
  - Effective for quick reasoning tasks (see the call sketch after this list).
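A minimal zero-shot CoT sketch in Python, assuming a chat-style message format; `call_llm` is a hypothetical placeholder for whatever LLM API is in use, not a real library function.

```python
# Hypothetical stand-in for whatever chat-style LLM API is available;
# swap in a real client call here.
def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError("Replace with an actual LLM API call.")

def zero_shot_cot(question: str) -> str:
    # The trigger phrase nudges the model to write out its intermediate
    # reasoning before stating the final answer.
    messages = [
        {"role": "user", "content": f"{question}\nLet's think step by step."},
    ]
    return call_llm(messages)

# Example usage, with the question from the notes above:
# zero_shot_cot("A farmer has 17 sheep. All but nine run away. How many are left?")
```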
- Few-Shot CoT
  - Provides examples of problems with reasoning steps before presenting the actual question.
  - Helps the model learn the reasoning format (see the sketch after this list).
  - Example:
    Q: A car travels 180 km in 3 hours. What is its average speed?
    Reasoning: distance ÷ time = 180 ÷ 3 = 60.
    Answer: 60 km/h.
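A minimal few-shot CoT prompt sketch, reusing the worked example above as the demonstration; the follow-up question and the exact prompt formatting are illustrative assumptions.

```python
# One worked demonstration (taken from the notes above), followed by the new question.
FEW_SHOT_EXAMPLE = (
    "Q: A car travels 180 km in 3 hours. What is its average speed?\n"
    "Reasoning: distance ÷ time = 180 ÷ 3 = 60.\n"
    "Answer: 60 km/h.\n"
)

def few_shot_cot_prompt(question: str) -> str:
    # The demonstration shows the reasoning format the model is expected to imitate.
    return f"{FEW_SHOT_EXAMPLE}\nQ: {question}\nReasoning:"

# Hypothetical follow-up question, purely for illustration.
print(few_shot_cot_prompt("A train travels 240 km in 4 hours. What is its average speed?"))
```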
When to Use CoT
- Solving math or word problems.
- Logic puzzles and reasoning-heavy tasks.
- Multi-step planning (coding, workflows, business decisions).
- Explaining the why/how behind a choice.
When Not to Use CoT
- Simple fact lookups.
- Strict latency/cost constraints.
- Scenarios where verbose rationales are undesirable.