
Prompt Engineering Techniques

Beyond structuring prompts well, there are specific techniques that dramatically improve how AI models respond. These methods help you get better results without any model fine-tuning.


Zero-Shot Prompting

Definition: Asking the model to perform a task without providing any examples or prior training on that specific topic.

  • How it works: Relies entirely on the model’s pre-existing general knowledge to figure out what to do.
  • When it works best:
    • With larger, more capable Foundation Models (FMs)
  • With models that have undergone Instruction Tuning (often via RLHF, Reinforcement Learning from Human Feedback)

Prompt: “Classify this Git commit message as a bug fix, feature, or refactor: ‘Fixed null pointer exception in user login’”

Result: “Bug fix” (The model figured it out without being shown how)
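
As a minimal sketch, a zero-shot call contains only the task description and the input, with no examples. The snippet below assumes the OpenAI Python SDK (v1+) and a placeholder model name; any chat-style FM API would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Zero-shot: the prompt states the task and the input, but shows no examples.
prompt = (
    "Classify this Git commit message as a bug fix, feature, or refactor: "
    "'Fixed null pointer exception in user login'"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute any capable, instruction-tuned model
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # expected: something like "Bug fix"
```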


Few-Shot Prompting

Definition: Providing the model with a few contextual examples (Input + Desired Output) to guide it.

Type       Description
One-shot   Providing 1 example
Few-shot   Providing multiple examples

  • Quality: Examples must be representative and clear.
  • Quantity: More examples generally help, but too many can confuse the model (introduce noise). Experiment to find the right number.

Prompt:
Classify these error messages:
"Connection timeout" => Network Error
"Invalid JSON format" => Parse Error
"File not found: config.yaml" => ?

Result: “File Error” (The model followed the pattern)
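
A few-shot prompt simply embeds the labeled examples ahead of the new input, so the model completes the pattern. The sketch below reuses the error-message examples above, with the same assumed SDK and placeholder model name as the zero-shot snippet.

```python
from openai import OpenAI

client = OpenAI()

# Few-shot: show the pattern with a handful of input => output pairs,
# then leave the last line for the model to complete.
few_shot_prompt = """Classify these error messages:
"Connection timeout" => Network Error
"Invalid JSON format" => Parse Error
"File not found: config.yaml" => ?"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": few_shot_prompt}],
)

print(response.choices[0].message.content)  # expected: "File Error"
```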


Chain-of-Thought (CoT) Prompting

Definition: A technique used for complex reasoning or multi-step tasks. It encourages the model to break the problem down into intermediate steps rather than guessing the answer immediately.

  • The Magic Phrase: Add “Think step by step” to trigger this behavior.
  • Compatibility: Can be used with Zero-shot OR Few-shot prompting.

Prompt: “Cloud Provider A charges $0.10 per GB with a 100GB minimum. Cloud Provider B charges $0.08 per GB with a 200GB minimum. For 150GB of storage, which is cheaper? Think step by step.”

Result:

  • Provider A: 150 × $0.10 = $15
  • Provider B: 200 × $0.08 = $16 (must pay for minimum 200GB)
  • Answer: Provider A is cheaper for 150GB
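
As a sketch, chain-of-thought only changes the prompt text: the instruction to reason step by step is appended, and asking for the final answer on its own line makes the conclusion easy to parse. The SDK and model name are the same assumptions as in the earlier snippets.

```python
from openai import OpenAI

client = OpenAI()

cot_prompt = (
    "Cloud Provider A charges $0.10 per GB with a 100GB minimum. "
    "Cloud Provider B charges $0.08 per GB with a 200GB minimum. "
    "For 150GB of storage, which is cheaper? Think step by step, "
    "then give the final answer on the last line."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; larger models tend to benefit more from CoT
    messages=[{"role": "user", "content": cot_prompt}],
)

# The reply should contain the intermediate calculations (150 x $0.10 = $15,
# 200 x $0.08 = $16) followed by the conclusion that Provider A is cheaper.
print(response.choices[0].message.content)
```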

Technique         When to Use                            Complexity
Zero-Shot         Simple tasks, capable models           Low
Few-Shot          Pattern-based tasks, specific formats  Medium
Chain-of-Thought  Math, logic, multi-step reasoning      High