
Prompt Engineering Fundamentals

Improving how we prompt a Foundation Model (FM) is the fastest way to control Generative AI. By adjusting our inputs (questions/instructions), we can change the behavior of the model without needing to retrain it.

[Infographic: Prompt Engineering Fundamentals]

  • Quality: Better inputs lead to higher-quality outputs.
  • No Fine-Tuning Needed: You can equip the model with domain-specific knowledge or tools without touching the model parameters.
  • Safety: Good prompts can bolster safety measures and reduce errors.
  • Discovery: It helps us fully understand the potential of the model.

A helpful mnemonic for crafting effective prompts comes from “The CLEAR Path: A Framework for Enhancing Information Literacy through Prompt Engineering” by Leo S. Lo. The framework outlines five principles:

| Principle | Description |
| --- | --- |
| Concise | Strip unnecessary words and focus on essential information. |
| Logical | Structure prompts with clear flow and natural progression. |
| Explicit | Specify exact output format, scope, and content requirements. |
| Adaptive | Iterate and refine prompts based on AI responses. |
| Reflective | Continuously evaluate outputs for accuracy and relevance. |
| Principle | Instead of… | Try… |
| --- | --- | --- |
| Concise | "Can you provide me with a detailed explanation of how REST APIs work?" | "Explain REST APIs and their core principles" |
| Logical | Random, unstructured requests | "List the steps to deploy a Node.js app, from setup to production" |
| Explicit | "Tell me about Docker" | "Provide a concise overview of Docker, covering containers, images, and basic commands" |
| Adaptive | Accepting vague results | If "Discuss cloud computing" yields generic output, try "Compare AWS Lambda vs Azure Functions for serverless backends" |
| Reflective | Moving on without evaluation | After receiving content, assess quality and adjust subsequent prompts |

The framework was designed to help develop critical thinking skills for evaluating and creating AI-generated content—a skill valuable for developers and non-developers alike.


A robust prompt typically contains four specific components. Including all of them drastically improves results.

| Element | Description |
| --- | --- |
| Instructions | The specific task description. What do you want the model to do? |
| Context | External information or background. Why are we doing this? Who is it for? |
| Input Data | The content that needs to be processed. What data are we analyzing? |
| Output Indicator | The desired format. How do you want the answer to look? |
  • Instruction: “Given a list of API error logs, categorize each error by severity level.”
  • Context: “This task helps the DevOps team prioritize bug fixes for a web application.”
  • Input Data: [List of Error Logs] with timestamps and error codes
  • Output Indicator: “Error Category:” (Signals the start of the response)
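The four elements above can be assembled mechanically. A minimal sketch in Python, using the error-log example; the two sample log lines are invented for illustration:

```python
def build_prompt(instruction, context, input_data, output_indicator):
    """Combine the four prompt elements in a consistent order,
    separated by blank lines."""
    return "\n\n".join([instruction, context, input_data, output_indicator])

prompt = build_prompt(
    instruction="Given a list of API error logs, categorize each error by severity level.",
    context="This task helps the DevOps team prioritize bug fixes for a web application.",
    # Hypothetical log lines standing in for [List of Error Logs]:
    input_data="2024-05-01T12:00:00Z ERR_500 Internal Server Error\n"
               "2024-05-01T12:01:10Z ERR_404 Not Found",
    output_indicator="Error Category:",
)
print(prompt)
```

Ending the prompt with the output indicator ("Error Category:") nudges the model to begin its reply in exactly that format.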

Negative prompting takes the opposite approach: sometimes it is easier to tell the model what not to do than to explain what it should do.

  • Definition: Providing constraints or examples of what should be excluded.
  • Goal: To steer the model away from unwanted behaviors, toxic content, hate speech, or bias.
  • Example: “Write a README for this Python library, but do not include installation steps or license information.”
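A negative prompt can be built by appending an exclusion clause to the base instruction. A small sketch reproducing the README example:

```python
def with_exclusions(base_instruction, exclusions):
    """Append a 'do not include' clause listing unwanted content."""
    return base_instruction + " Do not include " + " or ".join(exclusions) + "."

prompt = with_exclusions(
    "Write a README for this Python library.",
    ["installation steps", "license information"],
)
print(prompt)
# -> Write a README for this Python library. Do not include installation steps or license information.
```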

The Prompt: “Generate documentation for my API.”

| Element | Status |
| --- | --- |
| Instructions | ✅ Present — the model will try to answer |
| Context | ❌ Missing — What kind of API? Who is the audience? |
| Input Data | ❌ Missing — What endpoints? What parameters? |
| Output Indicator | ❌ Missing — Format? Structure? |

Result: The output will be generic, low-quality, and likely irrelevant to your specific API.


While Foundation Models (FMs) are capable, their output quality relies heavily on the prompt. Modifying prompts and adjusting inference parameters allows you to unlock the full potential of the model without fine-tuning.

[Infographic: Modifying Prompts for Better AI Outputs]


Inference parameters are settings you configure before sending the prompt. They control the “style” of the response (Randomness/Diversity) and the “size” of the response (Length).

Randomness and diversity parameters change how the model selects the next word (token).

| Parameter | Description |
| --- | --- |
| Temperature | Controls the “creativity” of the model. Low = deterministic, safe. High = creative, diverse. |
| Top P (Nucleus Sampling) | Limits word choices to a cumulative probability (e.g., top 90%). |
| Top K | Limits the choice to the top K most probable words. Low (10) = focused. High (500) = diverse. |

Length parameters control when the model stops writing.

| Parameter | Description |
| --- | --- |
| Maximum Length | The hard limit on tokens generated. Set low for summaries, high for detailed docs. |
| Stop Sequences | Special words/symbols that force the model to stop immediately. |

Example: In a coding assistant, "```" might be a stop sequence so the AI stops after completing a code block.
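To make Temperature, Top K, and Top P concrete, here is a toy next-token sampler in plain Python. It is a sketch of the sampling logic itself, not any provider's implementation, and the candidate tokens and scores are invented:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None, seed=None):
    """Toy sampler showing how the three parameters shape token selection."""
    # Temperature rescales the raw scores: low values sharpen the
    # distribution (more deterministic), high values flatten it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax to probabilities, sorted most-probable first.
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = sorted(((t, e / total) for t, e in exps.items()), key=lambda x: -x[1])
    # Top K: keep only the K most probable tokens.
    if top_k is not None:
        probs = probs[:top_k]
    # Top P: keep the smallest set whose cumulative probability reaches top_p.
    if top_p is not None:
        kept, cumulative = [], 0.0
        for tok, p in probs:
            kept.append((tok, p))
            cumulative += p
            if cumulative >= top_p:
                break
        probs = kept
    # Renormalize over the surviving tokens and sample one.
    rng = random.Random(seed)
    r = rng.random() * sum(p for _, p in probs)
    for tok, p in probs:
        r -= p
        if r <= 0:
            return tok
    return probs[-1][0]

logits = {"the": 5.0, "a": 3.0, "banana": 0.5}  # invented scores
print(sample_next_token(logits, temperature=0.1, top_k=1))  # top_k=1 leaves one choice: "the"
```

With `top_k=1` only the single most probable token survives, so the output is fully deterministic; raising temperature or loosening `top_k`/`top_p` widens the pool and reintroduces randomness.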


Beyond parameters, the actual text of the prompt matters most.

| Guideline | Bad Example | Good Example |
| --- | --- | --- |
| Be Clear and Concise | "Parse data… JSON…" | "Convert this CSV file to JSON format." |
| Include Context | "Debug this code." | "Debug this Python function that handles user authentication." |
| Use Directives | | "Provide the response as a numbered list." |
| Consider Output in Prompt | | "…Return only the function name, nothing else." |
  • Start with an Interrogation: Use Who, What, Where, When, Why, How.
  • Provide Examples (Few-Shot Prompting): Show the model what you want.
    • Example: input: "bug fixed" => commit_type: fix
  • Break Up Complex Tasks: Split into subtasks. Ask the model to “think step by step” (Chain of Thought).
  • Experiment: Try different phrasings.
  • Use Prompt Templates: Standardize your prompts with placeholders for consistency.
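Two of the techniques above, few-shot prompting and prompt templates, combine naturally. A sketch that extends the `"bug fixed" => commit_type: fix` example with a second invented demonstration and a placeholder for the real input:

```python
from string import Template

# Reusable few-shot template: fixed examples plus a $message placeholder.
COMMIT_TEMPLATE = Template(
    "Classify the commit message into a commit type.\n"
    'input: "bug fixed" => commit_type: fix\n'
    'input: "add dark mode toggle" => commit_type: feat\n'
    'input: "$message" => commit_type:'
)

prompt = COMMIT_TEMPLATE.substitute(message="update README badges")
print(prompt)
```

Keeping the examples in a shared template means every call to the model is phrased identically, which makes outputs easier to compare and debug.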

Scenario: Improving a Technical Documentation Prompt


The Original Prompt: “Generate documentation for my REST API.”

Parameters:

  • Temperature (0.3): Low setting for consistent, accurate output
  • Max Length (3,000): Allows for detailed documentation

The Text:

“Generate comprehensive API documentation for a REST API that handles user authentication. The audience is frontend developers integrating with this API. Structure the documentation with: Overview, Authentication Flow, Endpoints List, Request/Response Examples, Error Codes, and Rate Limits. Use a professional, developer-friendly tone.”

| Improvement | What was added |
| --- | --- |
| Context | "REST API for user authentication" + "Frontend developers" audience |
| Directives | Explicitly listed the 6 sections required |
| Tone | Specified "Professional, developer-friendly" |
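Putting the scenario together, the improved prompt travels with its inference parameters. A sketch of the request; `call_model` is a stand-in for whatever SDK or HTTP client you actually use, and the config key names are illustrative:

```python
IMPROVED_PROMPT = (
    "Generate comprehensive API documentation for a REST API that handles "
    "user authentication. The audience is frontend developers integrating "
    "with this API. Structure the documentation with: Overview, Authentication "
    "Flow, Endpoints List, Request/Response Examples, Error Codes, and Rate "
    "Limits. Use a professional, developer-friendly tone."
)

generation_config = {
    "temperature": 0.3,  # low: consistent, accurate output
    "max_tokens": 3000,  # room for detailed documentation
}

def call_model(prompt, config):
    """Placeholder for a real model call (e.g., an HTTP request to your provider).
    Here it just echoes the request payload."""
    return {"prompt": prompt, **config}

request = call_model(IMPROVED_PROMPT, generation_config)
print(request["temperature"], request["max_tokens"])
```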