
Prompt Engineering

prompt-engineering ai llm techniques
Plain English

Prompt engineering is the skill of asking AI the right way. The same question phrased differently can produce dramatically different results. A vague prompt like “help me with my network” gets a vague answer, while “Diagnose why VLAN 10 devices on ports 1-8 cannot reach the gateway at 10.10.10.1 on a Cisco 2960X” gets an actionable troubleshooting guide. Prompt engineering is about structuring your instructions clearly, providing relevant context, and specifying the format and depth you want.

Technical Definition

Prompt engineering is the discipline of designing inputs (prompts) to language models to elicit desired outputs. It encompasses techniques for controlling content, format, reasoning depth, and reliability of model responses.

Core techniques:

  • Role assignment: define the model’s persona and expertise (“You are a senior network engineer with 15 years of Cisco experience”)
  • Few-shot prompting: provide examples of desired input/output pairs in the prompt to establish the pattern
  • Chain of thought (CoT): instruct the model to reason step by step before answering, improving accuracy on complex tasks (“Think through this step by step”)
  • Output constraints: specify format (JSON, markdown, table), length, and structure requirements
  • System prompts: persistent instructions that frame all subsequent interactions (separate from user messages in API calls)
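
The role-assignment and system-prompt techniques above can be sketched as a message list in the OpenAI-style chat format; `build_messages` is a hypothetical helper, not a library function:

```python
# Sketch: a persistent system prompt (role + expertise) kept separate from
# user messages, assuming an OpenAI-style chat-completions message format.
system_prompt = (
    "You are a senior network engineer with 15 years of Cisco experience. "
    "Answer with concrete IOS commands and explain each step."
)

def build_messages(user_question, history=None):
    """Assemble the message list sent on every API call; the system prompt
    frames all subsequent turns."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_question})
    return messages

msgs = build_messages("Why can't VLAN 10 hosts reach the gateway at 10.10.10.1?")
```

The same `system_prompt` is re-sent with every request, which is what makes it "persistent" despite stateless API calls.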

Advanced techniques:

  • Metaprompting: using one LLM to generate or refine prompts for another
  • Self-consistency: generate multiple responses and select the most common answer
  • Structured output: force JSON schema compliance using tool-use or constrained decoding
  • Retrieval-augmented prompting: inject relevant context from external sources before the question
  • Prompt chaining: break complex tasks into sequential subtasks, each with its own optimized prompt
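
Self-consistency is simple to sketch: sample several answers and keep the majority. Here `generate` is a stand-in that cycles through canned responses; a real implementation would call the model repeatedly at temperature > 0:

```python
# Minimal self-consistency sketch: sample n answers, return the most common.
from collections import Counter
from itertools import cycle

# Placeholder samples standing in for repeated LLM calls at temperature > 0.
_samples = cycle(["/24", "/24", "/25", "/24", "/24"])

def generate(prompt):
    # Hypothetical stub; real code would call an LLM API here.
    return next(_samples)

def self_consistent_answer(prompt, n_samples=5):
    answers = [generate(prompt) for _ in range(n_samples)]
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

answer = self_consistent_answer("Convert 255.255.255.0 to a prefix length.")
```

Majority voting filters out occasional reasoning slips at the cost of n model calls per question.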

Anti-patterns:

  • Vague instructions without success criteria
  • Conflicting requirements in the same prompt
  • Over-constraining (too many rules stifle useful output)
  • Assuming the model remembers previous conversations (stateless API calls)
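
The last anti-pattern deserves emphasis: chat APIs are stateless, so "memory" is just the history you re-send. A minimal sketch (`send` is a hypothetical stand-in that only reports what the model would see):

```python
# Sketch: the model only "remembers" the messages included in this request.
def send(messages):
    # Hypothetical stub; a real call would POST this list to the API.
    return len(messages)

history = [
    {"role": "user", "content": "What subnet mask is 10.10.10.0/24?"},
    {"role": "assistant", "content": "255.255.255.0, usable hosts .1-.254."},
]

# A follow-up must carry the earlier turns, or "that subnet" is meaningless.
followup = history + [
    {"role": "user", "content": "How many usable hosts in that subnet?"}
]
turns_seen = send(followup)
```

Forgetting to resend history is the most common cause of "the model ignored my earlier instructions" bugs.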

Temperature as a prompt parameter:

  • 0: greedy decoding, effectively deterministic; best for factual/structured tasks
  • 0.3-0.7: balanced creativity and consistency
  • 1.0+: maximum creativity, higher hallucination risk
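
Why temperature behaves this way can be seen from a temperature-scaled softmax over next-token scores: low temperature sharpens the distribution toward the top token, high temperature flattens it so unlikely (hallucination-prone) tokens get picked more often. An illustrative sketch with made-up logits:

```python
# Temperature-scaled softmax over hypothetical next-token logits.
import math

def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]                 # made-up scores for three tokens
low = softmax_with_temperature(logits, 0.1)   # nearly all mass on token 0
high = softmax_with_temperature(logits, 2.0)  # much flatter distribution
```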

Prompt Engineering Patterns

# Bad prompt: vague, no context, no format
bad = "Tell me about security"

# Good prompt: specific role, context, format, constraints
good = """You are a cybersecurity analyst reviewing firewall rules.

Given these iptables rules:
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 3306 -j ACCEPT
-A INPUT -j DROP

Identify security issues. For each issue:
1. State the rule and the risk
2. Rate severity (Critical/High/Medium/Low)
3. Provide the corrected rule

Format as a markdown table."""

# Few-shot example for consistent formatting
few_shot = """Convert network descriptions to CIDR notation.

Example: "The engineering VLAN uses 10.10.10.0 with a 255.255.255.0 mask"
Answer: 10.10.10.0/24

Example: "Management network is 172.16.0.0 with 255.255.240.0"
Answer: 172.16.0.0/20

Now convert: "Guest Wi-Fi uses 192.168.100.0 with 255.255.255.128"
Answer:"""
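
Prompt chaining, the last advanced technique, can be sketched the same way: each subtask gets its own focused prompt, and each output feeds the next. `run` is a hypothetical LLM-call stand-in:

```python
# Sketch of prompt chaining: extract -> diagnose -> report, one prompt per step.
def run(prompt):
    # Hypothetical stub; real code would send the prompt to a model.
    return f"<model output for: {prompt[:40]}...>"

def chain(log_excerpt):
    # Step 1: extract the facts from raw input.
    facts = run(f"List every IP, port, and error from these logs:\n{log_excerpt}")
    # Step 2: diagnose using only the extracted facts.
    diagnosis = run(f"Given these findings, identify the root cause:\n{facts}")
    # Step 3: produce the final, formatted report.
    return run(f"Write a markdown incident summary of:\n{diagnosis}")

report = chain("Feb 12 10:01 sshd: refused connect from 203.0.113.7")
```

Each step's prompt can be tuned and tested independently, which is harder when one monolithic prompt does everything.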

In the Wild

Prompt engineering has become a core skill for IT professionals, not just AI engineers. System administrators use it to generate Ansible playbooks, Terraform configurations, and troubleshooting scripts. Security analysts use it to analyze logs, summarize threat intelligence, and draft incident reports. The key insight is that LLMs respond dramatically better to structured, specific prompts with context and examples. In production AI systems, prompts are version-controlled, tested, and iterated like code. Tools like LangSmith, Braintrust, and Promptfoo enable systematic prompt evaluation against test datasets. The field is evolving rapidly: as models improve, some prompting techniques (like chain of thought) become less necessary, while others (like structured output and tool use) become more important.