Prompt Engineering Is Dead — Context Engineering Is What Matters

February 9, 2026 · ScaledByDesign
ai · llm · context-engineering · production

The Prompt That Worked in January Broke in March

A client came to us with a "prompt engineering problem." Their customer support AI had been working great for two months, then started giving wrong answers. They'd spent three weeks tweaking the prompt — adding more instructions, more examples, more edge cases. The prompt was now 4,000 tokens long and still failing.

Sound familiar? If you've spent any time shipping AI into production, you've probably hit this exact wall. You keep adding instructions, the prompt keeps growing, and somehow the model gets worse.

The prompt wasn't the problem. The architecture was.

What Prompt Engineering Actually Is

Let's be clear: prompt engineering isn't useless. Writing good instructions for a language model is a real skill, and it matters. But here's the thing — it's the smallest piece of a production AI system. Treating it as the whole solution is like treating the SQL query as your entire database strategy.

And yet, that's exactly what most teams do.

Here's what a typical "prompt-engineered" system looks like:

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const prompt = `You are a helpful customer support agent for Acme Corp.
You sell widgets in three sizes: small ($10), medium ($20), large ($30).
Our return policy is 30 days with receipt.
Our hours are 9-5 EST Monday through Friday.
Do not discuss competitors.
Do not make promises about delivery times.
If the customer is angry, be empathetic.
If you don't know something, say so.
... (200 more lines of instructions)`;
 
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "system", content: prompt },
    { role: "user", content: userMessage },
  ],
});

This works great in a demo. You show it to the team, everyone's impressed, and you ship it. Then reality hits.

It breaks in production because:

  • Product catalog changes and the prompt doesn't
  • Policy updates require prompt redeployment
  • Edge cases multiply until the prompt is unmanageable
  • Context window fills up with instructions instead of conversation

We've seen this pattern at least a dozen times. The prompt starts at 500 tokens, grows to 2,000, then 4,000, and somewhere around the 3,000-token mark the team realizes they're playing whack-a-mole with edge cases: fix one answer, break three others.

There's a better way.

Context Engineering: The Real Architecture

So if cramming everything into a prompt doesn't work, what does? Context engineering — dynamically assembling the right information at the right time instead of stuffing it all into a static prompt.

async function buildContext(
  userMessage: string,
  conversation: Message[],
  customerId?: string
): Promise<ContextBundle> {
  // 1. Classify intent to know what context we need
  const intent = await classifyIntent(userMessage);
 
  // 2. Retrieve relevant knowledge (not everything)
  const knowledge = await retrieveRelevant(userMessage, {
    sources: getSourcesForIntent(intent),
    maxChunks: 5,
    minRelevance: 0.78,
  });
 
  // 3. Pull customer-specific context if authenticated
  const customerContext = customerId
    ? await getCustomerContext(customerId, intent)
    : null;
 
  // 4. Get current policies (not hardcoded ones)
  const policies = await getPoliciesForIntent(intent);
 
  // 5. Determine available actions
  const actions = getAvailableActions(intent, customerContext);
 
  return {
    systemPrompt: buildSystemPrompt(policies, actions),
    retrievedKnowledge: knowledge,
    customerContext,
    conversationHistory: trimConversation(conversation, 10),
  };
}
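
For concreteness, here's what that ContextBundle might look like and how it becomes the actual model call. The shapes below are our assumptions, inferred from the return statement above; the openai client is the same one from the first example.

type Message = { role: "system" | "user" | "assistant"; content: string };

interface RetrievedChunk {
  id: string;
  source: string;
  text: string;
  score: number;
}

interface ContextBundle {
  systemPrompt: string;
  retrievedKnowledge: RetrievedChunk[];
  customerContext: Record<string, unknown> | null;
  conversationHistory: Message[];
}

async function answer(userMessage: string, context: ContextBundle) {
  // Retrieved chunks go in as clearly labeled reference material,
  // separate from the instructions.
  const knowledge = context.retrievedKnowledge
    .map((chunk) => `[${chunk.source}] ${chunk.text}`)
    .join("\n\n");

  return openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: context.systemPrompt },
      { role: "system", content: `Reference material:\n${knowledge}` },
      ...context.conversationHistory,
      { role: "user", content: userMessage },
    ],
  });
}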

The difference isn't subtle — it's architectural:

Prompt Engineering            | Context Engineering
------------------------------|-------------------------------------
Static instructions           | Dynamic context assembly
Everything in the prompt      | Right information at the right time
Prompt changes = redeployment | Knowledge base changes = instant
Breaks when products change   | Adapts to current data
One-size-fits-all context     | Intent-specific context

Now let's break down how this actually works in practice.

The Four Pillars of Context Engineering

1. Retrieval — Get the Right Information

This is the biggest shift. Instead of stuffing your prompt with everything the model might need, you retrieve only what's relevant for this specific question:

async function retrieveRelevant(
  query: string,
  options: RetrievalOptions
): Promise<RetrievedChunk[]> {
  // Hybrid search: semantic + keyword
  const semanticResults = await vectorStore.search(
    await embed(query),
    { topK: options.maxChunks * 2, sources: options.sources }
  );
 
  const keywordResults = await fullTextSearch(query, {
    sources: options.sources,
    limit: options.maxChunks,
  });
 
  // Merge and re-rank
  const merged = reciprocalRankFusion(semanticResults, keywordResults);
 
  // Filter by relevance threshold
  return merged
    .filter((chunk) => chunk.score >= options.minRelevance)
    .slice(0, options.maxChunks);
}
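
reciprocalRankFusion is the merge step above. If you haven't met it: each item earns 1/(k + rank) from every ranked list it appears in, which lets you combine rankings without normalizing incompatible scores. A sketch, assuming each chunk carries a stable id; it deliberately returns the original items so each chunk keeps its own relevance score for the minRelevance filter upstream.

function reciprocalRankFusion<T extends { id: string }>(
  ...rankings: T[][]
): T[] {
  const k = 60; // conventional damping constant from the RRF paper
  const fused = new Map<string, { item: T; score: number }>();

  for (const ranking of rankings) {
    ranking.forEach((item, rank) => {
      const entry = fused.get(item.id) ?? { item, score: 0 };
      entry.score += 1 / (k + rank + 1); // items near the top count more
      fused.set(item.id, entry);
    });
  }

  return [...fused.values()]
    .sort((a, b) => b.score - a.score)
    .map((entry) => entry.item);
}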

2. Memory — Remember What Matters

Here's a mistake we see constantly: teams dump the entire conversation history into context and call it "memory." That's not memory — that's a transcript. Real memory is structured information that persists across conversations:

interface ConversationMemory {
  // Short-term: current conversation context
  currentIntent: string;
  mentionedProducts: string[];
  customerSentiment: "positive" | "neutral" | "frustrated";
 
  // Long-term: persisted across conversations
  previousIssues: Issue[];
  preferences: Record<string, string>;
  lifetimeValue: number;
  supportTier: "standard" | "premium" | "enterprise";
}
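
Maintaining this is the unglamorous part. One workable pattern, sketched below: after each turn, run a cheap classification pass (a small model is fine) and fold the result into the short-term fields, then render the whole thing as a compact context block. TurnAnalysis is our assumed shape, and we assume Issue carries a short summary field.

interface TurnAnalysis {
  intent: string;
  products: string[];
  sentiment: ConversationMemory["customerSentiment"];
}

function updateShortTermMemory(
  memory: ConversationMemory,
  analysis: TurnAnalysis
): ConversationMemory {
  return {
    ...memory,
    currentIntent: analysis.intent,
    mentionedProducts: [
      ...new Set([...memory.mentionedProducts, ...analysis.products]),
    ],
    customerSentiment: analysis.sentiment,
  };
}

// Render memory as a compact context block, not a raw transcript.
function memoryToContext(memory: ConversationMemory): string {
  const lines = [
    `Support tier: ${memory.supportTier}`,
    `Current sentiment: ${memory.customerSentiment}`,
  ];
  if (memory.previousIssues.length > 0) {
    lines.push(
      `Previous issues: ${memory.previousIssues
        .map((issue) => issue.summary)
        .join("; ")}`
    );
  }
  return lines.join("\n");
}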

3. Tools — Let the Model Act, Don't Make It Guess

This one's simple but transformative. Instead of hardcoding "our return policy is 30 days" into the prompt (and hoping it stays accurate), let the model look it up in real-time:

const tools = [
  {
    name: "check_order_status",
    description: "Look up the current status of a customer order",
    parameters: { order_id: "string" },
    execute: async (params) => {
      return await orderService.getStatus(params.order_id);
    },
  },
  {
    name: "initiate_return",
    description: "Start a return process for an order",
    parameters: { order_id: "string", reason: "string" },
    execute: async (params) => {
      return await returnService.initiate(params);
    },
  },
];
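
To actually use these with the OpenAI API, you translate them into its function-calling format and execute whatever the model asks for. A minimal sketch (toJsonSchema is a hypothetical adapter, since the API expects JSON Schema rather than the simplified parameter shapes above):

async function runWithTools(
  messages: OpenAI.Chat.ChatCompletionMessageParam[]
): Promise<string | null> {
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages,
    tools: tools.map((t) => ({
      type: "function" as const,
      function: {
        name: t.name,
        description: t.description,
        parameters: toJsonSchema(t.parameters), // hypothetical adapter
      },
    })),
  });

  const message = response.choices[0].message;
  if (!message.tool_calls?.length) return message.content;

  // Execute each requested tool and feed the results back to the model.
  messages.push(message);
  for (const call of message.tool_calls) {
    const tool = tools.find((t) => t.name === call.function.name);
    const result = tool
      ? await tool.execute(JSON.parse(call.function.arguments))
      : { error: `Unknown tool: ${call.function.name}` };
    messages.push({
      role: "tool",
      tool_call_id: call.id,
      content: JSON.stringify(result),
    });
  }
  return runWithTools(messages); // repeat until the model answers in text
}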

4. Guardrails — Validate the Assembly

Here's something most tutorials skip: the context you assemble needs validation before it reaches the model. You're pulling data from multiple sources dynamically — what if one of those sources leaks PII? What if you overshoot the context window?

function validateContext(context: ContextBundle): ContextBundle {
  // Ensure no PII leaked into retrieved knowledge
  context.retrievedKnowledge = context.retrievedKnowledge.map(
    (chunk) => ({ ...chunk, text: redactPII(chunk.text) })
  );
 
  // Ensure total context fits in window with room for response
  const totalTokens = estimateTokens(context);
  if (totalTokens > MAX_CONTEXT_TOKENS) {
    context.retrievedKnowledge = trimToFit(
      context.retrievedKnowledge,
      MAX_CONTEXT_TOKENS - estimateTokens(context.systemPrompt)
    );
  }
 
  return context;
}
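
estimateTokens and trimToFit carry real weight here. A rough sketch: a characters-per-token heuristic is usually enough if you keep a safety margin, though a real tokenizer is more accurate near the limit.

// Rough heuristic: roughly 4 characters per token for English text.
function estimateTokens(input: unknown): number {
  const text = typeof input === "string" ? input : JSON.stringify(input);
  return Math.ceil(text.length / 4);
}

function trimToFit(
  chunks: RetrievedChunk[],
  budget: number
): RetrievedChunk[] {
  const kept: RetrievedChunk[] = [];
  let used = 0;
  // Chunks arrive sorted by relevance, so drop from the tail first.
  for (const chunk of chunks) {
    const cost = estimateTokens(chunk.text);
    if (used + cost > budget) break;
    kept.push(chunk);
    used += cost;
  }
  return kept;
}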

The Migration Path

Okay, so this all sounds great in theory. But you've got a production system with a 4,000-token prompt that's mostly working. You can't just tear it down and rebuild. Here's the good news — you don't have to. We typically walk teams through this in about four weeks:

Week 1: Extract hardcoded knowledge into a retrievable store
Move product info, policies, and FAQs from the prompt into a vector database or structured knowledge base. Your prompt shrinks from 4,000 tokens to 400.
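
In practice this week is mostly an ingestion pipeline: chunk the documents, embed the chunks, and store them with enough metadata to filter on later. A sketch against the same assumed vectorStore and embed helpers from earlier (chunk sizes are starting points, not gospel):

function chunkText(
  text: string,
  opts: { size: number; overlap: number }
): string[] {
  // Overlapping fixed-size character windows; simple but serviceable.
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += opts.size - opts.overlap) {
    chunks.push(text.slice(start, start + opts.size));
  }
  return chunks;
}

async function ingestDocument(doc: {
  id: string;
  source: string;
  text: string;
}): Promise<void> {
  const chunks = chunkText(doc.text, { size: 2000, overlap: 200 });
  for (const [i, chunk] of chunks.entries()) {
    await vectorStore.upsert({
      id: `${doc.id}-${i}`,
      vector: await embed(chunk),
      metadata: { source: doc.source, text: chunk },
    });
  }
}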

Week 2: Add intent classification
Route different types of questions to different context bundles. A billing question doesn't need product specs. A product question doesn't need return policies.
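
The classifier doesn't need to be fancy. A small, cheap model with a constrained label set is plenty. Here's a sketch of the classifyIntent helper used earlier; the model name and label set are illustrative:

const INTENTS = [
  "billing",
  "order_status",
  "product_question",
  "returns",
  "other",
] as const;
type Intent = (typeof INTENTS)[number];

async function classifyIntent(userMessage: string): Promise<Intent> {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini", // routing doesn't need your biggest model
    messages: [
      {
        role: "system",
        content: `Classify the message as exactly one of: ${INTENTS.join(
          ", "
        )}. Reply with the label only.`,
      },
      { role: "user", content: userMessage },
    ],
  });

  const label = response.choices[0].message.content?.trim() as Intent;
  return INTENTS.includes(label) ? label : "other";
}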

Week 3: Implement tool use
Stop telling the model what the data is. Let it look up order status, check inventory, and verify account details in real-time.

Week 4: Add memory and personalization
Track customer context across conversations. A premium customer with a history of large orders gets different context than a first-time buyer.
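
A lightweight way to vary context by customer, as a sketch (the numbers are illustrative):

function contextDepthForTier(tier: ConversationMemory["supportTier"]) {
  // Higher tiers justify richer (and more expensive) context.
  switch (tier) {
    case "enterprise":
      return { maxChunks: 8, includeOrderHistory: true };
    case "premium":
      return { maxChunks: 6, includeOrderHistory: true };
    default:
      return { maxChunks: 4, includeOrderHistory: false };
  }
}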

The Results

Remember the client from the top of this article — the one with the 4,000-token prompt that kept breaking? Here's what happened after four weeks of migrating to context engineering:

Before (prompt engineering):
  Prompt size: 4,000 tokens (static)
  Accuracy: 72% (declining monthly)
  Maintenance: 8 hours/week of prompt tweaking
  Failure mode: Wrong answers with high confidence

After (context engineering):
  System prompt: 400 tokens (stable)
  Retrieved context: 800-2,000 tokens (dynamic)
  Accuracy: 94% (stable)
  Maintenance: 2 hours/week (knowledge base updates)
  Failure mode: "I don't know" (graceful)

The prompt barely changed. Everything around it did.

Stop Tweaking Prompts. Start Engineering Context.

Look, prompt engineering is a real skill and it's not going away. But if your entire AI strategy is "write a better prompt," you're optimizing the wrong layer.

If your AI system is held together by a carefully worded prompt that breaks when you change a comma, you don't have a production system — you have a house of cards. And at some point, someone's going to sneeze.

The teams shipping reliable AI aren't better at writing prompts. They're better at building the systems that assemble the right context at the right time. That's the engineering work that actually matters — and honestly, it's a lot more interesting than tweaking system messages at 11pm on a Tuesday.
