The Four Pillars of Context Engineering

Four architectural pillars representing the core strategies of context engineering for AI systems

Happy New Year, and welcome to the nuts and bolts of context engineering. In December we introduced the concept; now we'll unpack each of the four core strategies -- Writing, Selecting, Compressing and Isolating -- and show how they apply to enterprise use cases. The goal is to give executives a framework to assess their AI teams' practices and identify opportunities for improvement.

1. Writing: Give Your Agent a Job Description

If you've ever managed a team, you know that success starts with clear expectations. The same applies to agents. Without a precise system prompt, agents flounder. An effective system message should answer three questions: Who am I? What's my mission? How do I behave? For example, a customer-support agent might be instructed to "act as a helpful support representative for our SaaS product, adopt a friendly but professional tone, and always refer to the knowledge base before making assumptions." This script becomes the agent's north star.

Executives should ensure that system prompts reflect company policies and branding. They should also include safety guidelines (e.g., "if uncertain, ask the human user for clarification") and escalation procedures ("route high-risk decisions to a manager"). These guardrails are crucial for compliance under frameworks like NIST's AI RMF.
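The ideas above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical SaaS product ("Acme") and the role-based message format that most chat-style APIs use; the exact wording is yours to adapt.

```python
# Illustrative system prompt answering the three questions -- Who am I?
# What's my mission? How do I behave? -- plus the safety and escalation
# guardrails described above. "Acme" is a hypothetical product name.
SYSTEM_PROMPT = """\
Identity: You are a support representative for the Acme SaaS product.
Mission: Resolve customer issues using the official knowledge base.
Behavior: Adopt a friendly but professional tone; always refer to the
knowledge base before making assumptions.
Safety: If uncertain, ask the human user for clarification.
Escalation: Route high-risk decisions to a manager.
"""

# Most chat-style APIs take this as the first, "system" message.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "My dashboard won't load."},
]
```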

2. Selecting: Curate with Care

Imagine an executive briefing that includes every email you sent last year. Useless, right? That's what happens when you dump all available data into an agent's context. Selection is about curation: choose the right information for the task at hand. In practice, this might mean:

  • Pulling the latest product spec rather than historical drafts.
  • Fetching customer purchase history to personalize recommendations.
  • Including only the most recent support interactions to resolve an open ticket.

Selection isn't manual; it's algorithmic. Modern frameworks use vector databases and metadata tags to rank documents by relevance. In 2025, we'll see more executives demanding transparency in how selection algorithms work, ensuring they don't inadvertently exclude critical information or amplify bias.
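To make the mechanics concrete, here is a toy sketch of relevance ranking: filter candidates by a metadata tag, then rank by cosine similarity between the query embedding and each document embedding. The documents and embedding vectors are invented for illustration; a production system would use a vector database and real embedding models.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy corpus: each document carries a metadata tag and a (made-up) embedding.
docs = [
    {"text": "Latest product spec v3", "tag": "spec",  "vec": [0.9, 0.1, 0.0]},
    {"text": "2019 draft spec",        "tag": "draft", "vec": [0.7, 0.2, 0.1]},
    {"text": "Holiday party memo",     "tag": "memo",  "vec": [0.0, 0.1, 0.9]},
]

def select(query_vec, tag, k=1):
    # Metadata filter first, then relevance ranking, then top-k.
    candidates = [d for d in docs if d["tag"] == tag]
    ranked = sorted(candidates,
                    key=lambda d: cosine(query_vec, d["vec"]),
                    reverse=True)
    return [d["text"] for d in ranked[:k]]

select([0.9, 0.1, 0.0], tag="spec")  # -> ["Latest product spec v3"]
```

Because the ranking logic is explicit, it can also be audited -- the transparency executives will increasingly demand.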

3. Compressing: Fit the Elephant in the Room

Most large language models have fixed context windows -- currently ranging from tens of thousands to a few hundred thousand tokens, depending on the model. That might sound like a lot until you realize your company policies alone could consume a large share of that space. Compression involves summarizing or extracting the essence of documents. Techniques include:

  • Summarization -- Generate concise summaries of long documents or transcripts.
  • Extraction -- Pull key data points, such as dates, amounts or names.
  • Chaining -- Break a task into subtasks, each with its own context; pass only the necessary result to the next step.

For executives, the takeaway is that compression isn't optional; it's how you make enterprise knowledge accessible to your agents. Invest in summarization pipelines and leverage models with adaptive context windows.
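The extraction technique above can be sketched simply: pull only the key data points from a long document so the agent's context holds the essence rather than the full text. The regex patterns and the sample invoice are illustrative, not a complete extraction pipeline.

```python
import re

def extract_key_points(text):
    # Pull dates (ISO format) and dollar amounts; a real pipeline would
    # also extract names, IDs, and other domain-specific fields.
    return {
        "dates":   re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text),
        "amounts": re.findall(r"\$[\d,]+(?:\.\d{2})?", text),
    }

invoice = "Invoice issued 2025-01-15 for $4,200.00, due 2025-02-14."
extract_key_points(invoice)
# -> {"dates": ["2025-01-15", "2025-02-14"], "amounts": ["$4,200.00"]}
```

The compressed output is a fraction of the original text, which is exactly what chaining relies on: only this result, not the whole invoice, is passed to the next step.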

4. Isolating: Keep Your Compartments Clean

In a multi-agent system, messages fly back and forth. If you don't separate system instructions from retrieved facts or user input, chaos ensues. Isolation means grouping information by type and using explicit delimiters. For example:

<SYSTEM>
You are a support agent...
</SYSTEM>
<USER>
Customer's last message...
</USER>
<KNOWLEDGE>
Summary of the troubleshooting guide...
</KNOWLEDGE>

This structure helps the model understand where to look for instructions vs. data. It also makes auditing easier; compliance teams can see exactly what information influenced a response.
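Assembling a delimited prompt like the one above is straightforward. This sketch mirrors the tag names from the example; any delimiter scheme works, as long as it is applied consistently so the model (and your compliance team) can tell instructions from data.

```python
def build_prompt(system, user, knowledge):
    # Wrap each context type in explicit delimiters so instructions,
    # user input, and retrieved knowledge never blur together.
    return (
        f"<SYSTEM>\n{system}\n</SYSTEM>\n"
        f"<USER>\n{user}\n</USER>\n"
        f"<KNOWLEDGE>\n{knowledge}\n</KNOWLEDGE>"
    )

prompt = build_prompt(
    system="You are a support agent...",
    user="Customer's last message...",
    knowledge="Summary of the troubleshooting guide...",
)
```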

Enterprise Scenarios

To make this concrete, consider two examples:

  • Legal research agent -- A law firm uses an agent to draft memos summarizing case law. Writing: the system prompt instructs the agent to act as a junior associate. Selecting: it retrieves only relevant cases from the past five years. Compressing: it summarizes each opinion into three key takeaways. Isolating: it labels the legal opinion summaries separately from the attorney's notes.

  • Supply-chain assistant -- A manufacturing company deploys an agent to optimize inventory. Writing: the agent knows its objective is to minimize stockouts while reducing inventory costs. Selecting: it pulls the latest demand forecasts and supplier lead times. Compressing: it extracts only the relevant metrics. Isolating: it separates instructions from data so the model doesn't confuse supplier names with internal policies.

Conclusion

By January 2025, executives need to ensure their teams treat context engineering as a first-class discipline, not an afterthought. The four pillars -- Writing, Selecting, Compressing and Isolating -- provide a framework for building agents that are reliable, compliant and effective.

In February's post, we'll explore why good context isn't enough: agents also need a control plane to manage the explosion of tools and ensure governance at scale.

Misha Sulpovar

Thought leader in AI strategy and governance. Author of The AI Executive. Former IBM Watson, ADP. MBA from Emory Goizueta.