Primitives over Pipelines
Give agents modular functions instead of prescriptive workflows
Many early AI systems were built in an era when models struggled to follow instructions, drifted off task, and hallucinated.
To ensure these systems delivered value, we added rigid guardrails, prescriptive pipelines, and step-by-step orchestration. The goal was to force user intent onto a predetermined path the designer had already foreseen. In many ways, we were force-fitting insights from the classic software era onto this new age of AI-native software.
However, as of gpt-5.2 and opus-4.5, models follow instructions far more reliably and hallucinate significantly less. Native reasoning has improved their ability to admit uncertainty, reducing hallucination: when they lack context, they are better at saying "I don't know." The models also plan better, breaking complex tasks into smaller chunks without losing sight of the broader goal.
Paradoxically, the quality-control pipelines that were once protective are now becoming obstacles, as they impose a rigid trajectory through a solution space that the models are now capable of navigating more effectively on their own.
"In many ways, we were force-fitting insights from the classic software era onto this new age of AI-native software."
Primitive-Oriented Agent Design
We propose an alternative: Primitive-Oriented Agent Design. Instead of prescribing rigid use cases or workflows, we provide the agent with a small, modular set of pure, composable functions — "primitives". This grants the agent the freedom to assemble its own workflows at runtime, utilizing the full context of its capabilities.
Each primitive is simple enough to reason about independently, but when combined they form a large surface area of possible behaviors. Good primitives share some common traits:
- Purity: Given the same inputs, a primitive should return the same outputs, with no hidden side effects.
- Clear contracts: Each primitive should expose a well-defined schema describing what it expects and what it returns.
- Narrow responsibilities: A primitive should do one thing well rather than encapsulate an entire workflow.
- Composability: Primitives should be able to combine naturally with others.
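As a sketch of what a clear contract can look like in practice, here is one primitive described as a tool schema. The primitive name, description, and fields are illustrative assumptions, not taken from the source:

```python
# A hypothetical tool contract for a single primitive. The JSON-Schema-style
# "parameters" block tells the agent exactly what the primitive expects.
search_notes = {
    "name": "search_notes",
    "description": "Return the ids of notes matching a query.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Free-text search query."},
            "limit": {"type": "integer", "description": "Max results.", "default": 5},
        },
        "required": ["query"],
    },
}
```

Because the contract is narrow and explicit, the agent can reason about the primitive in isolation and combine it freely with others.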
Thin system prompts
Primitive-Oriented Agent Design also changes how we think about system prompts.
Early AI systems treated the system prompt as a repository of knowledge. Designers front-loaded it with policies, examples, domain facts, and long lists of instructions so the model had everything it needed up front.
This made sense when models were unreliable. But in primitive-oriented systems, this pattern becomes counterproductive.
When agents can retrieve information through primitives (e.g., searching documents, fetching records, or recalling memories), the system prompt no longer needs to store the system's state.
In fact, large system prompts hurt agent performance by cluttering context and hindering reasoning. They are also expensive: attention scales quadratically with context length (doubling the context quadruples the attention workload), and since APIs charge per token, bigger prompts raise operational costs too.
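The quadratic claim follows from simple arithmetic: self-attention scores every (query, key) pair of tokens, so the work grows with the square of context length. A minimal check:

```python
def attention_pairs(n_tokens: int) -> int:
    # Self-attention compares every token with every other token: n * n pairs.
    return n_tokens * n_tokens

# Doubling the context quadruples the attention workload.
assert attention_pairs(2000) == 4 * attention_pairs(1000)
```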
Instead, primitive-oriented systems favor thin system prompts. A thin prompt defines the agent's goal, boundaries, and voice; knowledge is retrieved dynamically through primitives when needed.
In other words, the system prompt encodes taste, not state.
The prompt defines how the agent should think and behave, while primitives supply the information required to act. This keeps the prompt small, keeps the system flexible, and ensures the agent reasons over fresh context rather than static assumptions.
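To make this concrete, here is what a thin prompt might look like for a shopping agent. The store name and exact wording are illustrative assumptions:

```python
# An illustrative thin system prompt: goal, boundaries, and voice only.
# No catalog data, no policies, no state -- those come from primitives.
THIN_PROMPT = """\
You are a shopping assistant for Acme Store (hypothetical).
Goal: help the user find and inspect products.
Boundaries: only discuss items in the catalog; never invent prices.
Voice: concise and friendly.
Use your tools (list, get, render) to fetch anything you need.
"""
```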
"A thin system prompt encodes taste, not state."
A practical example
Consider a shopping agent responsible for helping a user browse and inspect items in an online store. A user might ask to see what products are available, inspect the details of a specific item, or render a list of results in a structured UI. The agent’s job is to retrieve the relevant records and present them in a usable format.
There are two ways to build this system.
Pipeline approach
You prescribe a fixed "search workflow." The agent fetches the product listing, calls a search tool, maybe passes the results through a correction step, and finally renders them. Every request follows the same sequence.
This produces predictable UI behavior, but it also inherits the limitations of the workflow itself. If the underlying catalog doesn’t match the use case you anticipated, the pipeline still executes the same steps. The agent cannot adapt the strategy to the situation. It simply walks the path it was given. Latency also accumulates because each stage runs even when it isn’t necessary.
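A sketch of the pipeline shape, with a toy in-memory catalog and hypothetical step functions standing in for real services:

```python
CATALOG = ["mug", "lamp", "desk"]

def search(query: str) -> list[str]:
    # Stand-in for a real search service.
    return [p for p in CATALOG if query in p]

def correct(products: list[str]) -> list[str]:
    # Result-correction pass: runs even when there is nothing to correct.
    return sorted(set(products))

def render(products: list[str]) -> str:
    # Stand-in for structured UI rendering.
    return ", ".join(products)

def pipeline(query: str) -> str:
    # Fixed trajectory: search -> correct -> render, for every request,
    # even when the user already named a specific product.
    return render(correct(search(query)))
```

Every call pays for every stage, and the sequence cannot be reordered or skipped at runtime.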
Primitives approach
Instead of prescribing the workflow, you expose a small set of primitives:
- list: Retrieves a collection of products with options for filtering and pagination.
- get: Retrieves a single, specific product by its identifier.
- render: Renders a collection of products using a specified rendering logic.
None of these dictate order. They simply expose capabilities. Given these primitives, the agent can assemble its own workflow at runtime. For example:
- It might list available products, retrieve a few with get, then render them.
- If the user asks for details about a specific product, it might skip listing and call get directly.
- If the user asks to compare multiple products, it could retrieve them in parallel before rendering.
- If the agent already knows which item the user means from prior context, it may go straight to rendering.
The same primitives support many different execution paths. This is the core advantage of primitive-oriented systems: the designer defines capabilities, not trajectories. Each primitive remains simple and predictable, but together they give the agent the freedom to construct the right workflow for each request.
The result is a system that adapts to the user’s intent, avoids unnecessary steps, and reasons over the problem instead of replaying a predetermined script.
Let it cook
The core problem is that the pipeline encodes one trajectory through the solution space, while user intent and real-world data rarely follow a single predictable path.
The shopping example is one instance of a broader pattern. When the system can navigate the space itself, prescribing the path gets in the way. We're used to designing systems where we control the flow; with capable models, the leverage is in giving them room to assemble.
Don't try to anticipate every path. Give primitives and let the agent figure it out.
If this perspective matches what you're seeing, let's talk.
Rubric is an applied AI lab helping teams design and ship intelligent products.


