A practice in applied intelligence / 2026

The data is there.
The interpretation is not.

Systems record what happened.
Intelligence lives in the spaces between them.

Fig. 01 — live system view (24 nodes, 41 edges).

Evolution: 1995 internal databases → 2003 enterprise software → 2010 cloud platforms → 2016 analytics & dashboards → 2022 generative models → 2026 interpretation layer → 2030 autonomous judgment.

The future is not more systems.
It is intelligence between them.

For years, software work focused on building the systems: websites, apps, cloud, analytics, automations. The next layer is different. It reads across those systems and helps organisations understand what their operations are already trying to say.

1995–2005

Storage era

Capture the facts. Build the database.

2005–2015

Software era

Workflow tools, ERPs, SaaS for every function.

2015–2022

Analytics era

Dashboards. KPIs. The data became visible.

2022 → now

Interpretation era

Models read across the systems. People still decide. The layer is forming now.

2030 →

Judgment era

Operational autonomy on bounded decisions.

Areas of practice

Where systems
stop short.

These places are not empty. Data is already being captured. What is missing is the intelligence that connects the signals.

01 / Operational

Signals across systems

Operational data often lives across several systems. The value appears when patterns across those systems become visible: risk, movement, delay, pressure, or opportunity.

68%
of dashboards go unread weekly
02 / Unstructured

Context outside the database

Important context often sits outside structured fields: notes, messages, calls, documents, feedback, and exceptions. That context needs to become usable without forcing people into another system.

~80%
of org knowledge is unstructured
03 / Decision

Judgment before automation

Some decisions should remain human until the pattern is understood. The first step is better preparation. The next step is deciding what can safely become a rule.

5–9h
per material decision, manual
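The idea of keeping judgment human until the pattern is understood can be sketched as a bounded decision rule: automate only inside a well-understood envelope, and escalate everything else to a person. The field names and thresholds below are illustrative assumptions, not the practice's actual rules.

```python
# Sketch: a bounded decision with a human fallback.
# Thresholds and fields are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass
class Decision:
    action: str   # "auto_approve" or "escalate"
    reason: str


def route_refund(amount: float, prior_disputes: int) -> Decision:
    """Automate only inside the understood envelope; everything else goes to a human."""
    if amount <= 50 and prior_disputes == 0:
        return Decision("auto_approve", "small amount, clean history")
    return Decision("escalate", "outside the bounded envelope")
```

The envelope starts narrow and widens only as the pattern becomes understood; until then, the default path is escalation, not automation.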
Current work / live

A pattern
we are studying.

Online education is a useful example because the signals already exist: attendance, progress, batches, recordings, fees, schedules, and instructor load. The question is not whether the data is there. The question is whether the relationships between those signals are being read.

signal-flow / cohort-04 / w19 recording — 14:32 utc

Signals / week

Attendance: 2,401
Assessments: 820
Recordings: 312
Tickets: 187
Fee status: 1,148

Interpretation layer

Reading across
Who is quietly disengaging?
Where is follow-up overdue?
Which cohort is drifting?

Output

▮ weekly briefing — 4 items
3 cohorts at risk
2 instructors stretched
11 students disengaging
5 follow-ups overdue
→ act this week
Status: in active development
Stage: prototype → pilot
Testing: does the briefing change behaviour?
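The "reading across" questions above can be made concrete as a small interpretation pass: per-student signals go in, briefing items come out. Signal names and thresholds here are illustrative assumptions, not the pilot's actual logic.

```python
# Sketch: one interpretation pass over weekly per-student signals.
# Field names and cutoffs are hypothetical, for illustration only.

def weekly_briefing(students: list[dict]) -> list[str]:
    """Turn raw weekly signals into a short list of items worth acting on."""
    items = []
    for s in students:
        # Quiet disengagement: attendance dropping with no support contact.
        if s["attendance_pct"] < 0.5 and s["tickets"] == 0:
            items.append(f"{s['id']}: quietly disengaging (low attendance, no contact)")
        # Follow-up that has slipped past a week.
        if s["followup_days_overdue"] > 7:
            items.append(f"{s['id']}: follow-up overdue")
    return items
```

The point of the sketch is the shape, not the rules: the value test is whether a four-item briefing like this changes what the team does in the week.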
Engagements

How the practice
operates.

No stage assumes the next. Discovery can end with a no-build recommendation. Build produces a working system before anything else is scoped. Each step is designed to earn the one that follows.

01 / discovery

Map the gap.

How systems work today. Where people assemble meaning manually. Whether a layer is worth building.

Deliverable: Gap report & build/no-build call
02 / build

Smallest useful version.

One workflow. One decision. One output. Prove the interpretation is valuable before the system grows.

Deliverable: Working v1 in real operation
03 / operate

Maintain or hand off.

Documented to be understood, maintained, improved. The system shouldn't depend on hidden knowledge.

Deliverable: Runbook, eval suite, traces
Start with a discovery call
Engineering

Built like software,
not like a demo.

Demos drift. Prompts change behaviour. Edge cases multiply. The model is one layer of seven. The other six are what make it hold up.

Looks impressive once.

— for the boardroom
Sample data, controlled inputs
Outputs drift between runs
No traceability when it fails
Prompt changes break behaviour
Rebuilt from scratch every six months

Holds up in operation.

— for the team that runs it Monday morning
Defined inputs, traceable outputs
Versioned prompts & eval suite
Controlled failure modes
Maintained past the first demo
Real operational data from week one
fig. 04 — the stack: AI is one of seven layers
07 Interface → humans
06 Evaluation & traces → trust
05 AI model → one of seven
04 Prompt versioning → behaviour
03 Retrieval & context → memory
02 Data pipeline → inputs
01 Source systems → ground
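Layers 04 and 06, versioned prompts and traceable outputs, can be sketched in a few lines: pin behaviour to an exact prompt text via a content hash, and record every run against that version. The structure and names are illustrative assumptions, not the stack's actual implementation.

```python
# Sketch: versioned prompts (layer 04) and run traces (layer 06).
# PROMPT and the trace fields are hypothetical, for illustration only.

import hashlib
import time

PROMPT = "Summarise the week's operational signals as action items."


def prompt_version(prompt: str) -> str:
    # A content hash pins behaviour to the exact prompt text,
    # so "the prompt changed" is detectable, not anecdotal.
    return hashlib.sha256(prompt.encode()).hexdigest()[:8]


def trace(inputs: dict, output: str) -> dict:
    # Every run records what went in, what came out, and under which prompt version.
    return {
        "ts": time.time(),
        "prompt_version": prompt_version(PROMPT),
        "inputs": inputs,
        "output": output,
    }
```

With traces keyed by prompt version, an eval suite can replay the same inputs against a new version and show exactly where behaviour moved.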
Writing

Latest thinking.

Field notes from work in active development. Not whitepapers — what we learned this month.

Is the meaning getting lost between systems?

Bring the workflow, the recurring judgment, or the pattern that is hard to see from one system alone. We will help determine whether an intelligence layer belongs there.

Begin a conversation