The Missing Layer Between AI and Action
A management framing of the execution boundary. Why most AI failures in regulated industries happen at the handoff to action — not in the model.
Across peer-reviewed venues, industry publications, and the iQuelo Substack — ideas worth defending, with citations.
The technical companion to the HBR piece — formalizing the execution boundary as a governance primitive, with auditability, determinism, and safety guarantees.
A ground-up processor redesign featuring compute unit pooling, a Decision Logic Tile, and a programmable mesh NoC. Connects AI decision intelligence philosophy to silicon.
New x86 machine instructions and model-specific registers (MSRs) for tile-granular performance monitoring — proposing PLX+ as an incremental path on top of Panther Lake silicon.
The companion publication — short essays on AI governance, regulated systems, and the architecture of trust. Where the long-form research becomes a working notebook.
The theoretical foundation underlying SignalDeck — twenty-three chapters on the formal structure of explainable, auditable, deterministic decision systems.
Most AI safety, governance, and regulatory frameworks treat AI output and AI action as the same event. They aren't. Between a model's prediction and a system's action there is a narrow interval — a handoff — where the prediction becomes binding.
Architecting that handoff — with explainability, override paths, audit logs, and deterministic guarantees — is the execution boundary. iQuelo builds for it. SignalDeck commercializes it. The research publishes the framework.
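The handoff described above can be sketched in a few lines of code. This is a minimal illustration, not iQuelo's or SignalDeck's actual implementation: every name here (`ExecutionBoundary`, `Decision`, `min_confidence`, `overrides`) is hypothetical. The point is the shape of the primitive: a deterministic gate sits between the model's prediction and the binding action, consults an explicit policy and any human override, and appends every outcome to an audit trail.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Decision:
    action: str        # what the model wants the system to do
    confidence: float  # the model's reported confidence
    rationale: str     # explanation attached at prediction time


@dataclass
class BoundaryResult:
    executed: bool
    reason: str


class ExecutionBoundary:
    """Hypothetical sketch of a deterministic gate between prediction and action."""

    def __init__(self, min_confidence: float,
                 overrides: Optional[Dict[str, bool]] = None):
        self.min_confidence = min_confidence
        self.overrides = overrides or {}   # human override path: action -> allow/deny
        self.audit_log: List[dict] = []    # append-only audit trail

    def submit(self, d: Decision) -> BoundaryResult:
        # An explicit human override always wins over the automated policy.
        if d.action in self.overrides:
            allowed = self.overrides[d.action]
            reason = "override:" + ("allow" if allowed else "deny")
        elif d.confidence >= self.min_confidence:
            allowed, reason = True, "policy:confidence_ok"
        else:
            allowed, reason = False, "policy:low_confidence"
        # Same inputs always yield the same verdict, and every verdict is logged.
        self.audit_log.append({
            "action": d.action,
            "confidence": d.confidence,
            "rationale": d.rationale,
            "executed": allowed,
            "reason": reason,
        })
        return BoundaryResult(executed=allowed, reason=reason)
```

Used as a gate, the boundary makes the handoff inspectable: a regulator can replay the audit log and reproduce every verdict, because nothing between prediction and action is probabilistic.

```python
boundary = ExecutionBoundary(min_confidence=0.9)
verdict = boundary.submit(Decision("approve_claim", 0.95, "matches payout policy"))
# verdict.executed is True; the audit log now holds one replayable entry
```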
The iQuelo Substack — short, frequent, opinionated pieces on AI governance, regulated systems architecture, and the trust layer underneath both.