The public lesson from the latest agent scare stories sounds obvious: give AI agents a kill switch. If an agent can act across real systems, the business needs a way to stop it before speed becomes damage. That is true, but it is not the deepest lesson.

The kill switch is the easy part. Useful agents usually do not run by magic. They respond to an event stream, a cron job, a queue, a webhook, or some other heartbeat that tells them when work exists. Stop the event source and the agent stops receiving work. Delete the cron, disable the trigger, pause the queue, or cut the heartbeat, and the system has a natural brake.

The harder problem is upstream. The agent should never have dangerous direct authority in the first place.

An agent doing meaningful work needs better instructions and a little structure. It needs an order, not a vibe. It needs a communication contract that says what job it is doing, which skills it may use, which tools it may call, which permissions those tools require, and which actions must pass through a safer interface before anything changes in production.

The question is not merely, “Can we stop the agent?” The better question is, “Why was the agent ever allowed to do that directly?”

The database story is not really about deletion

On May 6, 2026, Fortune published a ServiceNow Knowledge 2026 story with the headline: “Your company’s AI could delete everything in 9 seconds. ServiceNow wants to be the kill switch.” The opening example was designed to land hard: “an AI agent gained elevated permissions and, in 9 seconds, deleted an entire production database—customer records, reservations, every backup. Gone.”

That example makes the kill switch feel like the center of the story. Nine seconds is too fast for a meeting, too fast for an approval chain, and usually too fast for a human operator to notice, understand, and intervene. If the agent is already deleting production records, of course the business wants a way to stop it.

But the deeper failure happened before the clock started. An agent should not have direct production database authority. That is the architectural mistake. The deletion is the visible explosion, but the real problem is the permission path that made the explosion possible.

A useful agent may need to read data, propose changes, create records, route exceptions, or trigger workflows. None of that requires raw destructive access to the system of record. If the agent needs database-shaped work, the safe pattern is MCP + Worker + Stored Procedure + View.

That chain matters because each layer narrows authority. The MCP interface gives the agent a defined tool surface. The Worker receives the request and applies business rules. The Stored Procedure performs only the approved operation. The View exposes only the data shape the agent is allowed to see. The agent can still do useful work, but it cannot wander around production like a junior developer with root access and caffeine.
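
What that chain looks like in code is plainer than it sounds. Below is a minimal sketch of the Worker layer, assuming a Postgres-style database reached through psycopg2, a stored procedure named archive_reservation, and a view named reservations_agent_view; every name here is invented for illustration.

    # Hypothetical Worker layer for the MCP + Worker + Stored Procedure + View chain.
    # The agent never holds a database credential; its MCP tool surface maps onto
    # these functions, and each function performs exactly one approved operation.
    import psycopg2

    ALLOWED_STATUSES = {"needs_review", "approved"}  # business rule the Worker enforces

    def open_worker_connection() -> "psycopg2.extensions.connection":
        # The Worker's own credential, never handed to the agent.
        return psycopg2.connect("dbname=ops user=worker_svc")

    def read_reservations(conn, status: str) -> list[tuple]:
        """Read only through the approved view, never the base tables."""
        if status not in ALLOWED_STATUSES:
            raise PermissionError(f"status {status!r} is outside the contract")
        with conn.cursor() as cur:
            cur.execute(
                "SELECT reservation_id, guest_name, status "
                "FROM reservations_agent_view WHERE status = %s",
                (status,),
            )
            return cur.fetchall()

    def request_archive(conn, reservation_id: int) -> None:
        """Write only through a stored procedure built for this exact operation."""
        with conn.cursor() as cur:
            cur.execute("CALL archive_reservation(%s)", (reservation_id,))
        conn.commit()

In this shape, deleting the base table is not merely forbidden by policy. No function on the agent's tool surface compiles down to it, so it is unreachable by construction.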

The capability remains. The dangerous authority does not.

Agents respond to events, so stop the events

A chatbot waits for a person. An agent responds to conditions.

That distinction is where both the value and the fear come from. VentureBeat reported on April 30, 2026, that Writer launched event-based triggers enabling agents to “execute complex multi-step workflows without any human initiating the process.” That is the operating shape: agents watch for moments when work should begin, then act without waiting for someone to open a chat window.

This is exactly why the “kill switch” conversation can get blurry. People imagine an autonomous thing moving through the business under its own power. In practice, a well-designed agent has a heartbeat. Something wakes it up. Something hands it a task. Something gives it context. Something marks the work complete or asks for the next step.

If the heartbeat stops, the agent stops running. That does not solve every problem, but it clarifies the control model. The kill switch is not a mystical emergency feature. It is an operating control over the event stream that feeds the agent.

That means every agent should have a known event source. The business should be able to point to the trigger and say: this is what wakes the agent, this is what payload it receives, this is who can pause it, and this is what happens to queued work when the pause occurs. If nobody can answer those questions, the agent is not autonomous. It is loose.
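
Those questions can be made concrete in a few lines of code. Here is a minimal sketch of a heartbeat with an explicit pause, using only the Python standard library; the event names are invented.

    # Hypothetical heartbeat: the agent runs only when the event source feeds it,
    # and pausing the source is the brake. Paused work waits; it is not lost.
    import queue
    import threading
    from typing import Callable

    events: queue.Queue[dict] = queue.Queue()  # what wakes the agent, and its payload
    running = threading.Event()                # whoever holds this handle can pause it
    running.set()                              # live by default

    def heartbeat(handle_event: Callable[[dict], None]) -> None:
        while True:
            running.wait()                       # parks here whenever the pause is on
            try:
                event = events.get(timeout=1.0)
            except queue.Empty:
                continue
            if not running.is_set():             # paused mid-wait: put the work back
                events.put(event)
                continue
            handle_event(event)

    def pause() -> None:
        """The kill switch: stop dispatching; queued events simply accumulate."""
        running.clear()

Every question above has an answer in this shape: the queue is the trigger, the event dict is the payload, anyone holding running can pause the agent, and queued work waits out the pause instead of vanishing.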

Stopping the heartbeat is the brake. Designing the road is the real work.

Communication contracts are orders

The usual agent setup is too casual. A team writes a prompt, connects a tool, tests a happy path, and calls the thing an agent. Then the agent receives real work with vague authority and uneven context. That is not an operating model. That is a bet.

Agents that work need communication contracts. A contract is not a legal document in this context. It is the operating order for the agent. It tells the agent what product it is responsible for creating, what inputs count as valid, what tools are available, what skills should be used, what permissions each tool carries, and what conditions force escalation.

A human can sometimes survive unclear orders because humans infer the missing structure from experience, politics, memory, and fear. An agent will follow the rails it has. If the rails are vague, the work becomes vague at machine speed.

A good communication contract makes the job inspectable before the agent starts. It should answer a few plain questions; the sketch after the list shows one way to write the answers down as data:

  • What event wakes this agent up?
  • What exact product is the agent expected to produce?
  • Which skills may it apply to complete the work?
  • Which tools may it call, and for what actions?
  • Which permissions are required, and which are explicitly forbidden?
  • Which data may it read, and through what view?
  • Which changes may it request, and through what Worker or stored procedure?
  • Which conditions stop the agent and escalate to a human?
  • Where are tool calls, decisions, and failures logged?
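
One way to make those answers enforceable is to write the contract as data that a dispatcher checks before every tool call. A minimal sketch, with every name invented for illustration:

    # Hypothetical communication contract: the answers to the questions above,
    # written where a dispatcher can enforce them, not buried in the prompt.
    CONTRACT = {
        "agent": "invoice-exception-router",
        "wake_event": "invoice.validation_failed",         # what wakes it up
        "product": "routed exception with a draft reply",  # what it must produce
        "skills": ["classify_exception", "draft_reply"],
        "tools": {
            "read_invoices": {"actions": ["read"], "via": "invoices_agent_view"},
            "route_exception": {"actions": ["create"], "via": "worker"},
        },
        "forbidden_actions": ["delete", "update_schema", "grant_permission"],
        "escalate_when": ["amount > 10000", "unknown vendor"],
        "log_to": "audit.agent_calls",
    }

    def authorize(tool: str, action: str) -> None:
        """Refuse any call the contract does not explicitly allow."""
        spec = CONTRACT["tools"].get(tool)
        if spec is None or action not in spec["actions"]:
            raise PermissionError(f"{tool}.{action} is outside the contract")
        if action in CONTRACT["forbidden_actions"]:
            raise PermissionError(f"{action} is explicitly forbidden")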

That is not bureaucracy. That is how orders become safe enough to automate.

The contract also gives the business a better improvement loop. When the agent fails, the first question is whether the order was complete: task, tool surface, permissions, escalation, and interface. The failure becomes a way to sharpen the contract instead of another vague warning in the prompt.

Better instructions. A little structure. Most agent governance starts there.

More tools without contracts only increases blast radius

When an agent struggles, the tempting move is to give it more. More connectors. More permissions. More memory. More workflow access. More freedom to finish the task without bothering anyone.

Sometimes the agent genuinely needs another tool. Useful work often requires tools. But tools added before contracts do not create maturity. They create blast radius. The agent’s mistakes become more expensive because the system gave those mistakes more places to travel.

Computerworld reported on May 5, 2026, that “Microsoft and Google are adding new controls for AI agents,” as companies move beyond chatbots into agents that can access corporate data and act across business applications. That is the shift. Agents are moving from answer generation into business execution.

Execution requires smaller doors, not bigger permissions. If an agent needs to update a CRM, it should call a tool that performs the allowed update under known rules. If it needs finance data, it should see the approved view, not the whole database. If it needs to change production state, it should request the change through a Worker that validates the request and calls a stored procedure built for that exact operation.
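
Here is a sketch of one such smaller door for the CRM case, assuming a hypothetical HTTP client with a patch method; no vendor's real API is implied.

    # Hypothetical "smaller door": this tool updates only allow-listed fields on
    # one record type, under rules the tool enforces rather than the agent.
    UPDATABLE_FIELDS = {"status", "next_follow_up", "owner_note"}
    APPROVED_STATUSES = {None, "active", "on_hold"}

    def update_account(crm_client, account_id: str, changes: dict) -> None:
        illegal = set(changes) - UPDATABLE_FIELDS
        if illegal:
            raise PermissionError(f"fields outside this tool: {sorted(illegal)}")
        if changes.get("status") not in APPROVED_STATUSES:
            raise ValueError("status outside the approved values")
        crm_client.patch(f"/accounts/{account_id}", json=changes)  # the narrow write path

The update capability survives. The authority to touch anything the tool does not name does not.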

This is safer for agents and humans. Humans make mistakes too. The same structure that prevents an agent from deleting the wrong data can prevent an exhausted operator from doing the same thing on a bad afternoon.

That is the point of architecture. It makes the right action easier and the catastrophic action unavailable.

The operating test

The Hacker News piece, published with Orchid Security on May 6, 2026, named a real enterprise problem: “AI agents are being deployed faster than enterprises can govern them.” It also described why traditional identity systems struggle. AI agents “run continuously, span multiple applications, acquire permissions opportunistically, and generate activity at machine speed.”

That diagnosis is useful. Agents need identity. A business should know which agent took which action, under which contract, with which tool, against which system, at what time. But identity is not enough. A badge does not make an unsafe door safe.

Every agent should pass one practical test before it receives more reach: if this agent receives the wrong event, misreads the context, or chooses the wrong next step, what is the worst thing it can actually do?

That question is better than asking whether the agent is smart enough. Smart systems still make bad calls. The safer design assumes error and limits consequence. It asks what the agent can do directly, what it can only request, what it can only read, and what it cannot touch at all.
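
The test can be written down rather than debated. A minimal sketch, with invented capability names, that sorts each capability into one of those four tiers:

    # Hypothetical blast-radius review: classify every capability before the agent
    # gets it. Anything in the DIRECT tier is what the worst day looks like.
    from enum import Enum

    class Tier(Enum):
        DIRECT = "can do it itself"
        REQUEST = "can only ask a Worker to do it"
        READ = "can only see it, through a view"
        FORBIDDEN = "cannot touch it at all"

    CAPABILITIES = {
        "read_reservations": Tier.READ,
        "archive_reservation": Tier.REQUEST,
        "send_status_email": Tier.DIRECT,
        "drop_table": Tier.FORBIDDEN,
    }

    def worst_case() -> list[str]:
        """The operating test: what can this agent actually do on a bad day?"""
        return [name for name, tier in CAPABILITIES.items() if tier is Tier.DIRECT]

If worst_case() returns anything that could take production down, the fix is structural work, not a better prompt.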

A weak answer sounds like confidence. “The model is pretty good.” “We have logs.” “We can revoke the API key.” “Someone in ops watches it.” Those answers may help after the fact, but they do not define the operating boundary.

A strong answer is structural. The agent wakes from a known event stream. Its communication contract defines the order. Its tools are mapped to the job. Its permissions are narrow. Its data access runs through views. Its writes go through Workers and stored procedures. Its abnormal conditions stop the workflow and escalate. Its heartbeat can be paused without mystery.

ServiceNow’s Bill McDermott told Fortune, “Governance isn’t a feature. It’s the whole ball game. Because without it, your whole company can come down.” Strip away the vendor frame and the operating truth remains. Governance for agents is not a panic button added after deployment. It is the contract, the event stream, the permission boundary, and the safe interface that exist before the agent starts doing real work.

AI agents do need a way to stop. Of course they do. But the better answer is not to build agents with dangerous authority and then hope the kill switch gets pressed in time.

Stop the heartbeat when you need to stop the agent. More importantly, design the agent so stopping it is rarely the thing that saves you.

Give it better instructions. Give it a clear contract. Give it the right tools through safe interfaces. Give it only the permissions the job requires. That is how agents become business capacity instead of production risk.

Sources

  • Fortune, “Your company’s AI could delete everything in 9 seconds. ServiceNow wants to be the kill switch,” May 6, 2026. https://fortune.com/2026/05/06/servicenow-kill-switch-ai-agents-bill-mcdermott/ Quotes used: “an AI agent gained elevated permissions and, in 9 seconds, deleted an entire production database—customer records, reservations, every backup. Gone,” and “Governance isn’t a feature. It’s the whole ball game. Because without it, your whole company can come down.”
  • Computerworld, “Microsoft, Google push AI agent governance into enterprise IT mainstream,” May 5, 2026. https://www.computerworld.com/article/4167054/microsoft-google-push-ai-agent-governance-into-enterprise-it-mainstream.html Quotes used: “Microsoft and Google are adding new controls for AI agents,” companies are “no longer just testing chatbots,” and agents “can reach corporate systems and carry out tasks on behalf of users.”
  • The Hacker News, “Your AI Agents Are Already Inside the Perimeter. Do You Know What They’re Doing?”, May 6, 2026. https://thehackernews.com/2026/05/your-ai-agents-are-already-inside.html Quotes used: “AI agents are being deployed faster than enterprises can govern them,” and agents “run continuously, span multiple applications, acquire permissions opportunistically, and generate activity at machine speed.”
  • VentureBeat, April 30, 2026, reporting on Writer event-based triggers. Quote used: agents can “execute complex multi-step workflows without any human initiating the process.”

Stephen Nickerson.
Built for operators who need agents they can test, trust, and improve.