NearconSF: What We Noticed (and What We’re Building Next)
Notes from NearconSF: where AI agent infrastructure is headed, what teams are getting wrong, and how we’re applying the lessons to OpenClaw.
I spent time at NearconSF this week and came away with a clear signal: teams aren’t struggling because models are weak — they’re struggling because operating AI systems in the real world is still weirdly hard.

*NearconSF stage slide on agentic workflows.*
Not “hard” in the research sense. Hard in the messy sense:
- reliability across networks and devices
- permissioning and auditability
- human-in-the-loop workflows that don’t collapse under load
- shipping changes without breaking everything
If you’re building agentic systems (or even just LLM-powered features that touch production data), the hard part is the same: infrastructure + workflow design.
Here are the biggest takeaways I’m carrying forward.
1) The conversation is shifting from “chains” to “systems”
A year ago, many demos still looked like: prompt → tool call → prompt → tool call.
At NearconSF, more of the interesting conversations were about:
- state (what the system remembers, for how long, and why)
- execution environments (where code runs, what it can touch)
- observability (what happened, what changed, and how to replay it)
- coordination (multiple agents, multiple queues, multiple owners)
That’s a good sign. It means we’re growing up.
A useful litmus test I kept using:
Can you explain what your agent did yesterday?
If the answer is “kind of” or “we have logs somewhere,” you don’t have an agent system yet — you have a prototype.
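One way to pass that litmus test is to make every agent action an append-only, structured event you can query later. This is a minimal sketch, not OpenClaw's actual implementation; `record_event`, `events_on`, and the JSON Lines layout are illustrative assumptions.

```python
import json
import time
import uuid

def record_event(log_path, agent_id, action, inputs, outcome):
    """Append one structured, replayable event per agent action (JSON Lines)."""
    event = {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,    # e.g. "tool_call:send_invoice"
        "inputs": inputs,    # what the agent decided to do, and with what
        "outcome": outcome,  # what actually happened
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")
    return event["event_id"]

def events_between(log_path, start_ts, end_ts):
    """Answer 'what did the agent do yesterday?' from the log alone."""
    with open(log_path) as f:
        return [
            e for line in f
            if start_ts <= (e := json.loads(line))["ts"] < end_ts
        ]
```

The point is less the code than the contract: if "what happened yesterday" is a one-line query instead of an archaeology dig, you have a system rather than a prototype.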
2) Trust is now an engineering problem, not a vibes problem
In the hallway chats, nobody was debating whether AI is “real.” Everyone assumes it’s here.
The debates were about trust boundaries:
- Who is allowed to run what actions?
- What data is allowed to leave the system?
- What happens when a tool call fails halfway through?
- How do you prove (later) that the agent didn’t do something dangerous?
This is exactly why we’ve been obsessive about explicit gates, structured execution, and audit trails in OpenClaw.
When an agent touches real money, real customers, or real infrastructure, you need:
- deterministic permissions
- recorded intent + outcome
- safe retries
- a rollback story
“Just prompt it better” is not a control plane.
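To make "deterministic permissions" and "recorded intent + outcome" concrete, here is a minimal sketch of a gated executor. The policy table, exception name, and audit shape are hypothetical, not a real OpenClaw API: a static allow-list is checked before anything runs, and both the attempt and its result are recorded whether or not it succeeds.

```python
# Hypothetical policy table: action -> set of principals allowed to run it.
POLICY = {
    "read_report": {"agent", "analyst"},
    "refund_customer": {"analyst"},  # agents may draft refunds, not execute them
}

class PermissionDenied(Exception):
    pass

def gated_execute(principal, action, run, audit):
    """Check the policy before running, and record intent + outcome either way."""
    allowed = principal in POLICY.get(action, set())
    entry = {"principal": principal, "action": action, "allowed": allowed}
    audit.append(entry)  # intent is recorded even when the action is blocked
    if not allowed:
        raise PermissionDenied(f"{principal} may not run {action}")
    entry["outcome"] = run()
    return entry["outcome"]
```

Because the decision is a table lookup rather than a model judgment, the same request always gets the same answer, and the audit trail shows both what the agent tried and what the gate decided.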
3) The best teams treat humans as part of the system
The strongest implementations I saw weren’t trying to eliminate people. They were trying to move people to higher-leverage decisions.
Patterns that consistently work:
- draft → review → execute
- confidence thresholds (“auto-approve under X risk”)
- operational checklists encoded as gates
- clear escalation paths when the agent is uncertain
When humans are included intentionally, you get speed and safety.
When humans are bolted on at the end, you get:
- notification fatigue
- unclear ownership
- slow approvals
- brittle “just this once” exceptions
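The draft → review → execute pattern with a confidence threshold can be sketched in a few lines. The `Proposal` shape, the risk scale, and the 0.2 cutoff are all illustrative assumptions; the load-bearing idea is that escalation is a routing decision made by code, not a notification blasted at whoever is online.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    risk: float            # 0.0 (safe) .. 1.0 (dangerous); scoring is up to you
    status: str = "draft"
    reviewer: str = ""

def triage(p, auto_approve_below=0.2):
    """draft -> review -> execute, with an auto-approve lane for low-risk work."""
    if p.risk < auto_approve_below:
        p.status = "approved"      # under the risk threshold: skip the queue
    else:
        p.status = "needs_review"  # escalate to a human with clear ownership
    return p

def approve(p, reviewer):
    """A human signs off; the approval carries a name, not just a click."""
    p.status = "approved"
    p.reviewer = reviewer
    return p
```

Routine work clears automatically, risky work arrives as a reviewable draft with an owner attached, and "just this once" exceptions have nowhere to hide.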
4) Distribution is the new normal
Whether you call it “edge,” “nodes,” “devices,” or “workers,” the future looks distributed:
- multiple machines
- multiple networks
- multiple channels (Slack/Telegram/email/web)
- multiple identities
That’s why we lean into a node-based model: it’s the only way to make agents feel present in the places they actually operate.
If your architecture assumes one server, one queue, one happy-path environment — it won’t survive contact with reality.
What we’re doing with these lessons
NearconSF reinforced our roadmap priorities:
- More explicit workflow gates (so “done” means verifiably done)
- Better observability (so you can debug without archaeology)
- Cleaner permission boundaries (so production isn’t a gamble)
- Repeatable deployment (so shipping is boring)
If you’re building something in this space and want a second set of eyes, we’re happy to help.
CTA: Want to run OpenClaw in your organization?
If you’re an operator or exec trying to move from demos to something real, we built a hands-on path:
OpenClaw Executive Tutorial → https://www.ncubelabs.com/services/openclaw-executive-tutorial
It’s designed to help you evaluate agent workflows, security boundaries, and deployment realities before you bet a quarter on it.