Comments on: Understanding the A2A Protocol for Agentic AI in Network Operations
https://networkphil.com/2026/03/02/understanding-the-a2a-protocol-for-agentic-ai-in-network-operations/
networking | writing | teaching
Feed last updated: Tue, 03 Mar 2026 16:53:22 +0000

By: Phil Gervasi (Mon, 02 Mar 2026 21:00:31 +0000)
https://networkphil.com/2026/03/02/understanding-the-a2a-protocol-for-agentic-ai-in-network-operations/comment-page-1/#comment-22946

In reply to nhoward92.

That’s a great point! I think the issue will be more on the interpretation side than on the metrics-retrieval side, as long as we take certain precautions. One is making sure the data stored in the artifact carries appropriate source metadata (the telemetry source, the query that was used, a timestamp, and so on). Another is not letting the RCA agent advance to diagnosis until the necessary artifacts exist in the first place.
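Those two precautions could be sketched roughly like this. Everything here is hypothetical (the `Artifact` shape, field names, and the example device/query are illustrative, not part of the A2A spec): an artifact that carries provenance metadata, and a gate that keeps the RCA step from running until the required artifacts exist with full provenance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Provenance fields every artifact must carry (illustrative set).
REQUIRED_FIELDS = {"source", "query", "timestamp"}

@dataclass
class Artifact:
    name: str
    data: dict
    metadata: dict = field(default_factory=dict)

    def has_provenance(self) -> bool:
        # The artifact must say where its telemetry came from,
        # which query produced it, and when it was collected.
        return REQUIRED_FIELDS.issubset(self.metadata)

def can_advance_to_diagnosis(artifacts: list, required: set) -> bool:
    """Gate: the RCA agent may proceed only when every required
    artifact is present AND carries full source metadata."""
    present = {a.name for a in artifacts if a.has_provenance()}
    return required.issubset(present)

# Example: a telemetry artifact with complete provenance.
telemetry = Artifact(
    name="interface_counters",
    data={"eth0_in_errors": 0},
    metadata={
        "source": "router-nyc-01",
        "query": "show interfaces counters errors",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    },
)
```

The gate fails closed: a missing artifact and an artifact without provenance are treated the same way, so the RCA step simply never sees unverifiable input.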

And one thing I’m seeing folks do right now is require that any factual statement returned by the system reference the actual source information in some way, whether that’s the artifact itself or just a list of sources. If a result is generated without sources, the task can’t proceed and a person is alerted, which means you’d need a human-in-the-loop as a check.
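A minimal sketch of that check, with hypothetical names throughout (the claim shape and `alert_human` callback are assumptions for illustration): every claim must cite at least one known artifact, and an uncited result halts the task and escalates to a person.

```python
def validate_result(claims, known_sources):
    """Return (ok, uncited): ok is False if any claim fails to
    cite at least one known artifact or source."""
    uncited = [c for c in claims
               if not (set(c.get("sources", [])) & known_sources)]
    return (len(uncited) == 0, uncited)

def advance_or_escalate(claims, known_sources, alert_human):
    """Proceed only if every claim is cited; otherwise halt the
    task and bring a human into the loop."""
    ok, uncited = validate_result(claims, known_sources)
    if not ok:
        alert_human(uncited)  # human-in-the-loop check
        return "halted"
    return "proceed"

# Example: one cited claim, one fabricated (uncited) claim.
known = {"interface_counters", "bgp_table_snapshot"}
good = [{"text": "eth0 shows no input errors",
         "sources": ["interface_counters"]}]
bad = [{"text": "firewall policy X exists", "sources": []}]
```

The key design point is that the citation check runs before the result is surfaced, so an unsourced statement can never silently reach the operator.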

The steps I put in the blog were pretty high level, to help folks understand a basic workflow, but verification gates at specific points would definitely be a good idea.


By: nhoward92 (Mon, 02 Mar 2026 18:55:39 +0000)
https://networkphil.com/2026/03/02/understanding-the-a2a-protocol-for-agentic-ai-in-network-operations/comment-page-1/#comment-22945

Great walkthrough, Phil. The A2A + MCP separation is the right architecture: horizontal agent collaboration plus vertical tool access. One piece I think is missing from the stack is observation verification. In your Step 2, how does the RCA Agent know the Telemetry Agent’s data is real?

I ran into this firsthand while building a multi-vendor AI ops platform. Whenever it wasn’t presented with the output it expected, the AI guessed the next most plausible response, which meant hallucinating perfectly formatted, convincing output for the user. It fabricated three firewall policies that didn’t exist on my FortiGate, among a lot of other data. It passed every sanity check; it just wasn’t real.

I’m building a protocol layer (VIRP, the Verified Infrastructure Response Protocol) that sits at the MCP boundary and cryptographically signs device observations, so agents can prove their data came from the actual device rather than from hallucination. The way I think about it: A2A = agent-to-agent, MCP = agent-to-tool, VIRP = tool-to-truth. Would love to compare notes; our work is complementary.
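The general signing idea could be sketched like this. To be clear, this is an illustration of the concept, not the actual VIRP implementation: a shared-secret HMAC (the key name and observation fields are hypothetical) applied at the collection boundary, so anything a downstream agent receives either verifies against a real device read or is rejected.

```python
import hashlib
import hmac
import json

# Hypothetical per-device key, provisioned out of band.
DEVICE_KEY = b"per-device-secret-provisioned-out-of-band"

def sign_observation(observation: dict, key: bytes = DEVICE_KEY) -> dict:
    """Sign a raw device observation at the collection boundary.
    Canonical JSON keeps the signature stable across dict ordering."""
    payload = json.dumps(observation, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"observation": observation, "signature": tag}

def verify_observation(signed: dict, key: bytes = DEVICE_KEY) -> bool:
    """Verify that an observation was produced by a real device
    read; any fabricated or altered data fails this check."""
    payload = json.dumps(signed["observation"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

# Example: a signed observation verifies; a tampered one does not.
signed = sign_observation({"device": "fortigate-01", "policy_count": 42})
```

A production design would likely use asymmetric signatures (e.g. Ed25519) so verifiers never hold signing keys, but the property demonstrated is the same: a model cannot hallucinate a valid signature, so unsigned or altered "observations" are detectable.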

