Shrikant Wagh
Shrikant Wagh's journey into software quality began with curiosity — a drive to understand not just how systems work, but how to make them trustworthy. After earning his Master's in Electrical Engineering from IIT Madras in 1993, he launched his career at ITI Limited in Bangalore before moving to the US in 1996 as a QA consultant during the Java and dotcom boom.
He co-founded Optimyz Software, where he built two commercially patented testing tools — one for distributed testing, another for web services workflow orchestration. Over the decades that followed, he led quality efforts at Symantec, Macrovision, ZoomSystems, Switchfly, PayCertify, and Bellwether Coffee — each role reinforcing the same core belief: real systems live in the hands of users, under real constraints, with real consequences when they fail.
By 2025, a new class of failure had emerged that conventional testing couldn't name or catch: RAG systems cross-contaminating tenant data, agentic pipelines leaking customer records through manipulated tool descriptions, model updates silently shifting behavior in ways no test suite detected. That observation became the foundation of Evaluating RAG and Agentic AI Systems.
The book establishes eleven contract dimensions — Knowledge, Retrieval, Generation, Agent/Tool, Skill, MCP Protocol, Security, Operational, Multi-Agent, Multi-Modal, and Fine-Tuning — covering the failure surfaces observed in production through early 2026. The framework is designed to be extensible: new contracts, new attack patterns, and new architectures can be absorbed as the field evolves.
Wagh is watching two frontiers most closely: the convergence of fine-tuning and RAG, where safety regression testing remains poorly understood, and multi-agent systems, where growing network complexity will surface failure modes not yet fully mapped.
His goal is simple — to give the engineering community contracts that make trust in agentic AI something you can measure, not just hope for.
