Leher Pathak, OpenAI’s API product marketing lead, later said in a post on X that she understood the terms “assistants” and “agents” to be interchangeable — further muddying the waters.
Meanwhile, Microsoft’s blogs try to distinguish between agents and AI assistants. The former, which Microsoft calls the “new apps” for an “AI-powered world,” can be tailored to have a particular expertise, while assistants merely help with general tasks, like drafting emails.
AI lab Anthropic addresses the hodgepodge of agent definitions a little more directly. In a blog post, Anthropic says that agents “can be defined in several ways,” including both “fully autonomous systems that operate independently over extended periods” and “prescriptive implementations that follow predefined workflows.”
Salesforce has what’s perhaps the most wide-ranging definition of AI “agent.” According to the software giant, agents are “a type of […] system that can understand and respond to customer inquiries without human intervention.” The company’s website lists six different categories, ranging from “simple reflex agents” to “utility-based agents.”
So why the chaos?
Well, agents — like AI itself — are a nebulous concept, and they're constantly evolving. OpenAI, Google, and Perplexity have just started shipping what they consider to be their first agents — OpenAI’s Operator, Google’s Project Mariner, and Perplexity’s shopping agent — and their capabilities are all over the map.
Rich Villars, GVP of worldwide research at IDC, noted that tech companies “have a long history” of not rigidly adhering to technical definitions.
“They care more about what they are trying to accomplish” on a technical level, Villars told TechCrunch, “especially in fast-evolving markets.”