Dr. Minh Tran
Head of AI Research

2026 is the year of AI agents. From OpenAI's Operator to Anthropic's Claude computer use to Google's Gemini agents, every major lab is shipping autonomous systems that browse the web, write code, manage files, and interact with APIs. But training these agents requires a fundamentally different kind of data from what worked for chatbots.
Traditional instruction-following data consists of single-turn or multi-turn conversations. Agent training data must capture multi-step trajectories: sequences of observations, reasoning, tool calls, and environment feedback that can span dozens of steps. Each step has branching possibilities, error recovery paths, and context-dependent decisions.
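One way to picture such a trajectory is as a minimal data model, sketched here in Python. All class and field names are illustrative assumptions, not any lab's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    observation: str   # what the agent sees: page text, file listing, API response
    reasoning: str     # annotated reasoning at this decision point
    tool_call: dict    # e.g. {"name": "run_shell", "args": {"cmd": "ls"}}
    feedback: str      # environment response to the tool call

@dataclass
class Trajectory:
    task: str
    steps: list[Step] = field(default_factory=list)
    success: bool = False

# A one-step slice of a trajectory for a file-management task
traj = Trajectory(task="Find and open the project config file")
traj.steps.append(Step(
    observation="Shell prompt in ~/project",
    reasoning="List the directory to locate a config file.",
    tool_call={"name": "run_shell", "args": {"cmd": "ls"}},
    feedback="config.yaml  src/  README.md",
))
```

A full annotated trajectory is simply a longer list of such steps, with branches and recovery paths represented as alternative step sequences.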
The annotation challenge is exponentially harder. Annotators must understand the tools the agent uses, the environment it operates in, and the strategies for recovering from errors. A single trajectory annotation can take 30-60 minutes of expert time, compared to 2-5 minutes for a preference comparison.
Tool-use demonstrations: Expert annotators demonstrate correct API calls, function invocations, and parameter selections for specific tasks. These teach the agent when and how to use each tool in its toolkit.
Trajectory annotations: Complete task execution paths from start to finish, including the reasoning at each decision point. These are the most valuable and most expensive to produce.
Error recovery examples: Annotators deliberately introduce failures, such as wrong API responses, permission errors, or ambiguous instructions, and label the correct recovery strategy. Agents that cannot recover from errors are useless in production.
Environment feedback pairs: For each action the agent takes, annotators label whether the environment response indicates success, partial success, or failure, along with the appropriate next action.
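To make the last two data types concrete, here is a hypothetical error-recovery annotation that pairs a seeded failure with its feedback label and the labeled recovery action. Every field name and tool name here is an assumption for illustration, not a real annotation format:

```python
# One annotated step: a seeded permission error plus the labeled recovery.
recovery_example = {
    "task": "Upload report.pdf to the shared drive",
    "injected_failure": {
        "tool_call": {"name": "upload_file",
                      "args": {"path": "report.pdf", "dest": "/shared"}},
        "environment_response": {"status": 403, "error": "PermissionDenied"},
    },
    # Environment feedback label: success / partial_success / failure
    "feedback_label": "failure",
    "correct_recovery": {
        "reasoning": "A 403 means missing write access; request access "
                     "instead of blindly retrying the upload.",
        "tool_call": {"name": "request_access", "args": {"resource": "/shared"}},
    },
}
```

Annotations like this teach the agent both to classify environment feedback and to choose the appropriate next action when something goes wrong.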
Consistency across trajectories is the hardest quality dimension. Two expert annotators solving the same task may take completely different valid paths. Your quality framework needs to evaluate whether a trajectory achieves the goal effectively, not whether it matches a single reference path.
At SyncSoftAI, we have developed specialized annotation pipelines for agentic AI data. Our annotators work in simulated environments that mirror real-world tool ecosystems, and our QA process evaluates trajectory quality based on goal completion, efficiency, and error handling — not just step-by-step matching.
Agent training data is the fastest-growing segment in AI data services. As enterprises deploy AI agents for customer support, software development, data analysis, and operations management, the need for high-quality trajectory data will only accelerate. Teams that build this capability now will have a decisive advantage.