How do you use the agent tracing feature in the Metrics Reporting Dashboard?
For an AI Agent configured to update employee data in Fusion Applications, what method within AI Agent Studio ensures sensitive changes are securely managed?
During agent testing, how do latency and token-usage metrics help ensure quality?
For their quarterly review, executives require a high-level report on how often agents encounter errors, how quickly most queries are resolved, and the overall quality of agent performance.
Which two metrics in AI Agent Studio would deliver the required insights?