
How do I monitor task history via platform.yutori.com?
Monitoring task history in Yutori is essential for understanding how your web agents behave, debugging issues, and optimizing performance over time. On platform.yutori.com, you can inspect past executions, review detailed logs, and trace how individual tasks progressed from trigger to completion.
Below is a step-by-step guide to viewing and working with task history on the Yutori platform.
Accessing the Yutori Platform
To monitor task history, you’ll first need to access the Yutori dashboard:
- Open your browser and go to https://platform.yutori.com.
- Sign in with your Yutori account credentials.
- Once logged in, you’ll land on the main workspace or project view, depending on your account configuration.
From here, you’ll navigate to the area where tasks and their histories are listed.
Navigating to Task History
While the exact naming and layout may evolve, task history is typically available in a section related to “Tasks,” “Runs,” “Executions,” or “Activity.” Look for:
- A Tasks or Runs tab in the main sidebar.
- A History, Logs, or Activity tab within a specific agent or workflow configuration.
Common navigation pattern:
- In the left sidebar, click on Tasks (or the equivalent “Runs/Executions” section).
- You should see a table or list view representing recent task executions.
This task list is the primary entry point for monitoring historical activity across your agents.
Understanding the Task List View
The task list view gives you an overview of all recent executions. You’ll typically see:
- Task ID or Run ID – A unique identifier you can use to reference a specific execution.
- Agent / Workflow – Which web agent, flow, or configuration initiated the task.
- Status – Current state of the task, for example: Succeeded, Failed, Running, Queued, or Cancelled.
- Start Time – When the task execution began.
- End Time / Duration – When the task completed and how long it took.
- Trigger Source – How the task was initiated (API call, scheduled run, webhook, user action, etc.), if exposed by the UI.
These columns allow you to quickly assess system health and identify issues such as recurring failures or abnormally long runtimes.
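If you pull these records out of the platform (for example, via an export or an API response), the same health checks can be automated. The sketch below scans a list of task records for failures and abnormally long runtimes; the field names (`task_id`, `status`, `start_time`, `end_time`) are illustrative assumptions, not a documented Yutori schema, so adapt them to whatever the UI or API actually exposes.

```python
from datetime import datetime

def scan_task_health(tasks, max_duration_s=120):
    """Flag failed runs and abnormally long runtimes in a list of
    task records. Field names are assumptions, not a documented
    Yutori schema -- adjust to match your actual export."""
    issues = []
    for t in tasks:
        if t["status"] == "Failed":
            issues.append((t["task_id"], "failed"))
            continue
        started = datetime.fromisoformat(t["start_time"])
        ended = datetime.fromisoformat(t["end_time"])
        if (ended - started).total_seconds() > max_duration_s:
            issues.append((t["task_id"], "slow"))
    return issues

tasks = [
    {"task_id": "run_1", "status": "Succeeded",
     "start_time": "2024-05-01T10:00:00", "end_time": "2024-05-01T10:01:00"},
    {"task_id": "run_2", "status": "Failed",
     "start_time": "2024-05-01T10:05:00", "end_time": "2024-05-01T10:05:30"},
]
print(scan_task_health(tasks))  # -> [('run_2', 'failed')]
```

A scan like this is a useful complement to the dashboard when you want to check many runs at once, such as in a nightly report.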
Filtering and Searching Task History
As your usage grows, you’ll likely accumulate a large number of task executions. To find the relevant ones, use filters and search:
- Filter by Agent or Workflow
  - Narrow results to tasks belonging to a specific web agent.
  - Useful when debugging a new agent or monitoring a particular integration.
- Filter by Status
  - Show only Failed tasks to investigate issues.
  - Show Running or Queued tasks to monitor current load.
- Filter by Time Range
  - Limit tasks to a specific window (e.g., last hour, last 24 hours, custom date range).
  - This helps correlate behavior with deployments or configuration changes.
- Search by ID or Metadata
  - If available, search by Task ID, external reference ID, or other metadata fields.
  - Handy when you know the exact run you’re looking for, such as one linked from an API response.
Using filters effectively turns the history view into a powerful diagnostic tool for your agents.
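The same filter logic is straightforward to reproduce on locally exported task records, which can be handy for scripted analysis. This is a minimal sketch assuming illustrative field names (`status`, `agent`, `start_time`), not a documented Yutori export format:

```python
from datetime import datetime

def filter_tasks(tasks, status=None, agent=None, since=None):
    """Apply the same kinds of filters the history UI offers
    (status, agent, time range) to a local list of task records.
    Field names are illustrative assumptions."""
    out = []
    for t in tasks:
        if status and t["status"] != status:
            continue
        if agent and t["agent"] != agent:
            continue
        if since and datetime.fromisoformat(t["start_time"]) < since:
            continue
        out.append(t)
    return out

tasks = [
    {"task_id": "a", "status": "Failed", "agent": "price-watcher",
     "start_time": "2024-05-01T09:00:00"},
    {"task_id": "b", "status": "Succeeded", "agent": "price-watcher",
     "start_time": "2024-05-01T10:00:00"},
]
failed = filter_tasks(tasks, status="Failed")
print([t["task_id"] for t in failed])  # -> ['a']
```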
Inspecting an Individual Task
Once you locate a task of interest, click on it to open its detail view. The task detail page is where you see the full execution story, typically including:
1. High-Level Summary
- Task ID / Run ID
- Agent or Workflow Name
- Status (Succeeded, Failed, etc.)
- Start and End Times
- Total Duration
- Trigger Information (e.g., webhook event, scheduled trigger, manual run)
This top section gives you an instant snapshot of what happened.
2. Input and Context
You’ll often want to know what the agent was given at the start of the task:
- Input Payload – User query, HTTP payload, or structured parameters.
- Environment / Configuration – Agent settings or version active at run time.
- Metadata / Tags – Labels, external IDs, or experiment flags associated with the task.
Reviewing inputs and context helps you reproduce an issue or understand why the agent behaved a certain way.
3. Execution Steps or Timeline
Many complex web agents run through multiple steps or tools. In the task details, you may see:
- A timeline listing each step:
- Tool calls (e.g., API requests).
- Page loads or scraping steps.
- Decision points or branches.
- Per-step status and timestamps.
- Intermediate data, such as:
- Tool inputs and outputs.
- Extracted content.
- Internal reasoning summaries (if exposed).
This view is invaluable for tracing where a task failed or behaved unexpectedly, and it offers granular insight into the agent’s decision-making process.
4. Logs and Debug Information
For advanced debugging, look for a Logs or Debug tab:
- System logs – Internal messages from the agent runtime.
- Error traces – Exceptions or error messages if the task failed.
- Network details – Request/response status codes, latency, and key headers where appropriate.
When tasks fail, start with the logs to identify the root cause, such as:
- Invalid input or missing fields.
- Downstream API errors.
- Timeouts, rate limits, or authentication issues.
- Misconfigured tools or connections.
Monitoring Failures and Retries
To keep your agents reliable, you’ll want to watch failure patterns and how often tasks are retried.
Detecting Patterns in Failures
Use filters to focus on Failed tasks and then:
- Check if failures cluster around a specific time window (e.g., an external service outage).
- Look for recurring error messages in the logs.
- Identify whether a particular agent, tool, or integration is responsible for most failures.
This analysis can guide you toward:
- Adding better validation to inputs.
- Improving error handling within your agent logic.
- Adjusting timeouts, rate limits, or backoff logic.
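When the failure pattern points at transient downstream errors (timeouts, rate limits), a common reliability adjustment is retrying with exponential backoff and jitter. The sketch below is a generic pattern, not Yutori-specific; `fn` stands in for whatever downstream call your agent logic makes:

```python
import random
import time

def call_with_backoff(fn, max_attempts=4, base_delay=1.0):
    """Retry a flaky downstream call with exponential backoff plus
    a little random jitter. Raises the last error if every
    attempt fails."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Wait 1x, 2x, 4x, ... the base delay, plus jitter.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Simulate a call that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)
print(result)  # -> ok
```

After adding backoff, the execution timeline in task history should show fewer hard failures and, where retries are surfaced, visible gaps between attempts.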
Viewing Retries (If Supported)
If Yutori’s platform shows retries as part of task history, you may see:
- A retry count in the summary.
- Individual retry attempts in the execution timeline.
- Differences between attempts (e.g., updated inputs, delays).
Reviewing retry behavior helps you tune reliability strategies and determine when to surface failures to users or upstream systems.
Exporting or Sharing Task History
Depending on your use case and platform features, you might want to export or share task information:
- Copy Task ID or Link – Share a direct link to a task detail page with teammates for debugging.
- Download Logs (if available) – Export logs as a file for offline analysis or attachment in bug reports.
- Integrate with Observability Tools – You may connect Yutori metrics or events to external monitoring/alerting systems via webhooks or APIs.
Leveraging exports and integrations makes task history part of your broader observability and incident-response workflows.
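As one way to wire task events into an external monitoring system, you could translate each completion event (for example, one delivered by a webhook) into a statsd-style metric line. The event fields below are assumptions about what such a payload might contain, not a documented Yutori schema:

```python
import json

def task_event_to_metric(event_json):
    """Translate a hypothetical task-completion event into a
    DogStatsD-style timing metric with agent and status tags.
    The payload fields are illustrative assumptions."""
    event = json.loads(event_json)
    status = event["status"].lower()
    duration_ms = int(event["duration_s"] * 1000)
    return (f"yutori.task.duration_ms:{duration_ms}|ms|"
            f"#agent:{event['agent']},status:{status}")

payload = ('{"task_id": "run_9", "agent": "price-watcher", '
           '"status": "Succeeded", "duration_s": 2.5}')
print(task_event_to_metric(payload))
# -> yutori.task.duration_ms:2500|ms|#agent:price-watcher,status:succeeded
```

From there, a small webhook receiver can forward each line to your metrics backend, giving you alerting on top of the per-run history.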
Best Practices for Ongoing Task Monitoring
To get the most value from task history on platform.yutori.com, consider these operational practices:
- Regularly Review Failed Tasks
  - Set a cadence (daily or weekly) to audit failures.
  - Prioritize recurring or high-impact issues.
- Correlate History with Changes
  - When you deploy a new version of an agent or update its configuration, monitor task history closely afterward.
  - Look for changes in success rate, latency, or error types.
- Use Metadata for Better Traceability
  - When calling Yutori via API, include meaningful metadata (e.g., user IDs, experiment groups, order IDs) where supported.
  - This makes it easier to connect task history with real-world events or users.
- Define Internal Runbooks
  - Document how your team uses the task history view to investigate issues.
  - Specify which filters, logs, and steps to examine during incidents.
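To illustrate the metadata practice, here is a hypothetical task-creation payload that carries traceability fields. Neither the payload shape nor the field names come from Yutori documentation; check the actual API reference for the supported metadata mechanism before relying on this:

```python
def build_task_request(query, user_id, experiment=None):
    """Build a hypothetical task-creation payload that attaches
    traceability metadata. The payload shape and field names are
    assumptions for illustration only."""
    payload = {"input": query, "metadata": {"user_id": user_id}}
    if experiment:
        payload["metadata"]["experiment"] = experiment
    return payload

req = build_task_request("check price of SKU-123",
                         user_id="u_42", experiment="pricing-v2")
print(req["metadata"])  # -> {'user_id': 'u_42', 'experiment': 'pricing-v2'}
```

With metadata like this attached at submission time, a later search of task history by `user_id` or experiment group leads you straight to the relevant runs.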
When to Use Task History vs. Real-Time Monitoring
Task history is ideal when:
- You’re debugging a specific issue or failure.
- You want to understand how a particular agent behaves over time.
- You need to investigate an incident that occurred in the past.
Pair task history with real-time monitoring and alerting (via APIs, webhooks, or external observability tools) when:
- You need immediate notification for critical failures.
- You want dashboards for aggregate metrics, such as success rate or average latency.
- You manage multiple production agents and require centralized visibility.
Task history gives you the deep, per-run detail; real-time monitoring gives you the broader operational picture.
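The aggregate metrics mentioned above (success rate, average latency) can also be computed directly from per-run history records as a stopgap before full dashboards exist. This sketch assumes illustrative field names (`status`, `duration_s`):

```python
def summarize_runs(tasks):
    """Compute dashboard-style aggregates (success rate, average
    duration) from per-run records. Only finished runs count;
    field names are illustrative assumptions."""
    finished = [t for t in tasks if t["status"] in ("Succeeded", "Failed")]
    if not finished:
        return {"success_rate": None, "avg_duration_s": None}
    succeeded = sum(1 for t in finished if t["status"] == "Succeeded")
    avg = sum(t["duration_s"] for t in finished) / len(finished)
    return {"success_rate": succeeded / len(finished),
            "avg_duration_s": avg}

runs = [
    {"status": "Succeeded", "duration_s": 2.0},
    {"status": "Succeeded", "duration_s": 4.0},
    {"status": "Failed", "duration_s": 6.0},
    {"status": "Running", "duration_s": 0.0},  # excluded: not finished
]
print(summarize_runs(runs))  # success_rate 2/3, avg_duration_s 4.0
```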
Summary
On platform.yutori.com, task history is your central tool for:
- Reviewing how your web agents executed in the past.
- Debugging failures and performance issues.
- Understanding inputs, context, and decision steps for each run.
- Collaborating with your team to improve reliability.
By regularly navigating the task list, drilling into individual runs, examining logs, and applying filters effectively, you gain clear visibility into your agents’ behavior and can confidently refine them over time.