Batch Process
The Batch Process tool lets your AI Agent process many items at once using the same tool or subagent for each one. Instead of calling a tool manually for every row in a CSV or every item in a list, your agent hands the entire dataset to Batch Process and gets back a compiled results file when it's done.
Use it when you have more than 3 items that need the same action — qualifying leads, scraping URLs, enriching contacts, generating reports per company, etc.
Don't use it for 1–3 items. For small numbers, your agent should call the target tool directly for each item.
How It Works
- Source — Your agent reads the data (a CSV file or a list of items).
- Transform — If the data columns don't match what the target tool expects, the agent writes a small mapping function.
- Validate — Every row is checked against the target tool's expected format before anything runs.
- Execute — Each item is processed. For 5 or fewer items, results come back immediately. For more than 5, the batch runs in the background and the agent is notified when it finishes.
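The four stages above can be sketched in a few lines of Python. This is a simplified illustration, not the tool's actual implementation — the helper names and the in-process execution are assumptions (the real tool runs batches of more than 5 items in the background):

```python
def run_batch(rows, target_tool, transform=None, required_fields=()):
    """Minimal sketch of the Source -> Transform -> Validate -> Execute flow."""
    # Transform: map source columns to the shape the target tool expects.
    if transform:
        rows = [transform(row) for row in rows]
    # Validate: every row is checked before anything executes.
    invalid = [row for row in rows if not all(f in row for f in required_fields)]
    if invalid:
        raise ValueError(f"{len(invalid)} row(s) failed validation")
    # Execute: process each item with the target tool.
    return [target_tool(row) for row in rows]
```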
Configuration
When you add the Batch Process tool to your AI Agent, you configure these fields:
| Field | Required | Purpose |
|---|---|---|
| Name | Yes | Label for this tool instance (e.g., "Batch Processor") |
| Description | Yes | Tells the AI when and how to use this tool |
| Needs Human Approval | No | If checked, the agent must get your approval before starting a batch |
| Notify on Completion | No | Send an email when the batch finishes (default: on) |
Most of the batch parameters (which data to process, which tool to use, etc.) are provided by the AI Agent at runtime based on your conversation — you don't need to configure them upfront.
What the AI Provides at Runtime
When your agent decides to use the Batch Process tool, it automatically fills in these parameters:
| Parameter | Required | What It Does |
|---|---|---|
| Source file path | Yes* | Path to a CSV file the agent has access to |
| Source array | Yes* | Or a list of items to process (alternative to file) |
| Tool | Yes | The name of the tool or subagent to call for each item |
| Transform function | No | A mapping function if the data format needs conversion |
| Batch name | Yes | A human-readable label for tracking |
| Skip invalid | No | Skip rows that fail validation instead of stopping (default: off) |
| On failure | No | What to do when items fail: skip the failed item and continue, or abort the batch (default: skip) |
*One of source file path or source array must be provided.
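Conceptually, the agent assembles something like the following payload from your conversation. The field names here mirror the table above for illustration only — they are not the tool's actual wire format:

```python
# Hypothetical parameter payload the agent might construct at runtime.
batch_call = {
    "batch_name": "Qualify Q3 leads",
    "tool": "Qualification Agent",     # tool or subagent called per item
    "source_file_path": "leads.csv",   # or "source_array": [...] instead
    "transform_function": None,        # omitted when columns already match
    "skip_invalid": False,             # stop and report validation errors (default)
    "on_failure": "skip",              # or "abort" (default: skip)
}
```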
The Transform Function
Sometimes your source data has different column names than what the target tool expects. The AI Agent handles this automatically by writing a transform function.
When it's needed:
Your CSV has columns `company_name`, `website`, `first_name`, `last_name` — but the target tool expects `company`, `url`, `contact_name`.
What the agent writes:
```python
def transform_row(row):
    return {
        "company": row["company_name"],
        "url": row["website"],
        "contact_name": f"{row['first_name']} {row['last_name']}"
    }
```
When it's not needed: If your source columns already match the tool's expected inputs, no transform is required. The agent figures this out on its own.
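To see the mapping in action, here is the same transform applied to a single row. The row data is invented for illustration:

```python
def transform_row(row):
    return {
        "company": row["company_name"],
        "url": row["website"],
        "contact_name": f"{row['first_name']} {row['last_name']}"
    }

# A sample source row with the mismatched column names:
row = {"company_name": "Acme Inc", "website": "https://acme.example",
       "first_name": "Ada", "last_name": "Lovelace"}

result = transform_row(row)
# result now uses the target tool's expected keys:
# {"company": "Acme Inc", "url": "https://acme.example", "contact_name": "Ada Lovelace"}
```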
Validation
Before processing any items, the tool validates every row against the target tool's expected format. This catches problems early — missing fields, wrong data types, etc.
- If validation finds errors and Skip invalid is off, the agent reports the issues and asks you to fix the data.
- If Skip invalid is on, bad rows are skipped and the rest are processed normally.
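The two behaviors above can be sketched as follows. This is a rough model of the validation step, assuming "invalid" means a required field is missing or empty — the real tool may apply stricter checks (data types, formats, etc.):

```python
def validate_rows(rows, required_fields, skip_invalid=False):
    """Sketch of pre-run validation; field semantics are assumptions."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) in ("", None)]
        if missing:
            errors.append((i, f"missing fields: {', '.join(missing)}"))
        else:
            valid.append(row)
    if errors and not skip_invalid:
        # Default behavior: stop and report so you can fix the source data.
        raise ValueError(f"{len(errors)} invalid row(s): {errors}")
    return valid, errors
```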
Results
When the batch finishes, results are compiled into a CSV file with:
- Input columns (prefixed with `input_`)
- Output columns (prefixed with `output_`)
- A `status` column (`success`, `failed`, `skipped`)
- An `error_message` column for any failures
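Because the results file follows this column layout, a quick tally of outcomes is straightforward. A minimal sketch using the standard `csv` module:

```python
import csv
from collections import Counter

def summarize_results(path):
    """Tally the status column of a batch results CSV."""
    with open(path, newline="") as f:
        counts = Counter(row["status"] for row in csv.DictReader(f))
    return dict(counts)
```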
If your agent has sandbox access (code interpreter enabled), the CSV is saved to the sandbox and the agent can read and analyze it.
If sandbox is not enabled, the results are returned directly to the agent as text so it can still summarize and work with them.
Quick Setup
- Open your AI Agent → Tools tab.
- Click Add Tool and select Batch Process.
- Give it a name and description that tells the AI when to use it.
- (Optional) Check Needs Human Approval if you want to review before batches start.
- Save.
That's it. The next time you ask your agent to process multiple items, it will use the Batch Process tool automatically.
Example: Qualifying 100 Leads
You: "I uploaded a CSV with 100 companies. Qualify each one using the Qualification Agent and tell me which ones are a good fit."
What happens:
- Your agent reads the CSV to understand the columns.
- It checks what the Qualification Agent expects as input.
- If the columns don't match, it writes a transform function.
- It calls Batch Process with the CSV, the Qualification Agent as the target tool, and a batch name.
- Since there are more than 5 items, the batch runs in the background.
- When all 100 companies are processed, the agent is notified and gets the results.
- The agent summarizes: "23 companies qualified, 71 didn't meet criteria, 6 had errors."
Example: Scraping Multiple URLs
You: "Scrape these 10 URLs and create a summary of each article."
What happens:
- Your agent creates a list of the 10 URLs.
- It calls Batch Process with the list and the Web Scraper tool.
- Since there are more than 5 items, the batch runs in the background.
- When done, the agent reads the results and writes a summary for each article.
Error Handling
Not every item will succeed: APIs can time out, websites can block scrapers, and data can be malformed.
- Skip (default): Failed items are recorded with their error message. The remaining items continue processing. Your agent gets a complete results file showing which items succeeded and which failed.
- Abort: If too many items fail (more than 20% after a minimum sample), the entire batch stops and remaining items are marked as failed.
The failure threshold only kicks in after at least 3 items (or 10% of the total batch, whichever is larger) have been processed. This prevents a single early failure from killing the entire batch.
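Putting the two rules together — a 20% failure rate checked only after a minimum sample — the abort decision might look like this. The parameter names and exact arithmetic are assumptions based on the numbers documented above:

```python
def should_abort(processed, failed, total,
                 rate=0.20, min_sample_frac=0.10, min_items=3):
    """Sketch of the documented abort heuristic (details are assumptions)."""
    # Threshold applies only after at least 3 items, or 10% of the
    # batch, whichever is larger, have been processed.
    min_sample = max(min_items, int(total * min_sample_frac))
    if processed < min_sample:
        return False  # too early: one failure shouldn't kill the batch
    return (failed / processed) > rate
```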
FAQ
Can I use this for just 1 or 2 items? No — for 3 or fewer items, your agent should call the target tool directly. Batch Process adds overhead that isn't worth it for small numbers.
How do I know when the batch finishes? Your agent is automatically notified when all items are processed. If you enabled email notifications, you'll also get an email.
What if the target tool requires human approval? The tool respects per-item approval settings. If the target tool has "Needs Human Approval" enabled, each item will wait for your approval before executing. See Approve in chat for details on the approval flow.
Can I see progress while the batch runs? Currently, the agent is notified when the batch completes. Real-time progress tracking is coming in a future update.
What file formats are supported? CSV files and arrays (lists of items). The agent can also convert other formats to arrays before batching.
Human approvals in batches
If the target tool (the one Batch Process calls for each item) has Requires Human Approval turned on, every item in the batch generates its own approval card, which you resolve individually from the chat or the Pending Approvals view. Keep batches small when the target tool needs approval — every row is a card you'll have to act on.
Troubleshooting
| Symptom | Likely Cause | Fix |
|---|---|---|
| "Validation found invalid rows" | Source data doesn't match the target tool's expected format | Ask the agent to fix the transform function, or fix the source data |
| Batch completes but all items failed | The target tool is misconfigured or an external API is down | Check the target tool works for a single item first |
| Agent doesn't use Batch Process | The tool description doesn't clearly tell the AI when to use it | Update the tool description to mention bulk/batch processing |
| Results file is empty | All items were skipped due to validation errors | Set "Skip invalid" to off so the agent reports the errors instead |
Start simple: try a small batch (4–5 items) first to verify everything works, then scale up to larger datasets.