Batch Scrape
Batch Scrape lets you submit multiple URLs in a single request; the URLs are scraped asynchronously, and the API returns a job id you can use to poll for status.
Endpoint
POST https://api.firecrawl.dev/v2/batch/scrape
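A minimal sketch of submitting a batch scrape job with the standard library. The `Authorization: Bearer` header and the placeholder API key are assumptions, not confirmed by this page; check your account setup before use.

```python
import json
import urllib.request

API_URL = "https://api.firecrawl.dev/v2/batch/scrape"

def build_batch_request(urls, api_key):
    """Build (but do not send) the POST request for a batch scrape job."""
    payload = json.dumps({"urls": urls}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Assumed auth scheme; replace with your real key.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_batch_request(["https://example.com", "https://example.org"], "fc-YOUR-KEY")
# urllib.request.urlopen(req) would submit the job and return the job id.
```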
Request fields (Body)
Required
| Field | Type | Required | Notes |
|---|---|---|---|
| urls | string[] (uri) | Yes | The URLs to scrape |
Scheduling and validation
| Field | Type | Default | Notes |
|---|---|---|---|
| maxConcurrency | number | - | Max concurrent scrapes for this job (otherwise uses the team limit). |
| ignoreInvalidURLs | boolean | true | Skips invalid URLs and returns them in invalidURLs. |
| webhook | object | - | Optional webhook specification. |
Other fields
Besides urls, maxConcurrency, ignoreInvalidURLs, and webhook, Batch Scrape accepts the same scrape options as /v2/scrape (e.g. formats, onlyMainContent, includeTags, excludeTags, maxAge, minAge, headers, waitFor, mobile, timeout, parsers, actions, location, proxy, storeInCache, zeroDataRetention).
Response fields
| Field | Type | Notes |
|---|---|---|
| success | boolean | Whether the job was created |
| id | string | Batch job id |
| url | string | Job status URL |
| invalidURLs | string[] \| null | Invalid URLs (when ignoreInvalidURLs=true) |
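A small sketch of handling the job-creation response. The JSON shape follows the response fields table above; the example id and helper function are assumptions for illustration.

```python
def summarize_batch_response(resp: dict) -> str:
    """Summarize a batch scrape creation response (shape per the table above)."""
    if not resp.get("success"):
        return "batch job was not created"
    invalid = resp.get("invalidURLs") or []
    note = f" ({len(invalid)} invalid URL(s) skipped)" if invalid else ""
    return f"job {resp['id']} created; poll {resp['url']}{note}"

# Hypothetical response body for illustration.
example = {
    "success": True,
    "id": "batch-job-id",
    "url": "https://api.firecrawl.dev/v2/batch/scrape/batch-job-id",
    "invalidURLs": ["not a url"],
}
print(summarize_batch_response(example))
```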