Browser API

Scraping Browser

A balance of speed and stealth. Fast enough for efficiency while bypassing most website protections. Render JavaScript, extract data, and choose your proxy strategy.

JS rendering Anti-bot bypass Screenshot capture Proxy selection

Overview

The Scraping Browser API provisions a managed headless browser instance that renders JavaScript and bypasses bot protections. Send a URL, choose your proxy and output format, and receive clean extracted content along with a full-page screenshot.

Property Value
Endpoint POST /v1/browser/scravity/
Cost 10 credits per successful request (datacenter proxy) · 35 credits (residential proxy)
Rate Limit 2 requests/second, 20 requests/minute
Max Wait 120 seconds per request
Auth X-API-KEY header (key starts with scravity)
Response JSON (application/json)
Interactive Docs api.scravity.com/docs

Getting Started

Need an API key?

Sign up for a free Scravity account to generate your API key. All new accounts start with demo credits.

Get your free API key →

  1. Get your API key

     Sign up and generate a key from your dashboard

  2. Authenticate requests

     Add the X-API-KEY header to every request

  3. Send a POST request

     Pass {"url": "https://example.com"} with optional proxy and format settings

  4. Process results

     Receive rendered page content, a base64 screenshot, and processing logs
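The four steps above can be sketched as a small Python client. This is a minimal illustration, not an official SDK: the endpoint, headers, and body fields come from the reference below, while the helper names (`build_request`, `scrape`) are our own.

```python
import json
import urllib.request

API_URL = "https://api.scravity.com/v1/browser/scravity/"

def build_request(api_key, url, wait_seconds=0, proxy="datacenter",
                  output_format="markdown"):
    """Assemble the documented headers and JSON body for one request."""
    headers = {
        "X-API-KEY": api_key,          # step 2: authenticate every request
        "Content-Type": "application/json",
    }
    body = {
        "url": url,                    # step 3: the page to scrape
        "wait_seconds": wait_seconds,
        "proxy": proxy,
        "output_format": output_format,
    }
    return headers, body

def scrape(api_key, url, **options):
    """POST the request and return the parsed JSON response (step 4)."""
    headers, body = build_request(api_key, url, **options)
    req = urllib.request.Request(
        API_URL, data=json.dumps(body).encode("utf-8"),
        headers=headers, method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `scrape("scravity_...", "https://example.com", wait_seconds=2)` would return the JSON payload described under Success Response Fields.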

Read the full documentation → for endpoint reference, parameters, and code examples in multiple languages.

Try the playground → to test the API with your own URL before integrating.

API Playground

Test the Scraping Browser endpoint with a real API request. Your key is sent directly to our API — nothing is stored or logged on our side.

Your key is used for a single request and never stored. Get one from your dashboard.


How it works

This sends a real POST request directly to https://api.scravity.com/v1/browser/scravity/ with your API key. The response includes the page content, a base64-encoded screenshot (previewed above), and processing logs. You need a Scravity account with available credits — get one here.

Documentation

Complete reference for the Scraping Browser API endpoint.

Endpoint

POST https://api.scravity.com/v1/browser/scravity/

Authentication

Include your API key in the request header:

Header
X-API-KEY: scravity_k8xP2mN9qR7w...

Request Body

Parameter Type Required Description
url string Yes The URL to scrape (must be a valid URI, max 2083 characters)
wait_seconds integer No Seconds to wait before extracting content (0–120, default: 0)
proxy string No Proxy type: "datacenter" (faster, default) or "residential" (stealthy, higher cost)
output_format string No Output format: "raw", "text", or "markdown" (default: "markdown")
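The constraints in the table above can be checked client-side before a request is sent, avoiding a 422 round trip. The sketch below mirrors the documented limits; the function name `validate_request` is illustrative, not part of the API.

```python
def validate_request(body):
    """Check a request body against the documented parameter constraints."""
    url = body.get("url")
    if not isinstance(url, str) or not url or len(url) > 2083:
        raise ValueError("url is required and must be at most 2083 characters")
    wait = body.get("wait_seconds", 0)
    if not isinstance(wait, int) or not 0 <= wait <= 120:
        raise ValueError("wait_seconds must be an integer between 0 and 120")
    if body.get("proxy", "datacenter") not in ("datacenter", "residential"):
        raise ValueError('proxy must be "datacenter" or "residential"')
    if body.get("output_format", "markdown") not in ("raw", "text", "markdown"):
        raise ValueError('output_format must be "raw", "text", or "markdown"')
    return body
```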

Success Response Fields

Field Type Description
success boolean Whether the request was processed successfully (always true)
content string The scraped page content in the requested output format
screenshot string Base64 encoded PNG screenshot of the rendered page
logs string[] Processing log messages describing what the endpoint did
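A typical consumer splits a success payload into the extracted content and the decoded screenshot bytes. A minimal sketch, run here against a stubbed payload (the helper name `handle_response` is our own):

```python
import base64

def handle_response(payload):
    """Return (content, png_bytes) from a parsed success response."""
    if not payload.get("success"):
        raise RuntimeError(payload.get("error", "unknown error"))
    # The screenshot field is base64-encoded PNG data.
    png_bytes = base64.b64decode(payload["screenshot"])
    return payload["content"], png_bytes

# Stubbed payload shaped like the documented success response:
sample = {
    "success": True,
    "content": "# Example Domain",
    "screenshot": base64.b64encode(b"\x89PNG\r\n").decode("ascii"),
    "logs": ["Launching browser instance"],
}
content, png = handle_response(sample)
```

Writing `png` to a file with a `.png` extension yields the rendered-page screenshot.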

Error Response Fields

Field Type Description
success boolean Whether the request was processed successfully (always false)
error string A human-readable error message describing what went wrong
logs string[] Processing log messages up to the point of failure

Example Request

cURL
curl -X POST https://api.scravity.com/v1/browser/scravity/ \
  -H "X-API-KEY: scravity_k8xP2mN9qR7w..." \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "wait_seconds": 2,
    "proxy": "datacenter",
    "output_format": "markdown"
  }'

Example Success Response

200 OK
{
  "success": true,
  "content": "# Example Domain\n\nThis domain is for use in illustrative examples...",
  "screenshot": "iVBORw0KGgoAAAANSUhEUgAABAAAA...",
  "logs": [
    "Launching browser instance",
    "Setting proxy: datacenter",
    "Navigating to: https://example.com",
    "Waiting 2 seconds",
    "Extracting page content as markdown",
    "Capturing screenshot"
  ]
}

Example Error Response

500 Error
{
  "success": false,
  "error": "Navigation timeout: page took too long to load",
  "logs": [
    "Launching browser instance",
    "Setting proxy: datacenter",
    "Navigating to: https://example.com",
    "Timeout reached after 30 seconds"
  ]
}

Error Handling

Status Description
200 Success — page scraped, content and screenshot returned
422 Validation error — missing or invalid URL, out-of-range wait_seconds, or invalid enum value
500 Internal error or scraping failure — the browser failed to load or render the page
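The status table above suggests a simple dispatch: treat 422 as a caller bug to fix, and 500 as a transient scraping failure worth retrying (possibly with a residential proxy). A hedged sketch; the function name and action labels are illustrative:

```python
import json

def classify(status, body_text):
    """Map a response to an action based on the documented status codes."""
    payload = json.loads(body_text)
    if status == 200 and payload.get("success"):
        return "ok"            # content and screenshot are present
    if status == 422:
        return "fix-request"   # validation error: correct the parameters
    if status == 500:
        return "retry"         # scraping failure: retry, perhaps residential proxy
    return "unexpected"
```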

Good to know

  • Use wait_seconds for JavaScript-heavy pages that need time to render content
  • Residential proxies cost 35 credits (vs 10 for datacenter) but significantly improve success rates on protected sites
  • The "raw" output format returns the full unprocessed HTML — use "markdown" or "text" for cleaner data
  • The screenshot field is a base64-encoded PNG — decode it to save or display the rendered page
  • Errors return a 500 status with success: false, an error message, and any logs collected before the failure
  • Full documentation with code examples: documentation →