How to Take Screenshots with Node.js in 2026: API vs Puppeteer vs Playwright
You need to take a screenshot of a website in your Node.js application. Maybe you are building link previews, generating PDF reports, monitoring competitor pages, or creating social media cards. Whatever the reason, you have three choices: Puppeteer, Playwright, or a screenshot API.
Each approach has real trade-offs. This guide walks through all three with working code, discusses the infrastructure implications, and helps you decide which one fits your project.
Three Options for Node.js Screenshots
| | Puppeteer | Playwright | SnapAPI |
|---|---|---|---|
| Lines of code | 30-50 | 25-40 | 5-10 |
| Dependencies | puppeteer (170MB) | playwright (200MB) | none (built-in fetch on Node 18+) |
| Browser management | You manage | You manage | Managed for you |
| Server requirements | 1GB+ RAM per instance | 1GB+ RAM per instance | Any server |
| Ad blocking | You build | You build | Built in |
| Cookie banner removal | You build | You build | Built in |
| Scales to 10K/hour | Requires infrastructure | Requires infrastructure | Out of the box |
| Monthly cost (10K/day) | $100-500 (servers) | $100-500 (servers) | $79 (SnapAPI Pro) |
Option 1: Puppeteer (DIY)
Puppeteer is Google's official Node.js library for controlling headless Chrome. It has been the default choice for browser automation since 2017.
Basic screenshot with Puppeteer
// puppeteer-screenshot.js
import puppeteer from 'puppeteer';
async function takeScreenshot(url, outputPath) {
// Launch browser (downloads Chromium on first run - ~170MB)
const browser = await puppeteer.launch({
headless: true, // modern Puppeteer defaults to the new headless mode
args: [
'--no-sandbox',
'--disable-setuid-sandbox',
'--disable-dev-shm-usage', // Required in Docker
'--disable-gpu',
'--single-process',
]
});
try {
const page = await browser.newPage();
// Set viewport
await page.setViewport({ width: 1440, height: 900 });
// Navigate and wait for network to settle
await page.goto(url, {
waitUntil: 'networkidle2',
timeout: 30000
});
// Optional: wait for specific element
// await page.waitForSelector('.main-content', { timeout: 5000 });
// Optional: dismiss cookie banners (fragile, site-specific)
try {
await page.click('[class*="cookie"] button[class*="accept"]');
await new Promise((r) => setTimeout(r, 500)); // waitForTimeout was removed in modern Puppeteer
} catch (e) {
// No cookie banner found, continue
}
// Take screenshot
await page.screenshot({
path: outputPath,
fullPage: true,
type: 'png'
});
console.log(`Screenshot saved to ${outputPath}`);
} finally {
await browser.close();
}
}
// Usage
takeScreenshot('https://github.com', './screenshot.png');
That is 40+ lines just for a basic screenshot with minimal error handling. And this does not include ad blocking, dark mode, device presets, AVIF format support, or retry logic.
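Ad blocking, for instance, usually starts as a request-interception filter. Here is a rough sketch of the predicate you would pair with Puppeteer's `page.setRequestInterception(true)` -- the domain fragments are illustrative only; a real deployment would use a maintained blocklist such as EasyList:

```javascript
// Illustrative ad/tracker filter for Puppeteer request interception.
// The fragments below are examples, not a real blocklist.
const BLOCKED_FRAGMENTS = [
  'doubleclick.net',
  'googlesyndication.com',
  'google-analytics.com',
  'facebook.com/tr',
];

function isBlockedRequest(url) {
  return BLOCKED_FRAGMENTS.some((fragment) => url.includes(fragment));
}

// Wire-up sketch:
// await page.setRequestInterception(true);
// page.on('request', (req) =>
//   isBlockedRequest(req.url()) ? req.abort() : req.continue()
// );
```

Even this simple version needs ongoing maintenance as ad networks rotate domains, which is part of why "built in" matters in the comparison table above.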
Issues with Puppeteer at scale
- Memory: Each Chrome instance uses 300-500MB of RAM. Running 10 concurrent screenshots requires 3-5GB of dedicated RAM. On a typical cloud VM, that is $50-100/month just for the compute.
- Browser management: Chrome updates can break Puppeteer. You need to pin versions and test after every update. The puppeteer-core vs puppeteer distinction trips up many developers.
- Docker complexity: Running Chrome in Docker requires specific flags (--no-sandbox, --disable-dev-shm-usage), increased shared memory allocation, and Chrome-specific base images. Getting this right takes hours of debugging.
- Timeouts and crashes: Pages that load slowly, have infinite scroll, or trigger heavy JavaScript can cause Chrome to hang or crash. You need retry logic, timeout handling, and process monitoring.
- Font rendering: Server environments often lack the fonts that pages expect. You end up installing font packages or dealing with missing character glyphs.
Real-world stat: In a production Puppeteer deployment we reviewed, Chrome crashed an average of 3-5 times per day under moderate load (2,000 screenshots/day). Each crash required process restart logic that took an additional 2-5 seconds to recover.
Option 2: Playwright (DIY)
Playwright is Microsoft's browser automation library. It supports Chromium, Firefox, and WebKit, has better auto-waiting, and is generally considered the successor to Puppeteer.
Basic screenshot with Playwright
// playwright-screenshot.js
import { chromium } from 'playwright';
async function takeScreenshot(url, outputPath) {
const browser = await chromium.launch({ headless: true });
try {
const context = await browser.newContext({
viewport: { width: 1440, height: 900 },
// Dark mode
colorScheme: 'dark',
// iPhone simulation
// ...devices['iPhone 15 Pro']
});
const page = await context.newPage();
await page.goto(url, {
waitUntil: 'networkidle',
timeout: 30000
});
// Wait for content to render
await page.waitForLoadState('domcontentloaded');
// Take screenshot
await page.screenshot({
path: outputPath,
fullPage: true,
type: 'png'
});
console.log(`Screenshot saved to ${outputPath}`);
} finally {
await browser.close();
}
}
takeScreenshot('https://github.com', './screenshot.png');
Playwright is cleaner than Puppeteer. The browser.newContext() API makes viewport and device configuration more straightforward. Auto-waiting reduces the need for manual waitForSelector calls. Device emulation with devices['iPhone 15 Pro'] is elegant.
Playwright advantages over Puppeteer
- Better auto-waiting: Playwright waits for elements to be actionable before interacting, reducing timing-related flakiness.
- Multi-browser support: Test against Chromium, Firefox, and WebKit from the same API.
- Browser contexts: Lighter than full browser instances, allowing better parallelization.
- Better device emulation: Built-in device descriptors for dozens of devices.
Same infrastructure problems
However, Playwright has the same fundamental infrastructure challenges as Puppeteer: browser binaries are ~200MB, each instance consumes significant RAM, Docker setup is complex, and you are responsible for managing crashes, retries, and scaling. The code is cleaner, but the operational burden is the same.
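Concretely, scaling either library means capping how many pages render at once so Chrome instances do not exhaust RAM. A minimal concurrency limiter looks something like this (a sketch -- libraries like `p-limit` do the same thing more robustly):

```javascript
// Minimal concurrency limiter: runs async tasks with at most `limit` in flight.
// Sketch only; production code would add per-task timeouts and error handling.
function createLimiter(limit) {
  let active = 0;
  const queue = [];

  const next = () => {
    if (active >= limit || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task()
      .then(resolve, reject)
      .finally(() => {
        active--;
        next(); // a slot freed up, start the next queued task
      });
  };

  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}

// Usage sketch: render many URLs with at most 5 concurrent pages.
// const limit = createLimiter(5);
// await Promise.all(urls.map((url) => limit(() => takeScreenshot(url, out(url)))));
```

Every DIY deployment ends up writing some version of this; with an API, the provider's fleet absorbs the concurrency instead.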
Option 3: Screenshot API
A screenshot API handles all browser infrastructure for you. You send a URL, you get back an image. No browser binaries, no Docker, no memory management, no crash recovery.
Screenshot with SnapAPI
// snapapi-screenshot.js
import fs from 'node:fs';
// Node 18+ ships fetch built in, so no extra dependency is needed
const response = await fetch('https://api.snapapi.pics/v1/screenshot', {
method: 'POST',
headers: {
'Authorization': 'Bearer YOUR_API_KEY',
'Content-Type': 'application/json'
},
body: JSON.stringify({
url: 'https://github.com',
format: 'png',
full_page: true,
viewport: { width: 1440, height: 900 }
})
});
const imageBuffer = await response.arrayBuffer();
fs.writeFileSync('./screenshot.png', Buffer.from(imageBuffer));
Five lines of meaningful code. No browser binary downloads. No Docker configuration. No memory management. The API handles rendering, timeouts, retries, and scaling.
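If you still want client-side resilience on top (say, retrying transient 5xx responses), a thin generic wrapper is enough. This is a sketch, not part of the SnapAPI SDK:

```javascript
// Generic retry-with-exponential-backoff wrapper for any async operation.
// Sketch only: real code would distinguish retryable errors (429/5xx)
// from permanent ones (400/401) before retrying.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < retries - 1) {
        // Backoff: 500ms, 1000ms, 2000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

// Usage sketch:
// const res = await withRetry(() =>
//   fetch('https://api.snapapi.pics/v1/screenshot', { /* ... */ })
// );
```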
With the SnapAPI Node.js SDK
// Using the official SDK
import { SnapAPI } from 'snapapi-js';
const client = new SnapAPI('YOUR_API_KEY');
// Simple screenshot
const screenshot = await client.screenshot({
url: 'https://github.com',
format: 'png',
full_page: true
});
fs.writeFileSync('./screenshot.png', screenshot);
// Full-featured capture with ad blocking and dark mode
const fullCapture = await client.screenshot({
url: 'https://techcrunch.com',
format: 'webp',
full_page: true,
dark_mode: true,
block_ads: true,
block_cookie_banners: true,
viewport: { width: 1440, height: 900 },
delay: 1000 // Wait 1s after load for lazy content
});
Code Complexity Comparison
Let's compare what it takes to implement a production-ready screenshot service with each approach. "Production-ready" means: error handling, retries, timeout management, resource cleanup, and support for full-page capture, dark mode, and ad blocking.
Puppeteer: ~120 lines
// Production Puppeteer: error handling, retries, resource cleanup
import puppeteer from 'puppeteer';
const MAX_RETRIES = 3;
let browser = null;
async function getBrowser() {
if (!browser || !browser.isConnected()) {
browser = await puppeteer.launch({
headless: true,
args: ['--no-sandbox', '--disable-setuid-sandbox',
'--disable-dev-shm-usage', '--disable-gpu']
});
}
return browser;
}
async function takeScreenshot(url, options = {}) {
const {
format = 'png',
fullPage = false,
width = 1440,
height = 900,
darkMode = false,
blockAds = false,
timeout = 30000
} = options;
let lastError;
for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
let page;
try {
const browser = await getBrowser();
page = await browser.newPage();
await page.setViewport({ width, height });
if (darkMode) {
await page.emulateMediaFeatures([
{ name: 'prefers-color-scheme', value: 'dark' }
]);
}
if (blockAds) {
await page.setRequestInterception(true);
page.on('request', (req) => {
const url = req.url();
if (url.includes('doubleclick') || url.includes('googlesyndication')
|| url.includes('facebook.com/tr') || url.includes('analytics')) {
req.abort();
} else {
req.continue();
}
});
}
await page.goto(url, { waitUntil: 'networkidle2', timeout });
const buffer = await page.screenshot({
fullPage,
type: format === 'jpg' ? 'jpeg' : format,
encoding: 'binary'
});
return buffer;
} catch (error) {
lastError = error;
console.error(`Attempt ${attempt} failed: ${error.message}`);
// If browser crashed, reset it
if (error.message.includes('Target closed') ||
error.message.includes('Protocol error')) {
browser = null;
}
} finally {
if (page) {
try { await page.close(); } catch (e) { /* ignore */ }
}
}
}
throw lastError;
}
// Cleanup on shutdown ('exit' handlers cannot await, so hook termination signals)
for (const signal of ['SIGINT', 'SIGTERM']) {
process.on(signal, async () => {
if (browser) await browser.close();
process.exit(0);
});
}
SnapAPI: ~10 lines
// Production SnapAPI: the API handles retries, timeouts, and cleanup
import { SnapAPI } from 'snapapi-js';
const client = new SnapAPI('YOUR_API_KEY');
async function takeScreenshot(url, options = {}) {
return client.screenshot({
url,
format: options.format || 'png',
full_page: options.fullPage || false,
dark_mode: options.darkMode || false,
block_ads: options.blockAds || false,
block_cookie_banners: options.blockCookieBanners || false,
viewport: { width: options.width || 1440, height: options.height || 900 }
});
}
The SnapAPI version is not just shorter -- it is also more capable. It includes cookie banner blocking, AVIF format support, 26+ device presets, and OG image generation that would each require additional code and dependencies in the Puppeteer version.
The Infrastructure Problem
Code complexity is only half the story. The bigger challenge with Puppeteer and Playwright is infrastructure.
What you need for a DIY screenshot service
- Server with enough RAM: A t3.medium EC2 instance (4GB RAM) handles about 5-8 concurrent Chrome instances. At $30/month, that is your baseline. For 10,000 screenshots/day with reasonable latency, you need a t3.xlarge (16GB) at $120/month.
- Docker configuration: Chrome in Docker requires --shm-size=2g, specific Chromium flags, and often a custom Dockerfile with system font packages. Expect half a day debugging this the first time.
- Process manager: Chrome processes crash. You need PM2, Supervisor, or a similar tool to restart them automatically. You also need to detect zombie Chrome processes that consume memory without doing work.
- Queue system: If you receive bursts of screenshot requests, you need a queue (Redis + Bull, SQS, etc.) to buffer them and process at a rate your server can handle.
- Monitoring: Memory usage, Chrome crash frequency, screenshot latency, and failure rate all need monitoring. OOM kills on your server will silently drop requests.
- Browser updates: Chrome updates every 4-6 weeks. Each update can break Puppeteer/Playwright compatibility. You need a maintenance window to test and update.
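The queue requirement above is worth making concrete. A token-bucket rate limiter is one simple way to smooth request bursts down to a rate one server can sustain (a sketch with an injectable clock so the logic is testable; production would back this with Redis + Bull or SQS as noted):

```javascript
// Token-bucket rate limiter: allows bursts up to `capacity`, refilling at
// `refillPerSecond`. Sketch only; the clock is injected for testability.
function createTokenBucket({ capacity, refillPerSecond, now = () => Date.now() }) {
  let tokens = capacity;
  let lastRefill = now();

  return {
    tryTake() {
      const current = now();
      const elapsedSec = (current - lastRefill) / 1000;
      tokens = Math.min(capacity, tokens + elapsedSec * refillPerSecond);
      lastRefill = current;
      if (tokens >= 1) {
        tokens -= 1;
        return true; // request may proceed now
      }
      return false; // caller should queue the request or reject it
    },
  };
}
```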
With a screenshot API, none of this is your problem. SnapAPI manages the browser fleet, handles scaling, and gives you a flat monthly rate regardless of how many servers it takes on their end.
When DIY Makes Sense
The API approach is not always the right choice. Use Puppeteer or Playwright when:
- You need browser automation beyond screenshots: Filling forms, clicking buttons, navigating multi-step flows. Screenshot APIs capture static pages; Puppeteer and Playwright can interact with them.
- You are running E2E tests: Playwright's test runner is excellent for end-to-end testing. Using a screenshot API for test screenshots adds network latency and an external dependency to your CI pipeline.
- You need screenshots of localhost: Screenshot APIs cannot access your local development server. For capturing pages on localhost:3000, you need a local browser.
- You have very specific browser requirements: Custom Chrome extensions, specific browser flags, or Firefox/WebKit rendering. APIs typically offer Chromium only.
- Volume is under 100/month and cost matters: If you only need a few screenshots per month, Puppeteer is free (minus server costs) while APIs have monthly minimums.
When to Use a Screenshot API
Use a screenshot API when:
- You need to capture external URLs: The most common use case. Link previews, social cards, monitoring dashboards, portfolio generators -- all external URLs where you just need an image back.
- Volume is over 1,000/month: At this scale, the server costs and maintenance time for DIY start exceeding the API subscription cost.
- You cannot run Chrome on your server: Serverless functions (Lambda, Cloudflare Workers), lightweight containers, and shared hosting cannot run headless Chrome. An API call works anywhere.
- You need built-in features: Ad blocking, cookie banner removal, dark mode, AVIF format, device presets. Building these into a DIY solution requires significant additional code.
- You want scraping and extraction too: If you need both screenshots and web scraping, SnapAPI covers both in one API. With Puppeteer, adding scraping means building a second pipeline.
- Reliability matters: Screenshot APIs have dedicated teams managing browser infrastructure. They handle Chrome updates, crash recovery, and scaling. Your team focuses on your product.
SnapAPI Node.js Quickstart
Get started with SnapAPI in under 2 minutes:
Step 1: Install the SDK
npm install snapapi-js
Step 2: Get a free API key
Sign up at snapapi.pics/register. No credit card required. You get 200 free requests/month.
Step 3: Take your first screenshot
import { SnapAPI } from 'snapapi-js';
import fs from 'fs';
const client = new SnapAPI('YOUR_API_KEY');
// Basic screenshot
const screenshot = await client.screenshot({
url: 'https://linear.app',
format: 'webp',
full_page: true
});
fs.writeFileSync('./linear.webp', screenshot);
console.log('Screenshot saved!');
Step 4: Try advanced features
// Mobile screenshot with ad blocking
const mobile = await client.screenshot({
url: 'https://bbc.com',
format: 'png',
device: 'iPhone 15 Pro',
block_ads: true,
block_cookie_banners: true
});
// Dark mode capture
const dark = await client.screenshot({
url: 'https://github.com/trending',
format: 'avif', // 30% smaller than WebP
dark_mode: true,
viewport: { width: 1920, height: 1080 }
});
// Full-page with delay for lazy loading
const fullPage = await client.screenshot({
url: 'https://producthunt.com',
format: 'png',
full_page: true,
delay: 2000 // Wait 2s for lazy-loaded content
});
Advanced: Express.js Screenshot Endpoint
Here is a complete Express.js endpoint that generates screenshots on demand. This is useful for building a screenshot microservice in your architecture:
import express from 'express';
import { SnapAPI } from 'snapapi-js';
const app = express();
const snapapi = new SnapAPI(process.env.SNAPAPI_KEY);
app.use(express.json());
app.post('/api/screenshot', async (req, res) => {
try {
const {
url,
format = 'webp',
full_page = false,
dark_mode = false,
width = 1440,
height = 900
} = req.body;
if (!url) {
return res.status(400).json({ error: 'url is required' });
}
const screenshot = await snapapi.screenshot({
url,
format,
full_page,
dark_mode,
block_ads: true,
block_cookie_banners: true,
viewport: { width, height }
});
const contentType = {
png: 'image/png',
jpeg: 'image/jpeg',
webp: 'image/webp',
avif: 'image/avif'
}[format] || 'image/png';
res.set('Content-Type', contentType);
res.set('Cache-Control', 'public, max-age=3600');
res.send(screenshot);
} catch (error) {
console.error('Screenshot error:', error.message);
res.status(500).json({ error: 'Failed to capture screenshot' });
}
});
app.listen(3000, () => console.log('Screenshot service running on :3000'));
That is a complete, production-ready screenshot microservice in 45 lines. No browser management, no Docker headaches, no memory tuning. Deploy it anywhere Node.js runs -- including serverless platforms like Vercel or Railway.
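Since the endpoint already sets `Cache-Control: public, max-age=3600`, a matching server-side cache avoids paying for repeat captures of the same URL within that window. A minimal in-memory TTL cache could look like this (a sketch with an injectable clock; a shared store like Redis would be the production choice):

```javascript
// Minimal in-memory TTL cache keyed by URL + capture options. Sketch only.
function createTtlCache({ ttlMs, now = () => Date.now() }) {
  const entries = new Map();

  return {
    get(key) {
      const entry = entries.get(key);
      if (!entry) return undefined;
      if (now() - entry.storedAt > ttlMs) {
        entries.delete(key); // expired entry
        return undefined;
      }
      return entry.value;
    },
    set(key, value) {
      entries.set(key, { value, storedAt: now() });
    },
  };
}

// Usage sketch inside the Express handler, before calling the API:
// const cache = createTtlCache({ ttlMs: 3600_000 });
// const key = JSON.stringify({ url, format, full_page, dark_mode });
// const cached = cache.get(key);
// if (cached) { res.set('Content-Type', contentType); return res.send(cached); }
// ...after a successful capture: cache.set(key, screenshot);
```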
Start Taking Screenshots in 2 Minutes
200 free screenshots/month. No credit card. Node.js SDK, ad blocking, dark mode, AVIF, and 26+ device presets included.
Get Your Free API Key
Frequently Asked Questions
Is Puppeteer faster than a screenshot API?
For a single screenshot on a local machine, Puppeteer can be faster because there is no network round-trip. In production, the difference is negligible. SnapAPI's median response time is under 2 seconds, and you avoid the setup overhead of launching and managing Chrome instances.
Can I use Playwright with SnapAPI?
They solve different problems. Use Playwright for local browser automation, E2E testing, and interacting with pages. Use SnapAPI for capturing screenshots of external URLs at scale without managing browser infrastructure. Many teams use both.
Does SnapAPI work in serverless environments (Lambda, Vercel)?
Yes. SnapAPI is just an HTTP API call, so it works anywhere you can make a network request. Puppeteer and Playwright are difficult to impossible to run in serverless environments due to Chrome binary size and memory constraints.
What about Puppeteer with Browserless or BrowserCloud?
Services like Browserless give you hosted Chrome instances that Puppeteer can connect to remotely. This solves the infrastructure problem but not the code complexity problem. You still write Puppeteer code for viewport setup, waiting, ad blocking, etc. A screenshot API is simpler if you just need images back from URLs.
How many concurrent screenshots can SnapAPI handle?
SnapAPI handles concurrent requests automatically. On the Pro plan ($79/month), you can make requests as fast as your application needs them, up to 50,000/month total. There is no need to manage concurrency on your side.
Last updated: .