# Build Agents on Cloudflare URL: https://developers.cloudflare.com/agents/ import { CardGrid, Description, Feature, LinkButton, LinkTitleCard, PackageManagers, Plan, RelatedProduct, Render, TabItem, Tabs, TypeScriptExample, } from "~/components"; The Agents SDK enables you to build and deploy AI-powered agents that can autonomously perform tasks, communicate with clients in real time, call AI models, persist state, schedule tasks, run asynchronous workflows, browse the web, query data from your database, support human-in-the-loop interactions, and [a lot more](/agents/api-reference/). ### Ship your first Agent To use the Agent starter template and create your first Agent with the Agents SDK: ```sh # install it npm create cloudflare@latest agents-starter -- --template=cloudflare/agents-starter # and deploy it npx wrangler@latest deploy ``` Head to the guide on [building a chat agent](/agents/getting-started/build-a-chat-agent) to learn how the starter project is built and how to use it as a foundation for your own agents. If you're already building on [Workers](/workers/), you can install the `agents` package directly into an existing project: ```sh npm i agents ``` And then define your first Agent by creating a class that extends the `Agent` class: ```ts import { Agent, AgentNamespace } from 'agents'; export class MyAgent extends Agent { // Define methods on the Agent: // https://developers.cloudflare.com/agents/api-reference/agents-api/ // // Every Agent has built-in state via this.setState and this.sql // Built-in scheduling via this.schedule // Agents support WebSockets, HTTP requests, state synchronization and // can run for seconds, minutes or hours: as long as the tasks need. } ``` Dive into the [Agents SDK reference](/agents/api-reference/agents-api/) to learn more about how to use the Agents SDK package and define an `Agent`. ### Why build agents on Cloudflare?
We built the Agents SDK with a few things in mind: - **Batteries (state) included**: Agents come with [built-in state management](/agents/api-reference/store-and-sync-state/), with the ability to automatically sync state between an Agent and clients, trigger events on state changes, and read+write to each Agent's SQL database. - **Communicative**: You can connect to an Agent via [WebSockets](/agents/api-reference/websockets/) and stream updates back to the client in real time. Handle a long-running response from a reasoning model, the results of an [asynchronous workflow](/agents/api-reference/run-workflows/), or build a chat app on top of the `useAgent` hook included in the Agents SDK. - **Extensible**: Agents are code. Use the [AI models](/agents/api-reference/using-ai-models/) you want, bring your own headless browser service, pull data from your database hosted in another cloud, add your own methods to your Agent and call them. Agents built with the Agents SDK can be deployed directly to Cloudflare and run on top of [Durable Objects](/durable-objects/) — which you can think of as stateful micro-servers that can scale to tens of millions — and are able to run wherever they need to. Run your Agents close to a user for low-latency interactivity, close to your data for throughput, and/or anywhere in between. --- ### Build on the Cloudflare Platform Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more. Build full-stack AI applications with Vectorize, Cloudflare’s vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, anomaly detection or can be used to provide context and memory to an LLM. Run machine learning models, powered by serverless GPUs, on Cloudflare's global network.
Build stateful agents with guaranteed execution, including automatic retries and persistent state that runs for minutes, hours, days, or weeks. --- # Changelog URL: https://developers.cloudflare.com/ai-gateway/changelog/ import { ProductReleaseNotes } from "~/components"; {/* */} --- # OpenAI Compatibility URL: https://developers.cloudflare.com/ai-gateway/chat-completion/ Cloudflare's AI Gateway offers an OpenAI-compatible `/chat/completions` endpoint, enabling integration with multiple AI providers using a single URL. This feature simplifies the integration process, allowing for seamless switching between different models without significant code modifications. ## Endpoint URL ```txt https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat/chat/completions ``` Replace `{account_id}` and `{gateway_id}` with your Cloudflare account and gateway IDs. ## Parameters Switch providers by changing the `model` and `apiKey` parameters. Specify the model using the `{provider}/{model}` format. For example: - `openai/gpt-4o-mini` - `google-ai-studio/gemini-2.0-flash` - `anthropic/claude-3-haiku` ## Examples ### OpenAI SDK ```js import OpenAI from "openai"; const client = new OpenAI({ apiKey: "YOUR_PROVIDER_API_KEY", // Provider API key // NOTE: the OpenAI client automatically adds /chat/completions to the end of the URL, so you should not add it yourself. baseURL: "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat", }); const response = await client.chat.completions.create({ model: "google-ai-studio/gemini-2.0-flash", messages: [{ role: "user", content: "What is Cloudflare?"
}], }); console.log(response.choices[0].message.content); ``` ### cURL ```bash curl -X POST https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat/chat/completions \ --header 'Authorization: Bearer {openai_token}' \ --header 'Content-Type: application/json' \ --data '{ "model": "google-ai-studio/gemini-2.0-flash", "messages": [ { "role": "user", "content": "What is Cloudflare?" } ] }' ``` ### Universal provider You can also use this pattern with the [Universal Endpoint](/ai-gateway/universal/) to add [fallbacks](/ai-gateway/configuration/fallbacks/) across multiple providers. When used in combination, every request will return the same standardized format, whether from the primary or fallback model. This behavior means that you do not have to add extra parsing logic to your app. ```ts title="index.ts" export interface Env { AI: Ai; } export default { async fetch(request: Request, env: Env) { return env.AI.gateway("default").run({ provider: "compat", endpoint: "chat/completions", headers: { authorization: "Bearer", }, query: { model: "google-ai-studio/gemini-2.0-flash", messages: [ { role: "user", content: "What is Cloudflare?", }, ], }, }); }, }; ``` ## Supported Providers The OpenAI-compatible endpoint supports models from the following providers: - [Anthropic](/ai-gateway/providers/anthropic/) - [OpenAI](/ai-gateway/providers/openai/) - [Groq](/ai-gateway/providers/groq/) - [Mistral](/ai-gateway/providers/mistral/) - [Cohere](/ai-gateway/providers/cohere/) - [Perplexity](/ai-gateway/providers/perplexity/) - [Workers AI](/ai-gateway/providers/workersai/) - [Google-AI-Studio](/ai-gateway/providers/google-ai-studio/) - [Grok](/ai-gateway/providers/grok/) - [DeepSeek](/ai-gateway/providers/deepseek/) - [Cerebras](/ai-gateway/providers/cerebras/) --- # Architectures URL: https://developers.cloudflare.com/ai-gateway/demos/ import { GlossaryTooltip, ResourcesBySelector } from "~/components"; Learn how you can use AI Gateway within your existing 
architecture. ## Reference architectures Explore the following reference architectures that use AI Gateway: --- # Getting started URL: https://developers.cloudflare.com/ai-gateway/get-started/ import { Details, DirectoryListing, LinkButton, Render } from "~/components"; In this guide, you will learn how to create your first AI Gateway. You can create multiple gateways to control different applications. ## Prerequisites Before you get started, you need a Cloudflare account. Sign up ## Create gateway Then, create a new AI Gateway. ## Choosing gateway authentication When setting up a new gateway, you can choose between an authenticated and unauthenticated gateway. Enabling an authenticated gateway requires each request to include a valid authorization token, adding an extra layer of security. We recommend using an authenticated gateway when storing logs to prevent unauthorized access and protect against invalid requests that can inflate log storage usage and make it harder to find the data you need. Learn more about setting up an [Authenticated Gateway](/ai-gateway/configuration/authentication/). ## Connect application Next, connect your AI provider to your gateway. AI Gateway offers multiple endpoints for each Gateway you create - one endpoint per provider, and one Universal Endpoint. To use AI Gateway, you will need to create your own account with each provider and provide your API key. AI Gateway acts as a proxy for these requests, enabling observability, caching, and more. Additionally, AI Gateway has a [WebSockets API](/ai-gateway/websockets-api/) which provides a single persistent connection, enabling continuous communication. This API supports all AI providers connected to AI Gateway, including those that do not natively support WebSockets. 
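Each per-provider endpoint shares a common URL shape. As a hedged sketch (the provider slugs shown are examples; confirm the exact slug on each provider's documentation page), a small helper for building these URLs could look like this:

```typescript
// Build a per-provider AI Gateway endpoint URL.
// The provider slugs ("openai", "workers-ai", ...) are illustrative;
// check each provider's page for the exact value to use.
function providerEndpoint(
  accountId: string,
  gatewayId: string,
  provider: string,
): string {
  return `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayId}/${provider}`;
}

// For example, point an SDK's base URL at the gateway instead of the provider:
const base = providerEndpoint("my-account-id", "my-gateway", "openai");
console.log(base); // https://gateway.ai.cloudflare.com/v1/my-account-id/my-gateway/openai
```

Because the account, gateway, and provider are just path segments, switching providers is a one-line change in your configuration.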
Below is a list of our supported model providers: If you do not have a provider preference, start with one of our dedicated tutorials: - [OpenAI](/ai-gateway/tutorials/deploy-aig-worker/) - [Workers AI](/ai-gateway/tutorials/create-first-aig-workers/) ## View analytics Now that your provider is connected to the AI Gateway, you can view analytics for requests going through your gateway.
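The cost figure in analytics is derived from token counts, so you can reason about it as a tokens-times-price calculation. The sketch below is illustrative only; the per-million-token prices are hypothetical placeholders, not real provider rates:

```typescript
// Illustrative token-based cost estimate, similar in spirit to the
// cost metric in AI Gateway analytics. Prices are placeholders;
// always check your provider's pricing page for real rates.
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

function estimateCostUSD(
  usage: TokenUsage,
  inputPricePerMillion: number,
  outputPricePerMillion: number,
): number {
  return (
    (usage.promptTokens * inputPricePerMillion +
      usage.completionTokens * outputPricePerMillion) / 1_000_000
  );
}

const estimate = estimateCostUSD(
  { promptTokens: 1_000, completionTokens: 500 },
  0.15, // hypothetical $ per 1M input tokens
  0.60, // hypothetical $ per 1M output tokens
);
console.log(estimate.toFixed(5)); // "0.00045"
```

An estimate of this kind tracks trends well, but the provider's own dashboard remains the source of truth for billing.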
:::note[Note] The cost metric is an estimation based on the number of tokens sent and received in requests. While this metric can help you monitor and predict cost trends, refer to your provider's dashboard for the most accurate cost details. ::: ## Next steps - Learn more about [caching](/ai-gateway/configuration/caching/) for faster requests and cost savings, and [rate limiting](/ai-gateway/configuration/rate-limiting/) to control how your application scales. - Explore how to specify model or provider [fallbacks](/ai-gateway/configuration/fallbacks/) for resiliency. - Learn how to use low-cost, open source models on [Workers AI](/ai-gateway/providers/workersai/) - our AI inference service. --- # Header Glossary URL: https://developers.cloudflare.com/ai-gateway/glossary/ import { Glossary } from "~/components"; AI Gateway supports a variety of headers to help you configure, customize, and manage your API requests. This page provides a complete list of all supported headers, along with a short description of each. ## Configuration hierarchy Settings in AI Gateway can be configured at three levels: **Provider**, **Request**, and **Gateway**. Since the same settings can be configured in multiple locations, the following hierarchy determines which value is applied: 1. **Provider-level headers**: Relevant only when using the [Universal Endpoint](/ai-gateway/universal/), these headers take precedence over all other configurations. 2. **Request-level headers**: Apply if no provider-level headers are set. 3. **Gateway-level settings**: Act as the default if no headers are set at the provider or request levels. This hierarchy ensures consistent behavior, prioritizing the most specific configurations. Use provider-level and request-level headers for more fine-tuned control, and gateway settings for general defaults.
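The hierarchy above can be pictured as a chain of fallbacks. This is a conceptual sketch only; AI Gateway resolves the hierarchy server-side, and none of the function names below come from its API:

```typescript
// Conceptual sketch of the configuration hierarchy:
// provider-level > request-level > gateway-level.
type Settings = Record<string, string | undefined>;

function resolveSetting(
  name: string,
  providerHeaders: Settings, // headers on one step of a Universal Endpoint payload
  requestHeaders: Settings,  // headers on the HTTP request itself
  gatewayDefaults: Settings, // defaults configured in the gateway dashboard
): string | undefined {
  return providerHeaders[name] ?? requestHeaders[name] ?? gatewayDefaults[name];
}

// "cf-aig-cache-ttl" set at all three levels: the provider value wins.
const ttl = resolveSetting(
  "cf-aig-cache-ttl",
  { "cf-aig-cache-ttl": "0" },    // provider level
  { "cf-aig-cache-ttl": "3600" }, // request level
  { "cf-aig-cache-ttl": "300" },  // gateway default
);
console.log(ttl); // "0"
```

Note that a value of `"0"` at the provider level still wins: the more specific level takes precedence even when it disables a feature.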
--- # Cloudflare AI Gateway URL: https://developers.cloudflare.com/ai-gateway/ import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct, } from "~/components"; Observe and control your AI applications. Cloudflare's AI Gateway allows you to gain visibility and control over your AI apps. By connecting your apps to AI Gateway, you can gather insights on how people are using your application with analytics and logging, and then control how your application scales with features such as caching, rate limiting, request retries, model fallback, and more. Better yet - it only takes one line of code to get started. Check out the [Get started guide](/ai-gateway/get-started/) to learn how to configure your applications with AI Gateway. ## Features View metrics such as the number of requests, tokens, and the cost it takes to run your application. Gain insight on requests and errors. Serve requests directly from Cloudflare's cache instead of the original model provider for faster requests and cost savings. Control how your application scales by limiting the number of requests your application receives. Improve resilience by defining request retry and model fallbacks in case of an error. Workers AI, OpenAI, Azure OpenAI, HuggingFace, Replicate, and more work with AI Gateway. --- ## Related products Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network. Build full-stack AI applications with Vectorize, Cloudflare's vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, anomaly detection or can be used to provide context and memory to an LLM. ## More resources Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers. Learn how you can build and deploy ambitious AI applications to Cloudflare's global network.
Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. --- # Universal Endpoint URL: https://developers.cloudflare.com/ai-gateway/universal/ import { Render, Badge } from "~/components"; You can use the Universal Endpoint to contact every provider. ```txt https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id} ``` AI Gateway offers multiple endpoints for each Gateway you create - one endpoint per provider, and one Universal Endpoint. The Universal Endpoint requires some adjustments to your schema, but supports additional features. Some of these features are, for example, retrying a request if it fails the first time, or configuring a [fallback model/provider](/ai-gateway/configuration/fallbacks/). The payload expects an array of messages, and each message is an object with the following parameters: - `provider`: the name of the provider you would like to direct this message to. Can be OpenAI, workers-ai, or any of our supported providers. - `endpoint`: the pathname of the provider API you’re trying to reach. For example, on OpenAI it can be `chat/completions`, and for Workers AI this might be [`@cf/meta/llama-3.1-8b-instruct`](/workers-ai/models/llama-3.1-8b-instruct/). See more in the sections that are specific to [each provider](/ai-gateway/providers/). - `authorization`: the content of the Authorization HTTP Header that should be used when contacting this provider. This usually starts with 'Token' or 'Bearer'. - `query`: the payload as the provider expects it in their official API. ## cURL example The example above sends a request to the Workers AI Inference API; if it fails, the request proceeds to OpenAI. You can add as many fallbacks as you need by adding another JSON object to the array.
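A minimal TypeScript sketch of such a fallback payload, mirroring the parameters described above (the bearer tokens are placeholders):

```typescript
// Sketch of a Universal Endpoint payload: an array of steps, tried in
// order. If the first provider fails, AI Gateway falls back to the next.
interface UniversalStep {
  provider: string;
  endpoint: string;
  headers: Record<string, string>;
  query: unknown;
}

const payload: UniversalStep[] = [
  {
    provider: "workers-ai",
    endpoint: "@cf/meta/llama-3.1-8b-instruct",
    headers: {
      Authorization: "Bearer {cloudflare_token}", // placeholder
      "Content-Type": "application/json",
    },
    query: { prompt: "What is Cloudflare?" },
  },
  {
    // Fallback: only contacted if the Workers AI step fails.
    provider: "openai",
    endpoint: "chat/completions",
    headers: {
      Authorization: "Bearer {open_ai_token}", // placeholder
      "Content-Type": "application/json",
    },
    query: {
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "What is Cloudflare?" }],
    },
  },
];

// POST the array to the Universal Endpoint:
// https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}
console.log(payload.length); // 2
```

Each step carries its own `authorization` header and provider-native `query`, so mixed-provider fallback chains need no shared schema beyond this wrapper.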
## WebSockets API The Universal Endpoint can also be accessed via a [WebSockets API](/ai-gateway/websockets-api/) which provides a single persistent connection, enabling continuous communication. This API supports all AI providers connected to AI Gateway, including those that do not natively support WebSockets. ## WebSockets example ```javascript import WebSocket from "ws"; const ws = new WebSocket( "wss://gateway.ai.cloudflare.com/v1/my-account-id/my-gateway/", { headers: { "cf-aig-authorization": "Bearer AI_GATEWAY_TOKEN", }, }, ); ws.send( JSON.stringify({ type: "universal.create", request: { eventId: "my-request", provider: "workers-ai", endpoint: "@cf/meta/llama-3.1-8b-instruct", headers: { Authorization: "Bearer WORKERS_AI_TOKEN", "Content-Type": "application/json", }, query: { prompt: "tell me a joke", }, }, }), ); ws.on("message", function incoming(message) { console.log(message.toString()); }); ``` ## Workers Binding example import { WranglerConfig } from "~/components"; ```toml title="wrangler.toml" [ai] binding = "AI" ``` ```typescript title="src/index.ts" type Env = { AI: Ai; }; export default { async fetch(request: Request, env: Env) { return env.AI.gateway('my-gateway').run({ provider: "workers-ai", endpoint: "@cf/meta/llama-3.1-8b-instruct", headers: { authorization: "Bearer my-api-token", }, query: { prompt: "tell me a joke", }, }); }, }; ``` ## Header configuration hierarchy The Universal Endpoint allows you to set fallback models or providers and customize headers for each provider or request. You can configure headers at three levels: 1. **Provider level**: Headers specific to a particular provider. 2. **Request level**: Headers included in individual requests. 3. **Gateway settings**: Default headers configured in your gateway dashboard. Since the same settings can be configured in multiple locations, AI Gateway applies a hierarchy to determine which configuration takes precedence: - **Provider-level headers** override all other configurations. 
- **Request-level headers** are used if no provider-level headers are set. - **Gateway-level settings** are used only if no headers are configured at the provider or request levels. This hierarchy ensures consistent behavior, prioritizing the most specific configurations. Use provider-level and request-level headers for fine-tuned control, and gateway settings for general defaults. ## Hierarchy example This example demonstrates how headers set at different levels impact caching behavior: - **Request-level header**: The `cf-aig-cache-ttl` is set to `3600` seconds, applying this caching duration to the request by default. - **Provider-level header**: For the fallback provider (OpenAI), `cf-aig-cache-ttl` is explicitly set to `0` seconds, overriding the request-level header and disabling caching for responses when OpenAI is used as the provider. This shows how provider-level headers take precedence over request-level headers, allowing for granular control of caching behavior. ```bash curl https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id} \ --header 'Content-Type: application/json' \ --header 'cf-aig-cache-ttl: 3600' \ --data '[ { "provider": "workers-ai", "endpoint": "@cf/meta/llama-3.1-8b-instruct", "headers": { "Authorization": "Bearer {cloudflare_token}", "Content-Type": "application/json" }, "query": { "messages": [ { "role": "system", "content": "You are a friendly assistant" }, { "role": "user", "content": "What is Cloudflare?" } ] } }, { "provider": "openai", "endpoint": "chat/completions", "headers": { "Authorization": "Bearer {open_ai_token}", "Content-Type": "application/json", "cf-aig-cache-ttl": "0" }, "query": { "model": "gpt-4o-mini", "stream": true, "messages": [ { "role": "user", "content": "What is Cloudflare?" 
} ] } } ]' ``` --- # Getting started URL: https://developers.cloudflare.com/autorag/get-started/ AutoRAG allows developers to create fully managed retrieval-augmented generation (RAG) pipelines to power AI applications with accurate and up-to-date information without needing to manage infrastructure. ## 1. Upload data or use existing data in R2 AutoRAG integrates with R2 for data import. Create an R2 bucket if you do not have one and upload your data. :::note Before you create your first bucket, you must purchase R2 from the Cloudflare dashboard. ::: To create and upload objects to your bucket from the Cloudflare dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/r2) and select **R2**. 2. Select **Create bucket**, name the bucket, and select **Create bucket**. 3. Choose to either drag and drop your file into the upload area or **select from computer**. Review the [file limits](/autorag/configuration/data-source/) when creating your knowledge base. _If you need inspiration for what document to use to make your first AutoRAG, try downloading and uploading the [RSS](/changelog/rss/index.xml) of the [Cloudflare Changelog](/changelog/)._ ## 2. Create an AutoRAG To create a new AutoRAG: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/ai/autorag) and select **AI** > **AutoRAG**. 2. Select **Create AutoRAG**, configure the AutoRAG, and complete the setup process. 3. Select **Create**. ## 3. Monitor indexing Once created, AutoRAG will create a Vectorize index in your account and begin indexing the data. To monitor the indexing progress: 1. From the **AutoRAG** page in the dashboard, locate and select your AutoRAG. 2. Navigate to the **Overview** page to view the current indexing status. ## 4. Try it out Once indexing is complete, you can run your first query: 1. From the **AutoRAG** page in the dashboard, locate and select your AutoRAG. 2. Navigate to the **Playground** page. 3.
Select **Search with AI** or **Search**. 4. Enter a **query** to test out its response. ## 5. Add to your application There are multiple ways you can create [RAG applications](/autorag/) with Cloudflare AutoRAG: - [Workers Binding](/autorag/usage/workers-binding/) - [REST API](/autorag/usage/rest-api/) --- # Overview URL: https://developers.cloudflare.com/autorag/ import { CardGrid, Description, LinkTitleCard, Plan, RelatedProduct, LinkButton, Feature, } from "~/components"; Create fully-managed RAG applications that continuously update and scale on Cloudflare. AutoRAG lets you create retrieval-augmented generation (RAG) pipelines that power your AI applications with accurate and up-to-date information. Create RAG applications that integrate context-aware AI without managing infrastructure. You can use AutoRAG to build: - **Product Chatbot:** Answer customer questions using your own product content. - **Docs Search:** Make documentation easy to search and use.
Get started Watch AutoRAG demo
--- ## Features Automatically and continuously index your data source, keeping your content fresh without manual reprocessing. Create multitenancy by scoping search to each tenant’s data using folder-based metadata filters. Call your AutoRAG instance for search or AI Search directly from a Cloudflare Worker using the native binding integration. Cache repeated queries and results to improve latency and reduce compute on repeated requests. --- ## Related products Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network. Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more. Build full-stack AI applications with Vectorize, Cloudflare’s vector database. Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services. --- ## More resources Build and deploy your first Workers AI application. Connect with the Workers community on Discord to ask questions, share what you are building, and discuss the platform with other developers. Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. --- # Changelog URL: https://developers.cloudflare.com/browser-rendering/changelog/ import { ProductReleaseNotes } from "~/components"; {/* */} --- # FAQ URL: https://developers.cloudflare.com/browser-rendering/faq/ import { GlossaryTooltip } from "~/components"; Below you will find answers to our most commonly asked questions. If you cannot find the answer you are looking for, refer to the [Discord](https://discord.cloudflare.com) to explore additional resources. ### I see `Cannot read properties of undefined (reading 'fetch')` when using Browser Rendering. How do I fix this? 
This error occurs because your Puppeteer launch is not receiving the Browser binding or you are not on a Workers Paid plan. To resolve: Pass your Browser binding into `puppeteer.launch`. ### Will Browser Rendering bypass Cloudflare's Bot Protection? No, Browser Rendering requests are always identified as bots by Cloudflare and do not bypass Bot Protection. Additionally, Browser Rendering respects the robots.txt protocol, ensuring that any disallowed paths specified for user agents are not accessed during rendering. If you are attempting to scan your **own zone** and need Browser Rendering to access areas protected by Cloudflare’s Bot Protection, you can create a [WAF skip rule](/waf/custom-rules/skip/) to bypass the bot protection using a header or a custom user agent. ### Why can't I use an XPath selector when using Browser Rendering with Puppeteer? Currently, it is not possible to use XPath to select elements, since this poses a security risk to Workers. As an alternative, use a CSS selector or `page.evaluate`, for example: ```ts const innerHtml = await page.evaluate(() => { return ( // @ts-ignore this runs on browser context new XPathEvaluator() .createExpression("/html/body/div/h1") // @ts-ignore this runs on browser context .evaluate(document, XPathResult.FIRST_ORDERED_NODE_TYPE).singleNodeValue .innerHTML ); }); ``` :::note Keep in mind that `page.evaluate` can only return primitive types like strings, numbers, etc. Returning an `HTMLElement` will not work. ::: ### What are the usage limits and pricing tiers for Cloudflare Browser Rendering and how do I estimate my costs? You can view the complete breakdown of concurrency caps, request rates, timeouts, and REST API quotas on the [limits page](/browser-rendering/platform/limits/). By default, idle browser sessions close after 60 seconds of inactivity. You can adjust this with the [`keep_alive` option](/browser-rendering/platform/puppeteer/#keep-alive).
#### Pricing Browser Rendering is currently free up to the limits above until billing begins. Pricing will be announced in advance. ### Does Browser Rendering rotate IP addresses for outbound requests? No. Browser Rendering requests originate from Cloudflare's global network, but you cannot configure per-request IP rotation. All rendering traffic comes from Cloudflare IP ranges and requests include special headers [(`cf-biso-request-id`, `cf-biso-devtools`)](/browser-rendering/reference/automatic-request-headers/) so origin servers can identify them. ### I see `Error processing the request: Unable to create new browser: code: 429: message: Browser time limit exceeded for today`. How do I fix it? This error indicates you have hit the daily browser-instance limit on the Workers Free plan. [Free plan accounts are capped at 10 minutes of browser use a day](/browser-rendering/platform/limits/#workers-free). Once you exceed that limit, further creation attempts return a 429 until the next UTC day. To resolve: [upgrade to a Workers Paid plan](/workers/platform/pricing/). Paid accounts raise these limits to [10 concurrent browsers and 10 new instances per minute](/browser-rendering/platform/limits/#workers-paid). --- # Get started URL: https://developers.cloudflare.com/browser-rendering/get-started/ Browser Rendering can be used in two ways: - [Workers Bindings](/browser-rendering/workers-bindings/) for complex scripts. - [REST API](/browser-rendering/rest-api/) for simple actions. --- # Browser Rendering URL: https://developers.cloudflare.com/browser-rendering/ import { CardGrid, Description, LinkTitleCard, Plan, RelatedProduct, } from "~/components"; Browser automation for [Cloudflare Workers](/workers/) and [quick browser actions](/browser-rendering/rest-api/). Browser Rendering enables developers to programmatically control and interact with headless browser instances running on Cloudflare’s global network.
This facilitates tasks such as automating browser interactions, capturing screenshots, generating PDFs, and extracting data from web pages. ## Integration Methods You can integrate Browser Rendering into your applications using one of the following methods: - **[REST API](/browser-rendering/rest-api/)**: Ideal for simple, stateless tasks like capturing screenshots, generating PDFs, extracting HTML content, and more. - **[Workers Bindings](/browser-rendering/workers-bindings/)**: Suitable for advanced browser automation within [Cloudflare Workers](/workers/). This method provides greater control, enabling more complex workflows and persistent sessions. Choose the method that best fits your use case. For example, use the [REST API endpoints](/browser-rendering/rest-api/) for straightforward tasks from external applications and use [Workers Bindings](/browser-rendering/workers-bindings/) for complex automation within the Cloudflare ecosystem. ## Use Cases Browser Rendering can be used for many purposes, including: - Fetch the HTML content of a page. - Capture a screenshot of a webpage. - Convert a webpage into a PDF document. - Take a snapshot of a webpage. - Scrape specified HTML elements from a webpage. - Retrieve data in a structured format. - Extract Markdown content from a webpage. - Gather all hyperlinks found on a webpage. ## Related products Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. A globally distributed coordination API with strongly consistent storage. Build and deploy AI-powered agents that can autonomously perform tasks. ## More resources Deploy your first Browser Rendering project using Wrangler and Cloudflare's version of Puppeteer. New to Workers? Get started with the Workers Learning Path. Learn about Browser Rendering limits. Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. --- # Cloudflare for Platforms URL: https://developers.cloudflare.com/cloudflare-for-platforms/ import { Description, Feature } from "~/components" Build your own multitenant platform using Cloudflare as infrastructure Cloudflare for Platforms lets you run untrusted code written by your customers, or by AI, in a secure, hosted sandbox, and give each customer their own subdomain or custom domain. ![Figure 1: Cloudflare for Platforms Architecture Diagram](~/assets/images/reference-architecture/programmable-platforms/programmable-platforms-2.svg) You can think of Cloudflare for Platforms as the exact same products and functionality that Cloudflare offers its own customers, structured so that you can offer it to your own customers, embedded within your own product. This includes: - **Isolation and multitenancy** — each of your customers runs code in their own Worker — a [secure and isolated sandbox](/workers/reference/how-workers-works/) - **Programmable routing, ingress, egress and limits** — you write code that dispatches requests to your customers' code, and can control [ingress](/cloudflare-for-platforms/workers-for-platforms/get-started/dynamic-dispatch/), [egress](/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/) and set [per-customer limits](/cloudflare-for-platforms/workers-for-platforms/configuration/custom-limits/) - **Databases and storage** — you can provide [databases, object storage and more](/workers/runtime-apis/bindings/) to your customers as APIs they can call directly, without API tokens, keys, or external dependencies - **Custom Domains and Subdomains** — you [call an API](/cloudflare-for-platforms/cloudflare-for-saas/) to create custom subdomains or configure custom domains for each of your customers Cloudflare for Platforms is used by leading platforms big and small to: - Build application development 
platforms tailored to specific domains, like ecommerce storefronts or mobile apps - Power AI coding platforms that let anyone build and deploy software - Customize product behavior by allowing any user to write a short code snippet - Offer every customer their own isolated database - Provide each customer with their own subdomain *** ## Products Let your customers build and deploy their own applications to your platform, using Cloudflare's developer platform. Give your customers their own subdomain or custom domain, protected and accelerated by Cloudflare. --- # Overview URL: https://developers.cloudflare.com/constellation/ import { CardGrid, Description, LinkTitleCard } from "~/components" Run machine learning models with Cloudflare Workers. Constellation allows you to run fast, low-latency inference tasks on pre-trained machine learning models natively on Cloudflare Workers. It supports some of the most popular machine learning (ML) and AI runtimes and multiple classes of models. Cloudflare provides a curated list of verified models, or you can train and upload your own. Functionality you can deploy to your application with Constellation: * Content generation, summarization, or similarity analysis * Question answering * Audio transcription * Image or audio classification * Object detection * Anomaly detection * Sentiment analysis *** ## More resources Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers. Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. --- # Architecture URL: https://developers.cloudflare.com/containers/architecture/ This page describes the architecture of Cloudflare Containers. ## How and where containers run After you deploy a Worker that uses a Container, your image is uploaded to [Cloudflare's Registry](/containers/image-management) and distributed globally to Cloudflare's Network. 
Cloudflare will pre-schedule instances and pre-fetch images across the globe to ensure quick start times when scaling up the number of concurrent container instances. This allows you to call `env.YOUR_CONTAINER.get(id)` and get a new instance quickly without worrying about the underlying scaling.

When a request is made to start a new container instance, the nearest location with a pre-fetched image is selected. Subsequent requests to the same instance, regardless of where they originate, will be routed to this location as long as the instance stays alive.

Starting additional container instances will use other locations with pre-fetched images, and Cloudflare will automatically begin prepping additional machines behind the scenes for further scaling and quick cold starts. Because there are a finite number of pre-warmed locations, some container instances may be started in locations that are farther away from the end user. This is done to ensure that the container instance starts quickly. You are only charged for actively running instances and not for any unused pre-warmed images.

Each container instance runs inside its own VM, which provides strong isolation from other workloads running on Cloudflare's network. Containers should be built for the `linux/amd64` architecture, and should stay within [size limits](/containers/platform-details/#limits). Logging, metrics collection, and networking are automatically set up on each container.

## Life of a Container Request

When a request is made to any Worker, including one with an associated Container, it is generally handled by a datacenter in a location with the best latency between itself and the requesting user. A different datacenter may be selected to optimize overall latency, if [Smart Placement](/workers/configuration/smart-placement/) is on, or if the nearest location is under heavy load.
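The `get(id)` addressing model described above can be illustrated with a small, self-contained sketch. Note that this uses a mocked namespace object standing in for a real Durable Object binding such as `env.YOUR_CONTAINER`; it illustrates the routing contract, not the Workers runtime itself:

```javascript
// A mocked Durable Object namespace standing in for env.YOUR_CONTAINER.
// (Hypothetical illustration: the real binding is provided by the Workers
// runtime, not constructed by hand like this.)
function makeMockNamespace() {
  const instances = new Map();
  return {
    // idFromName() deterministically maps a name to an ID, so the same
    // name always addresses the same container instance.
    idFromName: (name) => `id:${name}`,
    get(id) {
      // The platform reuses the running instance for a given ID, or
      // cold-starts one in the nearest pre-warmed location.
      if (!instances.has(id)) {
        instances.set(id, { id, startedAt: Date.now() });
      }
      return instances.get(id);
    },
  };
}

const ns = makeMockNamespace();
const first = ns.get(ns.idFromName("session-42"));
const second = ns.get(ns.idFromName("session-42"));
console.log(first === second); // true: same name, same instance
```

Because `idFromName` is deterministic, routing by session ID, user ID, or request path always reaches the same instance, wherever it was placed.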
When a request is made to a Container instance, it is sent through a Durable Object, which can be defined either by using a `DurableObject` directly or the [`Container` class](/containers/container-package), which extends Durable Objects with Container-specific APIs and helpers. We recommend using `Container`; see the [`Container` class documentation](/containers/container-package) for more details.

Each Durable Object is a globally routable isolate that can execute code and store state. This allows developers to easily address and route to specific container instances (no matter where they are placed), define and run hooks on container status changes, execute recurring checks on the instance, and store persistent state associated with each instance.

As mentioned above, when a container instance starts, it is launched in the nearest pre-warmed location. This means that code in a container is usually executed in a different location than the one handling the Workers request.

:::note
Currently, Durable Objects may be co-located with their associated Container instance, but often are not. Cloudflare is currently working on expanding the number of locations in which a Durable Object can run, which will allow container instances to always run in the same location as their Durable Object.
:::

Because all Container requests are passed through a Worker, end users cannot make TCP or UDP requests to a Container instance. If you have a use case that requires inbound TCP or UDP from an end user, please [let us know](https://forms.gle/AGSq54VvUje6kmKu8).

---

# Beta Info & Roadmap

URL: https://developers.cloudflare.com/containers/beta-info/

Currently, Containers are in beta. There are several changes we plan to make prior to GA:

## Upcoming Changes and Known Gaps

### Limits

Container limits will be raised in the future. We plan to increase both the maximum instance size and the maximum number of instances in an account.
See the [Limits documentation](/containers/platform-details/#limits) for more information.

### Autoscaling and load balancing

Currently, Containers are not autoscaled or load balanced. Containers can be scaled manually by calling `get()` on their binding with a unique ID. We plan to add official support for utilization-based autoscaling and latency-aware load balancing in the future. See the [Autoscaling documentation](/containers/scaling-and-routing) for more information.

### Reduction of log noise

Currently, the `Container` class uses Durable Object alarms to help manage Container shutdown. This results in unnecessary log noise in the Worker logs. You can filter these logs out in the dashboard by adding a Query, but this is not ideal. We plan to automatically reduce log noise in the future.

### Dashboard Updates

The dashboard will be updated to show:

- the status of Container rollouts
- links from Workers to their associated Containers

### Co-locating Durable Objects and Containers

Currently, Durable Objects are not co-located with their associated Container. When requesting a container, the Durable Object will find one close to it, but not on the same machine. We plan to co-locate Durable Objects with their Container in the future.

### More advanced Container placement

We currently prewarm servers across our global network with container images to ensure quick start times. There are times when you may request a new container and it will be started in a location that is farther from the end user than desired. We are optimizing this process to ensure that this happens as little as possible, but it may still occur.

### Atomic code updates across Workers and Containers

When deploying a Container with `wrangler deploy`, the Worker code will be updated immediately, while the Container code will be updated gradually using a rolling deploy. This means that you must ensure Worker code is backwards compatible with the old Container code.
In the future, Worker code in the Durable Object will only update when the associated Container code updates.

## Feedback wanted

There are several areas where we wish to gather feedback from users:

- Do you want to integrate Containers with any other Cloudflare services? If so, which ones and how?
- Do you want more ways to interact with a Container via Workers? If so, how?
- Do you need different mechanisms for routing requests to containers?
- Do you need different mechanisms for scaling containers? (see the [scaling documentation](/containers/scaling-and-routing) for information on autoscaling plans)

At any point during the Beta, feel free to [give feedback using this form](https://forms.gle/CscdaEGuw5Hb6H2s7).

---

# Container Package

URL: https://developers.cloudflare.com/containers/container-package/

When writing code that interacts with a container instance, you can either use a Durable Object directly or use the [`Container` module](https://github.com/cloudflare/containers), importable from [`@cloudflare/containers`](https://www.npmjs.com/package/@cloudflare/containers):

```javascript
import { Container } from "@cloudflare/containers";

class MyContainer extends Container {
  defaultPort = 8080;
  sleepAfter = "5m";
}
```

We recommend using the `Container` class for most use cases. Install it with `npm install @cloudflare/containers`.

The `Container` class extends `DurableObject`, so all Durable Object functionality is available. It also provides additional functionality and a nice interface for common container behaviors, such as:

- sleeping instances after an inactivity timeout
- making requests to specific ports
- running status hooks on startup, stop, or error
- awaiting specific ports before making requests
- setting environment variables and secrets

See the [Containers GitHub repo](https://github.com/cloudflare/containers) for more details and the complete API.
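As a rough illustration of the inactivity-timeout behavior listed above, the sketch below models a `sleepAfter`-style timer. This is a simplified stand-in, not the package's actual implementation, and `InactivityTracker` is a hypothetical name used only for this example:

```javascript
// Simplified inactivity timer: each request resets the clock; once
// sleepAfterMs elapses with no requests, the onSleep callback fires.
// (Illustrative sketch only; the real Container class manages this
// internally using Durable Object alarms.)
class InactivityTracker {
  constructor(sleepAfterMs, onSleep) {
    this.sleepAfterMs = sleepAfterMs;
    this.onSleep = onSleep;
    this.lastActivity = Date.now();
  }
  // Called on every request to the container.
  touch(now = Date.now()) {
    this.lastActivity = now;
  }
  // Called periodically to decide whether to put the instance to sleep.
  check(now = Date.now()) {
    if (now - this.lastActivity >= this.sleepAfterMs) this.onSleep();
  }
}

let slept = false;
const tracker = new InactivityTracker(5 * 60 * 1000, () => { slept = true; });
tracker.touch(0);       // request at t=0 (explicit timestamps for clarity)
tracker.check(60_000);  // 1 minute later: still awake
console.log(slept);     // false
tracker.check(300_000); // 5 minutes later: instance goes to sleep
console.log(slept);     // true
```

The real class accepts duration strings such as `"5m"`; the sketch uses milliseconds to keep the example self-contained.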
---

# Frequently Asked Questions

URL: https://developers.cloudflare.com/containers/faq/

import { WranglerConfig } from "~/components";

Frequently Asked Questions:

## How do Container logs work?

To get logs in the Dashboard, including live tailing of logs, toggle `observability` to true in your Worker's wrangler config:

```json
{
  "observability": {
    "enabled": true
  }
}
```

Logs are subject to the same [limits as Worker logs](/workers/observability/logs/workers-logs/#limits), which means that they are retained for 3 days on Free plans and 7 days on Paid plans. See [Workers Logs Pricing](/workers/observability/logs/workers-logs/#pricing) for details on cost. If you are an Enterprise user, you can export container logs via [Logpush](/logs/about) to your preferred destination.

## How are container instance locations selected?

When initially deploying a Container, Cloudflare will select various locations across our network to deploy instances to. These locations will span multiple regions. When a Container instance is requested with `this.ctx.container.start`, the nearest free container instance will be selected from the pre-initialized locations. This will likely be in the same region as the external request, but may not be. Once the container instance is running, any future requests will be routed to the initial location.

An example:

- A user deploys a Container. Cloudflare automatically readies instances across its network.
- A request is made from a client in Bariloche, Argentina. It reaches the Worker in Cloudflare's location in Neuquen, Argentina.
- This Worker request calls `MY_CONTAINER.get("session-1337")`, which brings up a Durable Object, which then calls `this.ctx.container.start`.
- This requests the nearest free Container instance.
- Cloudflare recognizes that an instance is free in Buenos Aires, Argentina, and starts it there.
- A different user needs to route to the same container.
This user's request reaches the Worker running in Cloudflare's location in San Diego.
- The Worker again calls `MY_CONTAINER.get("session-1337")`.
- If the initial container instance is still running, the request is routed to the location in Buenos Aires. If the initial container has gone to sleep, Cloudflare will once again try to find the nearest "free" instance of the Container, likely one in North America, and start an instance there.

## How do container updates and rollouts work?

When you run `wrangler deploy`, the Worker code is updated immediately and Container instances are updated using a rolling deploy strategy. Container instances are updated in batches, with 25% of instances being updated at a time by default.

When a Container instance is ready to be stopped, it is sent a `SIGTERM` signal, which allows it to gracefully shut down. If the instance does not stop within 15 minutes, it is forcefully stopped with a `SIGKILL` signal. If you have cleanup that must occur before a Container instance is stopped, you should do it during this period. Once stopped, the instance is replaced with a new instance running the updated code. While the new instance starts, requests to it will hang until container startup completes.

## How does scaling work?

See the [scaling & routing documentation](/containers/scaling-and-routing/) for details.

## What are cold starts? How fast are they?

A cold start is when a container instance is started from a completely stopped state. If you call `env.MY_CONTAINER.get(id)` with a completely novel ID and launch this instance for the first time, it will result in a cold start. This will start the container image from its entrypoint for the first time. Depending on what this entrypoint does, it will take a variable amount of time to start. Container cold starts are often in the 2-3 second range, but this is dependent on image size and code execution time, among other factors.

## How do I use an existing container image?
See the [image management documentation](/containers/image-management/#using-existing-images) for details.

## Is disk persistent? What happens to my disk when my container sleeps?

All disk is ephemeral. When a Container instance goes to sleep, the next time it is started, it will have a fresh disk as defined by its container image. Persistent disk is something the Cloudflare team is exploring in the future, but it is not slated for the near term.

## What happens if I run out of memory?

If you run out of memory, your instance will throw an Out of Memory (OOM) error and will be restarted. Containers do not use swap memory.

## How long can instances run for? What happens when a host server is shut down?

Cloudflare will not actively shut off a container instance after a specific amount of time. If you do not set `sleepAfter` on your Container class, or stop the instance manually, it will continue to run unless its host server is restarted. This happens on an irregular cadence, but frequently enough that Cloudflare does not guarantee that any instance will run for any set period of time.

When a container instance is going to be shut down, it is sent a `SIGTERM` signal, and then a `SIGKILL` signal after 15 minutes. You should perform any necessary cleanup to ensure a graceful shutdown in this time. The container instance will be rebooted elsewhere shortly after this.

## How can I pass secrets to my container?

You can use [Worker Secrets](/workers/configuration/secrets/) or the [Secrets Store](/secrets-store/integrations/workers/) to define secrets for your Workers.
Then you can pass these secrets to your Container using the `envVars` property:

```javascript
class MyContainer extends Container {
  defaultPort = 5000;
  envVars = {
    MY_SECRET: this.env.MY_SECRET,
  };
}
```

Or when starting a Container instance on a Durable Object:

```javascript
this.ctx.container.start({
  env: {
    MY_SECRET: this.env.MY_SECRET,
  },
});
```

See [the Env Vars and Secrets Example](/containers/examples/env-vars-and-secrets/) for details.

## How do I allow or disallow egress from my container?

When booting a Container, you can specify `enableInternet`, which will toggle internet access on or off. To disable it, configure it on your Container class:

```javascript
class MyContainer extends Container {
  defaultPort = 7000;
  enableInternet = false;
}
```

or when starting a Container instance on a Durable Object:

```javascript
this.ctx.container.start({
  enableInternet: false,
});
```

---

# Getting started

URL: https://developers.cloudflare.com/containers/get-started/

import { WranglerConfig, PackageManagers } from "~/components";

In this guide, you will deploy a Worker that can make requests to one or more Containers in response to end-user requests. In this example, each container runs a small webserver written in Go.

This example Worker should give you a sense of simple Container use, and provide a starting point for more complex use cases.

## Prerequisites

### Ensure Docker is running locally

In this guide, we will build and push a container image alongside your Worker code. By default, this process uses [Docker](https://www.docker.com/) to do so. You must have Docker running locally when you run `wrangler deploy`. For most people, the best way to install Docker is to follow the [docs for installing Docker Desktop](https://docs.docker.com/desktop/).

You can check that Docker is running properly by running the `docker info` command in your terminal. If Docker is running, the command will succeed.
If Docker is not running, the `docker info` command will hang or return an error including the message "Cannot connect to the Docker daemon".

{/* FUTURE CHANGE: Add some image you can use if you don't have Docker running. */}
{/* FUTURE CHANGE: Link to docs on alternative build/push options */}

## Deploy your first Container

Run the following command to create and deploy a new Worker with a container, from the starter template:

```sh
npm create cloudflare@latest -- --template=cloudflare/templates/containers-template
```

When you want to deploy a code change to either the Worker or Container code, you can redeploy by running `npx wrangler deploy` with the [Wrangler CLI](/workers/wrangler/).

When you run `wrangler deploy`, the following things happen:

- Wrangler builds your container image using Docker.
- Wrangler pushes your image to a [Container Image Registry](/containers/image-management/) that is automatically integrated with your Cloudflare account.
- Wrangler deploys your Worker, and configures Cloudflare's network to be ready to spawn instances of your container.

The build and push usually take the longest on the first deploy. Subsequent deploys are faster, because they [reuse cached image layers](https://docs.docker.com/build/cache/).

:::note
After you deploy your Worker for the first time, you will need to wait several minutes until it is ready to receive requests. Unlike Workers, Containers take a few minutes to be provisioned. During this time, requests are sent to the Worker, but calls to the Container will error.
:::

### Check deployment status

After deploying, run `npx wrangler containers list` to show a list of containers in your Cloudflare account and their deployment status. You can see images deployed to the Cloudflare Registry with `npx wrangler containers images list`.

### Make requests to Containers

Now, open the URL for your Worker. It should look something like `https://hello-containers.YOUR_ACCOUNT_NAME.workers.dev`.
If you make requests to the paths `/container/1` or `/container/2`, these requests are routed to specific containers. Each different path after "/container/" routes to a unique container. If you make requests to `/lb`, requests will be load balanced across one of 3 containers chosen at random. You can confirm this behavior by reading the output of each request.

## Understanding the Code

Now that you've deployed your first container, let's explain what is happening in your Worker's code, in your configuration file, in your container's code, and how requests are routed.

## Each Container is backed by its own Durable Object

Incoming requests are initially handled by the Worker, then passed to a container-enabled [Durable Object](/durable-objects). To simplify and reduce boilerplate code, Cloudflare provides a [`Container` class](https://github.com/cloudflare/containers) as part of the `@cloudflare/containers` NPM package. You don't have to be familiar with Durable Objects to use Containers, but it may be helpful to understand the basics.

Each Durable Object runs alongside an individual container instance, manages starting and stopping it, and can interact with the container through its ports. Containers will likely run near the Worker instance requesting them, but not necessarily. Refer to ["How Locations are Selected"](/containers/platform-details/#how-are-locations-are-selected) for details.

In a simple app, the Durable Object may just boot the container and proxy requests to it. In a more complex app, having container-enabled Durable Objects allows you to route requests to individual stateful container instances, manage the container lifecycle, pass in custom starting commands and environment variables to containers, run hooks on container status changes, and more. See the [documentation for Durable Object container methods](/durable-objects/api/container/) and the [`Container` class repository](https://github.com/cloudflare/containers) for more details.
### Configuration

Your [Wrangler configuration file](/workers/wrangler/configuration/) defines the configuration for both your Worker and your container:

```toml
[[containers]]
max_instances = 10
name = "hello-containers"
class_name = "MyContainer"
image = "./Dockerfile"

[[durable_objects.bindings]]
name = "MY_CONTAINER"
class_name = "MyContainer"

[[migrations]]
tag = "v1"
new_sqlite_classes = ["MyContainer"]
```

Important points about this config:

- `image` points to a Dockerfile or to a directory containing a Dockerfile.
- `class_name` must be a [Durable Object class name](/durable-objects/api/base/).
- `max_instances` declares the maximum number of simultaneously running container instances.
- The Durable Object must use [`new_sqlite_classes`](/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class), not `new_classes`.

### The Container Image

Your container image must be able to run on the `linux/amd64` architecture, but aside from that, has few limitations. In the example you just deployed, it is a simple Go server that responds to requests on port 8080 using the `MESSAGE` environment variable that will be set in the Worker and an [auto-generated environment variable](/containers/platform-details/#environment-variables), `CLOUDFLARE_DEPLOYMENT_ID`.

```go
func handler(w http.ResponseWriter, r *http.Request) {
	message := os.Getenv("MESSAGE")
	instanceId := os.Getenv("CLOUDFLARE_DEPLOYMENT_ID")
	fmt.Fprintf(w, "Hi, I'm a container and this is my message: %s, and my instance ID is: %s", message, instanceId)
}
```

:::note
After deploying the example code, to deploy a different image, you can replace the provided image with one of your own.
:::

### Worker code

#### Container Configuration

First, note `MyContainer`, which extends the [`Container`](https://github.com/cloudflare/containers) class:

```ts
export class MyContainer extends Container {
  defaultPort = 8080;
  sleepAfter = '10s';
  envVars = {
    MESSAGE: 'I was passed in via the container class!',
  };

  override onStart() {
    console.log('Container successfully started');
  }

  override onStop() {
    console.log('Container successfully shut down');
  }

  override onError(error: unknown) {
    console.log('Container error:', error);
  }
}
```

This defines basic configuration for the container:

- `defaultPort` sets the port that the `fetch` and `containerFetch` methods will use to communicate with the container. It also blocks requests until the container is listening on this port.
- `sleepAfter` sets the timeout after which an idle container is put to sleep.
- `envVars` sets environment variables that will be passed to the container when it starts.
- `onStart`, `onStop`, and `onError` are hooks that run when the container starts, stops, or errors, respectively.

See the [Container class documentation](/containers/container-package) for more details and configuration options.

#### Routing to Containers

When a request enters Cloudflare, your Worker's [`fetch` handler](/workers/runtime-apis/handlers/fetch/) is invoked. This is the code that handles the incoming request. The fetch handler in the example code launches containers in two ways, on different routes:

- Making requests to `/container/` passes requests to a new container for each path. This is done by spinning up a new Container instance. You may note that the first request to a new path takes longer than subsequent requests; this is because a new container is booting.
```js
if (pathname.startsWith("/container")) {
  const id = env.MY_CONTAINER.idFromName(pathname);
  const container = env.MY_CONTAINER.get(id);
  return await container.fetch(request);
}
```

- Making requests to `/lb` will load balance requests across several containers. This uses a simple `getRandom` helper method, which picks an ID at random from a set number (in this case 3), then routes to that Container instance. You can replace this with any routing or load balancing logic you choose to implement:

```js
if (pathname.startsWith("/lb")) {
  const container = await getRandom(env.MY_CONTAINER, 3);
  return await container.fetch(request);
}
```

This allows for multiple ways of using Containers:

- If you simply want to send requests to many stateless and interchangeable containers, you should load balance.
- If you have stateful services or need individually addressable containers, you should request specific Container instances.
- If you are running short-lived jobs, want fine-grained control over the container lifecycle, want to parameterize container entrypoint or env vars, or want to chain together multiple container calls, you should request specific Container instances.

:::note
Currently, routing requests to one of many interchangeable Container instances is accomplished with the `getRandom` helper. This is temporary — we plan to add native support for latency-aware autoscaling and load balancing in the coming months.
:::

## View Containers in your Dashboard

The [Containers Dashboard](http://dash.cloudflare.com/?to=/:account/workers/containers) shows you helpful information about your Containers, including:

- Status and Health
- Metrics
- Logs
- A link to associated Workers and Durable Objects

After launching your Worker, navigate to the Containers Dashboard by clicking on "Containers" under "Workers & Pages" in your sidebar.
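To recap the routing options on this page, the random load-balancing pattern can be sketched in isolation. A mocked namespace stands in for `env.MY_CONTAINER`; this illustrates the idea behind the `getRandom` helper, not its actual implementation:

```javascript
// Pick one of `count` interchangeable instances at random. Each instance is
// addressed by a deterministic name, so repeated picks can reuse warm
// containers. (Hypothetical sketch; the real helper lives in
// @cloudflare/containers.)
function getRandomInstance(namespace, count) {
  const index = Math.floor(Math.random() * count);
  return namespace.get(namespace.idFromName(`instance-${index}`));
}

// Mocked Durable Object namespace, for illustration only:
const mockNamespace = {
  idFromName: (name) => name,
  get: (id) => ({ id, fetch: async () => `handled by ${id}` }),
};

const instance = getRandomInstance(mockNamespace, 3);
console.log(instance.id); // e.g. "instance-0", "instance-1", or "instance-2"
```

Swapping the random index for a hash of a session ID turns the same shape into the stateful, individually addressable routing described above.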
## Next Steps

To do more:

- Modify the image by changing the Dockerfile and calling `wrangler deploy`
- Review our [examples](/containers/examples) for more inspiration
- Get [more information on the Containers Beta](/containers/beta-info)

---

# Containers (Beta)

URL: https://developers.cloudflare.com/containers/

import {
  CardGrid,
  Description,
  Feature,
  LinkTitleCard,
  Plan,
  RelatedProduct,
  TabItem,
  Tabs,
  Badge,
  WranglerConfig,
  LinkButton,
} from "~/components";

Enhance your Workers with serverless containers

Run code written in any programming language, built for any runtime, as part of apps built on [Workers](/workers). Deploy your container image to Region:Earth without worrying about managing infrastructure - just define your Worker and `wrangler deploy`.

With Containers you can run:

- Resource-intensive applications that require CPU cores running in parallel, large amounts of memory or disk space
- Applications and libraries that require a full filesystem, specific runtime, or Linux-like environment
- Existing applications and tools that have been distributed as container images

Container instances are spun up on-demand and controlled by code you write in your [Worker](/workers).
Instead of chaining together API calls or writing Kubernetes operators, you just write JavaScript:

```js
import { Container, getContainer } from "@cloudflare/containers";

export class MyContainer extends Container {
  defaultPort = 4000; // Port the container is listening on
  sleepAfter = "10m"; // Stop the instance if requests not sent for 10 minutes
}

export default {
  async fetch(request, env) {
    const { "session-id": sessionId } = await request.json();
    // Get the container instance for the given session ID
    const containerInstance = getContainer(env.MY_CONTAINER, sessionId);
    // Pass the request to the container instance on its default port
    return containerInstance.fetch(request);
  },
};
```

```json
{
  "name": "container-starter",
  "main": "src/index.js",
  "containers": [
    {
      "class_name": "MyContainer",
      "image": "./Dockerfile",
      "instances": 5,
      "name": "hello-containers-go"
    }
  ],
  "durable_objects": {
    "bindings": [
      {
        "class_name": "MyContainer",
        "name": "MY_CONTAINER"
      }
    ]
  },
  "migrations": [
    {
      "new_sqlite_classes": ["MyContainer"],
      "tag": "v1"
    }
  ]
}
```

Get started

Containers dashboard

---

## Next Steps

Build and push an image, call a Container from a Worker, and understand scaling and routing.

See examples of how to use a Container with a Worker, including stateless and stateful routing, regional placement, Workflow and Queue integrations, AI-generated code execution, and short-lived workloads.

---

## More resources

Learn about the Containers Beta and upcoming features.

Learn more about the commands to develop, build and push images, and deploy containers with Wrangler.

Learn about what limits Containers have and how to work within them.

Connect with other users of Containers on Discord. Ask questions, show what you are building, and discuss the platform with other developers.
---

# Image Management

URL: https://developers.cloudflare.com/containers/image-management/

import { WranglerConfig, PackageManagers } from "~/components";

## Pushing images during `wrangler deploy`

When running `wrangler deploy`, if you set the `image` attribute in your [Wrangler configuration](/workers/wrangler/configuration/#containers) file to a path, Wrangler will build your container image locally using Docker, then push it to a registry run by Cloudflare. This registry is integrated with your Cloudflare account and is backed by [R2](/r2/). All authentication is handled automatically by Cloudflare, both when pushing and pulling images.

Just provide the path to your Dockerfile:

```json
{
  "containers": {
    "image": "./Dockerfile"
    // ...rest of config...
  }
}
```

And deploy your Worker with `wrangler deploy`. No other image management is necessary. On subsequent deploys, Wrangler will only push image layers that have changed, which saves space and time on `wrangler deploy` calls after the initial deploy.

:::note
Docker or a Docker-compatible CLI tool must be running for Wrangler to build and push images.
:::

## Using pre-built container images

If you wish to use a pre-built image, first push it to the Cloudflare Registry using the `wrangler containers push` command. Additionally, you can use the `-p` flag with `wrangler containers build` to build and push an image in one step.

Then you can specify the URL in the image attribute:

```json
{
  "containers": {
    "image": "registry.cloudflare.com/your-namespace/your-image:tag"
    // ...rest of config...
  }
}
```

Currently, all images must use `registry.cloudflare.com`, which is the default registry for Wrangler.

To use an existing image from another repo, you can pull it, tag it, then push it to the Cloudflare Registry:

```bash
docker pull <public-image>
docker tag <public-image> <image>:<tag>
wrangler containers push <image>:<tag>
```

:::note
We plan to allow configuring public images directly in wrangler config.
Cloudflare will download your image, optionally using auth credentials, then cache it globally in the Cloudflare Registry. This is not yet available.
:::

## Pushing images with CI

To use an image built in a continuous integration environment, install `wrangler`, then build and push images using either `wrangler containers build` with the `--push` flag or the `wrangler containers push` command.

## Registry Limits

Images are limited to 2 GB in size and you are limited to 50 total GB in your account's registry.

:::note
These limits will likely increase in the future.
:::

Delete images with `wrangler containers delete` to free up space, but note that reverting a Worker to a previous version that uses a deleted image will then error.

---

# Local Development

URL: https://developers.cloudflare.com/containers/local-dev/

You can run both your container and your Worker locally, without additional configuration, by running [`npx wrangler dev`](/workers/wrangler/commands/#dev) in your project's directory.

To develop Container-enabled Workers locally, you will need to first ensure that a Docker-compatible CLI tool and Engine are installed. For instance, you can use [Docker Desktop](https://docs.docker.com/desktop/) on Mac, Windows, or Linux.

When you run `wrangler dev`, your container image will be built or downloaded. If your [wrangler configuration](/workers/wrangler/configuration/#containers) sets the `image` attribute to a local path, the image will be built using the local Dockerfile. If the `image` attribute is set to a URL, the image will be pulled from the associated registry.

Container instances will be launched locally when your Worker code requests a new container. This may happen when calling `.get()` on a `Container` instance or by calling `start()` if `manualStart` is set to `true`. Wrangler will boot new instances and automatically route requests to the correct local container.
When `wrangler dev` stops, all associated container instances are stopped, but local images are not removed, so that they can be reused in subsequent calls to `wrangler dev` or `wrangler deploy`.

:::note
If your Worker app creates many container instances, your local machine may not be able to run as many containers concurrently as is possible when you deploy to Cloudflare. Additionally, if you regularly rebuild containers locally, you may want to clear out old container images (using `docker image prune` or similar) to reduce disk usage.
:::

## Iterating on Container code

When you use `wrangler dev`, your Worker's code is automatically reloaded by Wrangler each time you save a change, but code running within the container is not. To rebuild your container with new code changes, you can hit the `[r]` key on your keyboard, which triggers a rebuild. Container instances will then be restarted with the newly built images.

You may prefer to set up your own code watchers and reloading mechanisms, or mount a local directory into the local container images to sync code changes. This can be done, but there is no built-in mechanism for doing so in Wrangler, and best practices will depend on the languages and frameworks you are using in your container code.

---

# Platform

URL: https://developers.cloudflare.com/containers/platform-details/

import { WranglerConfig } from "~/components";

## Instance Types

The memory, vCPU, and disk space for Containers are set through predefined instance types. Three instance types are currently available:

| Instance Type | Memory  | vCPU | Disk |
| ------------- | ------- | ---- | ---- |
| dev           | 256 MiB | 1/16 | 2 GB |
| basic         | 1 GiB   | 1/4  | 4 GB |
| standard      | 4 GiB   | 1/2  | 4 GB |

These are specified using the [`instance_type` property](/workers/wrangler/configuration/#containers) in your Worker's Wrangler configuration file.

Looking for larger instances?
[Give us feedback here](/containers/beta-info/#feedback-wanted) and tell us what size instances you need, and what you want to use them for.

## Limits

While in open beta, the following limits are currently in effect:

| Feature                                                | Workers Paid |
| ------------------------------------------------------ | ------------ |
| GB Memory for all concurrent live Container instances  | 40 GB [^1]   |
| vCPU for all concurrent live Container instances       | 20 [^1]      |
| GB Disk for all concurrent live Container instances    | 100 GB [^1]  |
| Image size                                             | 2 GB         |
| Total image storage per account                        | 50 GB [^2]   |

[^1]: This limit will be raised as we continue the beta.

[^2]: Delete container images with `wrangler containers delete` to free up space. Note that if you delete a container image and then [roll back](/workers/configuration/versions-and-deployments/rollbacks/) your Worker to a previous version, this version may no longer work.

## Environment variables

The container runtime automatically sets the following variables:

- `CLOUDFLARE_COUNTRY_A2` - the two-letter code of the country the container is placed in
- `CLOUDFLARE_DEPLOYMENT_ID` - the ID of the container instance
- `CLOUDFLARE_LOCATION` - the name of the location the container is placed in
- `CLOUDFLARE_NODE_ID` - the ID of the machine the container runs on
- `CLOUDFLARE_PLACEMENT_ID` - the ID of the placement
- `CLOUDFLARE_REGION` - the name of the region the container is placed in

:::note
If you supply environment variables with the same names, the supplied values will override the predefined values.
:::

Custom environment variables can be set when defining a Container in your Worker:

```javascript
class MyContainer extends Container {
  defaultPort = 4000;
  envVars = {
    MY_CUSTOM_VAR: "value",
    ANOTHER_VAR: "another_value",
  };
}
```

---

# Pricing

URL: https://developers.cloudflare.com/containers/pricing/

## vCPU, Memory and Disk

Containers are billed for every 10ms that they are actively running at the following rates, with included monthly usage as part of the $5 USD per month [Workers Paid plan](/workers/platform/pricing/):

|                  | Memory | CPU | Disk |
| ---------------- | ------ | --- | ---- |
| **Free**         | N/A    | N/A | N/A  |
| **Workers Paid** | 25 GiB-hours/month included<br/>+ $0.0000025 per additional GiB-second | 375 vCPU-minutes/month<br/>+ $0.000020 per additional vCPU-second | 200 GB-hours/month<br/>+ $0.00000007 per additional GB-second |

You only pay for what you use — charges start when a request is sent to the container or when it is manually started. Charges stop after the container instance goes to sleep, which can happen automatically after a timeout. This makes it easy to scale to zero, and allows you to get high utilization even with bursty traffic.

#### Instance Types

When you add containers to your Worker, you specify an [instance type](/containers/platform-details/#instance-types). The instance type you select will impact your bill — larger instances include more vCPUs, memory and disk, and therefore incur additional usage costs.

The following instance types are currently available, and larger instance types are coming soon:

| Name     | Memory  | CPU       | Disk |
| -------- | ------- | --------- | ---- |
| dev      | 256 MiB | 1/16 vCPU | 2 GB |
| basic    | 1 GiB   | 1/4 vCPU  | 4 GB |
| standard | 4 GiB   | 1/2 vCPU  | 4 GB |

## Network Egress

Egress from Containers is priced at the following rates:

| Region                 | Price per GB | Included Allotment per month |
| ---------------------- | ------------ | ---------------------------- |
| North America & Europe | $0.025       | 1 TB                         |
| Oceania, Korea, Taiwan | $0.05        | 500 GB                       |
| Everywhere Else        | $0.04        | 500 GB                       |

## Workers and Durable Objects Pricing

When you use Containers, incoming requests to your containers are handled by your [Worker](/workers/platform/pricing/), and each container has its own [Durable Object](/durable-objects/platform/pricing/). You are billed for your usage of both Workers and Durable Objects.

## Logs and Observability

Containers are integrated with the [Workers Logs](/workers/observability/logs/workers-logs/) platform, and billed at the same rate. Refer to [Workers Logs pricing](/workers/observability/logs/workers-logs/#pricing) for details.
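As a back-of-the-envelope illustration of the metered vCPU, memory, and disk rates above, the following sketch converts the included monthly allotments into seconds and computes the overage for a hypothetical workload. The workload (one `basic` instance active for 100 hours in a month) is an assumption for illustration, and the result ignores the flat $5/month plan fee and any Workers, Durable Objects, egress, or logs charges:

```typescript
// Metered rates from the pricing table (charged beyond the included usage).
const MEMORY_RATE = 0.0000025;  // $ per GiB-second
const CPU_RATE = 0.000020;      // $ per vCPU-second
const DISK_RATE = 0.00000007;   // $ per GB-second

// Included monthly usage, converted to seconds.
const INCLUDED_MEMORY = 25 * 3600; // 25 GiB-hours -> GiB-seconds
const INCLUDED_CPU = 375 * 60;     // 375 vCPU-minutes -> vCPU-seconds
const INCLUDED_DISK = 200 * 3600;  // 200 GB-hours -> GB-seconds

// Hypothetical workload: one "basic" instance (1 GiB memory, 1/4 vCPU,
// 4 GB disk) actively running for 100 hours in a month.
const activeSeconds = 100 * 3600;

const memoryCost = Math.max(0, 1 * activeSeconds - INCLUDED_MEMORY) * MEMORY_RATE;
const cpuCost = Math.max(0, 0.25 * activeSeconds - INCLUDED_CPU) * CPU_RATE;
const diskCost = Math.max(0, 4 * activeSeconds - INCLUDED_DISK) * DISK_RATE;

const total = memoryCost + cpuCost + diskCost;
console.log(total.toFixed(2)); // "2.08" in metered charges for this example
```

Treat this purely as an illustration of the unit conversions; actual bills are metered at 10ms granularity across all concurrent instances.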
When you [enable observability for your Worker](/workers/observability/logs/workers-logs/#enable-workers-logs) with a binding to a container, logs from your container will show in both the Containers and Observability sections of the Cloudflare dashboard.

---

# Scaling and Routing

URL: https://developers.cloudflare.com/containers/scaling-and-routing/

### Scaling container instances with `get()`

Currently, Containers are only scaled manually by calling `BINDING.get()` with a unique ID, then starting the container. Unless `manualStart` is set to `true` on the Container class, each instance will start when `get()` is called.

```js
// gets 3 container instances
env.MY_CONTAINER.get(idOne);
env.MY_CONTAINER.get(idTwo);
env.MY_CONTAINER.get(idThree);
```

Each instance will run until its `sleepAfter` time has elapsed, or until it is manually stopped.

This behavior is very useful when you want explicit control over the lifecycle of container instances. For instance, you may want to spin up a container backend instance for a specific user, briefly run a code sandbox to isolate AI-generated code, or run a short-lived batch job.

#### The `getRandom` helper function

However, sometimes you want to run multiple instances of a container and easily route requests to them. Currently, the best way to achieve this is with the _temporary_ `getRandom` helper function:

```typescript
import { Container, getRandom } from "@cloudflare/containers";

const INSTANCE_COUNT = 3;

class Backend extends Container {
  defaultPort = 8080;
  sleepAfter = "2h";
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // note: "getRandom" to be replaced with latency-aware routing in the near future
    const containerInstance = getRandom(env.BACKEND, INSTANCE_COUNT);
    return containerInstance.fetch(request);
  },
};
```

We have provided the `getRandom` function as a stopgap solution to route to multiple stateless container instances.
It will randomly select one of N instances for each request and route to it. Unfortunately, it has two major downsides:

- It requires that the user set a fixed number of instances to route to.
- It will randomly select each instance, regardless of location.

We plan to fix these issues with built-in autoscaling and routing features in the near future.

### Autoscaling and routing (unreleased)

:::note
This is an unreleased feature. It is subject to change.
:::

You will be able to turn autoscaling on for a Container by setting the `autoscale` property on the Container class:

```javascript
class MyBackend extends Container {
  autoscale = true;
  defaultPort = 8080;
}
```

This instructs the platform to automatically scale instances based on incoming traffic and resource usage (memory, CPU). Container instances will be launched automatically to serve local traffic, and will be stopped when they are no longer needed.

To route requests to the correct instance, you will use the `getContainer()` helper function to get a container instance, then pass requests to it:

```javascript
export default {
  async fetch(request, env) {
    return getContainer(env.MY_BACKEND).fetch(request);
  },
};
```

This will send traffic to the nearest ready instance of a container. If a container is overloaded or has not yet launched, requests will be routed to a potentially more distant container. Container readiness can be automatically determined based on resource use, but will also be configurable with custom readiness checks.

Autoscaling and latency-aware routing will be available in the near future, and will be documented in more detail when released. Until then, you can use the `getRandom` helper function to route requests to multiple container instances.

---

# Demos and architectures

URL: https://developers.cloudflare.com/d1/demos/

import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"

Learn how you can use D1 within your existing application and architecture.
## Featured Demos - [Starter code for D1 Sessions API](https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template): An introduction to D1 Sessions API. This demo simulates purchase orders administration. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template) :::note[Tip: Place your database further away for the read replication demo] To simulate how read replication can improve a worst case latency scenario, select your primary database location to be in a farther away region (one of the deployment steps). You can find this in the **Database location hint** dropdown. ::: ## Demos Explore the following demo applications for D1. ## Reference architectures Explore the following reference architectures that use D1: --- # Getting started URL: https://developers.cloudflare.com/d1/get-started/ import { Render, PackageManagers, Steps, FileTree, Tabs, TabItem, TypeScriptExample, WranglerConfig } from "~/components"; This guide instructs you through: - Creating your first database using D1, Cloudflare's native serverless SQL database. - Creating a schema and querying your database via the command-line. - Connecting a [Cloudflare Worker](/workers/) to your D1 database using bindings, and querying your D1 database programmatically. You can perform these tasks through the CLI or through the Cloudflare dashboard. :::note If you already have an existing Worker and an existing D1 database, follow this tutorial from [3. Bind your Worker to your D1 database](/d1/get-started/#3-bind-your-worker-to-your-d1-database). ::: ## Quick start If you want to skip the steps and get started quickly, click on the button below. 
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/d1-get-started/d1/d1-get-started)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. Use this option if you are familiar with Cloudflare Workers, and wish to skip the step-by-step guidance. You may wish to manually follow the steps if you are new to Cloudflare Workers.

## Prerequisites

## 1. Create a Worker

Create a new Worker as the means to query your database.

1. Create a new project named `d1-tutorial` by running:

   This creates a new `d1-tutorial` directory as illustrated below.

   - d1-tutorial
     - node_modules/
     - test/
     - src
       - **index.ts**
     - package-lock.json
     - package.json
     - tsconfig.json
     - vitest.config.mts
     - worker-configuration.d.ts
     - **wrangler.jsonc**

   Your new `d1-tutorial` directory includes:

   - A `"Hello World"` [Worker](/workers/get-started/guide/#3-write-code) in `index.ts`.
   - A [Wrangler configuration file](/workers/wrangler/configuration/). This file is how your `d1-tutorial` Worker accesses your D1 database.

   :::note
   If you are familiar with Cloudflare Workers, or initializing projects in a Continuous Integration (CI) environment, initialize a new project non-interactively by setting `CI=true` as an [environment variable](/workers/configuration/environment-variables/) when running `create cloudflare@latest`. For example: `CI=true npm create cloudflare@latest d1-tutorial --type=simple --git --ts --deploy=false` creates a basic "Hello World" project ready to build on.
   :::

1. Log in to your [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Go to your account > **Compute (Workers)** > **Workers & Pages**.
3. Select **Create**.
4. Under **Start from a template**, select **Hello world**.
5. Name your Worker. For this tutorial, name your Worker `d1-tutorial`.
6. Select **Deploy**.

## 2.
Create a database

A D1 database is conceptually similar to many other SQL databases: a database may contain one or more tables, supports queries against those tables, and may define optional indexes. D1 uses the familiar [SQL query language](https://www.sqlite.org/lang.html) (as used by SQLite).

To create your first D1 database:

1. Change into the directory you just created for your Workers project:

   ```sh
   cd d1-tutorial
   ```

2. Run the following `wrangler@latest d1` command and give your database a name. In this tutorial, the database is named `prod-d1-tutorial`:

   :::note
   The [Wrangler command-line interface](/workers/wrangler/) is Cloudflare's tool for managing and deploying Workers applications and D1 databases in your terminal. It was installed when you used `npm create cloudflare@latest` to initialize your new project. While Wrangler gets installed locally to your project, you can use it outside the project by using the command `npx wrangler`.
   :::

   ```sh
   npx wrangler@latest d1 create prod-d1-tutorial
   ```

   ```sh output
   ✅ Successfully created DB 'prod-d1-tutorial' in region WEUR
   Created your new D1 database.

   {
     "d1_databases": [
       {
         "binding": "DB",
         "database_name": "prod-d1-tutorial",
         "database_id": ""
       }
     ]
   }
   ```

   This creates a new D1 database and outputs the [binding](/workers/runtime-apis/bindings/) configuration needed in the next step.

1. Go to **Storage & Databases** > **D1 SQL Database**.
2. Select **Create Database**.
3. Name your database. For this tutorial, name your D1 database `prod-d1-tutorial`.
4. (Optional) Provide a location hint. Location hint is an optional parameter you can provide to indicate your desired geographical location for your database. Refer to [Provide a location hint](/d1/configuration/data-location/#provide-a-location-hint) for more information.
5. Select **Create**.

:::note
For reference, a good database name:

- Uses ASCII characters, is shorter than 32 characters, and uses dashes (-) instead of spaces.
- Is descriptive of the use-case and environment. For example, "staging-db-web" or "production-db-backend". - Only describes the database, and is not directly referenced in code. ::: ## 3. Bind your Worker to your D1 database You must create a binding for your Worker to connect to your D1 database. [Bindings](/workers/runtime-apis/bindings/) allow your Workers to access resources, like D1, on the Cloudflare developer platform. To bind your D1 database to your Worker: You create bindings by updating your Wrangler file. 1. Copy the lines obtained from [step 2](/d1/get-started/#2-create-a-database) from your terminal. 2. Add them to the end of your Wrangler file. ```toml [[d1_databases]] binding = "DB" # available in your Worker on env.DB database_name = "prod-d1-tutorial" database_id = "" ``` Specifically: - The value (string) you set for `binding` is the **binding name**, and is used to reference this database in your Worker. In this tutorial, name your binding `DB`. - The binding name must be [a valid JavaScript variable name](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar_and_types#variables). For example, `binding = "MY_DB"` or `binding = "productionDB"` would both be valid names for the binding. - Your binding is available in your Worker at `env.` and the D1 [Workers Binding API](/d1/worker-api/) is exposed on this binding. :::note When you execute the `wrangler d1 create` command, the client API package (which implements the D1 API and database class) is automatically installed. For more information on the D1 Workers Binding API, refer to [Workers Binding API](/d1/worker-api/). ::: You can also bind your D1 database to a [Pages Function](/pages/functions/). For more information, refer to [Functions Bindings for D1](/pages/functions/bindings/#d1-databases). You create bindings by adding them to the Worker you have created. 1. Go to **Compute (Workers)** > **Workers & Pages**. 2. 
Select the `d1-tutorial` Worker you created in [step 1](/d1/get-started/#1-create-a-worker). 3. Select **Settings**. 4. Scroll to **Bindings**, then select **Add**. 5. Select **D1 database**. 6. Name your binding in **Variable name**, then select the `prod-d1-tutorial` D1 database you created in [step 2](/d1/get-started/#2-create-a-database) from the dropdown menu. For this tutorial, name your binding `DB`. 7. Select **Deploy** to deploy your binding. When deploying, there are two options: - **Deploy:** Immediately deploy the binding to 100% of your audience. - **Save version:** Save a version of the binding which you can deploy in the future. For this tutorial, select **Deploy**. ## 4. Run a query against your D1 database ### Populate your D1 database After correctly preparing your [Wrangler configuration file](/workers/wrangler/configuration/), set up your database. Create a `schema.sql` file using the SQL syntax below to initialize your database. 1. Copy the following code and save it as a `schema.sql` file in the `d1-tutorial` Worker directory you created in step 1: ```sql DROP TABLE IF EXISTS Customers; CREATE TABLE IF NOT EXISTS Customers (CustomerId INTEGER PRIMARY KEY, CompanyName TEXT, ContactName TEXT); INSERT INTO Customers (CustomerID, CompanyName, ContactName) VALUES (1, 'Alfreds Futterkiste', 'Maria Anders'), (4, 'Around the Horn', 'Thomas Hardy'), (11, 'Bs Beverages', 'Victoria Ashworth'), (13, 'Bs Beverages', 'Random Name'); ``` 2. Initialize your database to run and test locally first. Bootstrap your new D1 database by running: ```sh npx wrangler d1 execute prod-d1-tutorial --local --file=./schema.sql ``` ```output ⛅️ wrangler 4.13.2 ------------------- 🌀 Executing on local database prod-d1-tutorial () from .wrangler/state/v3/d1: 🌀 To execute on your remote database, add a --remote flag to your wrangler command. 🚣 3 commands executed successfully. 
```

:::note
The command `npx wrangler d1 execute` initializes your database locally, not on the remote database.
:::

3. Validate that your data is in the database by running:

   ```sh
   npx wrangler d1 execute prod-d1-tutorial --local --command="SELECT * FROM Customers"
   ```

   ```sh output
   🌀 Mapping SQL input into an array of statements
   🌀 Executing on local database prod-d1-tutorial () from .wrangler/state/v3/d1:
   ┌────────────┬─────────────────────┬───────────────────┐
   │ CustomerId │ CompanyName         │ ContactName       │
   ├────────────┼─────────────────────┼───────────────────┤
   │ 1          │ Alfreds Futterkiste │ Maria Anders      │
   ├────────────┼─────────────────────┼───────────────────┤
   │ 4          │ Around the Horn     │ Thomas Hardy      │
   ├────────────┼─────────────────────┼───────────────────┤
   │ 11         │ Bs Beverages        │ Victoria Ashworth │
   ├────────────┼─────────────────────┼───────────────────┤
   │ 13         │ Bs Beverages        │ Random Name       │
   └────────────┴─────────────────────┴───────────────────┘
   ```

Use the Dashboard to create a table and populate it with data.

1. Go to **Storage & Databases** > **D1 SQL Database**.
2. Select the `prod-d1-tutorial` database you created in [step 2](/d1/get-started/#2-create-a-database).
3. Select **Console**.
4. Paste the following SQL snippet.

   ```sql
   DROP TABLE IF EXISTS Customers;
   CREATE TABLE IF NOT EXISTS Customers (CustomerId INTEGER PRIMARY KEY, CompanyName TEXT, ContactName TEXT);
   INSERT INTO Customers (CustomerID, CompanyName, ContactName)
   VALUES (1, 'Alfreds Futterkiste', 'Maria Anders'), (4, 'Around the Horn', 'Thomas Hardy'), (11, 'Bs Beverages', 'Victoria Ashworth'), (13, 'Bs Beverages', 'Random Name');
   ```

5. Select **Execute**. This creates a table called `Customers` in your `prod-d1-tutorial` database.
6. Select **Tables**, then select the `Customers` table to view the contents of the table.

### Write queries within your Worker

After you have set up your database, run an SQL query from within your Worker.

1.
Navigate to your `d1-tutorial` Worker and open the `index.ts` file. The `index.ts` file is where you configure your Worker's interactions with D1.

2. Clear the content of `index.ts`.

3. Paste the following code snippet into your `index.ts` file:

   ```typescript
   export interface Env {
     // If you set another name in the Wrangler config file for the value for 'binding',
     // replace "DB" with the variable name you defined.
     DB: D1Database;
   }

   export default {
     async fetch(request, env): Promise<Response> {
       const { pathname } = new URL(request.url);

       if (pathname === "/api/beverages") {
         // If you did not use `DB` as your binding name, change it here
         const { results } = await env.DB.prepare(
           "SELECT * FROM Customers WHERE CompanyName = ?",
         )
           .bind("Bs Beverages")
           .all();
         return Response.json(results);
       }

       return new Response(
         "Call /api/beverages to see everyone who works at Bs Beverages",
       );
     },
   } satisfies ExportedHandler<Env>;
   ```

   In the code above, you:

   1. Define a binding to your D1 database in your code. This binding matches the `binding` value you set in the [Wrangler configuration file](/workers/wrangler/configuration/) under `d1_databases`.
   2. Query your database using `env.DB.prepare` to issue a [prepared query](/d1/worker-api/d1-database/#prepare) with a placeholder (the `?` in the query).
   3. Call `bind()` to safely and securely bind a value to that placeholder. In a real application, you would allow a user to pass the `CompanyName` they want to list results for. Using `bind()` prevents users from executing arbitrary SQL (known as "SQL injection") against your application and deleting or otherwise modifying your database.
   4. Execute the query by calling `all()` to return all rows (or none, if the query returns none).
   5. Return your query results, if any, in JSON format with `Response.json(results)`.

After configuring your Worker, you can test your project locally before you deploy globally.

You can query your D1 database using your Worker.

1.
Go to **Compute (Workers)** > **Workers & Pages**. 2. Select the `d1-tutorial` Worker you created. 3. Select the **Edit code** icon (**\<\/\>**). 4. Clear the contents of the `worker.js` file, then paste the following code: ```js export default { async fetch(request, env) { const { pathname } = new URL(request.url); if (pathname === "/api/beverages") { // If you did not use `DB` as your binding name, change it here const { results } = await env.DB.prepare( "SELECT * FROM Customers WHERE CompanyName = ?" ) .bind("Bs Beverages") .all(); return new Response(JSON.stringify(results), { headers: { 'Content-Type': 'application/json' } }); } return new Response( "Call /api/beverages to see everyone who works at Bs Beverages" ); }, }; ``` 5. Select **Save**. ## 5. Deploy your application Deploy your application on Cloudflare's global network. To deploy your Worker to production using Wrangler, you must first repeat the [database configuration](/d1/get-started/#populate-your-d1-database) steps after replacing the `--local` flag with the `--remote` flag to give your Worker data to read. This creates the database tables and imports the data into the production version of your database. 1. Create tables and add entries to your remote database with the `schema.sql` file you created in step 4. Enter `y` to confirm your decision. ```sh npx wrangler d1 execute prod-d1-tutorial --remote --file=./schema.sql ``` ```sh output ✔ ⚠️ This process may take some time, during which your D1 database will be unavailable to serve queries. Ok to proceed? y 🚣 Executed 3 queries in 0.00 seconds (5 rows read, 6 rows written) Database is currently at bookmark 00000002-00000004-00004ef1-ad4a06967970ee3b20860c86188a4b31. 
┌────────────────────────┬───────────┬──────────────┬────────────────────┐ │ Total queries executed │ Rows read │ Rows written │ Database size (MB) │ ├────────────────────────┼───────────┼──────────────┼────────────────────┤ │ 3 │ 5 │ 6 │ 0.02 │ └────────────────────────┴───────────┴──────────────┴────────────────────┘ ``` 2. Validate the data is in production by running: ```sh npx wrangler d1 execute prod-d1-tutorial --remote --command="SELECT * FROM Customers" ``` ```sh output ⛅️ wrangler 4.13.2 ------------------- 🌀 Executing on remote database prod-d1-tutorial (): 🌀 To execute on your local development database, remove the --remote flag from your wrangler command. 🚣 Executed 1 command in 0.4069ms ┌────────────┬─────────────────────┬───────────────────┐ │ CustomerId │ CompanyName │ ContactName │ ├────────────┼─────────────────────┼───────────────────┤ │ 1 │ Alfreds Futterkiste │ Maria Anders │ ├────────────┼─────────────────────┼───────────────────┤ │ 4 │ Around the Horn │ Thomas Hardy │ ├────────────┼─────────────────────┼───────────────────┤ │ 11 │ Bs Beverages │ Victoria Ashworth │ ├────────────┼─────────────────────┼───────────────────┤ │ 13 │ Bs Beverages │ Random Name │ └────────────┴─────────────────────┴───────────────────┘ ``` 3. Deploy your Worker to make your project accessible on the Internet. Run: ```sh npx wrangler deploy ``` ```sh output ⛅️ wrangler 4.13.2 ------------------- Total Upload: 0.19 KiB / gzip: 0.16 KiB Your worker has access to the following bindings: - D1 Databases: - DB: prod-d1-tutorial () Uploaded d1-tutorial (3.76 sec) Deployed d1-tutorial triggers (2.77 sec) https://d1-tutorial..workers.dev Current Version ID: ``` You can now visit the URL for your newly created project to query your live database. For example, if the URL of your new Worker is `d1-tutorial..workers.dev`, accessing `https://d1-tutorial..workers.dev/api/beverages` sends a request to your Worker that queries your live database directly. 4. 
Test your database is running successfully. Add `/api/beverages` to the provided Wrangler URL. For example, `https://d1-tutorial..workers.dev/api/beverages`.

1. Go to **Compute (Workers)** > **Workers & Pages**.
2. Select your `d1-tutorial` Worker.
3. Select **Deployments**.
4. From the **Version History** table, select **Deploy version**.
5. From the **Deploy version** page, select **Deploy**.

   This deploys the latest version of the Worker code to production.

## 6. (Optional) Develop locally with Wrangler

If you are using D1 with Wrangler, you can test your database locally. While in your project directory:

1. Run `wrangler dev`:

   ```sh
   npx wrangler dev
   ```

   When you run `wrangler dev`, Wrangler provides a URL (most likely `localhost:8787`) to review your Worker.

2. Go to the URL. The page displays `Call /api/beverages to see everyone who works at Bs Beverages`.

3. Test your database is running successfully. Add `/api/beverages` to the provided Wrangler URL. For example, `localhost:8787/api/beverages`.

If successful, the browser displays your data.

:::note
You can only develop locally if you are using Wrangler. You cannot develop locally through the Cloudflare dashboard.
:::

## 7. (Optional) Delete your database

To delete your database:

Run:

```sh
npx wrangler d1 delete prod-d1-tutorial
```

1. Go to **Storage & Databases** > **D1 SQL Database**.
2. Select your `prod-d1-tutorial` D1 database.
3. Select **Settings**.
4. Select **Delete**.
5. Type the name of the database (`prod-d1-tutorial`) to confirm the deletion.

:::caution
Note that deleting your D1 database will stop your application from functioning as before.
:::

If you want to delete your Worker:

Run:

```sh
npx wrangler delete d1-tutorial
```

1. Go to **Compute (Workers)** > **Workers & Pages**.
2. Select your `d1-tutorial` Worker.
3. Select **Settings**.
4. Scroll to the bottom of the page, then select **Delete**.
5. Type the name of the Worker (`d1-tutorial`) to confirm the deletion.
## Summary

In this tutorial, you have:

- Created a D1 database
- Created a Worker to access that database
- Deployed your project globally

## Next steps

If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the [Cloudflare Developers community on Discord](https://discord.cloudflare.com).

- See supported [Wrangler commands for D1](/workers/wrangler/commands/#d1).
- Learn how to use [D1 Worker Binding APIs](/d1/worker-api/) within your Worker, and test them from the [API playground](/d1/worker-api/#api-playground).
- Explore [community projects built on D1](/d1/reference/community-projects/).

---

# Cloudflare D1

URL: https://developers.cloudflare.com/d1/

import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct } from "~/components"

Create new serverless SQL databases to query from your Workers and Pages projects.

D1 is Cloudflare's managed, serverless database with SQLite's SQL semantics, built-in disaster recovery, and Worker and HTTP API access. D1 is designed for horizontal scale out across multiple, smaller (10 GB) databases, such as per-user, per-tenant or per-entity databases. Because D1 pricing is based only on query and storage costs, you can build applications with thousands of databases, isolating data per user or tenant at no extra cost.

Create your first D1 database by [following the Get started guide](/d1/get-started/), learn how to [import data into a database](/d1/best-practices/import-export-data/), and how to [interact with your database](/d1/worker-api/) directly from [Workers](/workers/) or [Pages](/pages/functions/bindings/#d1-databases).

***

## Features

Create your first D1 database, establish a schema, import data and query D1 directly from an application [built with Workers](/workers/).

Execute SQL with SQLite's SQL compatibility and D1 Client API.
Time Travel is D1’s approach to backups and point-in-time-recovery, and allows you to restore a database to any minute within the last 30 days. *** ## Related products Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. Deploy dynamic front-end applications in record time. *** ## More resources Learn about D1's pricing and how to estimate your usage. Learn about what limits D1 has and how to work within them. Browse what developers are building with D1. Learn more about the storage and database options you can build on with Workers. Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers. Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Developer Platform. --- # Wrangler commands URL: https://developers.cloudflare.com/d1/wrangler-commands/ import { Render, Type, MetaInfo } from "~/components" D1 Wrangler commands use REST APIs to interact with the control plane. This page lists the Wrangler commands for D1. ## Global commands ## Experimental commands ### `insights` Returns statistics about your queries. ```sh npx wrangler d1 insights --