
TanStack Start on Cloudflare: Part 2 - Database Setup

This is the third article in the series that explores how to build full-stack applications using TanStack Start hosted on Cloudflare.



Where we left off

In the previous article we set up linting and formatting for our project. We also cleaned up the boilerplate code generated by the Cloudflare CLI. Now it is time to move on and configure database access for our project. As a reminder, you can find the implementation behind this article series in the related GitHub repository.

Intro

When it comes to setting up a database, it usually boils down to two steps:

  1. Spinning up a database instance (locally, remotely, or both).

  2. Setting up client code to access it from the application.

In this article we will work with PostgreSQL as the database and Kysely as a lightweight client for it. I chose PostgreSQL because it is one of the most popular open-source databases out there, with wide support across all major cloud providers. Moreover, it is very feature-rich, which makes it an attractive choice for many open-source and commercial projects (another popular alternative being MySQL).

The choice of the client library might seem less obvious. I have experience working with object-relational mapping (ORM) libraries in multiple programming languages, and the more I use them the more I think that they add unnecessary overhead and can even negatively impact your professional growth. ORMs create an abstraction layer over SQL, which might seem convenient when you get started: you write code in the same language as your business logic, and you get all the benefits of modern tooling, such as linting and intellisense. This is a good thing. The flip side is that you get used to these abstractions and after some time cannot write SQL anymore. I strongly believe that any professional software engineer who works with databases must be proficient in SQL. Moreover, ORMs usually do not cover all the features offered by SQL, which means that even if you use an ORM, at some point you will run into a situation where you have to write a raw SQL query.

If there were good tooling for writing SQL embedded in other programming languages (TypeScript, Python, Golang, etc.) with syntax highlighting, linting, type safety, and intellisense, I would stick with writing SQL and not bother with adding any abstractions on top of it.

Unfortunately, at the time of this writing at least, embedding SQL queries into other programming languages usually boils down to writing them in plain text, which can be very error-prone. So, what should we use instead? This is exactly the reason why libraries such as Kysely exist. They are called query builders. These are not ORMs, but rather a very thin layer on top of SQL which gives you the desired properties (syntax highlighting, linting, type safety, and intellisense) while not deviating from the raw SQL structure. This is my motivation for using such libraries, and Kysely is one of the most popular and actively maintained ones, with full TypeScript support.
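
To make this concrete, here is a minimal sketch of what a Kysely query looks like next to the SQL it compiles to. The users table and its columns are hypothetical, purely for illustration; in this project the DB interface will be generated from the real schema later in the article.

import { Kysely } from 'kysely'

// Hypothetical schema, for illustration only. In this project the DB
// interface is generated by kysely-codegen from the real schema.
interface DB {
  users: { id: string; email: string; created_at: Date }
}

// Given a configured Kysely client (we will create one below):
async function listRecentUsers(db: Kysely<DB>) {
  // Compiles to roughly:
  //   SELECT "id", "email" FROM "users" WHERE "created_at" > $1
  return db
    .selectFrom('users')
    .select(['id', 'email'])
    .where('created_at', '>', new Date('2025-01-01'))
    .execute()
}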

Spinning up a PostgreSQL instance

We will start with spinning up a database instance. When developing new features, especially when working in a team, it is important to be able to work fully locally as well as in a hybrid setup (running business logic locally against a remote database). In this section we will set up both a local PostgreSQL instance running in a container and a remote instance hosted on Neon.

Local database container

We start with spinning up a local container. The simplest way to do this, and the one I usually use, is to write a docker-compose.yaml file and a .env file in the root of the project. The former defines all the services we want to run locally (like a database instance, in this case), while the latter contains the environment variables used to configure these services. Create a file called docker-compose.yaml with the following content:

services:
  db:
    image: postgres:17.7-bookworm
    container_name: fullstack_cloudflare_db
    restart: always
    env_file:
      - .env
    ports:
      - 5432:5432
    networks:
      - db-network
    volumes:
      - db-volume:/var/lib/postgresql

volumes:
  db-volume:
    driver: local

networks:
  db-network:
    driver: bridge

Next, create a file called .env beside it with the content:

POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=fullstack_cloudflare
PGDATA=/var/lib/postgresql/17/docker

Note that the values used above are just an example and you may modify them as you like. If you want a better understanding of the configuration above, I encourage you to check the documentation of the official PostgreSQL image.

If you want to start the container in the background, simply run:

docker compose up -d

If you want to open a psql session inside the running container, run:

docker exec -it fullstack_cloudflare_db psql -U postgres

Note that the container name and user in this command match the values specified above. Finally, to stop the container, run:

docker compose down

The commands above require Docker to be installed. Please refer to the official Getting started guide if you need help setting it up.

Neon database

For the remote database setup I chose Neon. It is a serverless database-as-a-service solution with a good free tier that is easy to work with. It is worth mentioning that there are many other alternatives out there, like Supabase (more of a backend-as-a-service offering than just a database), Aiven, or PlanetScale, not to mention the database services offered by many cloud providers.

Given that I encourage you to host this solution on Cloudflare, a legitimate question would be: why not use D1 on Cloudflare? It is an interesting product, but its SQL support is built around compatibility with SQLite. If this is what you need, by all means try it out, but check the documentation for the service limitations. If SQLite compatibility is something you are interested in, I would also recommend taking a look at Turso. Since we are interested in using PostgreSQL features, I will move on with Neon.

If you are following along and do not have a Neon account yet, go ahead and create one. It is free, and the free tier is good enough to try the product out. When you log in and see your dashboard, you will be able to create a project. Creating a project means creating a PostgreSQL database instance on Neon. Pick a project name, the PostgreSQL version, the cloud provider, and the region where your instance will be hosted. By default, Neon will create two branches, production and development. The names are self-explanatory, and you can read about the concept of database branches in their official documentation. The important part is that in no time you have a fully set-up remote database instance you can communicate with. If you want to see the credentials for accessing each branch, simply click on “Connect” while looking at the project’s dashboard.

Cloudflare Hyperdrive

In the previous section we set up a remote database instance with Neon. When creating it, you chose the region where it is deployed. Since your users may be anywhere in the world, not necessarily in the region where the database is, it is important to ensure that they experience short response times from your system, regardless of their location. This is not an easy task, but Cloudflare offers a service that assists with exactly this: Hyperdrive. It accelerates database queries using Cloudflare's high-speed global edge network. This is the last piece of the puzzle we need to prepare before focusing on accessing the database from the client code.

We have to connect the newly created Neon project to Hyperdrive. Luckily, Cloudflare has a documentation page that covers this. The process is rather straightforward and consists of just a few steps.

First, go to your Neon project and create a new role for the Hyperdrive user to connect to the database, e.g. hyperdrive-user. To do this, head to the Overview section of the main branch, usually called production by default. Then choose Roles & Databases. There you can create a new role. Next, go back to the project Dashboard and click on the Connect button. You will see a pop-up window that allows you to pick the new role, the database, and the branch to get your connection string. You will need it in the next step.

Now we will provide the connection string to the Hyperdrive service in our Cloudflare account. This can be done using the UI dashboard or the Wrangler CLI. We will use the latter, since the CLI is already installed for our project. Copy the connection string as discussed above (make sure you uncheck the connection pooling checkbox, because Hyperdrive provides this feature for us). With the connection string ready, open a terminal in the root of the project and run the following command:

pnpx wrangler hyperdrive create <name-of-hyperdrive-config> --connection-string "<neon-connection-string>"

This will create the Hyperdrive config with the provided name and ask whether you want to add a binding for it to your project; you should agree to that (otherwise you will have to add it manually). For the question about local development, answer no, because we will use our docker-compose.yaml file created above. We will discuss the local development setup in more detail below.
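
For reference, the binding that Wrangler adds to the wrangler.jsonc file should look roughly like this (the id below is a placeholder for the config ID printed by the command above):

{
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-config-id>"
    }
  ]
}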

At this point, we have set up both the local PostgreSQL in a container and a remote database on Neon. We have also set up access to the remote database via Cloudflare Hyperdrive for better performance. The latter manages connection pooling for us, so we can safely access our database from the serverless Cloudflare Workers environment.

Setting up Kysely

Now it is time to set up the client code to access the database within our full-stack application. First, let us install the driver library,

pnpm add pg

and the corresponding type definitions

pnpm add -D @types/pg

Next, install Kysely

pnpm add kysely

Finally, install a companion package that helps maintain a TypeScript representation of the database schema for us

pnpm add -D kysely-codegen

When all packages are installed, we start by setting up the codegen npm command, adding a new line to the package.json file:

@@ -12,6 +12,7 @@
     "check": "prettier --write . && eslint --fix",
     "deploy": "pnpm run build && wrangler deploy",
     "preview": "pnpm run build && vite preview",
+    "kysely-codegen": "kysely-codegen --dialect postgres --out-file ./src/db/types.ts",
     "cf-typegen": "wrangler types"
   },
   "dependencies": {

Add the following environment variable to your .env file (this is taken from the documentation of kysely-codegen):

DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/${POSTGRES_DB}"

Now you can run the codegen command:

pnpm kysely-codegen

You will notice that a new file is generated in your src/ directory, as specified in the kysely-codegen command we set up above. On this initial run, since we do not have any data models defined yet, the generated DB interface will be empty. However, from now on you should not have to write any database types by hand. Whenever you make a database schema change, you can regenerate the TypeScript types for your schema using this command. In fact, we will set up database migrations below and make sure that the codegen command runs automatically as soon as a database migration completes.
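
To give you an idea of what the output looks like once tables exist, here is an illustrative sketch of a generated types file for a hypothetical todos table (your actual file will be produced by kysely-codegen from the real schema and will differ):

import type { ColumnType } from 'kysely'

// Illustrative sketch of kysely-codegen output for a hypothetical `todos`
// table; never edit this file by hand, always regenerate it.
export type Generated<T> = T extends ColumnType<infer S, infer I, infer U>
  ? ColumnType<S, I | undefined, U>
  : ColumnType<T, T | undefined, T>

export interface Todos {
  id: Generated<number>
  title: string
  completed: Generated<boolean>
}

export interface DB {
  todos: Todos
}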

When the database types file is generated, we are ready to create a database client for our application. To do this, create a new file src/db/client.ts with the content:

import { Pool } from 'pg'
import { Kysely, PostgresDialect } from 'kysely'
import type { DB } from './types'

function createDb(connectionString: string): Kysely<DB> {
  const pool = new Pool({
    connectionString,
    // Hyperdrive handles pooling for you; increase this number
    // if you plan to make parallel queries in the same worker.
    // Before increasing it, check the documentation:
    // https://developers.cloudflare.com/hyperdrive/concepts/connection-pooling/
    max: 1,
  })

  return new Kysely<DB>({
    dialect: new PostgresDialect({ pool }),
  })
}

// NOTE: We are using the database connection in a serverless environment, while
// Hyperdrive manages connection pooling for us. In order to return the borrowed
// connection back to the pool, it is important to destroy the client at the end of each request.
export async function withDb<T>(
  connectionString: string,
  callback: (db: ReturnType<typeof createDb>) => Promise<T>,
): Promise<T> {
  const db = createDb(connectionString)
  try {
    return await callback(db)
  } finally {
    await db.destroy()
  }
}

As mentioned above, Hyperdrive manages a connection pool for us, but we have to make sure to return a borrowed connection to the pool once the Worker handler has completed a request. This is exactly what the withDb function is for.

Local development

As briefly mentioned above, it is important to discuss the local development setup for our project, especially when it comes to database access. At this point, we have our local PostgreSQL running in a container and a remote Neon project. On Neon, we have at least two branches, production and development. When working on a new feature your default choice will often be the local container, but sometimes you may want to test your changes against the remote database too (using the development branch, or any other branch you create for your feature). Let us discuss these options now.

As stated in the Hyperdrive documentation, in order to connect to a database in the local development environment, it is convenient to set up a special environment variable. In your .env file add the following line:

CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_HYPERDRIVE="${DATABASE_URL}"

Here the _HYPERDRIVE suffix stands for _<BINDING_NAME>, i.e. it should match the binding we set up for Hyperdrive in the wrangler.jsonc file.

This will use the local PostgreSQL container in development (see DATABASE_URL defined above for codegen). If you want to use one of the branches of the remote Neon database, just replace the value of this environment variable with the connection string retrieved from Neon:

NEON_DEV_URL="..."
CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_HYPERDRIVE="${NEON_DEV_URL}"

Now you can start the development server with:

pnpm run dev
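
One note on typing: after running pnpm cf-typegen (the cf-typegen script runs wrangler types), the generated type declarations should contain an entry for the Hyperdrive binding, roughly like the sketch below; the exact contents of the generated file may differ. This is what makes env.HYPERDRIVE.connectionString available in the server code we are about to write.

// Sketch of the relevant part of the types generated by `wrangler types`;
// shown for illustration only.
interface Env {
  HYPERDRIVE: Hyperdrive
}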

If you want to test the connection to our newly set-up database in our so far barebones project, we can quickly make a simple query to see that it works. For this, open up the file src/routes/index.tsx and modify it as follows:

@@ -1,7 +1,39 @@
 import { createFileRoute } from '@tanstack/react-router'
+import { createServerFn } from '@tanstack/react-start'
+import { env } from 'cloudflare:workers'
+import { sql } from 'kysely'
+import { withDb } from '@/db/client'

-export const Route = createFileRoute('/')({ component: App })
+const getDbTime = createServerFn().handler(async () => {
+  return withDb(env.HYPERDRIVE.connectionString, async (db) => {
+    const time = await sql<{
+      now: Date
+    }>`SELECT CURRENT_TIMESTAMP AS now`.execute(db)
+
+    if (time.rows.length === 0) {
+      throw new Error('Failed to retrieve time')
+    }
+
+    return time.rows[0].now
+  })
+})
+
+export const Route = createFileRoute('/')({
+  component: App,
+  pendingComponent: AppLoading,
+  errorComponent: AppError,
+  loader: async () => await getDbTime(),
+})
+
+function AppLoading() {
+  return <p>Loading...</p>
+}
+
+function AppError() {
+  return <p>Something went wrong 😞</p>
+}

 function App() {
-  return null
+  const dbTime = Route.useLoaderData()
+  return <div>Current DB time is: {dbTime.toISOString()}</div>
 }

This adds a very simple server function that gets the current timestamp from the database and shows it on the index page. Now, when you visit your local dev server URL, you should see the current time fetched from the database.

Database migrations

The final topic I want to discuss in this article is database migrations. We can already access the database from our application code, but we need a good way to evolve the database schema. In any full-stack application with a database, this is a very important topic that should be addressed from the beginning. So, let’s dive into it.

As I mentioned above, Kysely is a query builder, not an ORM. However, this does not mean that it lacks convenience tools to help us manage migrations. According to the documentation, there are a few ways to approach this. I prefer to have a CLI command to manage all database migrations, but if you need more control, it is possible to write a migration script manually.
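
For completeness, here is a minimal sketch of what such a manual script could look like, based on Kysely's Migrator API. The db import is hypothetical, standing in for a Kysely instance created the same way as in src/db/client.ts:

import { promises as fs } from 'node:fs'
import * as path from 'node:path'
import { FileMigrationProvider, Migrator } from 'kysely'
// Hypothetical import: a Kysely instance created like in src/db/client.ts,
// but long-lived, since this script runs outside the Workers runtime.
import { db } from './db'

async function migrateToLatest() {
  const migrator = new Migrator({
    db,
    provider: new FileMigrationProvider({
      fs,
      path,
      migrationFolder: path.join(process.cwd(), 'migrations'),
    }),
  })

  // Runs every migration that has not been executed yet, in order.
  const { error, results } = await migrator.migrateToLatest()

  for (const result of results ?? []) {
    console.log(`${result.status}: ${result.migrationName}`)
  }

  if (error) {
    console.error('Migration failed:', error)
    process.exitCode = 1
  }

  await db.destroy()
}

void migrateToLatest()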

In this project, however, I will show how to set up kysely-ctl. We start by installing the package:

pnpm add -D kysely-ctl

Now, before we can use it we have to configure the CLI. First, let us add an npm script:

@@ -13,6 +13,7 @@
     "deploy": "pnpm run build && wrangler deploy",
     "preview": "pnpm run build && vite preview",
     "kysely-codegen": "kysely-codegen --dialect postgres --out-file ./src/database/types.ts",
+    "kysely-ctl": "kysely",
     "cf-typegen": "wrangler types"
   },
   "dependencies": {

Next, let us bootstrap the CLI configuration running the following command:

pnpm run kysely-ctl init

This command should generate a new file .config/kysely.config.ts which looks as follows:

import {
  DummyDriver,
  PostgresAdapter,
  PostgresIntrospector,
  PostgresQueryCompiler,
} from 'kysely'
import { defineConfig } from 'kysely-ctl'

export default defineConfig({
  // replace me with a real dialect instance OR a dialect name + `dialectConfig` prop.
  dialect: {
    createAdapter() {
      return new PostgresAdapter()
    },
    createDriver() {
      return new DummyDriver()
    },
    createIntrospector(db) {
      return new PostgresIntrospector(db)
    },
    createQueryCompiler() {
      return new PostgresQueryCompiler()
    },
  },
  //   migrations: {
  //     migrationFolder: "migrations",
  //   },
  //   plugins: [],
  //   seeds: {
  //     seedFolder: "seeds",
  //   }
})

In order to fix the linting errors that appear when you open this file, we have to add it to the project’s tsconfig.json file:

@@ -2,6 +2,7 @@
   "include": [
     "**/*.ts",
     "**/*.tsx",
+    ".config/*.ts",
     "eslint.config.js",
     "prettier.config.js",
     "vite.config.ts"

The last thing we have to do is to tweak the configuration to use our database setup. Open up the configuration file and modify it to look like this:

import { PostgresDialect } from 'kysely'
import { defineConfig } from 'kysely-ctl'
import { Pool } from 'pg'

export default defineConfig({
  // This configuration mirrors the configuration from src/db/client.ts, but with
  // an important nuance - instead of using the connection string for Hyperdrive,
  // the DATABASE_URL is used directly.
  dialect: new PostgresDialect({
    pool: new Pool({ connectionString: process.env.DATABASE_URL, max: 1 }),
  }),
  migrations: {
    migrationFolder: '../migrations',
  },
  //   plugins: [],
  //   seeds: {
  //     seedFolder: "seeds",
  //   }
})

With these changes in place we can test our CLI tool by running the following commands:

pnpm kysely-ctl --help
pnpm kysely-ctl migrate list

The last command should output a message saying there are no migrations found, which is correct since we have not created any yet. All the migrations for your database will be stored in a folder called migrations in the root of your project. This is configurable, of course; see the configuration file above.

In later articles we will create database migrations, but let me already outline the workflow we are going to use. Every time you want to change your database schema (add a new table or update an existing one, for example), you will follow these steps:

  1. Run pnpm kysely-ctl migrate make <migration-name>. This will generate an empty migration file with a name starting with a timestamp, e.g. migrations/1766261569451_new-table.ts.

  2. Complete the up (migration) and down (rollback) functions in the file with the changes you want to make (a sketch of such a file follows this list).

  3. Run pnpm kysely-ctl migrate up.

  4. Update the TypeScript DB type for the database by running pnpm kysely-codegen.
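
To illustrate step 2, here is a sketch of what a completed migration file could look like. The todos table and its columns are hypothetical, purely for illustration:

import { sql, type Kysely } from 'kysely'

// Hypothetical migration creating a `todos` table, for illustration only.
// The `any` type is deliberate: migrations should not depend on the evolving
// DB interface, so that old migration files keep compiling.
export async function up(db: Kysely<any>): Promise<void> {
  await db.schema
    .createTable('todos')
    .addColumn('id', 'serial', (col) => col.primaryKey())
    .addColumn('title', 'text', (col) => col.notNull())
    .addColumn('created_at', 'timestamptz', (col) =>
      col.notNull().defaultTo(sql`CURRENT_TIMESTAMP`),
    )
    .execute()
}

export async function down(db: Kysely<any>): Promise<void> {
  await db.schema.dropTable('todos').execute()
}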

The last two steps can be combined into one by creating an npm script, something like:

@@ -14,6 +14,8 @@
     "preview": "pnpm run build && vite preview",
     "kysely-codegen": "kysely-codegen --dialect postgres --out-file ./src/database/types.ts",
     "kysely-ctl": "kysely",
+    "kysely:up": "pnpm kysely-ctl migrate up && pnpm run kysely-codegen",
+    "kysely:down": "pnpm kysely-ctl migrate down && pnpm run kysely-codegen",
     "cf-typegen": "wrangler types"
   },
   "dependencies": {

Outro

In this article we set up a database for our project. For local development we created a docker-compose.yaml file for running PostgreSQL in a container. For the remote database we set up a Neon project, which can be used for both production and development thanks to the branching feature. Finally, we created utility functions to access the database from our full-stack application and set up convenience tools to manage database migrations. In the next article, we will pick an example application to build in order to better demonstrate the new concepts in practice. The project will be built up gradually until the end of this article series and deployed to Cloudflare. Stay tuned!