# The Book of Babar
Ergonomic Postgres for Rust. Typed, async, no surprises.

babar is a typed, async Postgres driver for Tokio that speaks the wire
protocol directly. No libpq. No magic. Just queries, codecs, and clear
errors — composed the way you’d compose any other Rust value.
```sh
cargo add babar
```

## Why babar
| Pillar | Headline | What you get |
|---|---|---|
| Ergonomic by Design | Read it once, understand it forever. | Queries are typed values. Codecs are imported by name. There is one way to start a transaction, one way to bind a parameter, one way to run a migration. |
| Postgres at Heart | The wire protocol, faithfully. | Extended-protocol prepares, binary results, SCRAM-SHA-256, channel binding over TLS, and binary COPY FROM STDIN for bulk ingest. No translation layer between you and the server. |
| Built for the Herd | Predictable under load. | A single background task owns the socket and serializes wire I/O, so every public call is cancellation-safe. Pool, statement cache, and tracing spans are first-class — not bolted on later. |
## Connect, type, query

Three values: a `Config`, a `Query`, and a `Session`. Codecs come in by
name so the compiler can read your intent.

```rust
use babar::codec::{int4, text};
use babar::query::Query;
use babar::{Config, Session};

#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
    let session: Session = Session::connect( // type: Session
        Config::new("localhost", 5432, "postgres", "postgres")
            .password("secret")
            .application_name("hello-babar"),
    )
    .await?;

    let select: Query<(), (i32, String)> = // type: Query<(), (i32, String)>
        Query::raw(
            "SELECT 1::int4 AS id, 'Ada'::text AS name",
            (),
            (int4, text),
        );

    let rows: Vec<(i32, String)> = session.query(&select, ()).await?; // type: Vec<(i32, String)>
    println!("{rows:?}");

    session.close().await?;
    Ok(())
}
```
You wrote three things: a Config describing where to connect, a
Query<A, B> describing the round-trip (parameters in, rows out), and
the call that ties them together. The codec tuple (int4, text)
is the schema of the rows you’ll get back.
## Where to go next

New here? Read What makes babar babar → first — a one-page tour of where babar sits and what makes it distinctive.

- Prerequisites → — one `docker run` for a Postgres that logs every byte back at you.
- Your first query → — the same flow, walked one line at a time, with that Postgres handy.
- The Book of Babar → — thirteen short chapters covering connecting, querying, transactions, pooling, COPY, migrations, errors, codecs, web services, TLS, and observability.
- Reference → — codec catalog, error catalog, feature flags, configuration knobs.
- Why babar → — the design notes.
# Prerequisites

Before you connect, you need a Postgres to connect to. The cheapest debugger you’ll get on this whole journey is a Postgres that prints everything it does back at you, so let’s run one of those.
## A Postgres that talks back

Open a terminal, paste this, and leave it running. It’s a throwaway
container — `--rm` means it disappears when you `Ctrl-C`, so nothing
leaks past your tutorial session.
```sh
docker run --rm -it \
  --name babar-pg \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=postgres \
  postgres:17 \
  -c log_statement=all \
  -c log_min_duration_statement=0 \
  -c log_connections=on \
  -c log_disconnections=on
```

What each flag is doing for you:

- `--rm -it` — foreground, throwaway, `Ctrl-C` to stop. No daemon, no cleanup chores later.
- `-p 5432:5432` — Postgres’ default port, exposed on `localhost`.
- `-e POSTGRES_PASSWORD=postgres` — sets the password for the default `postgres` superuser. The `postgres:17` image already creates that role and a database of the same name on first boot, so we just need to give it a password.
- `-c log_statement=all` — every SQL statement gets logged.
- `-c log_min_duration_statement=0` — every statement also gets a duration logged, no threshold.
- `-c log_connections=on` / `-c log_disconnections=on` — connection lifecycle in the same stream.
The connection string for everything that follows is:

```
postgres://postgres:postgres@localhost:5432/postgres
```

…which in `Config` form is:

```rust
#![allow(unused)]
fn main() {
    use babar::Config;

    let cfg = Config::new("localhost", 5432, "postgres", "postgres") // type: Config
        .password("postgres")
        .application_name("first-query");
}
```
## Why foreground?

Because the second window — the one tailing those logs (`docker logs -f babar-pg` if you’d rather tail from elsewhere) — is where
you’ll see exactly what babar sent on the wire. Prepared-statement
names, parameter values, every `BEGIN` and `COMMIT`. When something
surprises you in chapter 3 or chapter 7, your first move is to glance
at that window. It is faster than any `println!` you will ever write.
## Stop it

`Ctrl-C` in the Postgres window. `--rm` cleans up the container; the
data goes with it. That’s the point — every tutorial run is a fresh
database.

## Next

- Your first query → — connect, query, decode.
# Your first query

In this chapter we’ll connect to a Postgres server, run a single query,
and decode the response into Rust values you can pattern-match
on. Three values do the work: a `Config`, a `Query`, and a `Session`.
## Setup

Add babar and a Tokio runtime to your `Cargo.toml`, then drop the
following into `src/main.rs`.

```rust
use babar::codec::{int4, text};
use babar::query::Query;
use babar::{Config, Session};

#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
    // 1. Describe the connection.
    let cfg = Config::new("localhost", 5432, "postgres", "postgres")
        .password("postgres")
        .application_name("first-query");

    // 2. Open a Session. The Session owns one Postgres connection.
    let session: Session = Session::connect(cfg).await?; // type: Session

    // 3. Build a typed Query. () means "no parameters"; the codec
    //    tuple at the end describes each column in the result row.
    let q: Query<(), (i32, String)> = Query::raw( // type: Query<(), (i32, String)>
        "SELECT 1::int4 AS id, 'Ada'::text AS name",
        (),
        (int4, text),
    );

    // 4. Run it. `query` returns Vec<B> — one decoded tuple per row.
    let rows: Vec<(i32, String)> = session.query(&q, ()).await?; // type: Vec<(i32, String)>
    for (id, name) in &rows {
        println!("id={id} name={name}");
    }

    session.close().await?;
    Ok(())
}
```
Run it with a Postgres reachable on `localhost:5432`:

```sh
cargo run
# id=1 name=Ada
```

## Breaking this down
Config::new(host, port, user, database) is a constructor
that takes the four required fields by position. Optional fields are
chained on after: .password(...), .application_name(...),
.connect_timeout(...). There is no Config::from_env() and no DSN
parser — Config is a plain struct, and you set its fields. This is a
deliberate choice: the credentials your program uses should be visible
in code review, not hidden in a connection string.
Session::connect(cfg) returns a Session. A Session owns one
Postgres connection plus a background task that owns the socket. Every
method you call on Session is cancellation-safe: dropping the future
won’t leave the connection half-spoken-to.
Query<(), (i32, String)> is the heart of the typed surface. The
two type parameters are the input (parameters you bind) and the
output (the row shape after decoding). Here we pass () because the
SQL has no parameters, and (i32, String) because the codec tuple
(int4, text) decodes each row into (i32, String).
Query::raw(sql, encoder, decoder) is the most direct way to build
a Query. The sql! macro produces a different thing — a Fragment
that knows about named placeholders — and you’d build a Query from it
with Query::from_fragment(fragment, decoder). The chain is always:
fragment → query → run. You cannot pass a Fragment straight to
session.query — the phrase to remember is “sql! is the schema, Query is the call”.
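Conceptually, the fragment step is a name-to-position rewrite: each `$name` placeholder becomes the `$N` positional form Postgres actually sees, numbered in declaration order. Here is a minimal illustration of that idea in plain Rust — a sketch only, not babar’s actual `sql!` expansion, which happens at compile time:

```rust
/// Rewrite named placeholders ($id, $title) into Postgres positional
/// placeholders ($1, $2), numbered in the order the names are declared.
/// Naive sketch: assumes no placeholder name is a prefix of another.
fn rewrite_placeholders(sql: &str, names: &[&str]) -> String {
    let mut out = sql.to_string();
    for (i, name) in names.iter().enumerate() {
        // $name -> $N, where N is the 1-based declaration position.
        out = out.replace(&format!("${name}"), &format!("${}", i + 1));
    }
    out
}

fn main() {
    let rewritten = rewrite_placeholders(
        "INSERT INTO todo (id, title) VALUES ($id, $title)",
        &["id", "title"],
    );
    assert_eq!(rewritten, "INSERT INTO todo (id, title) VALUES ($1, $2)");
    println!("{rewritten}");
}
```

The real macro also pairs each name with its codec, which is how the fragment’s parameter type falls out of the declaration order.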
session.query(&q, args) is the run step. It returns
Vec<B> — fully decoded rows, where each B is whatever your decoder
tuple produces. babar does not expose an intermediate Row type and
there is no .get::<T, _>() accessor: by the time you have the Vec,
the bytes are already typed Rust values.
## What happened
You spoke the Postgres wire protocol, prepared a statement, bound zero
parameters, fetched one row, decoded int4 into i32 and text into
String, and closed the session.
## Next
Head into Chapter 1: Connecting to see what
else lives on Config, what the background driver task is doing, and
how to recover when the server is unreachable.
# 1. Connecting
In this chapter we’ll use Config, Session::connect, and the
background driver task that keeps every call you make
cancellation-safe.
## Setup

```rust
use babar::{Config, Session};

#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
    let cfg = Config::new("localhost", 5432, "postgres", "postgres")
        .password("postgres")
        .application_name("ch01-connecting")
        .connect_timeout(std::time::Duration::from_secs(5));

    let session: Session = Session::connect(cfg).await?; // type: Session
    println!(
        "server_version = {}",
        session.params().get("server_version").unwrap_or("?"),
    );

    session.close().await?;
    Ok(())
}
```
## `Config` is a struct, not a string
Config::new(host, port, user, database) takes the four required
fields by position. Optional fields are added by chained methods —
.password(...), .application_name(...), .connect_timeout(...),
TLS settings, and so on. Because Config is a plain struct you can
build it from any source you like (env vars, a config file, a
clap::Parser); babar deliberately doesn’t ship a DSN parser or a
Config::from_env(). Connection details should be visible and explicit in code.
## What `Session::connect` actually does
Session::connect(cfg) opens one TCP connection to Postgres,
negotiates TLS if you asked for it, runs the SCRAM-SHA-256 handshake,
exchanges startup parameters, and hands you back a Session. From
that moment on, the Session is a thin handle: the real socket
ownership lives in a background Tokio task that the Session spawns.
That background task is the reason every public call on Session is
cancellation-safe. If you tokio::select! away from a query midway
through, the protocol stays in a consistent state — the driver task
finishes reading the in-flight messages even if you don’t await the
result. The shape of the model is sketched in
What makes babar babar;
we dive into the details in
explanation/driver-task.md.
## Reading server parameters

```rust
#![allow(unused)]
fn main() {
    let v = session.params().get("server_version").unwrap_or("?");
    let tz = session.params().get("TimeZone").unwrap_or("?");
    println!("server_version={v}, TimeZone={tz}");
}
```
session.params() returns the ParameterStatus map Postgres sent
during startup. It’s read-only and updated by the server when it
issues a ParameterStatus message.
## Closing politely
session.close().await sends a Terminate and waits for the driver
task to drain. If you drop the Session without calling close, the
background task is still cancelled cleanly — but close lets you
observe a final Result if the server objected to anything.
## Recovering when the server is unreachable
Session::connect returns babar::Result<Session>. The error is the
same babar::Error enum reviewed in
Chapter 9; for connection failures you’ll
typically see Error::Io(_) (DNS, TCP, TLS) or Error::Server { code, .. } (auth rejected, database missing). Inspect the variant
directly — there’s no Error::kind() classifier.
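babar itself won’t retry a failed connect; if you want resilience at startup, wrap `Session::connect` in your own backoff loop. Below is a minimal synchronous sketch of the pattern — the doubling schedule, the attempt cap, and the `retry` helper are illustrative choices, not babar API; a real async service would use `tokio::time::sleep` and add jitter instead of blocking:

```rust
use std::time::Duration;

/// Retry a fallible operation with doubling delays between attempts.
/// Generic over the operation, so the same loop can wrap a connect call
/// or anything else that may fail transiently. Sketch only.
fn retry<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    attempts: u32,
    first_delay: Duration,
) -> Result<T, E> {
    let mut delay = first_delay;
    let mut last_err = None;
    for attempt in 0..attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => last_err = Some(e),
        }
        if attempt + 1 < attempts {
            std::thread::sleep(delay);
            delay *= 2; // e.g. 100ms, 200ms, 400ms, ...
        }
    }
    Err(last_err.expect("attempts must be > 0"))
}

fn main() {
    // Simulate a server that only accepts the third attempt.
    let mut calls = 0;
    let result = retry(
        || {
            calls += 1;
            if calls < 3 { Err("connection refused") } else { Ok("connected") }
        },
        5,
        Duration::from_millis(1),
    );
    assert_eq!(result, Ok("connected"));
    assert_eq!(calls, 3);
}
```

Whether to retry at all depends on the error: a refused TCP connect is worth retrying, while an auth rejection (`Error::Server { .. }`) usually isn’t.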
## Next
Chapter 2: Selecting walks through reading rows back into typed Rust values.
# 2. Selecting
In this chapter we’ll go from a connected Session to typed Rust
values: a SELECT, a decoder tuple, and a Vec<B> you can iterate.
## Setup

```rust
use babar::codec::{bool, int4, nullable, text};
use babar::query::Query;
use babar::{Config, Session};

#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
    let session: Session = Session::connect( // type: Session
        Config::new("localhost", 5432, "postgres", "postgres")
            .password("postgres")
            .application_name("ch02-selecting"),
    )
    .await?;

    // No parameters; one row of three columns.
    let q: Query<(), (i32, String, bool)> = Query::raw( // type: Query<(), (i32, String, bool)>
        "SELECT 1::int4 AS id, 'alice'::text AS name, true AS active",
        (),
        (int4, text, bool),
    );

    let rows: Vec<(i32, String, bool)> = session.query(&q, ()).await?; // type: Vec<(i32, String, bool)>
    for (id, name, active) in &rows {
        println!("{id}\t{name}\t{active}");
    }

    session.close().await?;
    Ok(())
}
```
## The shape of a query

Every `Query<A, B>` carries two type parameters:

- `A` — the parameter tuple you bind at call time. `()` if there are no `$N` placeholders.
- `B` — the row tuple you’ll get back, one per row.
The codec tuple at the end of Query::raw decides B. (int4, text, bool) decodes columns into (i32, String, bool). There is no
intermediate Row type and no .get::<T, _>() accessor: by the time
session.query(...).await? returns, the bytes are already typed
values.
## Nullable columns

Postgres columns are nullable by default. babar refuses to guess: if
the column might be `NULL`, wrap its codec in `nullable(...)` and let
the row tuple use `Option<T>`.

```rust
#![allow(unused)]
fn main() {
    use babar::codec::{int4, nullable, text};
    use babar::query::Query;

    let q: Query<(), (i32, Option<String>)> = Query::raw(
        "SELECT id, note FROM users ORDER BY id",
        (),
        (int4, nullable(text)),
    );
}
```
If you forget the nullable(...) wrapper and Postgres sends a NULL,
the codec returns a clear decode error rather than a panic or a silent
String::default(). For example, decoding the note column as plain
text against a row where note IS NULL:
```rust
#![allow(unused)]
fn main() {
    use babar::codec::{int4, text};
    use babar::query::Query;

    // Wrong: `text` (not `nullable(text)`) and `String` (not `Option<String>`).
    let q: Query<(), (i32, String)> = Query::raw(
        "SELECT id, note FROM users WHERE id = 1",
        (),
        (int4, text),
    );
    match session.query(&q, ()).await {
        Ok(rows) => println!("{rows:?}"),
        Err(e) => eprintln!("decode failed: {e}"),
    }
}
```
…prints something like:

```
decode failed: decode error at column 1 ("note"): unexpected NULL for non-nullable codec `text`;
wrap it in `nullable(text)` and decode into `Option<String>`
```
The fix is the one-line change shown above: swap text for nullable(text)
and String for Option<String> in the row tuple. babar would rather make
you spell it out than quietly hand you an empty string.
## Multiple rows

`session.query(&q, args)` always returns `Vec<B>` — one tuple per row,
in server order. For one-row reads it’s perfectly idiomatic to write:

```rust
#![allow(unused)]
fn main() {
    let row = session.query(&q, (id,)).await?.into_iter().next();
}
```
…and treat None as “no such row”. For large result sets, prefer
streaming — see Chapter 4.
## When a row doesn’t fit your tuple
If your decoder asks for (i32, String) but the SQL returns three
columns, decoding fails with a clear Error::ColumnAlignment { expected, actual, .. }
before any rows are decoded. Make the
column list explicit (SELECT id, name FROM ...) so the row shape and
the codec tuple stay in lockstep — SELECT * is allowed but a
liability for typed code.
## Next
Chapter 3: Parameterized commands
introduces Command<A>, the sql! macro, and the Encoder<A> /
Decoder<A> traits at a user level.
# 3. Parameterized commands
In this chapter we’ll bind parameters, write to the database, and meet
the Encoder<A> / Decoder<A> codec traits behind the scenes.
## Setup

```rust
use babar::codec::{bool, int4, text};
use babar::query::{Command, Query};
use babar::{sql, Config, Session};

#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
    let session: Session = Session::connect( // type: Session
        Config::new("localhost", 5432, "postgres", "postgres")
            .password("postgres")
            .application_name("ch03-params"),
    )
    .await?;

    // CREATE TABLE — no parameters, no rows back.
    let create: Command<()> = Command::raw( // type: Command<()>
        "CREATE TEMP TABLE todo (id int4 PRIMARY KEY, title text NOT NULL, done bool NOT NULL DEFAULT false)",
        (),
    );
    session.execute(&create, ()).await?;

    // INSERT — bind (i32, String).
    let insert: Command<(i32, String)> = Command::raw( // type: Command<(i32, String)>
        "INSERT INTO todo (id, title) VALUES ($1, $2)",
        (int4, text),
    );
    session.execute(&insert, (1, "buy milk".into())).await?;

    // UPDATE — bind one parameter; capture rows-affected.
    let mark_done: Command<(i32,)> = Command::raw(
        "UPDATE todo SET done = true WHERE id = $1",
        (int4,),
    );
    let affected: u64 = session.execute(&mark_done, (1,)).await?;
    println!("updated {affected} row(s)");

    // SELECT it back, this time with the sql! macro and named placeholders.
    let lookup: Query<(bool,), (i32, String, bool)> =
        Query::from_fragment(
            sql!(
                "SELECT id, title, done FROM todo WHERE done = $done ORDER BY id",
                done = bool,
            ),
            (int4, text, bool),
        );
    for (id, title, done) in session.query(&lookup, (true,)).await? {
        println!("{id}\t{title}\t{done}");
    }

    session.close().await?;
    Ok(())
}
```
## `Command<A>` vs `Query<A, B>`
A Command<A> describes a round-trip that doesn’t return rows —
DDL, INSERT, UPDATE, DELETE. session.execute(&cmd, args).await?
returns a u64 rows-affected count.
A Query<A, B> describes a round-trip that returns typed rows.
session.query(&q, args).await? returns Vec<B>.
Both take the same A type parameter for parameters: a tuple of
encoders for Command::raw / Query::raw, or a fragment that knows
its own parameter shape if you use the sql! macro.
## Two ways to spell the SQL
### `Command::raw` and `Query::raw`
The most direct form. You write Postgres positional placeholders
($1, $2, …) and pass an explicit codec tuple in matching order.
This is what the todo_cli example uses.
### The `sql!` macro
sql! lets you write named placeholders ($id, $title) and pair
each name with its codec inline. It produces a Fragment<A> whose
parameter type A is derived from the names you used. Then you wrap
the fragment in either Command::from_fragment(...) or
Query::from_fragment(fragment, decoder_tuple) to get the runnable
value:
```rust
#![allow(unused)]
fn main() {
    let f = sql!(
        "INSERT INTO todo (id, title) VALUES ($id, $title)",
        id = int4,
        title = text,
    );
    let insert: Command<(i32, String)> = Command::from_fragment(f);
}
```
A Fragment on its own is not runnable — you cannot call
session.execute(sql!(...)) or session.query(sql!(...)) directly.
The chain is always fragment → command/query → run.
## What the codec types are doing

When you write `(int4, text)` you’re constructing a tuple of
`Encoder<A>` / `Decoder<A>` values. Each one knows two things:

- the Postgres OID it speaks for (`int4` ↔ OID 23, `text` ↔ OID 25),
- how to encode/decode that OID’s binary representation to/from its Rust counterpart (`i32`, `String`, …).
The Encoder<A> trait turns a Rust A into the parameter byte
buffer; the Decoder<A> trait turns one column’s bytes back into a
Rust A. Both traits are generic over the value type, which is why
the row tuple in Query<(), (i32, String, bool)> is the codec
tuple’s value-type, not some opaque Row shape.
Codecs you’ll reach for first: `int4`, `int8`, `text`, `bool`,
`bytea`, `float4`, `float8`, `nullable(c)`. They all live in
`babar::codec`; the full catalog is listed in reference/codecs.md.
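To make the byte-level job concrete: Postgres’s binary format for `int4` is simply the value as four big-endian bytes, and a `NULL` arrives as a -1 column length at the protocol layer before any codec runs — which is what `nullable(...)` intercepts. Here is a standalone sketch of what an `int4`-shaped codec boils down to; illustrative only, since babar’s real `Encoder`/`Decoder` values also carry the OID and a structured error type:

```rust
/// Binary encode/decode for a Postgres `int4` (OID 23) cell:
/// four bytes in network (big-endian) byte order. Sketch only.
fn encode_int4(v: i32) -> [u8; 4] {
    v.to_be_bytes()
}

fn decode_int4(bytes: &[u8]) -> Result<i32, String> {
    let arr: [u8; 4] = bytes
        .try_into()
        .map_err(|_| format!("int4 expects 4 bytes, got {}", bytes.len()))?;
    Ok(i32::from_be_bytes(arr))
}

fn main() {
    let wire = encode_int4(1);
    assert_eq!(wire, [0, 0, 0, 1]); // network byte order
    assert_eq!(decode_int4(&wire), Ok(1));
    assert!(decode_int4(&[0, 0, 1]).is_err()); // wrong width -> decode error
}
```

The same shape — a fixed OID plus an encode/decode pair — repeats for every codec in the catalog; only the byte layout changes.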
## Next
Chapter 4: Prepared queries & streaming shows how to prepare a statement once, run it many times, and stream results in batches.
# 4. Prepared queries & streaming
In this chapter we’ll prepare a statement on the server, run it many
times without re-parsing, and stream a large result set in batches
instead of buffering it all into a Vec.
## Setup

```rust
use babar::codec::{int4, text};
use babar::query::{Command, Query};
use babar::{Config, Session};
use futures_util::StreamExt;

#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
    let session: Session = Session::connect( // type: Session
        Config::new("localhost", 5432, "postgres", "postgres")
            .password("postgres")
            .application_name("ch04-prepared"),
    )
    .await?;

    let create: Command<()> = Command::raw(
        "CREATE TEMP TABLE prepared_demo (id int4 PRIMARY KEY, title text NOT NULL)",
        (),
    );
    session.execute(&create, ()).await?;

    // Prepare once, execute five times.
    let insert: Command<(i32, String)> = Command::raw(
        "INSERT INTO prepared_demo (id, title) VALUES ($1, $2)",
        (int4, text),
    );
    let prepared = session.prepare_command(&insert).await?; // type: PreparedCommand<(i32, String)>
    for (id, title) in [(1, "alpha"), (2, "beta"), (3, "gamma"), (4, "delta"), (5, "epsilon")] {
        prepared.execute((id, title.into())).await?;
    }
    prepared.close().await?;

    // Stream the full table in batches of 2.
    let scan: Query<(), (i32, String)> = Query::raw(
        "SELECT id, title FROM prepared_demo ORDER BY id",
        (),
        (int4, text),
    );
    let mut rows = session.stream_with_batch_size(&scan, (), 2).await?;
    while let Some(row) = rows.next().await {
        let (id, title) = row?; // type: (i32, String)
        println!("streamed {id}: {title}");
    }

    session.close().await?;
    Ok(())
}
```
## `prepare_command` and `prepare_query`
When you call session.prepare_command(&cmd).await? (or
prepare_query for a Query<A, B>), babar sends Parse once and
gets back a server-side prepared statement that you can call as many
times as you want. Each call avoids the Parse round-trip — the
server already has the plan, the parameter OIDs, and the result
description cached.
The prepared handle exposes the same execute(args) / query(args)
methods you’d use on Session, just bound to that one statement. When
you’re done, call .close().await to release the server-side name —
or drop the handle and the next prepared statement under the same
name will replace it.
## Streaming with `stream_with_batch_size`
For result sets that don’t fit comfortably in memory, swap
session.query for session.stream_with_batch_size(&q, args, n). It
returns a RowStream<B> (an impl Stream<Item = babar::Result<B>>)
that pulls rows from the server n at a time using a Postgres portal.
A few things to note:

- Back-pressure. The driver task only fetches the next batch when the consumer pulls. If you stop polling the stream, the server stops sending rows; nothing buffers indefinitely on either side.
- Cancellation is safe. Dropping the stream or `tokio::select!`ing away closes the portal cleanly. The `Session` is ready for its next call as soon as the portal close completes.
- Each `Item` is `Result<B, Error>`. Decode errors surface per-row, so you can recover from a single bad row without losing the rest of the batch.
## When to prepare, when to stream

| Pattern | Use it for |
|---|---|
| `Command::raw` / `Query::raw` + `session.execute` / `session.query` | One-shot statements, ad hoc queries. |
| `prepare_command` / `prepare_query` + repeated `execute` / `query` | Hot paths called many times with different parameters. |
| `stream_with_batch_size` | Result sets larger than you want to materialize at once. |
## Next
Chapter 5: Transactions introduces
Session::transaction() and how to compose all of the above inside
BEGIN / COMMIT.
# 5. Transactions
In this chapter we’ll wrap a sequence of statements in BEGIN /
COMMIT, recover from a partial failure with a savepoint, and let
babar’s closure-based API decide when to commit and when to roll
back.
## Setup

```rust
use babar::codec::{int4, text};
use babar::query::{Command, Query};
use babar::{Config, Error, Savepoint, Session, Transaction};

#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
    let session: Session = Session::connect( // type: Session
        Config::new("localhost", 5432, "postgres", "postgres")
            .password("postgres")
            .application_name("ch05-tx"),
    )
    .await?;

    let create: Command<()> = Command::raw(
        "CREATE TEMP TABLE tx_demo (id int4 PRIMARY KEY, note text NOT NULL)",
        (),
    );
    session.execute(&create, ()).await?;

    session.transaction(|tx: Transaction<'_>| async move { // type: Transaction<'_>
        let insert: Command<(i32, String)> = Command::raw(
            "INSERT INTO tx_demo (id, note) VALUES ($1, $2)",
            (int4, text),
        );
        tx.execute(&insert, (1, "outer-before".into())).await?;

        // Savepoint that intentionally rolls back. Borrow the command
        // first so the `async move` block doesn't take ownership of it —
        // we still need `insert` after the savepoint.
        let insert_ref = &insert;
        let middle = tx.savepoint(|sp: Savepoint<'_>| async move {
            sp.execute(insert_ref, (2, "savepoint".into())).await?;
            Err::<(), _>(Error::Config("rolling back inner savepoint".into()))
        }).await;
        assert!(matches!(middle, Err(Error::Config(_))));

        tx.execute(&insert, (3, "outer-after".into())).await?;
        Ok(())
    }).await?;

    let select: Query<(), (i32, String)> = Query::raw(
        "SELECT id, note FROM tx_demo ORDER BY id",
        (),
        (int4, text),
    );
    for (id, note) in session.query(&select, ()).await? {
        println!("{id}: {note}"); // committed: 1, 3
    }

    session.close().await?;
    Ok(())
}
```
## `session.transaction` is closure-shaped

`Session::transaction(body)` takes an async closure that receives a
`Transaction<'_>`. babar opens the transaction with `BEGIN`, runs
your body, and:

- if the closure returns `Ok(_)` — commits.
- if the closure returns `Err(_)` — rolls back and surfaces your error.
- if the closure panics — rolls back and re-raises the panic.
You never write COMMIT or ROLLBACK yourself, and you can’t forget
to. The borrow checker won’t let you call methods on the underlying
Session while the Transaction is alive — there’s exactly one
in-flight request on the connection at a time. (This typestate
discipline is one of the four properties that make babar distinctive;
see
What makes babar babar.)
## Savepoints compose the same way
tx.savepoint(body) is the closure-shaped sibling for nested rollback
scopes. Same rules: Ok releases the savepoint, Err rolls back to
the savepoint and propagates the error. Savepoints can nest.
In the example above, the inner savepoint rolls back, but the outer transaction continues and commits rows 1 and 3. Row 2 is gone — as if the savepoint body had never run.
## Returning values from a transaction

The closure’s `Ok` value is the transaction’s return value:

```rust
#![allow(unused)]
fn main() {
    let next_id: i32 = session.transaction(|tx| async move {
        let q: Query<(), (i32,)> = Query::raw(
            "SELECT COALESCE(MAX(id), 0) + 1 FROM tx_demo",
            (),
            (int4,),
        );
        Ok(tx.query(&q, ()).await?[0].0)
    }).await?;
}
```
tx carries the same execute / query / prepare_* /
stream_with_batch_size methods you’ve used on Session, scoped to
the transaction. When the closure returns, babar commits and you get
your value.
## Errors and isolation
If a statement inside the body fails, the closure typically returns
Err, babar rolls back, and the transaction is gone. If you want to
observe an error and keep going, wrap that one statement in a
savepoint — the inner failure rolls the savepoint back without aborting
the outer transaction.
Isolation level isn’t set by babar; if you need SERIALIZABLE or a
read-only transaction, run SET TRANSACTION ... as the first
statement in the body.
## Next
Chapter 6: Pooling introduces Pool, which hands
you transaction-capable sessions from a pool of warm connections.
# 6. Pooling
In this chapter we’ll trade Session::connect for a Pool of warm
connections, discuss the knobs that matter, and see how prepared
statements live alongside pooled connections.
## Setup

```rust
use std::time::Duration;

use babar::codec::{int4, text};
use babar::query::{Command, Query};
use babar::{Config, HealthCheck, Pool, PoolConfig};

#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
    let connect = Config::new("localhost", 5432, "postgres", "postgres")
        .password("postgres")
        .application_name("ch06-pool");

    let pool: Pool = Pool::new( // type: Pool
        connect,
        PoolConfig::new()
            .min_idle(2)
            .max_size(8)
            .acquire_timeout(Duration::from_secs(2))
            .idle_timeout(Duration::from_secs(30))
            .max_lifetime(Duration::from_secs(300))
            .health_check(HealthCheck::Ping),
    )
    .await?;

    // Each acquire() hands you a connection scoped to the binding.
    let conn = pool.acquire().await?; // type: PoolConnection

    let create: Command<()> = Command::raw(
        "CREATE TEMP TABLE pool_demo (id int4 PRIMARY KEY, note text NOT NULL)",
        (),
    );
    let insert: Command<(i32, String)> = Command::raw(
        "INSERT INTO pool_demo (id, note) VALUES ($1, $2)",
        (int4, text),
    );
    let lookup: Query<(i32,), (String,)> = Query::raw(
        "SELECT note FROM pool_demo WHERE id = $1",
        (int4,),
        (text,),
    );

    conn.execute(&create, ()).await?;
    conn.execute(&insert, (1, "first checkout".into())).await?;

    let prepared = conn.prepare_query(&lookup).await?;
    println!("prepared on server as: {}", prepared.name());
    println!("{:?}", prepared.query((1,)).await?);
    drop(prepared);

    drop(conn); // returns the connection to the pool
    pool.close().await;
    Ok(())
}
```
## What a pool gives you
Pool::new(config, pool_config) opens up to max_size background
connections, keeping at least min_idle warm and ready. pool.acquire()
hands you a PoolConnection that behaves like a Session —
execute, query, prepare_command, prepare_query,
stream_with_batch_size, transaction, all of it.
Drop the PoolConnection and the pool reclaims it. Drop the
Pool itself and outstanding handles continue working until they’re
dropped, at which point the connections are closed.
## The knobs that matter

| Field | What it controls |
|---|---|
| `min_idle` | Minimum number of warm connections kept open. |
| `max_size` | Hard ceiling on simultaneous connections (idle + in-use). |
| `acquire_timeout` | How long `pool.acquire()` waits before returning `PoolError::Timeout`. |
| `idle_timeout` | How long an idle connection lingers before being closed. |
| `max_lifetime` | How long any connection (idle or in-use) lives before being recycled. |
| `health_check` | Test to apply when checking out: `HealthCheck::None`, `HealthCheck::Ping`, or `HealthCheck::ResetQuery(sql)` (runs an arbitrary SQL string on every checkout via the simple-query protocol). |
A typical web service starts with min_idle = 2, max_size = 16,
acquire_timeout = 2s, idle_timeout = 30s, max_lifetime = 30min,
health_check = HealthCheck::Ping. Tune by watching p99 acquire times
and Postgres’ own pg_stat_activity for connection churn.
## Pooled prepared statements

Each `PoolConnection` is a real, distinct Postgres connection.
Prepared statements live on the server, attached to that connection.
That has two consequences worth holding in your head:

- A prepared statement you make on `conn_a` is not visible from `conn_b`. Re-prepare on each connection (cheap — one round-trip), or use a shared statement cache if you build one on top.
- When the pool recycles a connection (via `max_lifetime` or a failed health check), all of that connection’s prepared statements go with it. The next `prepare_*` call on a fresh connection rebuilds them.
## Errors that come from the pool itself
pool.acquire() returns Result<PoolConnection, PoolError>.
PoolError::AcquireFailed(babar::Error) wraps the underlying connect
error; PoolError::Timeout is its own variant. Translate them
into your service’s error type at the boundary — the pool example
shows the pattern.
## Next
Chapter 7: Bulk loads with COPY adds the binary COPY FROM STDIN path for ingesting many rows at once.
# 7. Bulk loads with COPY

In this chapter we’ll ingest many rows in a single round-trip with
binary `COPY FROM STDIN`, then survey what the current COPY support
does not yet cover.
## Setup

```rust
use babar::query::Query;
use babar::{Config, CopyIn, Session};

#[derive(Debug, Clone, PartialEq, babar::Codec)]
struct VisitRow {
    id: i32,
    email: String,
    active: bool,
    note: Option<String>,
    visits: i64,
}

#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
    let session: Session = Session::connect( // type: Session
        Config::new("localhost", 5432, "postgres", "postgres")
            .password("postgres")
            .application_name("ch07-copy"),
    )
    .await?;

    session
        .simple_query_raw(
            "CREATE TEMP TABLE bulk_visits (\
                id int4 PRIMARY KEY,\
                email text NOT NULL,\
                active bool NOT NULL,\
                note text,\
                visits int8 NOT NULL\
            )",
        )
        .await?;

    let rows = vec![
        VisitRow { id: 1, email: "ada@example.com".into(), active: true, note: Some("first".into()), visits: 7 },
        VisitRow { id: 2, email: "bob@example.com".into(), active: false, note: None, visits: 3 },
        VisitRow { id: 3, email: "cara@example.com".into(), active: true, note: Some("news".into()), visits: 12 },
    ];

    let copy: CopyIn<VisitRow> = CopyIn::binary( // type: CopyIn<VisitRow>
        "COPY bulk_visits (id, email, active, note, visits) FROM STDIN BINARY",
        VisitRow::CODEC,
    );
    let affected: u64 = session.copy_in(&copy, rows.clone()).await?; // type: u64
    println!("copied {affected} rows");

    let select: Query<(), VisitRow> = Query::raw(
        "SELECT id, email, active, note, visits FROM bulk_visits ORDER BY id",
        (),
        VisitRow::CODEC,
    );
    for row in session.query(&select, ()).await? {
        println!("{row:?}");
    }

    session.close().await?;
    Ok(())
}
```
## What `CopyIn::binary` is doing
CopyIn::binary(sql, codec) describes a COPY ... FROM STDIN BINARY
statement plus a codec for one row. session.copy_in(©, rows)
sends Postgres’ binary COPY framing — a header, one length-prefixed
binary tuple per row, and a trailer — and returns the rows-affected
count once the server acknowledges.
The babar::Codec derive on VisitRow expands to an
Encoder<VisitRow> / Decoder<VisitRow> pair, with field order
matching the struct. That same VisitRow::CODEC is reusable for a
SELECT decoder, as the example shows. One row type, one codec, two
directions.
Why “binary” and “STDIN”?
- Binary beats text for throughput: no string parsing on the server, no escaping rules, exact round-trips for bytea, numeric, timestamps, and so on.
- STDIN is the direction where babar streams into Postgres. The driver task feeds rows as you produce them, so memory usage stays bounded — you can pass an iterator of millions of rows without buffering them all.
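The framing copy_in sends is simple enough to sketch by hand. This is an illustrative encoder for Postgres’ documented PGCOPY binary format — the 11-byte signature, flags, per-row field count, length-prefixed fields, and the -1 trailer — not babar’s internal code:

```rust
// Illustrative sketch of the PGCOPY binary framing (not babar internals).
// Header: 11-byte signature, 4-byte flags, 4-byte header-extension length.
// Row: i16 field count, then per field an i32 byte length (-1 for NULL) + bytes.
// Trailer: i16 -1.

fn copy_header(buf: &mut Vec<u8>) {
    buf.extend_from_slice(b"PGCOPY\n\xff\r\n\0"); // signature
    buf.extend_from_slice(&0i32.to_be_bytes());   // flags
    buf.extend_from_slice(&0i32.to_be_bytes());   // no header extension
}

fn copy_row(buf: &mut Vec<u8>, fields: &[Option<&[u8]>]) {
    buf.extend_from_slice(&(fields.len() as i16).to_be_bytes());
    for f in fields {
        match f {
            Some(bytes) => {
                buf.extend_from_slice(&(bytes.len() as i32).to_be_bytes());
                buf.extend_from_slice(bytes);
            }
            None => buf.extend_from_slice(&(-1i32).to_be_bytes()), // SQL NULL
        }
    }
}

fn copy_trailer(buf: &mut Vec<u8>) {
    buf.extend_from_slice(&(-1i16).to_be_bytes());
}

fn main() {
    let mut buf = Vec::new();
    copy_header(&mut buf);
    let id = 1i32.to_be_bytes(); // int4, binary = big-endian
    copy_row(&mut buf, &[Some(&id), Some(b"ada@example.com"), None]);
    copy_trailer(&mut buf);
    println!("{} bytes of COPY framing", buf.len());
}
```

One length-prefixed tuple per row is why streaming stays bounded: nothing in the framing requires knowing the total row count up front.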
What COPY support does not include yet
babar’s COPY support is deliberately narrow at the moment:
- COPY ... TO STDOUT (reading rows back via COPY) is not yet implemented — it’s on the roadmap; see explanation/roadmap.md.
- Text and CSV formats (FORMAT text, FORMAT csv) are deferred. Use BINARY for now.
- COPY FROM PROGRAM and COPY ... FROM <file> are server-side; they don’t go through the driver and aren’t part of babar’s surface.
Next
Chapter 8: Migrations introduces Migrator,
FileSystemMigrationSource, and the migrations table.
8. Migrations
In this chapter we’ll point a Migrator at a directory of paired
.up.sql / .down.sql files, ask it for a plan, apply pending
migrations, and roll back when we change our minds.
Setup
use std::path::PathBuf;
use babar::migration::FileSystemMigrationSource;
use babar::{Config, Migrator, MigratorOptions, Session};
#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
let session: Session = Session::connect( // type: Session
Config::new("localhost", 5432, "postgres", "postgres")
.password("postgres")
.application_name("ch08-migrate"),
)
.await?;
let migrator: Migrator<FileSystemMigrationSource> = // type: Migrator<FileSystemMigrationSource>
Migrator::with_options(
FileSystemMigrationSource::new(PathBuf::from("migrations")),
MigratorOptions::new(),
);
// What's applied? What's pending?
let applied = migrator.applied_migrations(&session).await?;
let status = migrator.status(&applied)?;
println!("{status:?}");
// What would `up` do?
let plan = migrator.plan_apply(&applied)?;
println!("plan: {plan:?}");
// Apply pending migrations.
let applied_plan = migrator.apply(&session).await?;
println!("applied: {applied_plan:?}");
// Roll back the most recent migration.
let rolled = migrator.rollback(&session, 1).await?;
println!("rolled back: {rolled:?}");
session.close().await?;
Ok(())
}
File layout
FileSystemMigrationSource expects pairs of files in one directory:
migrations/
├── 0001__create_users.up.sql
├── 0001__create_users.down.sql
├── 0002__add_email_index.up.sql
└── 0002__add_email_index.down.sql
The naming convention is <version>__<name>.{up,down}.sql. Versions
sort lexicographically — keep them zero-padded so 10 doesn’t sort
before 2. Each .up.sql must have a matching .down.sql; missing
or unpaired files surface as a clear Error at Migrator build
time, not at apply time.
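The naming rule is easy to check for yourself. A sketch of the version/name parsing and up/down pairing described above — the function names here are illustrative, not babar’s API:

```rust
// Illustrative parser for the <version>__<name>.{up,down}.sql convention.
// Not babar internals — a sketch of the pairing rule described above.
use std::collections::BTreeMap;

/// Split "0001__create_users.up.sql" into ("0001", "create_users", true).
fn parse_migration(file: &str) -> Option<(String, String, bool)> {
    let (stem, is_up) = if let Some(s) = file.strip_suffix(".up.sql") {
        (s, true)
    } else if let Some(s) = file.strip_suffix(".down.sql") {
        (s, false)
    } else {
        return None;
    };
    let (version, name) = stem.split_once("__")?;
    Some((version.to_string(), name.to_string(), is_up))
}

/// Versions missing either their .up.sql or .down.sql half.
/// BTreeMap keeps versions in the same lexicographic order the
/// migrator applies them in.
fn unpaired(files: &[&str]) -> Vec<String> {
    let mut seen: BTreeMap<String, (bool, bool)> = BTreeMap::new();
    for f in files {
        if let Some((version, _, is_up)) = parse_migration(f) {
            let e = seen.entry(version).or_insert((false, false));
            if is_up { e.0 = true } else { e.1 = true }
        }
    }
    seen.into_iter()
        .filter(|(_, (up, down))| !(*up && *down))
        .map(|(v, _)| v)
        .collect()
}

fn main() {
    let files = [
        "0001__create_users.up.sql",
        "0001__create_users.down.sql",
        "0002__add_email_index.up.sql", // missing its .down.sql
    ];
    println!("unpaired: {:?}", unpaired(&files));
}
```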
The migrations table
By default Migrator records applied migrations in
public.babar_migrations. The schema and table name are configurable
on MigratorOptions (.table(MigrationTable::new(schema, name)?)),
and there’s an advisory-lock id (.advisory_lock_id(...)) that
serializes concurrent migrators across processes — only one can hold
the lock and apply at a time, so a deploy that races itself won’t
double-apply.
Plan first, apply second
migrator.plan_apply(&applied)? returns a MigrationPlan describing
exactly what it would do — same value apply() would consume — without
touching the database. Use it for dry-runs in CI, for printing a
migration preview, or for human approval gates.
migrator.apply(&session).await? runs the same plan transactionally,
one migration per transaction by default. The transaction mode is
configurable per migration via MigrationTransactionMode for the rare
DDL that can’t run inside a transaction (CREATE INDEX CONCURRENTLY, for example).
Rolling back
migrator.rollback(&session, n).await? runs the .down.sql of the
most recent n applied migrations, in reverse. If you need to undo
just one, pass 1. If you need a planned dry-run first,
plan_rollback(&applied, n)? is its read-only sibling.
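The selection rule is worth pinning down: the last n applied versions, walked in reverse apply order. A tiny illustrative sketch of that ordering (not babar code):

```rust
// Sketch of the rollback-selection rule: the .down.sql of the most recent
// n applied migrations runs in reverse apply order. Illustrative only.

fn rollback_order(applied: &[&str], n: usize) -> Vec<String> {
    applied
        .iter()
        .rev()           // most recent first
        .take(n)         // only the last n
        .map(|v| v.to_string())
        .collect()
}

fn main() {
    let applied = ["0001", "0002", "0003"];
    println!("{:?}", rollback_order(&applied, 2)); // ["0003", "0002"]
}
```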
The example CLI is just an example
crates/core/examples/migration_cli.rs is a thin, helpful wrapper
around the Migrator API — babar-migrate status, plan, up, down --steps N. It’s an example, not a shipped binary. You can copy it
into your project verbatim, adapt it, or ignore it entirely and call
the Migrator API from your own deploy script.
Next
Chapter 9: Error handling walks through the
babar::Error enum and how to classify failures from apply and
everything else by inspecting the variant directly.
9. Error handling
This chapter covers the babar::Error enum, classifying failures
by inspecting the variant directly, and pulling out the SQLSTATE codes
your retry logic actually wants.
Setup
use babar::codec::{int4, text};
use babar::query::Command;
use babar::{Config, Error, Session};
#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
let session: Session = Session::connect( // type: Session
Config::new("localhost", 5432, "postgres", "postgres")
.password("postgres")
.application_name("ch09-errors"),
)
.await?;
let create: Command<()> = Command::raw(
"CREATE TEMP TABLE err_demo (id int4 PRIMARY KEY, name text NOT NULL UNIQUE)",
(),
);
session.execute(&create, ()).await?;
let insert: Command<(i32, String)> = Command::raw(
"INSERT INTO err_demo (id, name) VALUES ($1, $2)",
(int4, text),
);
session.execute(&insert, (1, "ada".into())).await?;
// Second insert violates the UNIQUE constraint — classify it.
match session.execute(&insert, (2, "ada".into())).await {
Ok(_) => unreachable!(),
Err(err) => match classify(&err) { // type: Failure
Failure::Duplicate => println!("duplicate name; skipping"),
Failure::ServerOther { code } => println!("server error {code}"),
Failure::IoOrClosed => println!("connection died; retry later"),
Failure::Bug => println!("our bug, not the server's: {err}"),
},
}
session.close().await?;
Ok(())
}
#[derive(Debug)]
enum Failure {
Duplicate,
ServerOther { code: String },
IoOrClosed,
Bug,
}
fn classify(err: &Error) -> Failure {
match err {
Error::Server { code, .. } if code == "23505" => Failure::Duplicate,
Error::Server { code, .. } => Failure::ServerOther { code: code.clone() },
Error::Io(_) | Error::Closed { .. } => Failure::IoOrClosed,
_ => Failure::Bug,
}
}
The babar::Error enum, in one breath
There is no Error::kind() accessor. Classification is by match on
the variant:
| Variant | When you see it |
|---|---|
| Error::Io(io::Error) | Socket-level failure — DNS, TCP reset, TLS handshake. |
| Error::Closed { sql, origin } | Server hung up or the driver task shut down with an in-flight request. |
| Error::Protocol(String) | The server (or driver) sent a wire-protocol message that doesn’t fit the state machine. Always a bug somewhere. |
| Error::Auth(String) | SCRAM rejected, password wrong, role can’t log in. |
| Error::UnsupportedAuth(String) | Server asked for an auth method babar doesn’t speak (e.g. gss, sspi). |
| Error::Server { code, severity, message, detail, hint, position, sql, origin } | ErrorResponse from Postgres. code is SQLSTATE — match on it. |
| Error::Config(String) | Configuration problem caught before any I/O. |
| Error::Codec(String) | An encoder or decoder rejected a value. |
| Error::ColumnAlignment { expected, actual, sql, origin } | Decoder column count ≠ server’s RowDescription. |
| Error::SchemaMismatch { position, expected_oid, actual_oid, column_name, sql, origin } | Decoder OID ≠ server’s column type. |
| Error::Migration(MigrationError) | The migrator’s planning or apply step failed. |
That’s eleven variants, and they cover everything. Build a small
classify function once per service and call it everywhere.
Why SQLSTATE matters more than the message
Error::Server.message is for humans. Error::Server.code (a
five-character SQLSTATE) is for code. A few you may see often:
| SQLSTATE | Class | Meaning |
|---|---|---|
| 23505 | unique_violation | Duplicate key. |
| 23503 | foreign_key_violation | Missing FK target. |
| 23502 | not_null_violation | NULL into a NOT NULL column. |
| 40001 | serialization_failure | Serializable transaction must retry. |
| 40P01 | deadlock_detected | Deadlock; retry the whole transaction. |
| 57014 | query_canceled | Statement timeout fired. |
| 57P01 | admin_shutdown | Server is going away. |
The full list is in reference/errors.md. For
a retry budget on serialization failures, match on 40001 and run
the transaction body again with backoff.
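A retry budget for those transient codes can be sketched independently of the driver. Here the transaction body is a closure returning a SQLSTATE string on failure; in real code it would run a babar transaction and you’d match on Error::Server { code, .. }:

```rust
// Sketch of a retry budget for serialization failures (40001) and
// deadlocks (40P01). The String error stands in for a SQLSTATE code.
use std::thread::sleep;
use std::time::Duration;

fn is_retryable(sqlstate: &str) -> bool {
    matches!(sqlstate, "40001" | "40P01") // serialization_failure, deadlock_detected
}

fn with_retries<T>(
    budget: u32,
    mut body: impl FnMut() -> Result<T, String>,
) -> Result<T, String> {
    let mut attempt = 0;
    loop {
        match body() {
            Err(code) if is_retryable(&code) && attempt < budget => {
                attempt += 1;
                // Exponential backoff: 10ms, 20ms, 40ms, ...
                sleep(Duration::from_millis(10 << (attempt - 1)));
            }
            other => return other,
        }
    }
}

fn main() {
    // Fails twice with 40001, then succeeds.
    let mut failures = 2;
    let result = with_retries(3, || {
        if failures > 0 {
            failures -= 1;
            Err("40001".to_string())
        } else {
            Ok(42)
        }
    });
    println!("{result:?}"); // Ok(42)
}
```

Non-retryable codes fall straight through to the caller, so a unique_violation never burns the budget.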
origin and sql for diagnostics
Several variants carry sql: Option<String> and origin: Option<Origin>. The sql! macro captures its callsite as an
Origin, so when an error fires from inside a fragment-built query,
the Display impl can point you back to the macro invocation —
file, line, column. Surface those in your logs and you’ll spend a lot
less time bisecting which INSERT blew up.
Translating to your service’s error type
At the boundary of your application, fold babar::Error into your
domain error. The pattern from the Axum example is a good starting
shape:
#![allow(unused)]
fn main() {
fn db_error(err: babar::Error) -> (StatusCode, String) {
match err {
babar::Error::Server { code, .. } if code == "23505" => {
(StatusCode::CONFLICT, "already exists".into())
}
babar::Error::Auth(_) | babar::Error::UnsupportedAuth(_) => {
(StatusCode::UNAUTHORIZED, "auth failed".into())
}
other => (StatusCode::INTERNAL_SERVER_ERROR, other.to_string()),
}
}
}
Next
Chapter 10: Custom codecs shows how to write
your own Encoder<A> / Decoder<A> for types babar doesn’t know
about out of the box.
10. Custom codecs
In this chapter we’ll go from “I want to read widgets.id as a
uuid::Uuid” to a working Encoder<Uuid> / Decoder<Uuid> pair, and
see when to reach for #[derive(babar::Codec)] instead of writing
the traits by hand.
Setup
#![allow(unused)]
fn main() {
use babar::codec::{Decoder, Encoder};
use babar::types::Type;
use bytes::Bytes;
use uuid::Uuid;
const UUID_OID: u32 = 2950;
struct UuidCodec;
impl Encoder<Uuid> for UuidCodec { // type: impl Encoder<Uuid>
fn encode(&self, value: &Uuid, params: &mut Vec<Option<Vec<u8>>>) -> babar::Result<()> {
params.push(Some(value.as_bytes().to_vec()));
Ok(())
}
fn oids(&self) -> &'static [u32] { &[UUID_OID] }
fn format_codes(&self) -> &'static [i16] { &[1] } // binary
}
impl Decoder<Uuid> for UuidCodec { // type: impl Decoder<Uuid>
fn decode(&self, columns: &[Option<Bytes>]) -> babar::Result<Uuid> {
let bytes = columns[0]
.as_ref()
.ok_or_else(|| babar::Error::Codec("uuid: NULL".into()))?;
let arr: [u8; 16] = bytes.as_ref().try_into()
.map_err(|_| babar::Error::Codec("uuid: wrong length".into()))?;
Ok(Uuid::from_bytes(arr))
}
fn n_columns(&self) -> usize { 1 }
fn oids(&self) -> &'static [u32] { &[UUID_OID] }
fn format_codes(&self) -> &'static [i16] { &[1] }
}
const UUID: UuidCodec = UuidCodec;
}
What you have to implement
Both traits are generic over a Rust value type A. Encoder<A> turns
an &A into one or more parameter byte buffers; Decoder<A> turns
N column buffers back into an A.
The Encoder<A> methods (format_codes and types have sensible
defaults — implement them only when you need to override):
- encode(&self, value, params) — push exactly oids().len() entries onto params: Some(bytes) for a value, None for SQL NULL.
- oids() — the Postgres OIDs of the parameter slots, in order.
- format_codes() — 0 for text format, 1 for binary; defaults to text. Use binary for everything you can.
- types() — richer type metadata; the default implementation derives this from oids().
The Decoder<A> methods (format_codes and types again have
defaults you can usually skip):
- decode(&self, columns) — consume the first n_columns() entries of columns and produce an A.
- n_columns() — how many columns this decoder consumes.
- oids() — column OIDs, in order; oids().len() == n_columns().
- format_codes() — same convention as the encoder.
The driver checks the top-level decoder’s n_columns() against the
server’s RowDescription for you; that’s how you get
Error::ColumnAlignment instead of a panic when shapes don’t line
up.
Use it just like a built-in codec
#![allow(unused)]
fn main() {
use babar::query::Query;
let q: Query<(Uuid,), (Uuid, String)> = Query::raw(
"SELECT id, name FROM widgets WHERE id = $1",
(UUID,),
(UUID, babar::codec::text),
);
}
Codec values compose: the tuple (UUID, text) is itself a
Decoder<(Uuid, String)>, because Decoder<A> is implemented for
tuples whose elements implement Decoder<_>.
When to derive instead
If you have a Postgres composite type or a row-shaped struct, skip
the trait impls entirely and use #[derive(babar::Codec)]:
#![allow(unused)]
fn main() {
#[derive(Debug, Clone, PartialEq, babar::Codec)]
struct UserRow {
id: i32,
name: String,
note: Option<String>,
#[pg(codec = "varchar")]
handle: String,
}
}
The derive expands to an Encoder<UserRow> / Decoder<UserRow> pair
whose column order matches the struct. #[pg(codec = "...")] lets
you override the codec per field — useful when the column type is
varchar instead of text, for example. The generated codec is
exposed as UserRow::CODEC and works in Command::raw,
Query::raw, and CopyIn::binary exactly like any other.
The full example lives in crates/core/examples/derive_codec.rs.
Tips you’ll want before your first round-trip fails
- Match the OID exactly. If your oids() says int4 (23) but the column is int8 (20), the driver returns Error::SchemaMismatch with both OIDs. Look them up with SELECT oid, typname FROM pg_type WHERE typname = 'uuid'.
- Binary first, text only as a last resort. The binary representation is exact; the text representation involves Postgres’ IN/OUT functions and locale settings.
- Handle NULL explicitly. A NULL column arrives as None in columns. If your type can’t be NULL, decode it directly. If it can, expose a nullable(...) wrapper or use Option<A> from your caller.
- encode errors are user errors, not panics. Return Err(Error::Codec(...)) for unrepresentable values rather than panicking — the driver propagates it cleanly.
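“Binary is exact” is concrete for the integer types: Postgres’ binary wire format for int4 is just big-endian two’s complement, so encode/decode is a to_be_bytes round-trip. A sketch in the shape of a codec’s encode/decode pair (illustrative, not babar’s implementation):

```rust
// Postgres' binary wire format for int4 is big-endian two's complement,
// so a codec's encode/decode is an exact byte round-trip. Sketch only.

fn encode_int4(v: i32) -> Vec<u8> {
    v.to_be_bytes().to_vec()
}

fn decode_int4(bytes: &[u8]) -> Result<i32, String> {
    let arr: [u8; 4] = bytes
        .try_into()
        .map_err(|_| format!("int4: expected 4 bytes, got {}", bytes.len()))?;
    Ok(i32::from_be_bytes(arr))
}

fn main() {
    for v in [0, 1, -1, i32::MIN, i32::MAX] {
        assert_eq!(decode_int4(&encode_int4(v)), Ok(v)); // exact round-trip
    }
    println!("int4 binary round-trips exactly");
}
```

Contrast the text format, where the same column would arrive as ASCII digits and go through parsing — extra work and, for types like float8 or numeric, extra room for loss.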
Next
Chapter 11: Building a web service wires a
pool, custom codecs, and tracing together inside an Axum service.
11. Building a web service
In this chapter you’ll wire babar into an Axum HTTP service: a connection pool in your shared state, JSON in / JSON out handlers, and clean error mapping at the boundary.
Setup
use std::net::SocketAddr;
use axum::extract::{Path, State};
use axum::http::StatusCode;
use axum::routing::{get, post};
use axum::{Json, Router};
use babar::codec::{int4, text};
use babar::query::{Command, Query};
use babar::{Config, Pool, PoolConfig};
use serde::{Deserialize, Serialize};
#[derive(Clone)]
struct AppState {
pool: Pool, // type: Pool
}
#[derive(Debug, Serialize)]
struct Widget { id: i32, name: String }
#[derive(Debug, Deserialize)]
struct CreateWidget { id: i32, name: String }
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
tracing_subscriber::fmt()
.with_env_filter(std::env::var("RUST_LOG").unwrap_or_else(|_| "babar=info".into()))
.try_init()
.ok();
let cfg = Config::new("127.0.0.1", 5432, "postgres", "postgres")
.password("postgres")
.application_name("babar-axum-service");
let pool = Pool::new(cfg, PoolConfig::new().max_size(8)).await?;
initialize(&pool).await?;
let app = Router::new()
.route("/healthz", get(|| async { "ok" }))
.route("/widgets", post(create_widget))
.route("/widgets/:id", get(get_widget))
.with_state(AppState { pool });
let addr: SocketAddr = "127.0.0.1:3000".parse()?;
println!("listening on http://{addr}");
axum::serve(tokio::net::TcpListener::bind(addr).await?, app).await?;
Ok(())
}
The handler shape
#![allow(unused)]
fn main() {
async fn create_widget(
State(state): State<AppState>,
Json(payload): Json<CreateWidget>,
) -> Result<(StatusCode, Json<Widget>), (StatusCode, String)> {
let conn = state.pool.acquire().await.map_err(pool_http)?;
let insert: Command<(i32, String)> = Command::raw(
"INSERT INTO widgets (id, name) VALUES ($1, $2)",
(int4, text),
);
conn.execute(&insert, (payload.id, payload.name.clone())).await.map_err(db_http)?;
Ok((StatusCode::CREATED, Json(Widget { id: payload.id, name: payload.name })))
}
async fn get_widget(
State(state): State<AppState>,
Path(id): Path<i32>,
) -> Result<Json<Widget>, (StatusCode, String)> {
let conn = state.pool.acquire().await.map_err(pool_http)?;
let select: Query<(i32,), (i32, String)> = Query::raw(
"SELECT id, name FROM widgets WHERE id = $1",
(int4,),
(int4, text),
);
let rows = conn.query(&select, (id,)).await.map_err(db_http)?;
rows.into_iter().next()
.map(|(id, name)| Json(Widget { id, name }))
.ok_or((StatusCode::NOT_FOUND, format!("widget {id} not found")))
}
}
Each handler:
- Pulls a connection from the pool with pool.acquire(). The handle is dropped at the end of the function and returns to the pool automatically.
- Builds a typed Command or Query and runs it.
- Maps babar::Error and babar::PoolError to (StatusCode, String) at the boundary.
Drop the connection between handlers — Axum will get a fresh one for
the next request. Don’t pass a PoolConnection through your service’s
own types; pass the Pool and acquire when you need to. That’s how
you keep request handlers cheap to spin up.
Errors at the boundary
#![allow(unused)]
fn main() {
fn pool_http(err: babar::PoolError) -> (StatusCode, String) {
(StatusCode::SERVICE_UNAVAILABLE, err.to_string())
}
fn db_http(err: babar::Error) -> (StatusCode, String) {
match err {
babar::Error::Server { code, .. } if code == "23505" => {
(StatusCode::CONFLICT, "already exists".into())
}
babar::Error::Server { code, .. } if code == "23503" => {
(StatusCode::UNPROCESSABLE_ENTITY, "foreign key violation".into())
}
other => (StatusCode::INTERNAL_SERVER_ERROR, other.to_string()),
}
}
}
Use the SQLSTATE table from
Chapter 9 to expand this map. Resist the
temptation to expose Error’s Display directly — it’s great for
logs, but it leaks internals to clients.
Where the spans come from
Once tracing_subscriber is initialized (any subscriber will do —
fmt, tracing-opentelemetry, etc.), every Session::connect,
Session::execute, Session::query, prepared statement, and
transaction call records a span:
| Span name | Fields |
|---|---|
| db.connect | db.system, db.user, db.name, net.peer.name, net.peer.port |
| db.prepare | db.system, db.statement, db.operation |
| db.execute | db.system, db.statement, db.operation |
| db.transaction | db.system, db.operation |
Field names follow OpenTelemetry semantic conventions, so any exporter that understands OTel naming gets useful signal for free. There’s no babar-specific subscriber to register; configure the subscriber you’d configure anyway.
What this gets you
The full axum_service example in
crates/core/examples/axum_service.rs is a few dozen lines longer
(env var parsing, two more routes), but it’s the same shape. Once you
have a Pool plus a couple of helper functions for error mapping,
adding a new endpoint is just another typed Query and another
acquire().
Next
Chapter 12: TLS & security covers TlsMode, root
certificates, and the SCRAM-SHA-256 channel-binding handshake.
12. TLS & security
In this chapter we’ll turn TLS on, point at a custom root certificate, pick a backend, and understand what SCRAM-SHA-256 channel binding buys us.
Setup
use std::path::PathBuf;
use babar::config::{TlsBackend, TlsMode};
use babar::{Config, Session};
#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
let cfg = Config::new("db.example.com", 5432, "postgres", "postgres")
.password("postgres")
.application_name("ch12-tls")
.tls_mode(TlsMode::Require) // type: Config (chained)
.tls_backend(TlsBackend::Rustls)
.tls_server_name("db.example.com")
.tls_root_cert_path(PathBuf::from("/etc/ssl/certs/internal-ca.pem"));
let session: Session = Session::connect(cfg).await?; // type: Session
println!(
"negotiated TLS — server_version = {}",
session.params().get("server_version").unwrap_or("?"),
);
session.close().await?;
Ok(())
}
Three modes, pick one
TlsMode controls babar’s handshake posture:
| TlsMode | What babar does |
|---|---|
| Disable | Never attempt TLS. Plain TCP. |
| Prefer | Ask for TLS; if the server refuses, fall back to plain TCP. |
| Require | Demand TLS. A server that refuses is a connection failure. |
For anything outside localhost, use TlsMode::Require. Prefer
is convenient for development against a server you don’t control;
it’s also the mode an attacker would love your production deploy to
use.
Two backends, pick one
TlsBackend::Rustls is the pure-Rust default; the cargo feature is
rustls (and it’s in the default feature set). TlsBackend::NativeTls (cargo feature native-tls)
uses the platform’s TLS stack (Schannel on Windows, Secure Transport
on macOS, OpenSSL on Linux). Pick Rustls unless you have a specific
reason — system roots, FIPS mode, smartcard support — to reach for the
platform native-tls stack. See
reference/feature-flags.md for the
exact flag names.
Custom roots
tls_root_cert_path(path) reads a PEM bundle from disk and adds
those certificates to the trusted root set for this connection. This
is the right knob for self-signed dev CAs, internal CAs, and
“corporate-root-of-trust”-style deployments. Without it, babar uses
the backend’s default root store (system roots for NativeTls,
webpki-roots for Rustls).
tls_server_name(name) overrides the SNI hostname babar sends in
the handshake. Useful when you connect by IP but the certificate has a
DNS name; useful when you tunnel through ssh -L. Leave it unset
when the connection host already matches the certificate.
SCRAM-SHA-256 and channel binding
babar speaks Postgres’ modern auth handshake, SCRAM-SHA-256, with optional channel binding when TLS is in play. The short version:
- Your password never crosses the wire — the client and server prove knowledge of the salted hash via challenge/response.
- With channel binding (SCRAM-SHA-256-PLUS), the proof is bound to the TLS channel, so a man-in-the-middle who terminates TLS can’t reuse the proof against the real server. Postgres advertises SCRAM-SHA-256-PLUS over TLS connections; babar uses it automatically when both sides offer it.
babar also supports MD5 and cleartext-password auth for legacy
servers, but if the server selects something babar doesn’t speak —
gss, sspi, or any auth code babar hasn’t implemented — you get
Error::UnsupportedAuth(_). The fix is almost always to update the
server’s pg_hba.conf to use scram-sha-256 rather than weakening
the client.
A “what could go wrong?” checklist
- Error::Io(_) during connect with TLS on — usually a bad root cert, a hostname mismatch, or the server isn’t actually serving TLS on that port.
- Error::UnsupportedAuth(_) — the server’s pg_hba.conf selected an auth method babar doesn’t speak. Switch the role to scram-sha-256.
- Error::Auth(_) — wrong password, role can’t log in, or password expired.
- Error::Server { code: "28P01", .. } — invalid password, sent by the server as an ErrorResponse instead of an Auth failure.
Next
Chapter 13: Observability zooms out from TLS to the spans, fields, and logs that make a production-running babar service legible.
13. Observability
In this chapter we’ll see what babar emits via tracing out of the
box, attach a subscriber, and pick the fields you want flowing into
your aggregator.
Setup
use babar::codec::{int4, text};
use babar::query::Query;
use babar::{Config, Session};
#[tokio::main(flavor = "current_thread")]
async fn main() -> babar::Result<()> {
tracing_subscriber::fmt() // type: Subscriber
.with_env_filter(
std::env::var("RUST_LOG").unwrap_or_else(|_| "babar=info".into()),
)
.with_target(false)
.try_init()
.ok();
let session: Session = Session::connect( // type: Session
Config::new("localhost", 5432, "postgres", "postgres")
.password("postgres")
.application_name("ch13-observability"),
)
.await?;
let q: Query<(), (i32, String)> = Query::raw(
"SELECT 1::int4, 'hello'::text",
(),
(int4, text),
);
let _ = session.query(&q, ()).await?;
session.close().await?;
Ok(())
}
What babar emits
There is no babar-specific subscriber to register. Initialize any
tracing subscriber and you’ll start seeing spans:
| Span | Where it fires | Useful fields |
|---|---|---|
| db.connect | Session::connect | db.system, db.user, db.name, net.peer.name, net.peer.port |
| db.prepare | prepare_command / prepare_query | db.statement, db.operation |
| db.execute | session.execute, command.execute | db.statement, db.operation |
| db.transaction | session.transaction, tx.savepoint | db.operation |
Field names follow OpenTelemetry’s database semantic conventions, so
exporters (Jaeger, Tempo, Datadog APM, Honeycomb, …) understand them
without translation. db.operation is the first SQL keyword
(SELECT, INSERT, BEGIN, SAVEPOINT, …) — coarse but cheap to
group by.
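“First SQL keyword” is cheap to compute, which is why it’s a reasonable grouping key. A sketch of the extraction (illustrative — not babar’s actual implementation):

```rust
// Illustrative sketch: derive a coarse db.operation value from a SQL string
// by taking its first whitespace-delimited keyword, uppercased.

fn db_operation(sql: &str) -> String {
    sql.split_whitespace()
        .next()
        .unwrap_or("")
        .to_ascii_uppercase()
}

fn main() {
    println!("{}", db_operation("select id, name from widgets")); // SELECT
    println!("{}", db_operation("  BEGIN")); // BEGIN
}
```

Coarse on purpose: it never inspects table names or parameters, so the field stays low-cardinality and safe to group by.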
Picking a subscriber
| Subscriber | When to reach for it |
|---|---|
| tracing_subscriber::fmt | Local development, structured logs to stdout. |
| tracing-bunyan-formatter | JSON logs your aggregator already understands. |
| tracing-opentelemetry + an OTLP exporter | Distributed tracing alongside the rest of your services. |
The Axum example uses tracing_subscriber::fmt with an env filter:
#![allow(unused)]
fn main() {
tracing_subscriber::fmt()
.with_env_filter(std::env::var("RUST_LOG").unwrap_or_else(|_| "babar=info".into()))
.try_init()
.ok();
}
That’s enough to see span enter/exit lines for every connect, query, and transaction — handy when something is stalling and you want to know whether it’s the pool, the prepare, or the server.
Setting application_name
Config::new(...).application_name("billing-svc") is the cheapest
piece of observability babar offers. Postgres records it in
pg_stat_activity.application_name, so your DBA can see which
service is holding a long-running query open. Use a stable
service-level name; don’t include a hostname or PID — the pool will
multiplex many connections from one process.
What about metrics?
babar doesn’t ship metrics directly — there’s no built-in
pool_acquire_latency_seconds histogram, for example. You assemble
those at the boundary:
- Pool acquire latency: time pool.acquire().await yourself and feed it into metrics::histogram! (or whichever crate you use).
- Query latency: derive it from the db.execute span duration via tracing-opentelemetry, or wrap your handlers in your service’s metrics layer.
- Server-side stats (pg_stat_statements, pg_stat_activity): query them yourself with a periodic Query and push to your aggregator. babar gives you the round-trip; the policy is yours.
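The “time it yourself” pattern is a few lines. A synchronous sketch — record_histogram is a stand-in for whatever metrics crate you use, and in real code the closure would be the awaited pool.acquire():

```rust
// Sketch: timing an operation yourself and feeding the duration to a
// metrics sink. `record_histogram` is a stand-in for e.g. metrics::histogram!.
use std::time::Instant;

fn record_histogram(name: &str, seconds: f64) {
    // Stand-in sink: print instead of exporting.
    println!("{name}: {seconds:.6}s");
}

fn time_it<T>(name: &str, f: impl FnOnce() -> T) -> T {
    let start = Instant::now();
    let out = f();
    record_histogram(name, start.elapsed().as_secs_f64());
    out
}

fn main() {
    let rows = time_it("pool_acquire_latency_seconds", || {
        // In real code: pool.acquire().await
        vec![1, 2, 3]
    });
    println!("{rows:?}");
}
```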
What you can answer once this is wired up
- “Which endpoint’s db.execute p99 spiked at 14:32?” — span histograms from your tracing backend.
- “Was that an in-flight query or a connect-time stall?” — the db.connect vs db.prepare vs db.execute span breakdown.
- “Which service held that connection open?” — the application_name you set, surfaced by pg_stat_activity.
You’re done
That’s the Book. From Connecting to here, you have the entire user-facing surface of babar — and a sense for how to operate it in production.
For the precise types and methods, head to the Reference. For the why — design choices, the background driver task, comparisons with other Rust Postgres drivers — head to the Explanation section.
Codec catalog
Generated rustdoc: https://docs.rs/babar/latest/babar/codec/index.html
See also: Book Chapter 10 — Custom codecs.
Every codec babar ships, grouped by module. OIDs are the Postgres
type OIDs the codec advertises in Bind / RowDescription. All
codecs use the binary wire format unless noted.
babar::codec (always on)
| Postgres type | OID | Rust type | Codec value | Module |
|---|---|---|---|---|
| int2 / smallint | 21 | i16 | int2 | primitive |
| int4 / integer | 23 | i32 | int4 | primitive |
| int8 / bigint | 20 | i64 | int8 | primitive |
| float4 / real | 700 | f32 | float4 | primitive |
| float8 / double precision | 701 | f64 | float8 | primitive |
| bool | 16 | bool | bool | primitive |
| text | 25 | String | text | primitive |
| varchar | 1043 | String | varchar | primitive |
| bpchar / char(n) | 1042 | String | bpchar | primitive |
| bytea | 17 | Vec<u8> | bytea | primitive |
| any (NULL-aware wrapper) | n/a | Option<T> | nullable(C) | nullable |
| T[] | array OID | Vec<T> | array(C) | array (feature array) |
Codec constants are lowercase to match Postgres type names — int4,
text, bool shadow the Rust primitives inside babar::codec.
That’s deliberate; import the constants explicitly
(use babar::codec::{int4, text};) and the primitive names stay
visible everywhere else.
Optional types — feature-gated
| Postgres type | OID | Rust type | Codec value | Module | Feature |
|---|---|---|---|---|---|
| uuid | 2950 | uuid::Uuid | uuid | uuid | uuid |
| date | 1082 | time::Date | date | time | time |
| time | 1083 | time::Time | time | time | time |
| timestamp | 1114 | time::PrimitiveDateTime | timestamp | time | time |
| timestamptz | 1184 | time::OffsetDateTime | timestamptz | time | time |
| date | 1082 | chrono::NaiveDate | chrono_date | chrono | chrono |
| time | 1083 | chrono::NaiveTime | chrono_time | chrono | chrono |
| timestamp | 1114 | chrono::NaiveDateTime | chrono_timestamp | chrono | chrono |
| timestamptz | 1184 | chrono::DateTime<Utc> | chrono_timestamptz | chrono | chrono |
| interval | 1186 | babar::codec::Interval | interval | interval | interval |
| numeric | 1700 | rust_decimal::Decimal | numeric | numeric | numeric |
| json | 114 | serde_json::Value / T: Deserialize | json / typed_json::<T>() | json | json |
| jsonb | 3802 | serde_json::Value / T: Deserialize | jsonb / typed_json::<T>() | json | json |
| inet | 869 | std::net::IpAddr | inet | net | net |
| cidr | 650 | babar::codec::Cidr | cidr | net | net |
| macaddr | 829 | babar::codec::MacAddr | macaddr | macaddr | macaddr |
| macaddr8 | 774 | babar::codec::MacAddr8 | macaddr8 | macaddr | macaddr |
| bit(n) | 1560 | babar::codec::BitString | bit | bits | bits |
| varbit | 1562 | babar::codec::BitString | varbit | bits | bits |
| hstore | server-assigned | babar::codec::Hstore | hstore | hstore | hstore |
| citext | server-assigned | String | citext | citext | citext |
| tsvector | 3614 | babar::codec::TsVector | tsvector | text_search | text-search |
| tsquery | 3615 | babar::codec::TsQuery | tsquery | text_search | text-search |
| vector | server-assigned | babar::codec::Vector | vector | pgvector | pgvector |
| geometry (PostGIS) | server-assigned | T: geo_types::* | geometry::<T>() | postgis | postgis |
| geography (PostGIS) | server-assigned | T: geo_types::* | geography::<T>() | postgis | postgis |
| range<T> | range OID | babar::codec::Range<T> | range(C) | range | range |
| multirange<T> | mr OID | babar::codec::Multirange<T> | multirange(C) | multirange | multirange (implies range) |
Composing codecs
Most type-system muscle lives in combinators, not new codec modules:
| Combinator | What it does |
|---|---|
| nullable(C) | Adds NULL → Option<T> handling. Required for any column that can be NULL. |
| array(C) | One-dimensional Postgres arrays as Vec<T>. |
| range(C) | Postgres ranges over T. |
| multirange(C) | Postgres multiranges (Postgres 14+). |
| (C1, C2, …) | A row tuple — Decoder<(A, B, …)> is auto-implemented for tuples of decoders. |
For non-'static user types, write your own
Encoder<A> / Decoder<A> (Chapter 10) — the codec module’s
Encoder<UnitStruct> glue is small.
Next
For the cargo features that gate these codecs, see feature-flags.md. For the error variants codecs return on bad bytes, see errors.md.
Error catalog
Generated rustdoc: https://docs.rs/babar/latest/babar/enum.Error.html
See also: Book Chapter 9 — Error handling.
Variants
Every babar::Error variant. There is no Error::kind() — match on
the variant directly.
| Variant | Shape | When it fires |
|---|---|---|
| Io | Io(std::io::Error) | TCP, TLS, or socket I/O failure (DNS, refused, reset, EOF). |
| Closed | Closed { sql: Option<String>, origin: Option<Origin> } | The session was closed and the call lost its connection. sql and origin carry the in-flight statement. |
| Protocol | Protocol(String) | The server sent something babar can’t make sense of (framing error, unexpected message). |
| Auth | Auth(String) | SCRAM rejected, password wrong, role can’t log in, no password configured. |
| UnsupportedAuth | UnsupportedAuth(String) | The server selected an auth method babar doesn’t speak (e.g. gss, sspi, or any code babar hasn’t implemented). |
| Server | Server { code, severity, message, detail, hint, position, sql, origin } | An ErrorResponse from Postgres. code is the five-character SQLSTATE. |
| Config | Config(String) | Bad client-side configuration (malformed TLS settings, bad timeouts, …). |
| Codec | Codec(String) | An Encoder / Decoder rejected the bytes — wrong column count, NULL where not expected, malformed wire bytes. |
| ColumnAlignment | ColumnAlignment { expected, actual, sql, origin } | A Decoder was expecting expected columns but RowDescription advertised actual. |
| SchemaMismatch | SchemaMismatch { position, expected_oid, actual_oid, column_name, sql, origin } | The Decoder’s declared OID at position doesn’t match the OID Postgres returned. |
| Migration | Migration(MigrationError) | A migration step failed; the inner enum carries the migration-specific cause. |
Closed, Server, ColumnAlignment, and SchemaMismatch carry an
origin field that, with the sql! macro, points at the call site
(file:line:col). Surfacing it in your logs almost always pays for itself the first time something breaks in production.
SQLSTATE patterns
The code field on Error::Server is a five-character SQLSTATE.
This editorial section lists the codes most worth recognizing
explicitly — it is guidance for application code, not a
machine-extracted list. The full registry is in the Postgres docs
(https://www.postgresql.org/docs/current/errcodes-appendix.html).
Constraint and concurrency
| SQLSTATE | Class | Common cause | Typical reaction |
|---|---|---|---|
23505 | unique_violation | Duplicate key on insert/upsert. | Map to a 409 in your service; consider INSERT ... ON CONFLICT. |
23503 | foreign_key_violation | Inserting a row whose parent doesn’t exist. | 422 / validation error. |
23502 | not_null_violation | Missing required column. | 422 / validation error. |
23514 | check_violation | A CHECK constraint rejected the row. | 422 / validation error. |
40001 | serialization_failure | Conflicting concurrent transactions at SERIALIZABLE. | Retry with backoff. |
40P01 | deadlock_detected | The deadlock detector aborted your transaction. | Retry; investigate the lock order. |
Authentication and resource
| SQLSTATE | Class | Common cause |
|---|---|---|
28P01 | invalid_password | Wrong password. |
28000 | invalid_authorization_specification | Role can’t log in / pg_hba.conf rejected. |
53300 | too_many_connections | Server max_connections reached. Tune your pool. |
57P03 | cannot_connect_now | Server in startup or recovery; retry shortly. |
Schema
| SQLSTATE | Class | Common cause |
|---|---|---|
42P01 | undefined_table | Missing table — typically a missing migration. |
42703 | undefined_column | Missing column — schema drift. |
42P07 | duplicate_table | A migration that already ran. |
Choosing what to retry
A starting policy:
| Variant / code | Retry? |
|---|---|
Error::Io(_) | Yes, with backoff. The connection is gone; the pool will reconnect. |
Error::Server { code: "40001", .. } | Yes — the whole transaction. |
Error::Server { code: "40P01", .. } | Yes — the whole transaction. |
Error::Server { code: "57P03", .. } | Yes, after a delay. |
Error::Auth(_) / UnsupportedAuth(_) | No. Surface to operator. |
Error::Codec(_) / ColumnAlignment / SchemaMismatch | No. Fix the code. |
Other Error::Server | No by default; classify per SQLSTATE. |
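The retry table folds naturally into one small classifier. The `DbError` enum below is a stripped-down, hypothetical stand-in for `babar::Error` (only the cases the decision needs); the SQLSTATE matching is the part that carries over to real code.

```rust
// Stripped-down stand-in for babar::Error -- just enough to classify retries.
enum DbError {
    Io,                       // connection-level failure
    Auth,                     // credentials rejected
    Codec,                    // decode bug: fix the code, don't retry
    Server { code: String },  // Postgres ErrorResponse with a SQLSTATE
}

/// Whether the whole unit of work (usually the transaction) is worth retrying.
fn is_retryable(err: &DbError) -> bool {
    match err {
        // Connection gone; the pool will hand out a fresh one.
        DbError::Io => true,
        // Neither credentials nor code bugs get better on retry.
        DbError::Auth | DbError::Codec => false,
        DbError::Server { code } => matches!(
            code.as_str(),
            "40001"   // serialization_failure: retry the whole transaction
            | "40P01" // deadlock_detected: retry, then investigate lock order
            | "57P03" // cannot_connect_now: retry after a delay
        ),
    }
}

fn main() {
    assert!(is_retryable(&DbError::Server { code: "40001".into() }));
    // unique_violation maps to a 409, not a retry.
    assert!(!is_retryable(&DbError::Server { code: "23505".into() }));
    assert!(!is_retryable(&DbError::Codec));
    println!("classifier ok");
}
```

In a real service you would wrap this with backoff and a retry cap, and classify the remaining `Server` codes per the tables above.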
Next
For the codec inputs that produce Error::Codec / SchemaMismatch,
see codecs.md. For the Config / PoolConfig knobs
that produce Error::Config, see configuration.md.
Cargo features
Generated rustdoc: https://docs.rs/babar/latest/babar/index.html
See also: Book Chapter 12 — TLS & security and Chapter 10 — Custom codecs.
Every feature flag the babar crate (and its core crate
babar-core) exposes. All features are off by default except the
ones listed in default = [...].
TLS backends
| Feature | What it enables | Default? |
|---|---|---|
rustls | The pure-Rust TLS backend (TlsBackend::Rustls). Pulls in rustls, tokio-rustls, and rustls-native-certs. | yes |
native-tls | Platform TLS via native-tls + tokio-native-tls (Schannel / Secure Transport / OpenSSL). Selectable via TlsBackend::NativeTls. | no |
Only one TLS backend is used per connection; enable both features if you
want to choose between them at runtime. Config::tls_mode(TlsMode::Disable) opts
out of TLS entirely without touching features.
Codec features
Each row turns on a codec module under babar::codec. Disabling
unused codec features is the most effective way to keep babar’s
compile time and binary size small.
| Feature | Codec module | Headline types | Extra deps |
|---|---|---|---|
uuid | babar::codec::uuid | uuid::Uuid ↔ Postgres uuid | uuid |
time | babar::codec::time | time::Date / Time / PrimitiveDateTime / OffsetDateTime | time |
chrono | babar::codec::chrono | chrono::NaiveDate / NaiveTime / NaiveDateTime / DateTime<Utc> | chrono |
numeric | babar::codec::numeric | rust_decimal::Decimal ↔ Postgres numeric | rust_decimal |
json | babar::codec::json | serde_json::Value and typed_json::<T>() for Serialize + Deserialize | serde, serde_json |
array | babar::codec::array | array(C) combinator for one-dimensional arrays | fallible-iterator |
range | babar::codec::range | range(C) combinator over discrete and continuous ranges | — |
multirange | babar::codec::multirange | multirange(C) (Postgres 14+); implies range | — |
interval | babar::codec::interval | babar::codec::Interval | — |
net | babar::codec::net | inet, cidr (IpAddr, Cidr) | — |
macaddr | babar::codec::macaddr | MacAddr, MacAddr8 | — |
bits | babar::codec::bits | BitString for bit / varbit | — |
hstore | babar::codec::hstore | Hstore (BTreeMap<String, Option<String>>) | — |
citext | babar::codec::citext | String ↔ citext extension type | — |
text-search | babar::codec::text_search | TsVector, TsQuery | — |
pgvector | babar::codec::pgvector | Vector for the pgvector extension | — |
postgis | babar::codec::postgis | geometry::<T>() / geography::<T>() over geo-types | geo-types |
Pick what your schema actually uses. A common starting set for an HTTP service:
babar = { version = "...", features = ["rustls", "uuid", "time", "json", "numeric"] }
Default features
default = ["rustls"]. Disable defaults if you want to ship with
native-tls, or with TLS off entirely:
babar = { version = "...", default-features = false, features = ["native-tls", "uuid"] }
babar-macros
The proc-macro crate (babar-macros, exposed via babar::Codec and
babar::sql) currently exposes no cargo features of its own — it’s
unconditionally on when you depend on babar.
Next
For the runtime configuration of TLS, see configuration.md. For Postgres types and the codec values they map to, see codecs.md.
Configuration
Generated rustdoc: https://docs.rs/babar/latest/babar/struct.Config.html
See also: Book Chapter 1 — Connecting, Chapter 6 — Pooling, Chapter 12 — TLS & security.
babar::Config
Config holds everything Session::connect needs. Required fields
are positional in the constructor; optional fields are chained
methods. Build it from any source — env vars, a config file, a
clap::Parser. babar deliberately doesn’t ship a DSN parser.
Constructors
| Method | Required arguments |
|---|---|
Config::new(host, port, user, dbname) | impl Into<String> for host/user/dbname, u16 for port. Resolves host via DNS at connect time. |
Config::with_addr(addr, port, user, dbname) | addr: IpAddr, port: u16, user/dbname as impl Into<String>. Skips DNS — useful for IP-direct deployments. |
Optional fields (chained, value-returning)
| Method | Type | Default | Notes |
|---|---|---|---|
.password(p) | impl Into<String> | none | Sent to the server only as part of the auth handshake. |
.application_name(n) | impl Into<String> | none | Surfaces in pg_stat_activity.application_name. Cheapest observability win. |
.connect_timeout(d) | Duration | none | Wall-clock cap on Session::connect. |
.tls_mode(m) | TlsMode | Disable | Disable / Prefer / Require. Opt in to Prefer or Require explicitly. See ch12. |
.require_tls() | — | — | Sugar for .tls_mode(TlsMode::Require). |
.tls_backend(b) | TlsBackend | Rustls (with rustls feature) | Rustls or NativeTls. |
.tls_server_name(n) | impl Into<String> | host | Override SNI / certificate-name match. |
.tls_root_cert_path(p) | impl Into<PathBuf> | system roots / webpki-roots | PEM bundle of additional root CAs. |
TLS-mode and backend enums
| Enum | Variants | Re-exported as |
|---|---|---|
TlsMode | Disable, Prefer, Require | babar::config::TlsMode |
TlsBackend | Rustls, NativeTls | babar::config::TlsBackend |
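Every optional field above follows one pattern: setters take `self` by value and return it, so configuration chains as a single expression. A minimal, self-contained stand-in of that pattern (not babar's actual struct, and only two of the fields):

```rust
// Minimal sketch of the value-returning builder style Config uses:
// required fields are positional in the constructor, optional fields
// are chained methods that consume and return the value.
struct Config {
    host: String,
    port: u16,
    user: String,
    dbname: String,
    password: Option<String>,
    application_name: Option<String>,
}

impl Config {
    fn new(
        host: impl Into<String>,
        port: u16,
        user: impl Into<String>,
        dbname: impl Into<String>,
    ) -> Self {
        Config {
            host: host.into(),
            port,
            user: user.into(),
            dbname: dbname.into(),
            password: None,
            application_name: None,
        }
    }

    fn password(mut self, p: impl Into<String>) -> Self {
        self.password = Some(p.into());
        self
    }

    fn application_name(mut self, n: impl Into<String>) -> Self {
        self.application_name = Some(n.into());
        self
    }
}

fn main() {
    // The whole configuration is one expression; nothing is mutable afterwards.
    let cfg = Config::new("localhost", 5432, "postgres", "postgres")
        .password("secret")
        .application_name("hello-babar");
    println!("{}:{} as {}", cfg.host, cfg.port, cfg.user);
}
```

Because each setter moves the value, a half-built `Config` cannot escape: you either have the finished expression or you have nothing.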
babar::PoolConfig
PoolConfig is everything Pool::new needs that isn’t a Config.
Constructor
PoolConfig::new() — conservative defaults. All knobs are chained,
value-returning methods.
Knobs
| Method | Type | Default | Notes |
|---|---|---|---|
.min_idle(n) | usize | 0 | Keep at least n warm connections when traffic permits. |
.max_size(n) | usize | 16 | Hard cap on total connections in the pool. |
.acquire_timeout(d) | Duration | 30 seconds | How long pool.acquire() waits before returning PoolError::Timeout. |
.idle_timeout(d) | Duration | unset (no idle timeout) | Close idle connections older than this. |
.max_lifetime(d) | Duration | unset (no lifetime cap) | Recycle connections after this age regardless of idle state. |
.health_check(h) | HealthCheck | HealthCheck::None | Per-acquire validation policy (off by default). |
PoolError
| Variant | When |
|---|---|
PoolError::Timeout | acquire_timeout elapsed before a slot freed up. |
PoolError::AcquireFailed(babar::Error) | The pool tried to open a fresh connection and the underlying Session::connect failed. |
PoolError::PoolClosed | The pool itself has been closed. |
Picking values
Some tested starting points:
| Service shape | max_size | acquire_timeout | min_idle |
|---|---|---|---|
| HTTP service, low/medium traffic | 8–16 | 5–10s | 0 |
| HTTP service, high traffic | ≈ #worker threads × 2 | 1–3s | ≥ 2 |
| Long-running batch / ETL | 1–4 | 30s+ | 0 |
Beyond that, watch:
- pg_stat_activity for connection count vs the server's max_connections.
- Pool acquire latency (you wrap it yourself; see Chapter 13).
- p99 query latency vs pool size — if increasing max_size doesn't move p99, the pool isn't the bottleneck.
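The "worker threads × 2" heuristic from the table is cheap to compute at startup. A sketch (a heuristic helper written for this page, not a babar API):

```rust
use std::thread;

// Starting point from the table above: roughly two connections per worker
// thread for a high-traffic HTTP service, clamped to a sane range.
// Tune against p99 latency and the server's max_connections afterwards.
fn suggested_max_size() -> usize {
    let workers = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(4); // fall back if the platform can't report it
    (workers * 2).clamp(4, 64)
}

fn main() {
    println!("max_size starting point: {}", suggested_max_size());
}
```

Treat the result as a first guess to measure against, not a target.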
Next
For the cargo features that gate TLS backends and codec types, see feature-flags.md. For the errors these knobs can produce, see errors.md.
What makes babar babar
See also: Why babar, Design principles, Comparisons.
If you only read one explanation page, read this one. This page describes where babar sits, what makes it distinctive, what it deliberately is not, and when it is the right tool to reach for.
Where babar sits
┌─────────────────────────────────────────┐
│ your app │
├─────────────────────────────────────────┤
│ babar (typed Query/Command, codecs, │
│ pool, COPY, migrations) │
├─────────────────────────────────────────┤
│ tokio (TcpStream, tasks, cancellation) │
├─────────────────────────────────────────┤
│ Postgres wire protocol v3 │
└─────────────────────────────────────────┘
There is no libpq, no tokio-postgres underneath, and no abstraction
layer that pretends Postgres is a generic SQL backend. babar speaks the
Postgres v3 protocol directly on top of Tokio. That is the whole stack.
This is a deliberate choice. A driver that supports four databases has
to find the lowest common denominator across four protocols. babar
picks one protocol and exposes its shape — extended-protocol prepare,
binary results, channel binding, binary COPY FROM STDIN — without
flattening it.
What’s distinctive
Four properties show up everywhere in the API; together, they are why babar exists.
1. The background driver task
#![allow(unused)]
fn main() {
let session: Session = Session::connect(cfg).await?; // type: Session
}
session is a thin handle. The TCP socket lives in a Tokio task that
Session::connect spawned for you. Every public call on Session
sends a request down an mpsc channel and awaits a oneshot reply;
the driver task is the only thing that ever reads or writes the
socket.
Two things fall out of that.
First, every public call is cancellation-safe. If you
tokio::select! away from a query halfway through, the driver task
keeps reading the in-flight messages and returns the protocol to a
consistent state. You don’t end up with a half-parsed RowDescription
hanging off your socket the next time you ask for a query.
Second, there is exactly one writer to the socket. You can clone
the Session handle, share it across tasks, and the driver still
serializes commands. There is no locking on top of
the socket — the channel is the lock. The
Driver task page goes into more depth on what the
task owns and how shutdown works.
2. Typestate at the boundary
The shape of every database operation is in the type signature.
#![allow(unused)]
fn main() {
use babar::codec::{int4, text, nullable};
use babar::query::Query;
use babar::query::Command;
let select: Query<(i32,), (String, Option<i32>)> = // type: Query<(i32,), (String, Option<i32>)>
Query::raw(
"SELECT name, parent_id FROM users WHERE id = $1",
(int4,),
(text, nullable(int4)),
);
let insert: Command<(String, i32)> = // type: Command<(String, i32)>
Command::raw(
"INSERT INTO users(name, parent_id) VALUES ($1, $2)",
(text, int4),
);
}
Query<P, R> says “I take parameters of shape P and produce rows of
shape R.” Command<P> says “I take parameters of shape P and
produce nothing readable.” You cannot accidentally call
session.query(&insert, ...) — it doesn’t compile.
Transactions extend the same idea. session.transaction(|tx| ...)
hands you a Transaction<'_> whose lifetime is tied to the closure
body, and the borrow checker prevents you from using the underlying
Session while the Transaction is alive. There is no “did I forget
to commit?” question because the compiler verifies it for you. See
Transactions for the full pattern,
including savepoints.
Prepared queries are a separate type:
#![allow(unused)]
fn main() {
let prepared: PreparedQuery<(i32,), (String,)> = // type: PreparedQuery<(i32,), (String,)>
session.prepare_query(&select).await?;
}
A PreparedQuery is not a Query. The compiler knows it has been
sent to the server, and once you have one you can stream rows from it
without re-prepare overhead. Streaming COPY FROM STDIN ingest works
the same way: CopyIn<T> has its own type, and the compiler tracks
when you’ve finalized it.
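The "a PreparedQuery is not a Query" rule is plain typestate: preparing consumes one type and returns another. A minimal stand-in with the codecs elided (simplified, not babar's real signatures):

```rust
use std::marker::PhantomData;

// Unprepared: SQL text plus the parameter/row shapes it promises.
struct Query<P, R> {
    sql: String,
    _types: PhantomData<(P, R)>,
}

// Prepared: a *different* type, so "already sent to the server" is a fact
// the compiler tracks, not a runtime flag you can forget to check.
struct PreparedQuery<P, R> {
    name: String, // server-side statement name
    _types: PhantomData<(P, R)>,
}

impl<P, R> Query<P, R> {
    fn raw(sql: &str) -> Self {
        Query { sql: sql.to_string(), _types: PhantomData }
    }

    // Stand-in for Session::prepare_query: consumes self, returns the new state.
    fn prepare(self) -> PreparedQuery<P, R> {
        PreparedQuery {
            name: format!("stmt_{}", self.sql.len()),
            _types: PhantomData,
        }
    }
}

fn main() {
    let q: Query<(i32,), (String,)> = Query::raw("SELECT name FROM users WHERE id = $1");
    let p: PreparedQuery<(i32,), (String,)> = q.prepare();
    // `q` has been moved: you cannot prepare it twice, and a function that
    // wants a PreparedQuery cannot be handed a bare Query by mistake.
    println!("prepared as {}", p.name);
}
```

The same move-based trick is what lets a `CopyIn<T>` track "finalized or not" in the type rather than at runtime.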
3. Codecs are values you import by name
#![allow(unused)]
fn main() {
use babar::codec::{int4, text, nullable};
let row_codec = (int4, text, nullable(int4)); // type: (Int4Codec, TextCodec, Nullable<Int4Codec>)
}
Codecs are runtime values, not derived types. The tuple (int4, text, nullable(int4)) is the schema of the row, written by hand, sitting
in your source file where you can read it. The i32, String, and
Option<i32> that come back are determined by the codec, not by
inference from a SQL string.
This means three things in practice:
- You don’t need a live database at compile time to write a query.
- Adding a new type — say, an enum with a custom OID — means writing a Codec impl and importing the value. There is no proc-macro to re-run, no schema.rs to regenerate.
- The codec tuple is the documentation. You can read a Query value and know exactly what wire types it expects and what Rust types it produces, without leaving the file.
The trade-off is honest: you pay the cost once, when you write the query, and the legibility pays you back every time you read it.
4. Validate early
babar pushes “is this query well-formed?” as far left as it can.
- At bind time, the parameter codec tuple is statically the same shape as P in Query<P, R>. You cannot under- or over-bind.
- At prepare time, Session::prepare cross-checks the row codec tuple (int4, text, nullable(int4)) against the RowDescription Postgres sends back. If the column types or order drifted, you get an Error::SchemaMismatch { position, expected_oid, actual_oid, column_name, sql, origin } at prepare time, not when you decode a row in production.
- At display time, errors carry the sql and origin (file + line where you wrote the SQL). The Display impl renders a ^ caret under the offending byte for Error::Server { position, .. } so you don't have to re-count columns by hand.
The net effect is that “compiles + prepares” is a strong signal. You still have to test, but you don’t have to test for “did I bind two parameters when the SQL wants three” — the type system already knows.
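The prepare-time OID cross-check described above is a simple loop. A self-contained sketch, with the field names mirroring Error::SchemaMismatch (the real check also carries column_name, sql, and origin):

```rust
// What a decoder declared vs what RowDescription reported, per column.
#[derive(Debug)]
struct SchemaMismatch {
    position: usize,
    expected_oid: u32,
    actual_oid: u32,
}

// Cross-check declared OIDs against the server's RowDescription OIDs.
// This runs once at prepare time, before any row bytes are decoded.
fn check_row_description(expected: &[u32], actual: &[u32]) -> Result<(), SchemaMismatch> {
    // (A column-count mismatch would be ColumnAlignment; elided here.)
    for (position, (&e, &a)) in expected.iter().zip(actual).enumerate() {
        if e != a {
            return Err(SchemaMismatch { position, expected_oid: e, actual_oid: a });
        }
    }
    Ok(())
}

fn main() {
    // int4 = OID 23, text = OID 25 in Postgres' built-in catalog.
    let declared = [23, 25];
    assert!(check_row_description(&declared, &[23, 25]).is_ok());

    // Column drifted from text to varchar (OID 1043): caught before row 1.
    let err = check_row_description(&declared, &[23, 1043]).unwrap_err();
    println!("mismatch at column {}: {:?}", err.position, err);
}
```

One comparison per column, once per prepare — that is the whole cost of turning schema drift into a typed error at the boundary.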
What babar deliberately is not
A short list, because every “not” saves us from a feature you didn’t want.
- Not multi-database. No MySQL, no SQLite, no MSSQL. If you need multi-database, reach for a multi-database driver. We point at sqlx in Comparisons.
- Not synchronous. babar is async-only on Tokio.
- Not an ORM. There is no Queryable derive, no Insertable, no schema-aware DSL. SQL is SQL.
- Not a query builder. Query::raw and the sql! macro give you composable SQL fragments; we do not provide a typed AST you build up with .select().from().where_(...).
- Not a migration tool. babar ships a small migration runner for the embed_migrations! workflow, but if you want a full migration CLI with rollbacks and squashing, refinery or sqlx-cli are better-fit tools.
When babar is the right pick
Reach for babar when:
- You target Postgres specifically and you’d rather see protocol features (channel binding, binary COPY, prepared statements as a type) than have them hidden behind a generic abstraction.
- You want types on the query — Query<P, R>, Command<P>, Transaction<'_>.
- You want validate-early semantics: schema drift surfaces at prepare time as Error::SchemaMismatch, not at row 4,723.
Reach for something else when you need multi-database support, a mature ORM, or a feature babar has deferred — those are real needs and there are good answers for them.
Where to read next
- Why babar — the elevator pitch.
- Design principles — the rule book.
- The background driver task — how the task, channels, and shutdown work.
- Comparisons — a trade-off-focused comparison table for tokio-postgres, sqlx, and diesel.
- Roadmap — what’s shipped, what’s next, what’s deferred by design.
Why babar?
See also: Get started, the Book.
babar is a Rust client for Postgres. There are several already. Why another one?
The short answer is one obvious way to do each thing. Connect, run a typed query, run a command, stream a result, manage a transaction, hold a pool, ingest with COPY, run migrations — there is one shape per task, and the codecs are values you import by name. The next time you read your own code, you can read it.
Three pillars
Ergonomic by design
Read it once, understand it forever. Queries are typed values. Codecs are imported by name. There is one way to start a transaction, one way to bind a parameter, one way to run a migration. You will not spend an afternoon learning which of seven options to use.
Postgres at heart
The wire protocol, faithfully. babar speaks Postgres directly —
extended-protocol prepares, binary results, SCRAM-SHA-256, channel
binding over TLS, and binary COPY FROM STDIN for bulk ingest. There
is no translation layer between you and the server.
Built for the herd
Predictable under load. A single background task owns the socket and
serializes wire I/O, so every public call is cancellation-safe. Pool,
statement cache, and tracing spans are first-class — not bolted on
later.
What “typed query” actually means
In babar, a Query<Params, Row> is a runtime value. It carries:
- The SQL text.
- A parameter encoder (Encoder<Params>).
- A row decoder (Decoder<Row>).
When the type system says Query<(i32,), (Uuid, String, i64)>, the
compiler knows the parameter shape, the row shape, and which codecs
participate. There is no magic — Query::raw constructs one
explicitly, and the query! macro builds the same thing with optional
compile-time SQL verification.
What babar deliberately does not do
- It does not require a compile-time database. query! against BABAR_DATABASE_URL is opt-in; the default Query::raw path runs without any dev-loop infrastructure.
- It does not hide errors behind &dyn Error. babar::Error is a plain enum with eleven variants, each carrying the fields you need to decide what to do.
Where to read next
- Design principles — typed, async, native protocol, validate-early, no-unsafe.
- The driver task — the per-connection background task that makes every call cancellation-safe.
- Comparisons — a trade-off-focused comparison table for tokio-postgres, sqlx, and diesel.
- Roadmap — what’s in, what’s deferred, and where the project is going.
Design principles
This page collects the principles babar is built around. They are not abstract — every one of them produces a concrete API choice you can point at.
1. Typed at the boundary
Every public call carries the parameter and row types. Query<P, R>
and Command<P> are values, not phantom decorations on a string. That
means:
- The compiler can reject query.bind((1, 2)) against a Query<(i32, String), _> long before any wire I/O.
- A new reader of your code can see Query<(i64,), (Uuid, String)> and know the column shape without running anything.
- Refactoring a column type is a typecheck away — the compiler finds every affected query for you.
The codecs are values too — int4, text, bool — not associated
methods on a trait object you have to remember.
2. Async
Every Session is backed by a background task that owns the
TcpStream. All public API calls send messages to that task over
channels and await the reply. This is the foundation of babar’s
cancellation safety: if your await is cancelled,
the task still finishes the in-flight protocol exchange before
servicing the next request.
3. Native protocol
babar speaks the Postgres v3 wire protocol directly via
postgres-protocol. It does not wrap libpq; it does not call out to
a C library; it does not translate through a higher-level abstraction.
That means:
- Binary results by default.
- Extended-protocol prepared statements with parameter codecs.
- SCRAM-SHA-256 (and SCRAM-SHA-256-PLUS with channel binding over TLS).
- Binary COPY FROM STDIN as a first-class API.
- RowDescription is parsed, the OIDs are checked, and the Decoder is given the bytes — no string-to-string conversion, no magic re-parsing.
If Postgres ships a new wire-level capability, the work to expose it in babar is Postgres-shaped, not abstraction-shaped.
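"Binary results by default" is concrete: a Postgres int4, for instance, arrives as exactly four big-endian bytes in binary format, versus ASCII digits in text format. A standalone comparison of the two decodes:

```rust
// Text format: Postgres sends int4 as ASCII digits that must be parsed.
fn decode_int4_text(bytes: &[u8]) -> Result<i32, String> {
    std::str::from_utf8(bytes)
        .map_err(|e| e.to_string())?
        .parse()
        .map_err(|e: std::num::ParseIntError| e.to_string())
}

// Binary format: exactly four big-endian bytes, no parsing step at all.
fn decode_int4_binary(bytes: &[u8]) -> Result<i32, String> {
    let arr: [u8; 4] = bytes
        .try_into()
        .map_err(|_| "int4: expected 4 bytes".to_string())?;
    Ok(i32::from_be_bytes(arr))
}

fn main() {
    assert_eq!(decode_int4_text(b"-42"), Ok(-42));
    assert_eq!(decode_int4_binary(&(-42i32).to_be_bytes()), Ok(-42));
    println!("both formats agree");
}
```

The binary path is branch-free and allocation-free; the text path has to validate UTF-8 and parse. Multiply that by every column of every row and the default matters.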
4. Validate don’t parse
We would rather fail in your test suite than in production at 3am. That is the validate-early principle in operation. Concretely:
- Every codec advertises its OIDs. When RowDescription arrives, babar checks that each declared OID matches what the server is about to send. Mismatches surface as Error::SchemaMismatch carrying the position, the expected OID, and the actual OID — at prepare time, before any rows are decoded.
- Every decoder advertises its column count. If RowDescription advertises a different count, you get Error::ColumnAlignment immediately, again before any rows are processed.
- The query! macro can validate SQL against a live database when BABAR_DATABASE_URL is set, for opt-in compile-time verification.
The cost is one round-trip on each prepare. The benefit is that schema drift surfaces as a Rust error at the boundary, with a caret-rendered message pointing at the offending fragment, rather than as a cryptic decode panic on row 47.
5. No unsafe
babar’s source contains no unsafe blocks. The macro crate sets
unsafe_code = "forbid" and the core crate is held to the same line
in CI (Miri).
6. Minimal dependencies, small features
The default feature set is small. Codec families (uuid, time,
chrono, json, numeric, postgis, pgvector, …) are gated
behind cargo features so that a pool-and-text service does not
have to compile a geo-types dependency it will never use. The TLS
backend is selectable at compile time (rustls by default,
native-tls available). Reduce footprint, reduce blast radius,
reduce compile time.
7. Operability is the API
Pool, statement cache, and tracing spans are first-class citizens,
not afterthoughts. Session::connect emits a db.connect span;
prepares emit db.prepare; executes emit db.execute. The fields
are OpenTelemetry’s database semantic conventions out of the box, so
your existing tracing backend already understands them. Setting
application_name on Config puts your service name in
pg_stat_activity for free. The point is not that babar provides a
metrics dashboard — it does not — but that the seams a production
team needs are deliberately exposed.
Where to read next
- The driver task for the cancellation-safety story.
- Comparisons for the trade-offs against other Rust Postgres clients.
- Book Chapter 9 — Error handling — for what validate-early looks like at runtime.
Comparisons
See also: Why babar, Design principles.
Trade-offs, not scorekeeping. These tools solve overlapping problems from different angles. The useful question is which shape fits your team, database scope, and operating model.
The table below compares babar with three common Rust choices:
tokio-postgres, sqlx, and diesel.
| Dimension | babar | tokio-postgres | sqlx | diesel |
|---|---|---|---|---|
| Primary shape | Typed Postgres client | Async Postgres driver | Async SQL toolkit | ORM / query DSL |
| Database scope | Postgres only | Postgres only | Multiple databases | Multiple databases |
| Query API | Typed runtime Query<P, R> / Command<P> values | Raw SQL strings plus codec traits | Raw SQL, macros, row mapping helpers | Schema-aware DSL and derives |
| SQL checking style | Optional online verification plus prepare-time validation | Mostly runtime | Strong compile-time emphasis | Schema-driven compile-time DSL |
| Explicit codec model | Yes, codecs are imported values | Usually trait-based (ToSql / FromSql) | Mostly inferred / mapped through traits and macros | Mostly hidden behind derives / schema mapping |
| Current maturity | Newer, intentionally focused surface | Most battle-tested async Postgres option | Large ecosystem and polished tooling | Mature ORM ecosystem |
| Strong fit | Postgres-specific apps that want explicit typed values and protocol visibility | Teams that want established async Postgres coverage today | Teams that want compile-time SQL workflows or multi-database support | Teams that want an ORM and schema-driven query construction |
Reading the trade-offs
babar and tokio-postgres
These two are the closest in scope: both are Postgres-specific async clients. The trade-off is mostly about API shape.
- Choose babar when you want query and row shape visible in the type signature, explicit codec values, prepare-time schema checks, and richer SQL-origin error rendering.
- Choose tokio-postgres when you want the most established async Postgres driver in Rust today, broader production history, or a feature babar still defers, such as broader COPY, LISTEN/NOTIFY, or cancellation surface.
babar and sqlx
These overlap most for teams that like hand-written SQL but care about types and validation.
- Choose babar when you want Postgres-specific APIs, explicit runtime codecs, and normal builds that do not depend on compile-time database connectivity.
- Choose sqlx when compile-time SQL checking is the center of your workflow, you want offline-cache tooling, or you need a single client across multiple databases.
babar and diesel
Here the trade-off is more architectural than incremental.
- Choose babar when you want SQL to stay SQL and prefer the protocol seam — codecs, prepare, COPY, transactions, pooling — to be the visible API.
- Choose diesel when you want an ORM, schema-driven query construction, and a workflow built around derives, generated schema, and migration tooling.
Summary
| If you want… | Reach for |
|---|---|
| A typed Postgres client with one obvious way to do each thing | babar |
| The most battle-tested async Postgres driver in Rust | tokio-postgres |
| Compile-time-verified SQL, multi-database support | sqlx |
| A schema-aware ORM with a strong DSL | diesel |
Where to read next
- Roadmap — what’s deferred (and therefore what tokio-postgres covers today that babar doesn’t).
- Design principles — the why behind the trade-offs above.
The driver task
See also: Book Chapter 1 — Connecting, Design principles.
Every Session in babar is backed by a single background task that
owns the underlying TcpStream. This page explains what that task is,
what it does, and why it exists.
Shape of the model
When you call Session::connect, babar:
- Opens the TCP connection and runs the startup + auth handshake.
- Spawns a background task (tokio::spawn) and gives it the read half and write half of the now-authenticated stream.
- Hands you back a Session value that holds an mpsc::Sender<Command> — the channel into the driver task — plus a small amount of cached server state (parameters, backend keys).
Every public call on Session — query, execute, prepare_query,
prepare_command, transaction, copy_in, close — translates to a Command enum
sent over that channel. Each Command carries a oneshot::Sender
for its reply. The driver task pulls commands off the inbox, performs
the protocol exchange against the server, and replies on the
oneshot.
There is exactly one task per connection. The mpsc channel is the
single point of serialization for everything that talks to that
socket.
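The command-with-reply-channel shape is easy to see in miniature. The sketch below is a thread-based analogue using std channels (babar's real driver is an async Tokio task, and the "socket" here is just a counter), but the ownership story is the same: one loop owns the resource, and every caller goes through the inbox.

```rust
use std::sync::mpsc;
use std::thread;

// Each request carries its own reply channel -- the analogue of the
// oneshot::Sender riding inside every real Command.
enum Command {
    Query { sql: String, reply: mpsc::Sender<String> },
    Close,
}

fn main() {
    let (inbox_tx, inbox_rx) = mpsc::channel::<Command>();

    // The "driver task": the only owner of the (pretend) socket. Everything
    // that reaches the wire funnels through this loop, in arrival order.
    let driver = thread::spawn(move || {
        let mut queries_served = 0u32; // stands in for the owned TcpStream
        for cmd in inbox_rx {
            match cmd {
                Command::Query { sql, reply } => {
                    queries_served += 1;
                    // If the caller went away (dropped its receiver), the send
                    // fails; we ignore it and stay at a clean boundary --
                    // the miniature version of cancellation safety.
                    let _ = reply.send(format!("ran #{queries_served}: {sql}"));
                }
                Command::Close => break,
            }
        }
    });

    // A "Session" handle is just a clonable sender into the inbox.
    let (reply_tx, reply_rx) = mpsc::channel();
    inbox_tx
        .send(Command::Query { sql: "SELECT 1".into(), reply: reply_tx })
        .unwrap();
    println!("{}", reply_rx.recv().unwrap()); // ran #1: SELECT 1

    inbox_tx.send(Command::Close).unwrap();
    driver.join().unwrap();
}
```

Note there is no lock anywhere: the inbox channel itself serializes access to the counter, just as babar's mpsc channel serializes access to the socket.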
Why a task
Postgres’ wire protocol is asynchronous in the responses-arrive-as-
they-arrive sense, but it is rigorously serial in the one
request/response sequence at a time per connection sense. You cannot
interleave two Bind/Execute/Sync cycles on the same socket —
the server’s responses are in order and any client that pipelines them
must consume the responses in order too.
If the public API directly wrote and read on the socket, every public
call would need to lock against every other public call, and Tokio
cancellation would tear half-finished protocol exchanges apart.
Instead, babar puts the protocol state machine inside the task, and
the public API becomes “send a Command, await the reply.” The cost is
an extra mpsc hop; it buys two large benefits.
Cancellation safety
If you tokio::select! on session.execute(&cmd, args) and the other
branch wins, the future you abandon is just a oneshot::Receiver
being dropped. The driver task notices the receiver is gone only after
it finishes the in-flight Execute/Sync cycle — it never abandons
the protocol mid-message. The next command waiting in the mpsc
inbox runs after a clean protocol boundary.
That’s what we mean when we say every public call in babar is cancellation-safe: you don’t have to poll a future to completion to keep the connection healthy.
Concurrency on one connection
You can spawn many tasks all calling into the same Session. They
all hit the same mpsc channel; the driver task processes them in
arrival order. Throughput is bounded by the connection, not by an
arbitrary lock policy. Pipelining multiple short queries against one
session is reasonable; if you need true concurrency, that’s what the
Pool is for.
What lives on the task
The driver task owns:
- The TcpStream halves and a oneshot per pending request.
- The framing buffer (writes to tx_buf, reads chunked frames).
tx_buf, reads chunked frames). - Parameter status updates as the server announces them.
- The internal prepared-statement cache.
It explicitly does not own:
- User-level types like Query<P, R> — those live in your code.
- The Pool, which is a layer above sessions.
- Codec implementations — codecs run on the calling task; the driver task only deals in Vec<Option<Bytes>> columns.
Shutdown
Session::close() sends a Close command, waits for the
acknowledgement, and joins the task. Dropping a Session without
calling close() causes the mpsc::Sender to be dropped; the driver
task notices, sends Terminate, and exits cleanly. There is no
detached task that outlives the Session value.
Why not async fn directly on the socket?
Two reasons.
First, cancellation correctness. If Session::execute were a plain
async fn writing and reading on the socket, abandoning that future
mid-Execute would leave the connection desynchronized — half a
message sent, no Sync paired, the server still responding to the
last frame. There is no clean way to recover from that without
closing the connection. The driver-task model means the future is
just a oneshot::Receiver, and abandoning it does not endanger
anything.
Second, single-writer guarantees. Postgres’ protocol benefits from
write coalescing (a Parse/Bind/Execute/Sync is one
writev of small frames). With one task owning the writer, that
coalescing is trivial; with many tasks, it requires either locks or
a lock-free SPSC ring per worker — and at that point you’ve
re-invented the driver task with extra steps.
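A minimal sketch of that coalescing idea. The frame layout follows the Postgres message format (tag byte, then an i32 length that includes itself but not the tag, then the payload); the payloads are placeholders, and none of this is babar’s actual code.

```rust
// Stage one Parse/Bind/Execute/Sync round trip into a single
// outgoing buffer, then flush it with one write.
fn frame(tag: u8, payload: &[u8]) -> Vec<u8> {
    // length field = 4 bytes for itself + payload length
    let len = (payload.len() as u32 + 4).to_be_bytes();
    let mut out = Vec::with_capacity(5 + payload.len());
    out.push(tag);
    out.extend_from_slice(&len);
    out.extend_from_slice(payload);
    out
}

fn main() {
    let messages: [(u8, &[u8]); 4] =
        [(b'P', b"stmt"), (b'B', b""), (b'E', b""), (b'S', b"")];
    let mut tx_buf = Vec::new();
    for (tag, payload) in messages {
        tx_buf.extend_from_slice(&frame(tag, payload));
    }
    // Four protocol messages, one buffer, one write to the socket.
    assert_eq!(tx_buf.len(), 4 * 5 + 4);
    assert_eq!(tx_buf[0], b'P');
}
```

With a single task owning tx_buf, this staging needs no synchronization at all; that is the single-writer guarantee in miniature.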
Where to read next
- Book Chapter 6 — Pooling — for the layer above the driver task.
- Book Chapter 13 — Observability — for the spans the driver task emits.
- Design principles — for why this fits the rest of babar’s shape.
Roadmap
See also: MILESTONES.md in the repository for the authoritative milestone list.
This page summarizes how babar’s roadmap is organized, what is currently in scope per milestone, and what has been intentionally deferred so the surface area stays honest.
How milestones work
MILESTONES.md (in the repo root) breaks development into
sequentially numbered milestones — M0, M1, … — each with:
- A scope statement (what the milestone covers).
- Concrete deliverables.
- A test policy (unit, integration, property-based, where relevant).
- Acceptance criteria — the milestone is not done until every box is checked and CI is green against every supported Postgres version.
The point is to keep “shipped” honest: a milestone you are inside is work-in-progress; a milestone that is checked off ships exactly what its acceptance list said it would.
What’s in (high level)
Across the early milestones, babar has shipped:
- Wire protocol foundation: framing, startup, parameter status, graceful shutdown, the driver task.
- Authentication: cleartext, MD5, SCRAM-SHA-256, SCRAM-SHA-256-PLUS (channel binding over TLS).
- The typed core: Session, Query<P, R>, Command<P>, Fragment<A>, the Encoder/Decoder traits, and codec combinators (nullable, tuples, array, range, multirange).
- The primitive codec set and the optional codec families (reference/codecs.md).
- Prepared statements with a per-session cache, portal-backed streaming, and prepare_command/prepare_query.
- Closure-shaped transactions and savepoints.
- Binary COPY FROM STDIN for bulk ingest.
- Pool with health checks, idle timeouts, and lifetime caps.
- A library-first migration engine with advisory locking and checksums.
- TLS via rustls (default) or native-tls.
- tracing spans with OpenTelemetry semantic conventions.
For the day-to-day surface, the Book is the right entry point.
What’s deferred (and why)
Some things are deliberately not in babar — yet, or by design. Calling them out here keeps the trade-offs visible.
| Capability | Status | Notes |
|---|---|---|
| LISTEN / NOTIFY | Deferred | A streaming-notifications API is on the roadmap but not yet shipped. Use a polling loop or a sidecar service in the meantime. |
| COPY TO (server → client) | Deferred | Only COPY FROM STDIN ingest is shipped. Read-side bulk export will land in a later milestone. |
| Text/CSV COPY | Deferred | Binary COPY is the supported path; text/CSV variants are tracked but not yet on the public surface. |
| Out-of-band cancellation | Deferred | tokio::select! and Session::close cover most cases; an explicit cancel-request channel is on the roadmap. |
| DSN parsing / Config::from_env() | By design | babar deliberately does not ship a DSN parser. Config::new(host, port, user, db) plus chained methods is the only configuration path; build it from whichever source fits your service. |
| ORM / query DSL | By design | babar is a typed Postgres client, not an ORM. Fragment<A> and sql! give you composable SQL; row mapping is a Decoder<R>. |
| Multi-database backends | By design | babar is Postgres only. The wire protocol is the abstraction; we are not chasing MySQL or SQLite. |
Where work is heading
The next-milestone work tends to be one of three shapes:
- Surface gaps in shipped Postgres capabilities — LISTEN/NOTIFY, COPY TO, out-of-band cancel.
- Codec breadth — more extension support, more geo-types shapes, more time/chrono round-trip cases.
- Operability polish — metrics surfaces, more ergonomic tracing spans, statement-cache observability.
The authoritative list is MILESTONES.md. If something on this page
disagrees with the repo’s MILESTONES.md, trust the repo.
How to follow along
- The repo’s MILESTONES.md and CHANGELOG.md track shipped work.
- GitHub issues and milestones map roughly to the same scheme.
- Pull requests are tagged with the milestone they belong to where applicable.
Where to read next
- Why babar — the high-level pitch.
- Comparisons — honest trade-offs vs other Rust Postgres clients.
Postgres API from Scratch
This tutorial walks through a small Postgres-backed HTTP API built with:
- Tokio for async execution
- Axum for HTTP routing
- babar for typed Postgres access
It assumes you already know basic Rust syntax, structs, and Result, but have
not spent much time with Tokio yet.
We will start from an empty directory, bootstrap a tiny server, then grow it
into a coherent one-resource JSON API for tracking elephant herds and their grazing grounds.
1. Before we write code
What we are building
By the end of this walkthrough you will have:
- a new Rust binary project
- an Axum server listening on 127.0.0.1:3000
- a shared babar::Pool stored in application state
- startup code that creates a herds table if it does not exist yet
- a GET /healthz endpoint so you can prove the service is alive
- a POST /herds endpoint to register a herd
- a GET /herds endpoint to list herds
- a GET /herds/:id endpoint to fetch one herd
We will build that in two stages:
- get the runtime, router, and database bootstrap in place
- add JSON handlers on top of that working foundation
Prerequisites
You need:
- Rust stable and cargo
- a running PostgreSQL server
- a shell where you can set environment variables
- basic Rust familiarity
Helpful but optional:
- psql so you can inspect the database manually
- the companion examples in this repository: crates/core/examples/quickstart.rs, crates/core/examples/todo_cli.rs, crates/core/examples/axum_service.rs
Why these tools
- Tokio runs async Rust code and handles network I/O.
- Axum gives us routing, request extraction, and JSON responses.
- babar gives us a typed Postgres client and pool that fit naturally into a Tokio application.
The main service path uses a Pool, not a single Session, because a web
server may handle many requests at once. Each request can borrow a database
connection from the pool when it needs one.
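That borrow-and-return rhythm is easy to see in a toy checkout pool. This is a deliberately simplified, synchronous sketch; babar’s real Pool is async and adds health checks, idle timeouts, and lifetime caps.

```rust
use std::sync::Mutex;

// A placeholder "connection"; in the real service this would be a
// pooled Postgres session.
struct Conn(u32);

struct ToyPool {
    idle: Mutex<Vec<Conn>>,
}

impl ToyPool {
    fn new(size: u32) -> Self {
        Self { idle: Mutex::new((0..size).map(Conn).collect()) }
    }
    // Borrow a connection if one is idle.
    fn acquire(&self) -> Option<Conn> {
        self.idle.lock().unwrap().pop()
    }
    // Hand the connection back so another request can reuse it.
    fn release(&self, conn: Conn) {
        self.idle.lock().unwrap().push(conn);
    }
}

fn main() {
    let pool = ToyPool::new(2);
    let a = pool.acquire().unwrap();
    let _b = pool.acquire().unwrap();
    assert!(pool.acquire().is_none()); // both connections are checked out
    pool.release(a);
    assert!(pool.acquire().is_some()); // the returned one is reusable
}
```

In the real service the release step is automatic: the pooled handle goes back when the request handler drops it.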
2. Start from an empty directory
Create a new project:
cargo init herd-api --bin
cd herd-api
Add the dependencies we need for the bootstrap and the API:
cargo add axum
cargo add tokio --features macros,rt-multi-thread,net
cargo add babar
cargo add serde --features derive
cargo add serde_json
cargo add tracing
cargo add tracing-subscriber --features fmt,env-filter
Why add serde now even though the first endpoint is plain text? Because the
next sections accept and return JSON, so it is simpler to install the full set
once.
Configuration: keep it boring and explicit
For a beginner tutorial, environment variables are a good fit:
- they keep secrets like passwords out of source code
- they work the same in local dev, CI, and containers
- they avoid adding a config framework before we need one
Export these values before running the server:
export PGHOST=127.0.0.1
export PGPORT=5432
export PGUSER=postgres
export PGPASSWORD=postgres
export PGDATABASE=postgres
export API_ADDR=127.0.0.1:3000
If your local Postgres uses different values, change them here. PGPASSWORD is
the one most likely to differ.
We will also give the Rust code sensible defaults for local development. That keeps the first run easy while still making the connection settings obvious.
3. Tokio in one mental model
If you are new to Tokio, this is the shortest useful mental model:
- an async fn does not run by itself; it returns a value called a future
- a runtime polls that future and wakes it back up when it can make progress
- Tokio is the runtime that does that work for us
Why does that matter here?
- Axum waits for incoming HTTP requests
- babar waits for Postgres network reads and writes
- Tokio lets one process manage all of that waiting efficiently
When an async function hits .await, it is basically saying: “I cannot finish
this step right now; please come back when the socket is ready.” Tokio can then
run other work instead of blocking the whole thread.
That is why the tutorial uses:
#[tokio::main]
async fn main() { /* ... */ }
#[tokio::main] creates a Tokio runtime for the program and lets main be
async, so we can:
- create the Postgres pool with .await
- run startup SQL with .await
- start the Axum server with .await
You do not need to know every Tokio API before writing a web service. For this tutorial, the important rule is simpler: if something touches the network, it will usually be async, and Tokio is what makes that async code run.
4. Build the bootstrap server
Replace src/main.rs with this:
use std::net::SocketAddr;
use axum::routing::get;
use axum::Router;
use babar::query::Command;
use babar::{Config, Pool, PoolConfig};
#[derive(Clone)]
struct AppState {
pool: Pool,
}
struct Settings {
api_addr: SocketAddr,
pg_host: String,
pg_port: u16,
pg_user: String,
pg_password: String,
pg_database: String,
}
impl Settings {
fn from_env() -> Result<Self, Box<dyn std::error::Error + Send + Sync>> {
let api_addr = std::env::var("API_ADDR")
.unwrap_or_else(|_| "127.0.0.1:3000".into())
.parse()?;
let pg_host = std::env::var("PGHOST").unwrap_or_else(|_| "127.0.0.1".into());
let pg_port = std::env::var("PGPORT")
.ok()
.and_then(|value| value.parse().ok())
.unwrap_or(5432);
let pg_user = std::env::var("PGUSER").unwrap_or_else(|_| "postgres".into());
let pg_password =
std::env::var("PGPASSWORD").unwrap_or_else(|_| "postgres".into());
let pg_database =
std::env::var("PGDATABASE").unwrap_or_else(|_| "postgres".into());
Ok(Self {
api_addr,
pg_host,
pg_port,
pg_user,
pg_password,
pg_database,
})
}
fn database_config(&self) -> Config {
Config::new(
&self.pg_host,
self.pg_port,
&self.pg_user,
&self.pg_database,
)
.password(&self.pg_password)
.application_name("herd-api")
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
tracing_subscriber::fmt()
.with_env_filter(
std::env::var("RUST_LOG")
.unwrap_or_else(|_| "herd_api=info,babar=info".into()),
)
.with_target(false)
.init();
let settings = Settings::from_env()?;
let pool = Pool::new(settings.database_config(), PoolConfig::new().max_size(8)).await?;
initialize_schema(&pool).await?;
let app = Router::new()
.route("/healthz", get(healthz))
.with_state(AppState { pool });
tracing::info!("listening on http://{}", settings.api_addr);
let listener = tokio::net::TcpListener::bind(settings.api_addr).await?;
axum::serve(listener, app).await?;
Ok(())
}
async fn initialize_schema(
pool: &Pool,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let conn = pool.acquire().await?;
let create_herds: Command<()> = Command::raw(
"CREATE TABLE IF NOT EXISTS herds (
id int8 GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name text NOT NULL,
grazing_ground text NOT NULL
)",
(),
);
conn.execute(&create_herds, ()).await?;
Ok(())
}
async fn healthz() -> &'static str {
"ok"
}
What this code is doing
There are a few important ideas packed into a small file.
Settings::from_env
This function keeps configuration loading in one place. That pays off quickly:
- main stays readable
- every environment variable has one obvious home
- later, if you want stricter validation, you can add it here
#[tokio::main]
This is the Tokio bridge from regular Rust into async Rust. Without it, none of
the .await calls in main would compile.
Pool::new(...)
This is the first real babar setup step. A pool gives the application a small
set of reusable Postgres connections. In a web service that is almost always a
better starting point than passing around one shared connection handle.
In the API section, each request handler will:
- borrow the pool from AppState
- acquire() a connection
- run a typed Command or Query
- return the connection to the pool automatically when the request finishes
initialize_schema
This tutorial keeps the schema story deliberately simple at first:
- on startup, create the one table we need
- keep the SQL visible
- avoid introducing migrations before the API itself exists
That is good enough for a beginner walkthrough and a single table. Once the app
starts growing, the next step is to move this into babar migrations so
schema changes are tracked explicitly instead of living inside main.rs.
Command<()>
Even though this SQL does not take parameters, we still use a babar
Command. The () means “this command expects no input values.” In the next
section we will keep using typed Command and Query values for herd inserts
and herd lookups.
5. Run the bootstrap
Start the server:
cargo run
You should see a log line like:
listening on http://127.0.0.1:3000
In another shell, confirm the server responds:
curl http://127.0.0.1:3000/healthz
Expected response:
ok
If you have psql, you can also confirm that startup initialization created the
table:
psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" -c '\d herds'
If the server starts and /healthz returns ok, your bootstrap is working.
6. Grow the bootstrap into a herd registry API
Now replace src/main.rs with this fuller version:
use std::net::SocketAddr;
use axum::extract::{Path, State};
use axum::http::StatusCode;
use axum::routing::get;
use axum::{Json, Router};
use babar::codec::{int8, text};
use babar::query::{Command, Query};
use babar::{Config, Pool, PoolConfig};
use serde::{Deserialize, Serialize};
#[derive(Clone)]
struct AppState {
pool: Pool,
}
type HttpError = (StatusCode, String);
#[derive(Debug, Deserialize)]
struct CreateHerd {
name: String,
grazing_ground: String,
}
#[derive(Debug, Serialize)]
struct Herd {
id: i64,
name: String,
grazing_ground: String,
}
struct Settings {
api_addr: SocketAddr,
pg_host: String,
pg_port: u16,
pg_user: String,
pg_password: String,
pg_database: String,
}
impl Settings {
fn from_env() -> Result<Self, Box<dyn std::error::Error + Send + Sync>> {
let api_addr = std::env::var("API_ADDR")
.unwrap_or_else(|_| "127.0.0.1:3000".into())
.parse()?;
let pg_host = std::env::var("PGHOST").unwrap_or_else(|_| "127.0.0.1".into());
let pg_port = std::env::var("PGPORT")
.ok()
.and_then(|value| value.parse().ok())
.unwrap_or(5432);
let pg_user = std::env::var("PGUSER").unwrap_or_else(|_| "postgres".into());
let pg_password =
std::env::var("PGPASSWORD").unwrap_or_else(|_| "postgres".into());
let pg_database =
std::env::var("PGDATABASE").unwrap_or_else(|_| "postgres".into());
Ok(Self {
api_addr,
pg_host,
pg_port,
pg_user,
pg_password,
pg_database,
})
}
fn database_config(&self) -> Config {
Config::new(
&self.pg_host,
self.pg_port,
&self.pg_user,
&self.pg_database,
)
.password(&self.pg_password)
.application_name("herd-api")
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
tracing_subscriber::fmt()
.with_env_filter(
std::env::var("RUST_LOG")
.unwrap_or_else(|_| "herd_api=info,babar=info".into()),
)
.with_target(false)
.init();
let settings = Settings::from_env()?;
let pool = Pool::new(settings.database_config(), PoolConfig::new().max_size(8)).await?;
initialize_schema(&pool).await?;
let app = Router::new()
.route("/healthz", get(healthz))
.route("/herds", get(list_herds).post(create_herd))
.route("/herds/:id", get(get_herd))
.with_state(AppState { pool });
tracing::info!("listening on http://{}", settings.api_addr);
let listener = tokio::net::TcpListener::bind(settings.api_addr).await?;
axum::serve(listener, app).await?;
Ok(())
}
async fn initialize_schema(
pool: &Pool,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let conn = pool.acquire().await?;
let create_herds: Command<()> = Command::raw(
"CREATE TABLE IF NOT EXISTS herds (
id int8 GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name text NOT NULL,
grazing_ground text NOT NULL
)",
(),
);
conn.execute(&create_herds, ()).await?;
Ok(())
}
async fn healthz() -> &'static str {
"ok"
}
async fn create_herd(
State(state): State<AppState>,
Json(payload): Json<CreateHerd>,
) -> Result<(StatusCode, Json<Herd>), HttpError> {
let conn = state.pool.acquire().await.map_err(pool_error_http)?;
let insert_herd: Command<(String, String)> = Command::raw(
"INSERT INTO herds (name, grazing_ground) VALUES ($1, $2)",
(text, text),
);
conn.execute(&insert_herd, (payload.name.clone(), payload.grazing_ground.clone()))
.await
.map_err(db_error)?;
let current_herd_id: Query<(), (i64,)> = Query::raw(
"SELECT currval(pg_get_serial_sequence('herds', 'id'))",
(),
(int8,),
);
let herd_id = conn
.query(&current_herd_id, ())
.await
.map_err(db_error)?
.into_iter()
.next()
.map(|(id,)| id)
.ok_or_else(|| {
(
StatusCode::INTERNAL_SERVER_ERROR,
"insert succeeded but no id was returned".to_string(),
)
})?;
let select_herd: Query<(i64,), (i64, String, String)> = Query::raw(
"SELECT id, name, grazing_ground FROM herds WHERE id = $1",
(int8,),
(int8, text, text),
);
let herd = conn
.query(&select_herd, (herd_id,))
.await
.map_err(db_error)?
.into_iter()
.next()
.map(herd_from_row)
.ok_or_else(|| {
(
StatusCode::INTERNAL_SERVER_ERROR,
"inserted herd could not be loaded back".to_string(),
)
})?;
Ok((StatusCode::CREATED, Json(herd)))
}
async fn list_herds(State(state): State<AppState>) -> Result<Json<Vec<Herd>>, HttpError> {
let conn = state.pool.acquire().await.map_err(pool_error_http)?;
let list_herds: Query<(), (i64, String, String)> = Query::raw(
"SELECT id, name, grazing_ground FROM herds ORDER BY id",
(),
(int8, text, text),
);
let herds = conn
.query(&list_herds, ())
.await
.map_err(db_error)?
.into_iter()
.map(herd_from_row)
.collect();
Ok(Json(herds))
}
async fn get_herd(
State(state): State<AppState>,
Path(id): Path<i64>,
) -> Result<Json<Herd>, HttpError> {
let conn = state.pool.acquire().await.map_err(pool_error_http)?;
let get_herd: Query<(i64,), (i64, String, String)> = Query::raw(
"SELECT id, name, grazing_ground FROM herds WHERE id = $1",
(int8,),
(int8, text, text),
);
let herd = conn
.query(&get_herd, (id,))
.await
.map_err(db_error)?
.into_iter()
.next()
.map(herd_from_row)
.ok_or_else(|| (StatusCode::NOT_FOUND, format!("herd {id} not found")))?;
Ok(Json(herd))
}
fn herd_from_row((id, name, grazing_ground): (i64, String, String)) -> Herd {
Herd { id, name, grazing_ground }
}
#[allow(clippy::needless_pass_by_value)]
fn pool_error_http(err: babar::PoolError) -> HttpError {
(StatusCode::SERVICE_UNAVAILABLE, err.to_string())
}
#[allow(clippy::needless_pass_by_value)]
fn db_error(err: babar::Error) -> HttpError {
(StatusCode::INTERNAL_SERVER_ERROR, err.to_string())
}
This is still a small program, but now it has the three things most API tutorials need:
- request models for incoming JSON
- response models for outgoing JSON
- handlers that turn HTTP input into typed database operations
7. Router, state, and handler mental model
The router is the table of contents for your service:
#![allow(unused)]
fn main() {
let app = Router::new()
.route("/healthz", get(healthz))
.route("/herds", get(list_herds).post(create_herd))
.route("/herds/:id", get(get_herd))
.with_state(AppState { pool });
}
Read it from top to bottom:
- GET /healthz calls healthz
- GET /herds calls list_herds
- POST /herds calls create_herd
- GET /herds/:id calls get_herd
AppState is how shared dependencies reach the handlers:
#![allow(unused)]
fn main() {
#[derive(Clone)]
struct AppState {
pool: Pool,
}
}
Because Pool is stored in state, handlers do not open brand-new database
connections themselves. They borrow the shared pool, acquire one connection for
the request, and hand it back automatically when the handler returns.
That keeps the handler story simple:
- Axum matches the route
- Axum extracts the inputs for that route
- the handler runs a typed database operation
- the handler returns JSON or an HTTP error
8. Request and response models
The two JSON-facing structs are intentionally boring:
#![allow(unused)]
fn main() {
#[derive(Debug, Deserialize)]
struct CreateHerd {
name: String,
grazing_ground: String,
}
#[derive(Debug, Serialize)]
struct Herd {
id: i64,
name: String,
grazing_ground: String,
}
}
CreateHerd is the shape we accept from clients. It does not have an id
because Postgres creates that for us.
Herd is the shape we send back. It includes the generated id, so clients can
fetch the herd again later.
This separation is useful even in a tiny tutorial:
- request models describe what the client must send
- response models describe what the server promises to return
9. How Axum extracts input
Axum handlers declare their inputs directly in the function signature.
JSON body extraction
create_herd uses:
#![allow(unused)]
fn main() {
Json(payload): Json<CreateHerd>
}
That means:
- Axum reads the request body
- Axum parses it as JSON
- Axum deserializes it into CreateHerd
If the body is missing required fields or is not valid JSON, Axum returns an error response before your handler logic runs.
Path extraction
get_herd uses:
#![allow(unused)]
fn main() {
Path(id): Path<i64>
}
That means the :id portion of /herds/:id is parsed as an i64. If the
client sends /herds/abc, Axum rejects it because abc cannot become an
integer.
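The rejection falls out of ordinary integer parsing. Roughly what the extractor does with the path segment:

```rust
fn main() {
    // A valid segment parses into the declared i64.
    assert!("42".parse::<i64>().is_ok());
    // "abc" cannot become an i64, so the extractor fails and Axum
    // answers with a client error before the handler runs.
    assert!("abc".parse::<i64>().is_err());
}
```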
State extraction
Both handlers use:
#![allow(unused)]
fn main() {
State(state): State<AppState>
}
That is how they reach the shared Pool.
10. Typed Command and Query values
The database layer is small, but it is already doing something important: turning SQL into typed Rust values.
Create uses a typed Command
The insert step is:
#![allow(unused)]
fn main() {
let insert_herd: Command<(String, String)> = Command::raw(
"INSERT INTO herds (name, grazing_ground) VALUES ($1, $2)",
(text, text),
);
}
Read that type literally:
- this is a Command
- it takes a (String, String) parameter tuple
- those two Rust values are encoded with the text codec
When the handler executes it, the payload values must match that shape:
#![allow(unused)]
fn main() {
conn.execute(&insert_herd, (payload.name.clone(), payload.grazing_ground.clone()))
.await?;
}
That is the beginner-friendly mental model for Command: write something, but
do not expect rows back.
Create then uses a small Query to load the inserted row
Because id is generated by the database, the handler asks Postgres for the id
that was just created on this same connection:
#![allow(unused)]
fn main() {
let current_herd_id: Query<(), (i64,)> = Query::raw(
"SELECT currval(pg_get_serial_sequence('herds', 'id'))",
(),
(int8,),
);
}
Then it runs another query to fetch the full herd:
#![allow(unused)]
fn main() {
let select_herd: Query<(i64,), (i64, String, String)> = Query::raw(
"SELECT id, name, grazing_ground FROM herds WHERE id = $1",
(int8,),
(int8, text, text),
);
}
This is a helpful first example of Query:
- the first type parameter is the input tuple
- the second type parameter is the row tuple we expect back
List uses a typed Query
The list endpoint does not need parameters, so its input type is ():
#![allow(unused)]
fn main() {
let list_herds: Query<(), (i64, String, String)> = Query::raw(
"SELECT id, name, grazing_ground FROM herds ORDER BY id",
(),
(int8, text, text),
);
}
That says: “no input values, and every row should decode as
(i64, String, String).”
Get-by-id uses a typed Query
The single-herd lookup takes one i64 id and expects one decoded row shape:
#![allow(unused)]
fn main() {
let get_herd: Query<(i64,), (i64, String, String)> = Query::raw(
"SELECT id, name, grazing_ground FROM herds WHERE id = $1",
(int8,),
(int8, text, text),
);
}
Notice the single-element tuple syntax:
- (i64,) for the Rust type
- (int8,) for the codec tuple
The trailing comma matters because Rust distinguishes (i64,) from plain i64.
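If the distinction is new, a two-line example makes it concrete:

```rust
fn main() {
    let row: (i64,) = (7,); // the comma makes this a one-element tuple
    let plain: i64 = 7;     // without it, parentheses are just grouping
    assert_eq!(row.0, plain); // tuple fields are accessed with .0
}
```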
11. How handlers map database results to HTTP responses
The handlers stay small because each one follows the same shape.
Create
create_herd:
- acquires a pooled connection
- executes the typed insert command
- queries the generated id
- queries the inserted row
- returns 201 Created plus Json<Herd>
The return type makes that explicit:
#![allow(unused)]
fn main() {
Result<(StatusCode, Json<Herd>), HttpError>
}
List
list_herds runs one query, maps each row tuple into a Herd, collects them
into a Vec<Herd>, and returns:
#![allow(unused)]
fn main() {
Result<Json<Vec<Herd>>, HttpError>
}
Get one herd
get_herd runs the lookup query and then checks whether any row came back:
#![allow(unused)]
fn main() {
.into_iter()
.next()
.map(herd_from_row)
.ok_or_else(|| (StatusCode::NOT_FOUND, format!("herd {id} not found")))?;
}
That is the HTTP mapping in one place:
- row found -> 200 OK with JSON
- no row found -> 404 Not Found
Database failures map to 500 Internal Server Error, and pool acquisition
failures map to 503 Service Unavailable.
12. Try the finished API
Start the server:
cargo run
The example responses below assume a fresh herds table. If you already ran the
tutorial once against the same database, the returned id values may be higher
and GET /herds may include earlier rows too.
Create a herd:
curl -X POST http://127.0.0.1:3000/herds \
-H 'content-type: application/json' \
-d '{"name":"Royal Herd","grazing_ground":"Great Forest Meadow"}'
Expected response:
{"id":1,"name":"Royal Herd","grazing_ground":"Great Forest Meadow"}
List herds:
curl http://127.0.0.1:3000/herds
Expected response:
[{"id":1,"name":"Royal Herd","grazing_ground":"Great Forest Meadow"}]
Fetch one herd:
curl http://127.0.0.1:3000/herds/1
Expected response:
{"id":1,"name":"Royal Herd","grazing_ground":"Great Forest Meadow"}
Ask for a herd that does not exist:
curl http://127.0.0.1:3000/herds/999
Expected response body:
herd 999 not found
13. Add observability before production
A small async service still needs observability. Once a request can cross Axum, Tokio, and Postgres, a plain error string stops being enough. Good logs and traces help you answer three practical questions quickly:
- did the service start with the settings you expected?
- which request is running, and how long did it take?
- did the slow or failing step happen in HTTP handling or in Postgres?
That matters even more in async code, because .await lets Tokio pause one task
while other work runs. Observability gives you a breadcrumb trail back through
those pauses.
Add request, startup, and handler tracing
We already initialized tracing in main, which is the right place to do it.
Set up the subscriber before loading settings, opening the pool, or running
startup SQL so those steps emit events too.
Add one more dependency so Axum creates a request span for every HTTP call:
cargo add tower-http --features trace
The changed pieces in the same main.rs look like this:
use std::net::SocketAddr;
use axum::extract::{MatchedPath, Path, State};
use axum::http::{Request, StatusCode};
use axum::routing::get;
use axum::{Json, Router};
use babar::codec::{int8, text};
use babar::query::{Command, Query};
use babar::{Config, Pool, PoolConfig};
use serde::{Deserialize, Serialize};
use tower_http::trace::TraceLayer;
use tracing::{info, instrument};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
tracing_subscriber::fmt()
.with_env_filter(
std::env::var("RUST_LOG")
.unwrap_or_else(|_| "tower_http=info,herd_api=info,babar=info".into()),
)
.with_target(false)
.compact()
.init();
let settings = Settings::from_env()?;
info!(
api_addr = %settings.api_addr,
pg_host = %settings.pg_host,
pg_database = %settings.pg_database,
"starting herd-api",
);
let pool = Pool::new(settings.database_config(), PoolConfig::new().max_size(8)).await?;
initialize_schema(&pool).await?;
info!("schema ready");
let app = Router::new()
.route("/healthz", get(healthz))
.route("/herds", get(list_herds).post(create_herd))
.route("/herds/:id", get(get_herd))
.with_state(AppState { pool })
.layer(
TraceLayer::new_for_http()
.make_span_with(|request: &Request<_>| {
let matched_path = request
.extensions()
.get::<MatchedPath>()
.map(MatchedPath::as_str)
.unwrap_or("<unmatched>");
tracing::info_span!(
"http.request",
method = %request.method(),
matched_path,
)
})
.on_response(|response, latency, _span| {
info!(
status = response.status().as_u16(),
latency_ms = latency.as_millis() as u64,
"request finished",
);
}),
);
info!("listening on http://{}", settings.api_addr);
let listener = tokio::net::TcpListener::bind(settings.api_addr).await?;
axum::serve(listener, app).await?;
Ok(())
}
#[instrument(name = "startup.initialize_schema", skip(pool))]
async fn initialize_schema(
pool: &Pool,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
info!("ensuring herds table exists");
let conn = pool.acquire().await?;
let create_herds: Command<()> = Command::raw(
"CREATE TABLE IF NOT EXISTS herds (
id int8 GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name text NOT NULL,
grazing_ground text NOT NULL
)",
(),
);
conn.execute(&create_herds, ()).await?;
Ok(())
}
#[instrument(name = "handler.create_herd", skip(state, payload))]
async fn create_herd(
State(state): State<AppState>,
Json(payload): Json<CreateHerd>,
) -> Result<(StatusCode, Json<Herd>), HttpError> {
info!(
herd.name = %payload.name,
herd.grazing_ground = %payload.grazing_ground,
"registering herd",
);
let conn = state.pool.acquire().await.map_err(pool_error_http)?;
// ... insert + select exactly as before ...
info!(herd.id = herd_id, "herd inserted");
Ok((StatusCode::CREATED, Json(herd)))
}
#[instrument(name = "handler.list_herds", skip(state))]
async fn list_herds(State(state): State<AppState>) -> Result<Json<Vec<Herd>>, HttpError> {
let conn = state.pool.acquire().await.map_err(pool_error_http)?;
// ... query exactly as before ...
Ok(Json(herds))
}
#[instrument(name = "handler.get_herd", skip(state))]
async fn get_herd(
State(state): State<AppState>,
Path(id): Path<i64>,
) -> Result<Json<Herd>, HttpError> {
let conn = state.pool.acquire().await.map_err(pool_error_http)?;
// ... query exactly as before ...
Ok(Json(herd))
}
The important idea is not “log everything.” It is “log the boundaries”:
- startup: selected API address, Postgres host/database, and whether schema initialization finished
- incoming requests: method, matched route, status code, and latency
- handler-level facts: herd ids and herd names when they help explain what happened
- database work: operation spans from babar plus safe identifiers from your own code
Avoid logging secrets like PGPASSWORD, and be careful about dumping full request bodies, since they may contain private data.
What babar gives you for database visibility
babar already emits tracing spans for its own database work, including
db.connect, db.prepare, db.execute, and db.transaction. That means the
request span from Axum can contain the lower-level database spans automatically.
If POST /herds slows down, you can tell whether the time went into request
routing, pool acquisition, or SQL execution instead of guessing.
See the traces locally
Run the service with an explicit log filter:
RUST_LOG=tower_http=info,herd_api=info,babar=info cargo run
Then create a herd from another shell:
curl -X POST http://127.0.0.1:3000/herds \
-H 'content-type: application/json' \
-d '{"name":"Royal Herd","grazing_ground":"Great Forest Meadow"}'
You should see output shaped roughly like this:
INFO starting herd-api api_addr=127.0.0.1:3000 pg_host=127.0.0.1 pg_database=postgres
INFO startup.initialize_schema: ensuring herds table exists
INFO schema ready
INFO listening on http://127.0.0.1:3000
INFO http.request{method=POST matched_path=/herds}: handler.create_herd: registering herd herd.name=Royal Herd herd.grazing_ground=Great Forest Meadow
INFO http.request{method=POST matched_path=/herds}: db.execute db.statement="INSERT INTO herds (name, grazing_ground) VALUES ($1, $2)"
INFO http.request{method=POST matched_path=/herds}: request finished status=201 latency_ms=4
The exact formatting depends on your subscriber, but the shape is the useful part: one request span, nested handler activity, and database spans beneath it.
Forward the same telemetry to Dial9 later
For local development, plain text logs to stdout are enough. In a deployed service, keep the same span names and fields, then add an exporter or collector layer that forwards them to your observability backend. If your team uses Dial9, think of it as the place those traces and logs land, not as something that changes how you instrument the herd registry itself.
A good production mental model is:
- emit structured tracing events in the service
- keep request, handler, and database spans correlated
- attach deployment metadata like service name, environment, and version
- ship that telemetry to Dial9 through your normal OpenTelemetry or structured log pipeline
That way the same instrumentation helps you both on cargo run and in a real
deployment.
14. Where to go next
At this point you have a complete beginner-sized flow:
- Axum receives HTTP input
- extractors turn that input into Rust values
- babar encodes typed parameters into SQL
- babar decodes typed rows back into Rust values
- handlers map those values into HTTP responses
When you are ready to harden it, the next practical steps are:
- move startup schema creation into babar migrations
- add validation rules for empty herd names or grazing grounds
- add update and delete endpoints once create/list/get feel comfortable
Companion sources
- crates/core/examples/quickstart.rs — the smallest typed database flow
- crates/core/examples/todo_cli.rs — CRUD-shaped babar usage without HTTP
- crates/core/examples/axum_service.rs — the closest full HTTP + Postgres example in the repository