Klaritics

INTEGRATION · POSTGRESQL

Product analytics on your Postgres database.

Klaritics queries PostgreSQL directly — perfect for teams whose event data lives alongside their application database, or teams just getting started with product analytics. Self-hosted, warehouse-native, and ready to grow with you.

  • Status: ✅ GA
  • Latency (typical): <2s p50, <5s p95
  • Min version: PostgreSQL 12+
  • Best for: Up to ~1TB events
  • Deployment: Self-hosted

How Klaritics queries PostgreSQL

Klaritics connects to PostgreSQL via a read-only role. Queries from the Klaritics UI compile into standard PostgreSQL (we use ANSI SQL features, plus Postgres-specific niceties like `LATERAL` joins for funnels and `generate_series()` for retention curves).
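As a sketch of the kind of SQL this compiles to, here is a weekly retention curve built with `generate_series()`. The `events(user_id, event_name, event_time)` table shape is assumed for illustration; the SQL Klaritics actually emits may differ.

```sql
-- Weekly retention: for each week offset, how many signup-cohort users
-- were active in that week? Assumes events(user_id, event_name, event_time).
WITH cohort AS (
  SELECT user_id, MIN(event_time)::date AS first_seen
  FROM events
  WHERE event_name = 'signup'
  GROUP BY user_id
),
weeks AS (
  SELECT generate_series(0, 7) AS week_offset
)
SELECT
  w.week_offset,
  COUNT(DISTINCT e.user_id) AS retained_users
FROM weeks w
CROSS JOIN cohort c
LEFT JOIN events e
  ON e.user_id = c.user_id
 AND e.event_time >= c.first_seen + (w.week_offset * INTERVAL '7 days')
 AND e.event_time <  c.first_seen + ((w.week_offset + 1) * INTERVAL '7 days')
GROUP BY w.week_offset
ORDER BY w.week_offset;
```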

For best performance, Klaritics will recommend:

- A dedicated read replica for analytics workloads (so Klaritics never competes with application transactions)
- Appropriate indexes (typically B-tree on `user_id`, `event_time`, and `event_name`)
- BRIN indexes for time-ordered event tables — they offer huge space savings vs. B-tree for analytical scans
- Materialized views for high-traffic dashboards, refreshed on a schedule
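The materialized-view pattern can be sketched like this, with an illustrative "daily active users" rollup (table and view names are assumptions, not part of the Klaritics schema):

```sql
-- A pre-aggregated daily-active-users rollup for a high-traffic dashboard.
CREATE MATERIALIZED VIEW daily_active_users AS
SELECT event_time::date        AS day,
       COUNT(DISTINCT user_id) AS dau
FROM events
GROUP BY 1;

-- A unique index lets REFRESH ... CONCURRENTLY run without blocking readers.
CREATE UNIQUE INDEX ON daily_active_users (day);

-- Run this on a schedule (e.g. cron or pg_cron):
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_active_users;
```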

Why teams pair Klaritics with PostgreSQL

01

You probably already have it

Postgres is one of the most widely deployed databases in the world. If you're not yet on a dedicated warehouse, Klaritics on Postgres is the fastest path to product analytics — no new infrastructure, no new vendor, no new bill.

02

Hybrid OLTP + OLAP works at moderate scale

PostgreSQL's parallel query execution and BRIN indexes, combined with read replicas and materialized views, can handle product analytics for most teams under 1TB of event data — which covers a surprising majority of companies.
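You can check whether a large aggregate over your event table actually uses parallel execution. The setting value below is illustrative; tune it for your hardware:

```sql
-- Encourage parallel scans for this session, then inspect the plan.
SET max_parallel_workers_per_gather = 4;

EXPLAIN (COSTS OFF)
SELECT event_name, COUNT(*)
FROM events
GROUP BY event_name;
-- Look for "Gather" and "Parallel Seq Scan" nodes in the output.
```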

03

Smooth upgrade path

When your event volume eventually outgrows Postgres, Klaritics' query layer is unchanged — you just point it at Snowflake, BigQuery, Redshift, or ClickHouse. Same dashboards, same funnels, same retention curves. No re-instrumentation, no re-modeling.

Where it shines (and where it does not)

Where it shines

  • Greenfield product analytics on existing Postgres infrastructure
  • Event volumes under 1TB, especially with proper indexing
  • Read-replica-based isolation for analytics workloads
  • Cost-conscious teams: zero added warehouse cost
  • Open-source comfort: no proprietary lock-in

Where it does not

  • Event volumes above 1–5TB — query latency degrades, especially on funnel and retention queries that require self-joins.
  • High-concurrency dashboards across many teams (Postgres' connection model is heavier than columnar warehouses).
  • Real-time analytics on continuously streaming events (Postgres can do it, but it is not its happy path).
  • If you're already at "the bill is too high and queries are too slow" — that's the migration trigger to a columnar warehouse.

Postgres is a starter warehouse. If you're past 5TB of events or 50+ concurrent dashboard users, plan a migration to Snowflake, BigQuery, Redshift, or ClickHouse. Klaritics makes that migration trivial — same UI, same dashboards, different backend.

Getting started


Step 1 — Create a Klaritics read-only role

CREATE ROLE klaritics_reader WITH LOGIN PASSWORD 'use-a-secrets-manager';


Step 2 — Grant read access

GRANT CONNECT ON DATABASE analytics TO klaritics_reader;
GRANT USAGE ON SCHEMA public TO klaritics_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO klaritics_reader;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO klaritics_reader;
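As an optional sanity check, you can confirm the role can read but not write. The `events` table name here is an assumption; substitute one of your own tables:

```sql
-- Expect: SELECT = t, INSERT = f for the read-only role.
SELECT has_table_privilege('klaritics_reader', 'events', 'SELECT') AS can_select,
       has_table_privilege('klaritics_reader', 'events', 'INSERT') AS can_insert;
```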


Step 3 — (Recommended) Add a BRIN index on the event time column

-- BRIN works extremely well on time-ordered event tables and is much smaller than B-tree.
CREATE INDEX events_time_brin_idx ON events USING BRIN (event_time);
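To see the space savings for yourself, you can temporarily build a B-tree on the same column and compare on-disk sizes (a throwaway check, not part of the setup):

```sql
-- Build a comparison B-tree, compare index sizes, then drop it.
CREATE INDEX events_time_btree_idx ON events (event_time);

SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relname IN ('events_time_brin_idx', 'events_time_btree_idx');
-- On time-ordered data the BRIN index is typically orders of magnitude smaller.

DROP INDEX events_time_btree_idx;
```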


Step 4 — (Optional but recommended) Run Klaritics against a read replica

-- Configure Klaritics to point at your replica's connection string, not your primary.
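A quick way to verify the connection string really points at a replica rather than the primary:

```sql
-- Returns true on a streaming replica, false on the primary.
SELECT pg_is_in_recovery();
```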


Step 5 — Connect from Klaritics admin UI

# Settings → Warehouses → Add PostgreSQL
# host: your-replica.internal
# port: 5432
# database: analytics
# user: klaritics_reader
# ssl: require
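Once connected, you can confirm the session is actually encrypted by querying the `pg_stat_ssl` view for your own backend:

```sql
-- ssl = t confirms the "ssl: require" setting took effect for this session.
SELECT ssl, version AS tls_version
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```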

Sample query: 7-day signup-to-activation funnel

WITH step_1 AS (
  SELECT user_id, event_time AS step_1_time
  FROM events
  WHERE event_name = 'signup'
    AND event_time >= NOW() - INTERVAL '30 days'
),
step_2 AS (
  SELECT s1.user_id,
         MIN(t.event_time) AS step_2_time
  FROM step_1 s1
  JOIN events t
    ON t.user_id = s1.user_id
   AND t.event_name = 'activated'
   AND t.event_time BETWEEN s1.step_1_time
                        AND s1.step_1_time + INTERVAL '7 days'
  GROUP BY s1.user_id
)
SELECT
  COUNT(DISTINCT s1.user_id) AS signed_up,
  COUNT(DISTINCT s2.user_id) AS activated,
  COUNT(DISTINCT s2.user_id)::FLOAT / NULLIF(COUNT(DISTINCT s1.user_id), 0) AS conversion_rate
FROM step_1 s1
LEFT JOIN step_2 s2 ON s1.user_id = s2.user_id;
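The same funnel can be written with a `LATERAL` join, the Postgres-specific form mentioned earlier. Note a small semantic difference in this sketch: it counts signup events rather than distinct users, so the two queries agree only when each user signs up once.

```sql
-- Per-signup funnel via LATERAL: for each signup, find the first
-- activation within the following 7 days.
SELECT
  COUNT(*)              AS signed_up,
  COUNT(a.step_2_time)  AS activated
FROM (
  SELECT user_id, event_time AS step_1_time
  FROM events
  WHERE event_name = 'signup'
    AND event_time >= NOW() - INTERVAL '30 days'
) s
LEFT JOIN LATERAL (
  SELECT MIN(t.event_time) AS step_2_time
  FROM events t
  WHERE t.user_id = s.user_id
    AND t.event_name = 'activated'
    AND t.event_time BETWEEN s.step_1_time
                         AND s.step_1_time + INTERVAL '7 days'
) a ON TRUE;
```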

Security and compliance

  • Klaritics needs only `CONNECT`, `USAGE` on schema, and `SELECT` on tables.
  • All connections require SSL/TLS — Klaritics rejects plaintext.
  • Recommended: connect to a read replica only — never the primary — both for performance isolation and as a defense-in-depth measure.
  • Postgres' role-based access works cleanly with Klaritics' RBAC: combine row-level security policies with Klaritics' team permissions.
  • Compatible with all major managed Postgres: AWS RDS, Aurora Postgres, Google Cloud SQL, Azure Database for PostgreSQL, Crunchy Bridge, Neon, Supabase.
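A row-level security policy scoped to the reader role can be sketched as follows. The table, column, and `app.tenant_id` setting are illustrative assumptions:

```sql
-- Restrict the read-only role to rows matching a per-session tenant setting.
ALTER TABLE events ENABLE ROW LEVEL SECURITY;

CREATE POLICY klaritics_tenant_scope ON events
  FOR SELECT
  TO klaritics_reader
  USING (tenant_id = current_setting('app.tenant_id')::int);
```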


Stop moving data. Start analyzing it.

Connect your warehouse in 8 minutes. See your first funnel in under an hour.