
INTEGRATION · CLICKHOUSE

Real-time product analytics on ClickHouse.

Klaritics on ClickHouse is the fastest configuration we ship. ClickHouse's vectorized columnar engine and MergeTree architecture deliver sub-second p95 latency at high concurrency — ideal for real-time dashboards, customer-facing embedded analytics, and event volumes that would buckle other warehouses.

- **Status:** ✅ GA · Fastest p50
- **Latency (typical):** <500 ms p50, <2 s p95
- **Min version:** ClickHouse 22.8+
- **Compression:** 10–30× columnar
- **Deployment:** Self-hosted (yours or ours)

How Klaritics queries ClickHouse

Klaritics connects to your ClickHouse cluster (self-managed, ClickHouse Cloud, or Altinity Cloud) via a read-only user. Queries from the UI compile into ClickHouse SQL, taking advantage of features other warehouses don't have:

- **Vectorized execution** — operates on entire blocks of column values at once
- **Sparse primary indexing** — instead of indexing every row, ClickHouse indexes ranges, making sub-second range scans practical
- **Aggressive pruning** — partitions, primary keys, and skip indexes combine to read only the data the query touches
- **Materialized views (`AggregatingMergeTree`)** — pre-aggregate retention and funnel data in real time as events stream in
- **Lightweight DELETE** — fast soft deletes for GDPR / right-to-be-forgotten workflows
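As a sketch of the `AggregatingMergeTree` materialized-view pattern above — the `analytics.events` source table matches the rest of this guide, but the daily-active-user rollup, its column types (`UInt64` user IDs), and table names are illustrative:

```sql
-- Illustrative rollup: daily unique users per event, maintained as events arrive.
CREATE TABLE analytics.daily_event_users
(
    day        Date,
    event_name LowCardinality(String),
    users      AggregateFunction(uniqExact, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY (day, event_name);

-- The materialized view folds each inserted block into the rollup table.
CREATE MATERIALIZED VIEW analytics.daily_event_users_mv
TO analytics.daily_event_users AS
SELECT
    toDate(event_time)      AS day,
    event_name,
    uniqExactState(user_id) AS users
FROM analytics.events
GROUP BY day, event_name;

-- Read with the -Merge combinator to finalize the aggregate states:
SELECT day, event_name, uniqExactMerge(users) AS dau
FROM analytics.daily_event_users
GROUP BY day, event_name;
```

Dashboards then scan the small rollup table instead of the raw event log, which is what keeps pre-aggregated queries sub-second at high concurrency.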

For high-concurrency dashboards, Klaritics' query planner generates SQL that's tuned for ClickHouse's strengths — flat column access, push-down predicates, and pre-aggregated materialized views where appropriate.

Why teams pair Klaritics with ClickHouse


Storage economics that are hard to beat

ClickHouse's columnar compression typically achieves 10–30× compression on event data — meaning the same event archive that's 10TB on Postgres is often under 1TB on ClickHouse, with much faster scans.
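You can check the ratio on your own cluster from the `system.columns` table (the `analytics` database name is the one used throughout this guide):

```sql
-- Per-table compression ratio for the analytics database.
SELECT
    table,
    formatReadableSize(sum(data_compressed_bytes))   AS compressed,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed,
    round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 1) AS ratio
FROM system.columns
WHERE database = 'analytics'
GROUP BY table
ORDER BY ratio DESC;
```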


Customer-facing analytics done right

If you're embedding product analytics inside a customer-facing dashboard (think: a SaaS giving its users their own analytics view), ClickHouse is purpose-built for this. Hundreds of concurrent dashboard queries returning in under a second is its happy path.

Where it shines (and where it does not)

Where it shines

  • Real-time and near-real-time product analytics (sub-minute freshness)
  • High-concurrency dashboards (100s+ of concurrent queries)
  • Customer-facing embedded analytics
  • Very large event volumes (10B+ rows) where storage economics matter
  • Use cases requiring deep retention (compression makes 5+ year retention affordable)

Where it does not

  • Frequent updates and deletes (ClickHouse's mutations are async and heavyweight). Best for append-mostly workloads.
  • Complex multi-table joins on huge tables (modern ClickHouse handles joins well, but a star-schema BI workload may still favor Snowflake).
  • Teams without operational experience running self-managed clusters — ClickHouse Cloud or a managed provider eases this. Klaritics works with both.
  • Very small data (<10GB) — a managed ClickHouse cluster has fixed minimum cost; Postgres might be more economical at this scale.

Getting started


Step 1 — Create a Klaritics read-only user

CREATE USER klaritics_reader IDENTIFIED BY 'use-a-secrets-manager';


Step 2 — Grant read access to your event database

GRANT SHOW, SELECT ON analytics.* TO klaritics_reader;


Step 3 — (Recommended) Settings profile to limit query resources

CREATE SETTINGS PROFILE klaritics_profile SETTINGS
  max_memory_usage = 10000000000,  -- 10 GB per query
  max_execution_time = 60,         -- 60 seconds
  max_rows_to_read = 5000000000,   -- cap rows scanned per query; tune to your volume
  readonly = 1;
ALTER USER klaritics_reader SETTINGS PROFILE 'klaritics_profile';


Step 4 — Connect from Klaritics admin UI

# Settings → Warehouses → Add ClickHouse
# host: clickhouse.internal
# port: 9440 (TLS) or 8443 (HTTPS)
# database: analytics
# user: klaritics_reader
# ssl: require
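Once connected, a quick smoke test confirms the credentials, the server version, and that the profile's read-only setting is active:

```sql
SELECT currentUser(), version(), getSetting('readonly');
```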

Sample query: 7-day signup-to-activation funnel

WITH step_1 AS (
  SELECT user_id, event_time AS step_1_time
  FROM analytics.events
  WHERE event_name = 'signup'
    AND event_time >= now() - INTERVAL 30 DAY
),
step_2 AS (
  SELECT s1.user_id,
         min(t.event_time) AS step_2_time
  FROM step_1 s1
  INNER JOIN analytics.events t
    ON t.user_id = s1.user_id
   AND t.event_name = 'activated'
   AND t.event_time BETWEEN s1.step_1_time
                        AND s1.step_1_time + INTERVAL 7 DAY
  GROUP BY s1.user_id
)
-- Count distinct users per CTE rather than across the LEFT JOIN: with
-- join_use_nulls = 0 (the default), unmatched rows get a zero/empty
-- user_id, which uniqExact would count as one extra "user".
SELECT
  (SELECT uniqExact(user_id) FROM step_1) AS signed_up,
  (SELECT uniqExact(user_id) FROM step_2) AS activated,
  activated / nullIf(signed_up, 0) AS conversion_rate;

💡 For production funnels with a fixed step list, ClickHouse's `windowFunnel()` function is dramatically faster than the self-join pattern. Klaritics' planner uses `windowFunnel` automatically when the funnel definition allows.
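A sketch of the same 7-day signup → activation funnel using `windowFunnel` — the window argument is in seconds for a `DateTime` column:

```sql
SELECT
    countIf(level >= 1) AS signed_up,
    countIf(level >= 2) AS activated,
    activated / nullIf(signed_up, 0) AS conversion_rate
FROM
(
    -- level = deepest funnel step each user completed, in order,
    -- within a 7-day window starting at their signup event.
    SELECT
        user_id,
        windowFunnel(7 * 24 * 3600)(
            event_time,
            event_name = 'signup',
            event_name = 'activated'
        ) AS level
    FROM analytics.events
    WHERE event_time >= now() - INTERVAL 30 DAY
    GROUP BY user_id
);
```

This is a single pass over the events table with one aggregation per user, instead of materializing and joining two intermediate result sets.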

Security and compliance

  • Read-only user with `SELECT` on the analytics database; nothing more.
  • TLS connections only (port 9440 native protocol or 8443 HTTPS).
  • Settings profiles to cap memory, execution time, and rows read per query — protecting your cluster from runaway dashboards.
  • Row-level security via ClickHouse's `CREATE ROW POLICY` works transparently with Klaritics — combine with Klaritics' team permissions.
  • Compatible with self-managed ClickHouse, ClickHouse Cloud, Altinity, Aiven, and DoubleCloud.
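For example, a per-tenant row policy restricts what `klaritics_reader` can see at the warehouse level — the `tenant_id` column and the `acme` literal here are illustrative:

```sql
CREATE ROW POLICY tenant_acme ON analytics.events
FOR SELECT USING tenant_id = 'acme' TO klaritics_reader;
```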


Stop moving data. Start analyzing it.

Connect your warehouse in 8 minutes. See your first funnel in under an hour.