
INTEGRATION · SNOWFLAKE

Product analytics on your Snowflake warehouse.

Klaritics deploys inside your infrastructure and queries your Snowflake virtual warehouses directly. Snowflake's three-layer architecture (separated storage, compute, and cloud services) is exactly what makes warehouse-native analytics work: you can dedicate a virtual warehouse to Klaritics workloads without affecting your other queries.

Status: ✅ GA
Latency (typical): <2s p50, <5s p95
Min version: All current versions
Snowflake editions: Standard, Enterprise, Business Critical, Virtual Private Snowflake
Deployment: Self-hosted

How Klaritics queries Snowflake

Klaritics connects to your Snowflake account via a service user with read-only access to your event tables. When a user builds a funnel, retention curve, or cohort in the Klaritics UI, the query planner translates that into Snowflake SQL and submits it to a virtual warehouse you designate.

Because Snowflake separates storage from compute, you can dedicate a right-sized warehouse to Klaritics queries — say, an `XSMALL` for exploratory work that auto-suspends when idle, or a multi-cluster warehouse for heavier concurrent dashboards. Your other Snowflake workloads remain unaffected.
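For the heavier concurrent-dashboard case, a multi-cluster warehouse scales out automatically under load (a sketch with illustrative settings; multi-cluster warehouses require Enterprise edition or above, and `klaritics_dash_wh` is a hypothetical name):

```sql
-- Scales from 1 to 3 clusters as concurrent Klaritics queries queue up,
-- and suspends after 60 idle seconds so you only pay while querying
CREATE WAREHOUSE klaritics_dash_wh WITH
  WAREHOUSE_SIZE = 'SMALL'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  SCALING_POLICY = 'STANDARD'
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;
```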

Klaritics deployment (in your VPC) → service user auth → designated virtual warehouse → reads from your Snowflake tables → returns aggregated results to Klaritics.

Why teams pair Klaritics with Snowflake

01 · Compute isolation that actually works

Snowflake's virtual warehouse model means Klaritics queries never compete with your ELT or BI workloads. Spin up a dedicated warehouse, set auto-suspend to 60 seconds, and only pay for the seconds you query.

02 · Micro-partition pruning works in your favor

Snowflake stores data in automatically managed micro-partitions with rich metadata. Klaritics' generated SQL takes advantage of this: time-bounded funnels and retention queries scan only the partitions they need, not your full event history.
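As an illustration of a pruning-friendly shape (using the `analytics.events.tracks` table from the sample query on this page), a time-bounded predicate lets Snowflake skip every micro-partition whose `event_time` metadata falls outside the window:

```sql
-- Only micro-partitions whose event_time range overlaps the last 7 days
-- are scanned; the query profile reports "partitions scanned" vs
-- "partitions total" so you can verify pruning is happening.
SELECT user_id, event_time
FROM analytics.events.tracks
WHERE event_time >= DATEADD('day', -7, CURRENT_TIMESTAMP)
  AND event = 'signup';
```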

03 · Time-travel-safe development

Snowflake's Time Travel feature means you can run Klaritics against a historical snapshot of your event tables for backfills, A/B test re-analyses, or schema migrations — without copying anything.
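For example (timestamp and offset are illustrative, and `tracks_snapshot` is a hypothetical name; default Time Travel retention is 1 day, extendable up to 90 days on Enterprise edition and above):

```sql
-- Query the event table as it existed 24 hours ago
SELECT COUNT(*)
FROM analytics.events.tracks AT (OFFSET => -86400);

-- Or zero-copy clone a historical snapshot to point Klaritics at
CREATE TABLE analytics.events.tracks_snapshot
  CLONE analytics.events.tracks
  AT (TIMESTAMP => '2024-01-15 00:00:00'::TIMESTAMP_TZ);
```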

Where it shines (and where it does not)

Where it shines

  • Daily and weekly product reporting at any scale
  • Multi-touch funnel and retention analysis on event volumes from millions to trillions
  • Dedicated analytics warehouses without disrupting BI or ELT
  • Time-bounded queries (which is most product analytics) — micro-partition pruning makes these very fast

Where it does not

  • Sub-second p99 latency for high-concurrency user-facing dashboards (Snowflake's per-query startup overhead is usually 100–500ms minimum). For embedded customer-facing analytics, ClickHouse may be a better backend.
  • Very high-cardinality real-time event streams where new partitions land every second — Snowflake works, but micro-partition pruning has diminishing returns at extreme freshness.

Getting started

Step 1 — Create a Klaritics service user in Snowflake

CREATE ROLE klaritics_role;
CREATE USER klaritics_reader PASSWORD = '...' DEFAULT_ROLE = klaritics_role;
GRANT ROLE klaritics_role TO USER klaritics_reader;

Step 2 — Grant read access to your event tables

GRANT USAGE ON DATABASE analytics TO ROLE klaritics_role;
GRANT USAGE ON SCHEMA analytics.events TO ROLE klaritics_role;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.events TO ROLE klaritics_role;
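If new event tables land in this schema over time, you may also want a future grant so Klaritics picks them up without re-running this step (optional, and commonly paired with the `ON ALL TABLES` grant above):

```sql
GRANT SELECT ON FUTURE TABLES IN SCHEMA analytics.events TO ROLE klaritics_role;
```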

Step 3 — Designate a virtual warehouse for Klaritics

CREATE WAREHOUSE klaritics_wh WITH WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 60 AUTO_RESUME = TRUE INITIALLY_SUSPENDED = TRUE;
GRANT USAGE ON WAREHOUSE klaritics_wh TO ROLE klaritics_role;

Step 4 — Connect from your Klaritics deployment

In the Klaritics admin UI: Settings → Warehouses → Add Snowflake.
Provide the account URL, service user credentials, role, warehouse, and database/schema.
Klaritics validates the connection, introspects your schema, and you're querying.

Sample query: 7-day signup-to-activation funnel

WITH step_1 AS (
  SELECT user_id, event_time AS step_1_time
  FROM analytics.events.tracks
  WHERE event = 'signup'
    AND event_time >= DATEADD('day', -30, CURRENT_TIMESTAMP)
),
step_2 AS (
  SELECT s1.user_id,
         MIN(t.event_time) AS step_2_time
  FROM step_1 s1
  JOIN analytics.events.tracks t
    ON t.user_id = s1.user_id
   AND t.event = 'activated'
   AND t.event_time BETWEEN s1.step_1_time
                        AND DATEADD('day', 7, s1.step_1_time)
  GROUP BY s1.user_id
)
SELECT
  COUNT(DISTINCT s1.user_id) AS signed_up,
  COUNT(DISTINCT s2.user_id) AS activated,
  COUNT(DISTINCT s2.user_id)::FLOAT / NULLIF(COUNT(DISTINCT s1.user_id), 0) AS conversion_rate
FROM step_1 s1
LEFT JOIN step_2 s2 ON s1.user_id = s2.user_id;

Security and compliance

  • Klaritics requires `USAGE` on a database/schema and `SELECT` on the specific event tables. Nothing more.
  • All connections use Snowflake's TLS 1.2+ encryption.
  • Klaritics supports key-pair authentication and OAuth via Okta, Azure AD, or Auth0 (both recommended over username/password).
  • Network policies: configure Snowflake to only accept Klaritics connections from your deployment's egress IP range.
  • Because Klaritics is self-hosted, the connection happens entirely inside your network perimeter — events never leave your AWS/GCP/Azure account.
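A sketch of the hardening steps above, with placeholder values (the public key and CIDR range are illustrative, and `klaritics_egress` is a hypothetical policy name):

```sql
-- Key-pair auth: attach the service user's public key (recommended over passwords)
ALTER USER klaritics_reader SET RSA_PUBLIC_KEY = 'MIIBIjANBgkq...';

-- Network policy: only accept connections from your deployment's egress range
CREATE NETWORK POLICY klaritics_egress ALLOWED_IP_LIST = ('203.0.113.0/24');
ALTER USER klaritics_reader SET NETWORK_POLICY = klaritics_egress;
```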


Stop moving data. Start analyzing it.

Connect your warehouse in 8 minutes. See your first funnel in under an hour.