# Cloudflare Analytics Engine
Use Cloudflare Analytics Engine when you want first-party ingestion on Cloudflare without managing your own event store. ViteHub keeps the public runtime API on `@vitehub/analytics`, routes browser-side helpers through a generated ingestion endpoint, and writes normalized server-side events into a bound dataset.
## Before you start
Cloudflare Analytics Engine does not need an extra npm package for `@vitehub/analytics`, but it does require a Worker binding.
- Configure an `analytics_engine_datasets` binding in the generated Wrangler config through `analytics.cloudflareAnalyticsEngine`.
- Expect Analytics Engine to stay ingestion-focused. Querying happens through Cloudflare's SQL API, not through `@vitehub/analytics`.
- Analytics Engine retains data for three months and uses sampled queries for large reads.
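ViteHub generates the binding for you. For reference, the equivalent hand-written Wrangler config (using the default `ANALYTICS` binding name and the `analytics_events` dataset from the examples on this page) would look roughly like:

```toml
# Analytics Engine binding as ViteHub would generate it; shown here for reference only.
[[analytics_engine_datasets]]
binding = "ANALYTICS"
dataset = "analytics_events"
```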
## Configure Cloudflare Analytics Engine
Set `analytics.provider` to `'cloudflare-analytics-engine'`, then configure the dataset under `analytics.cloudflareAnalyticsEngine`.
```ts
export default defineNuxtConfig({
  modules: ['@vitehub/analytics/nuxt'],
  analytics: {
    provider: 'cloudflare-analytics-engine',
    cloudflareAnalyticsEngine: {
      dataset: 'analytics_events',
    },
  },
})
```
### Cloudflare-specific options
Keep Analytics Engine settings in the top-level `analytics.cloudflareAnalyticsEngine` config.
| Option | Use it for |
|---|---|
| `binding` | Override the Worker binding name used at runtime. Defaults to `ANALYTICS`. |
| `dataset` | Name of the Analytics Engine dataset bound to the Worker. Required. |
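For example, to rename the Worker binding, set `binding` alongside `dataset` (dataset name reused from the earlier example; the binding name here is illustrative):

```ts
export default defineNuxtConfig({
  modules: ['@vitehub/analytics/nuxt'],
  analytics: {
    provider: 'cloudflare-analytics-engine',
    cloudflareAnalyticsEngine: {
      dataset: 'analytics_events',
      // The runtime will look up this Worker binding instead of the default ANALYTICS.
      binding: 'EVENTS',
    },
  },
})
```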
## Track events
Use the shared runtime helpers on the client or the server. Browser-side calls always post to ViteHub's generated ingestion route. Server-side calls write directly to the bound Analytics Engine dataset.
```ts
import { group, track } from '@vitehub/analytics'

export default defineEventHandler(async () => {
  await group('team_123', { seats: 5 })
  await track('signup', { plan: 'pro' })
  return { ok: true }
})
```
## Semantic event shape
Treat the raw Analytics Engine slots as a storage detail. ViteHub writes a fixed event schema for new rows and recommends querying it through semantic aliases.
| Semantic field | Meaning |
|---|---|
| `sample_key` | Stable sampling key derived from `userId`, `groupId`, page path, event name, or operation. |
| `operation` | Helper kind such as `track`, `page`, `identify`, `alias`, `group`, or `reset`. |
| `event_name` | Custom event name from `track()`. Empty for other helpers. |
| `page_path` | Page path from `page({ path })`. Empty for other helpers. |
| `runtime` | `client` or `server`. |
| `subject_kind` | `user`, `group`, or `none`. |
| `subject_id` | `userId` or `groupId` when present. |
| `schema_version` | Fixed version marker. New rows use `v2`. |
| `metadata_json` | Remaining helper payload such as extra data, traits, or `previousId`. |
| `event_count` | Constant `1`, useful for counts. |
For `v2` rows, the raw slot layout is:
| Raw slot | Semantic field |
|---|---|
| `index1` | `sample_key` |
| `blob1` | `operation` |
| `blob2` | `event_name` |
| `blob3` | `page_path` |
| `blob4` | `runtime` |
| `blob5` | `subject_kind` |
| `blob6` | `subject_id` |
| `blob7` | `schema_version` |
| `blob8` | `metadata_json` |
| `double1` | `event_count` |
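To illustrate how the slots line up, here is a minimal sketch of a `v2` row writer for `track()` events. The types and function are hypothetical, not ViteHub's actual implementation, and `subject_kind` is simplified to distinguish only `user` from `none`:

```typescript
// Shape accepted by an Analytics Engine binding's writeDataPoint(): indexes, blobs, doubles.
interface DataPoint {
  indexes: string[]
  blobs: string[]
  doubles: number[]
}

// Hypothetical input describing a track() call.
interface TrackEvent {
  name: string
  data?: Record<string, unknown>
  userId?: string
  runtime: 'client' | 'server'
}

// Map a track() event onto the v2 slot layout from the tables above.
function toV2Row(event: TrackEvent): DataPoint {
  return {
    indexes: [event.userId ?? event.name], // index1: sample_key
    blobs: [
      'track',                             // blob1: operation
      event.name,                          // blob2: event_name
      '',                                  // blob3: page_path (empty for track)
      event.runtime,                       // blob4: runtime
      event.userId ? 'user' : 'none',      // blob5: subject_kind (simplified)
      event.userId ?? '',                  // blob6: subject_id
      'v2',                                // blob7: schema_version
      JSON.stringify(event.data ?? {}),    // blob8: metadata_json
    ],
    doubles: [1],                          // double1: event_count
  }
}
```

On the server, a row of this shape could be passed straight to the bound dataset's `writeDataPoint()`.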
## Query the dataset
ViteHub does not abstract reads. Query Analytics Engine directly through Cloudflare's SQL API, but keep one semantic `SELECT` as your query layer so the rest of your queries do not depend on `blobN` positions.
```sql
WITH analytics_events_semantic AS (
  SELECT
    index1 AS sample_key,
    blob1 AS operation,
    CASE
      WHEN blob7 = 'v2' THEN blob2
      WHEN blob1 = 'track' THEN blob2
      ELSE ''
    END AS event_name,
    CASE
      WHEN blob7 = 'v2' THEN blob3
      WHEN blob1 = 'page' THEN blob2
      ELSE ''
    END AS page_path,
    CASE
      WHEN blob7 = 'v2' THEN blob4
      ELSE blob3
    END AS runtime,
    CASE
      WHEN blob7 = 'v2' THEN blob5
      ELSE ''
    END AS subject_kind,
    CASE
      WHEN blob7 = 'v2' THEN blob6
      ELSE ''
    END AS subject_id,
    CASE
      WHEN blob7 = 'v2' THEN blob8
      ELSE blob4
    END AS metadata_json,
    CASE
      WHEN blob7 = 'v2' THEN blob7
      ELSE 'v1'
    END AS schema_version,
    double1 AS event_count
  FROM analytics_events
)
SELECT
  event_name,
  SUM(event_count) AS events
FROM analytics_events_semantic
WHERE operation = 'track'
GROUP BY event_name
ORDER BY events DESC
LIMIT 20
```
The compatibility layer above reads both:

- `v2` rows from the current fixed slot schema
- older `v1` rows where `blob2` stored the event label or page path, `blob3` stored the runtime, and `blob4` stored the JSON envelope
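Because Analytics Engine samples large reads (see "Before you start"), weight aggregates by the built-in `_sample_interval` column when you want estimated totals rather than raw row counts. A sketch, assuming the raw `analytics_events` dataset and `v2` rows only:

```sql
SELECT
  blob2 AS event_name,
  -- Multiply by _sample_interval so sampled rows are scaled back up.
  SUM(_sample_interval * double1) AS estimated_events
FROM analytics_events
WHERE blob1 = 'track'
  AND timestamp > NOW() - INTERVAL '7' DAY
GROUP BY event_name
ORDER BY estimated_events DESC
```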
## What changes on Cloudflare
| Concern | Behavior |
|---|---|
| Browser ingestion | Client-side helpers always post to `/_vitehub/analytics/track` or your overridden `analytics.client.base`. |
| Server ingestion | Nitro writes directly to the Analytics Engine binding. |
| Storage | ViteHub no longer uses D1 for analytics persistence. |
| Native handle | `getAnalytics().native` exposes `binding` and `dataset`. |
| Querying | Reads stay in Cloudflare through the Analytics Engine SQL API. |