Step 0: Choose your operating mode

Before setup, decide which user experience you want:
  • Direct Analytics: fastest, minimal guardrails
  • Trusted Analytics V1: contracts + policies + bundles + semantic metrics
  • Trusted Analytics V2: V1 runtime with OpenMetadata-fed governance signals

Compare operating modes

Pick the right trust level for your users

Step 1: Install dazense-core package

pip install dazense-core

Step 2: Initialize a dazense project

dazense init
This command will prompt you to:
  • Name your project
  • Connect a database (optional)
  • Add a repo to the agent context (optional)
  • Add an LLM key (optional)
  • Set up a Slack connection (optional)
You can skip any optional prompt and configure it later in your dazense_config.yaml file.
This will create:
  • A new folder with your project name
  • An architecture for your context files
  • A dazense_config.yaml configuration file
  • A RULES.md file
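
The generated dazense_config.yaml is where you revisit anything you skipped during init. As a rough sketch only (the keys below are illustrative assumptions, not the actual schema — check the generated file for the real structure), it might look like:

```yaml
# dazense_config.yaml — illustrative sketch; real keys may differ.
project:
  name: my-analytics-agent        # set during `dazense init`
database:
  # connection details skipped during init can be filled in here
  url: postgresql://user:pass@localhost:5432/analytics
context:
  repos: []                       # repos to include in the agent context
llm:
  api_key: ${LLM_API_KEY}         # or add it later in the chat UI settings
slack:
  enabled: false
```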

Step 3: Verify your setup

cd to the project folder and run:
dazense debug
This command checks your configuration and displays any issues.

Step 4: Synchronize your context

dazense sync
This populates your context folder with data, metadata, repos, and other context files.

Step 5: Validate your context

dazense validate
If you’re using Trusted Analytics V1/V2, this checks bundle/policy/semantic consistency before chat usage.

Step 6: Launch the chat and ask questions

You have two options to access the chat UI:

Option 1: Using dazense chat command

dazense chat
This will start the dazense chat UI. It will open the chat interface in your browser at http://localhost:5005.

Option 2: Using Docker

Instead of dazense chat, you can use Docker to run the UI.

With the built-in example:
docker run -d \
  --name dazense \
  -p 5005:5005 \
  -e BETTER_AUTH_URL=http://localhost:5005 \
  metazense/dazense:latest
With your project:
docker run -d \
  --name dazense \
  -p 5005:5005 \
  -e BETTER_AUTH_URL=http://localhost:5005 \
  -v /path/to/your/project:/app/project \
  -e DAZENSE_DEFAULT_PROJECT_PATH=/app/project \
  metazense/dazense:latest
Access the UI at http://localhost:5005 and add your LLM API key in the settings. From there, you can start asking questions to your agent!

Step 7: Evaluate your agent

First, create a tests/ folder with questions and expected SQL in YAML. Then measure the agent’s performance on those examples with the dazense test command:
dazense test
View the results in the tests panel:
dazense test server
If you’re using Trusted Analytics V1/V2, also run:
dazense eval
dazense eval --engine
These commands validate governance behavior, not only answer correctness.
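
A test case pairs a natural-language question with the SQL you expect the agent to produce. As a hedged sketch (the file layout and field names are assumptions, and the table and columns are invented for illustration — check the Evaluation Guide for the real format):

```yaml
# tests/revenue.yaml — illustrative sketch; field names are assumptions.
- question: "What was total revenue last month?"
  expected_sql: |
    SELECT SUM(amount)
    FROM orders
    WHERE order_date >= date_trunc('month', CURRENT_DATE - INTERVAL '1 month')
      AND order_date < date_trunc('month', CURRENT_DATE);
```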

Evaluation Guide

Learn how to build comprehensive test suites and evaluate your agent

What’s Next?

Context Builder

Learn how to build and customize your agent’s context

Self-Hosting

Deploy your agent in production

Chat Interface

Explore the chat interface features

dazense Cloud

Use our managed cloud service