112 detection rules · zero config

Stop burning
money on
Databricks

Brickslasher scans your workspace in under 5 minutes and tells you exactly what's wasting money — idle clusters, zombie jobs, runaway GPU spend, and 100+ other patterns.

Read-only scan — zero risk to your workloads
No workspace data leaves your environment
First findings in under 5 minutes
Multi-workspace governance in one dashboard
brickslasher — scan — adb-7234561890.azuredatabricks.net
$ brickslasher scan --workspace prod-analytics
Authenticating via Service Principal...
✓ Connected — workspace: prod-analytics
Fetching 47 clusters, 312 jobs... done
Running 112 detection rules...
 
● CRITICAL — GPU Cluster Without ML Workload
resource: ml-experiments-gpu (p3.8xlarge × 4)
savings: $38,400/yr · idle 72h
● CRITICAL — No Auto-Termination (6 clusters)
resource: analytics-shared, etl-prod-v2, ...
savings: $22,100/yr estimated
● HIGH — Interactive Compute Ratio 84%
resource: workspace billing
savings: $61,300/yr (shift to job clusters)
 
Scan complete · 28 findings · $143,200/yr opportunity
$
112 · Detection rules across 6 waste categories
<5m · Typical scan time per workspace
Workspaces in one dashboard
0 · Write permissions required. Read-only always.
The problem

Databricks bills are a
black box. We turn on the lights.

Most teams discover they've been overspending by accident — usually when the bill arrives. Brickslasher finds waste before it compounds.

✕ Without Brickslasher
GPU clusters sitting idle for days — nobody knows
50 all-purpose clusters where job clusters would cost a third as much
Auto-termination not set — clusters billing 24/7
84% of spend on Interactive compute vs Job compute
Cost growing 35% week-over-week — nobody noticed
No cost allocation — can't tie spend to teams or projects
✓ With Brickslasher
+ Every idle GPU flagged with estimated $ waste per day
+ Exact list of clusters to convert with projected savings
+ Auto-termination gaps caught automatically, every scan
+ SKU analysis shows exactly where to shift workloads
+ Week-over-week trend alerts before the bill arrives
+ Tag coverage checks ensure every cluster is attributed
Talk to our team →
What you'll find

Common waste Brickslasher
catches on day one

⚙ Compute 22 rules

Idle clusters & GPU waste

Idle all-purpose clusters running overnight, oversized driver nodes, GPU clusters used for non-GPU workloads, and auto-termination gaps. The most common single source of unnoticed spend.
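The shape of these checks is straightforward. Here is a minimal sketch of one of them, the auto-termination gap, assuming cluster metadata shaped like the Databricks Clusters API response (`state`, `autotermination_minutes`, where `0` means never terminate); this is an illustrative sketch, not Brickslasher's actual rule code:

```python
# Illustrative auto-termination gap check. Field names follow the
# Databricks Clusters API (autotermination_minutes == 0 means the
# cluster never self-terminates). Not the product's real rule code.

def find_autotermination_gaps(clusters):
    """Return running clusters that will bill indefinitely when idle."""
    findings = []
    for c in clusters:
        if c.get("state") == "RUNNING" and c.get("autotermination_minutes", 0) == 0:
            findings.append({
                "rule": "no-auto-termination",
                "resource": c["cluster_name"],
                "severity": "CRITICAL",
            })
    return findings

clusters = [
    {"cluster_name": "analytics-shared", "state": "RUNNING", "autotermination_minutes": 0},
    {"cluster_name": "etl-prod-v2", "state": "RUNNING", "autotermination_minutes": 60},
    {"cluster_name": "ds-scratch", "state": "TERMINATED", "autotermination_minutes": 0},
]
print([f["resource"] for f in find_autotermination_gaps(clusters)])  # ['analytics-shared']
```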

⚡ Jobs 23 rules

Runaway jobs & scheduling waste

Jobs without timeouts, runaway runtimes, high failure rates, all-purpose compute used for batch jobs, and cron schedules firing dozens of times an hour.

💰 Billing 10 rules

SKU inefficiency & spend spikes

High interactive compute ratio, accelerating spend trends, on-demand vs spot opportunity, and DBU growth patterns that signal runaway workloads.
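The trend rules reduce to simple arithmetic over weekly DBU totals. A hedged sketch of a week-over-week growth check (the threshold and function names are illustrative, not the product's internals):

```python
# Illustrative week-over-week spend-trend check. Flags sustained
# growth above a threshold; numbers and names are examples only.

def wow_growth(weekly_dbu):
    """Percent change between consecutive weekly totals."""
    return [
        (curr - prev) / prev * 100
        for prev, curr in zip(weekly_dbu, weekly_dbu[1:])
    ]

def flag_runaway_trend(weekly_dbu, threshold_pct=25, weeks=2):
    """True if the last `weeks` week-over-week changes all exceed the threshold."""
    growth = wow_growth(weekly_dbu)
    recent = growth[-weeks:]
    return len(recent) == weeks and all(g > threshold_pct for g in recent)

spend = [10_000, 10_400, 14_700, 20_300]          # DBUs per week
print([round(g, 1) for g in wow_growth(spend)])   # [4.0, 41.3, 38.1]
print(flag_runaway_trend(spend))                  # True
```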

🛡 Governance 11 rules

Ownership & attribution gaps

Resources owned by personal email accounts, missing tag coverage, classic SQL warehouses still in use, and unsupported runtime versions creating support risk.

🤖 AI/ML 5 rules

GPU compute optimization

GPU clusters left running post-training, oversized instances for inference workloads, and model serving resources without proper auto-scaling configured.

⚡ Advanced 43 rules

Deep pattern detection

Creator concentration risk, duplicate cron schedules, job timeout anomalies, warehouse configuration inefficiencies, and cluster policy compliance gaps.

Example findings

Real rules. Real findings.
Real money.

These are actual detection rules from Brickslasher — not marketing copy. Every finding comes with a resource name, explanation, and fix.

GPU Cluster Without ML Workload
ml-experiments-gpu
Critical
$38,400/yr
All-purpose GPU cluster running on p3.8xlarge × 4 with no ML job activity. GPU idle time bills at 8–20× standard compute.
clusters
No Auto-Termination Set
analytics-shared-v3
Critical
$14,900/yr
Running cluster with no auto-termination configured — will bill indefinitely when idle. 12 workers at current node type.
clusters
High Interactive Compute Ratio
workspace billing
High
$61,300/yr
84% of all DBU spend is on Interactive compute. Job clusters for scheduled workloads would reduce this by 2–3×.
billing
Accelerating Cost Trend
billing usage
Critical
$28,100/yr
DBU spend grew 41% then 38% week-over-week — accelerating trend detected. Investigation required before next billing cycle.
billing
Job Runs on All-Purpose Cluster
nightly-etl-pipeline
High
$9,200/yr
Scheduled job consistently runs on an all-purpose cluster instead of a dedicated job cluster. Convert to save 2–3× on compute.
jobs
Cluster Idle 48h
ds-exploration-01
High
$7,700/yr
8-worker cluster has been running with no activity for 48 hours. Likely forgotten after exploratory work was completed.
clusters
ROI calculator

Estimate your
savings potential

Adjust the sliders to match your workspace. Most teams find enough waste to justify the investment 5–10× over.
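As a rough illustration of the arithmetic behind an estimate like this (the weighting factors here are assumptions made for the sketch, not Brickslasher's actual calculator model):

```python
# Back-of-envelope annual waste estimate. The 30% interactive-waste
# and 5% baseline factors below are illustrative assumptions only.

def estimate_annual_waste(monthly_spend, interactive_pct,
                          interactive_waste_factor=0.3, base_waste_factor=0.05):
    """Rough annual waste: a share of interactive spend plus a small
    baseline of all other spend assumed recoverable."""
    interactive = monthly_spend * interactive_pct / 100
    other = monthly_spend - interactive
    monthly_waste = interactive * interactive_waste_factor + other * base_waste_factor
    return monthly_waste * 12

waste = estimate_annual_waste(monthly_spend=25_000, interactive_pct=60)
print(f"${waste:,.0f}/yr")  # $60,000/yr
```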

SCAN

Continuous detection

Every scan checks 112 rules. New waste patterns get flagged the moment they appear — not weeks later when the bill arrives.

FIX

Actionable findings

Each finding includes the resource name, exact dollar estimate, and a one-line fix recommendation. No ambiguity, no guessing.

TRACK

Prove ROI over time

Track resolved findings across scans. Show finance exactly how much was recovered — documented, exportable, repeatable.

Estimate your savings

Inputs: Monthly Databricks spend ($25,000) · Number of active clusters (20) · % spend on interactive compute (60%)
Outputs: Est. annual waste · Recoverable savings · Typical ROI · Payback period
Talk to our team →
How it works

From zero to findings
in under 5 minutes

Connect a read-only token, run a scan, get prioritized findings with dollar estimates — no configuration, no agents, no data movement.

01
60 seconds

Connect your workspace

Paste your workspace URL and a read-only Service Principal or PAT token. Brickslasher never modifies anything — it only reads cluster, job, and billing metadata.

02
2–4 minutes

Automated scan runs

112 detection rules fire across clusters, jobs, billing, workspace config, and AI compute. Each rule generates a prioritized finding with resource names and dollar estimates.

03
Instant

Act on findings

Findings land in your dashboard with severity, fix recommendations, and savings estimates. Assign to team members, set status, and track ROI as issues get resolved.

Request a Demo →
Ongoing value timeline
DAY 1
First scan. First findings. Real dollar figures.
WEEK 1
Slack alerts live. Team sees cost trends automatically.
MONTH 1
Most teams recover the contract cost in savings found.
MONTH 3
3–10× ROI documented. Waste patterns don't come back.
Capabilities

Everything you need to
control Databricks spend

DETECT

112 Detection Rules

Clusters, jobs, billing, workspace governance, AI/ML compute — every category of waste covered out of the box, zero configuration.

QUANTIFY

Dollar-Level Estimates

Every finding includes an estimated annual savings figure — not vague guidance, actual numbers tied to your usage patterns.

ALERT

Slack & Teams Alerts

Critical findings auto-fire to your team in real time. Weekly digest keeps everyone aligned without manual reporting overhead.

TRACK

Trends & ROI Dashboard

Track how findings change across scans. See the ROI of resolved issues as your spend drops over time — exportable for leadership.

AUTOMATE

Scheduled Scans

Set daily or weekly automated scans. New waste gets detected the moment it appears, not when the bill arrives next month.

ASSIGN

Team Workflows

Assign findings to team members, add comments, track fix status. Built for engineering teams, not just finance and FinOps.

SECURE

Service Principal Auth

Secure, no-expiry authentication via Service Principals. No broad PAT tokens, no manual rotation, no plaintext credential storage.

EXTEND

Custom Rules

Write YAML-based custom detection rules for your team's specific patterns, naming conventions, and cost policies.
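As an illustration of what such a custom rule might look like (the exact schema is not documented here, so every field name below is a hypothetical example):

```yaml
# Hypothetical custom rule. Field names are illustrative examples,
# not the documented Brickslasher rule schema.
id: custom-naming-convention
name: Cluster Missing Team Prefix
severity: medium
category: governance
resource: cluster
match:
  field: cluster_name
  not_regex: "^(data|ml|analytics)-"
message: "Cluster name does not start with an approved team prefix."
```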

Enterprise security

Built for security-conscious teams

Brickslasher connects to your Databricks workspace using read-only API credentials. We never store your workspace data — every scan runs in your session and results are written only to your isolated account.

SOC 2 Type II in progress
GDPR compliant data handling
Full audit log of every scan and action
TLS 1.2+ in transit, AES-256 at rest
Zero write permissions requested
Access model

Read-only API access

Brickslasher only ever reads workspace metadata. No write permissions requested. No data pipelines, notebooks, or clusters are ever modified.

Data residency

No data leaves your workspace

Cluster configs, job definitions, and billing summaries are processed in-memory during the scan and never written to Brickslasher servers.

Encryption

Encrypted in transit & at rest

All API communications are TLS 1.2+. Stored credentials use AES-256 encryption with per-tenant keys. Secrets are never logged or exposed.

Compliance

Full audit trail

Every scan, finding status change, and team action is logged with timestamp and user identity. Exportable for compliance reviews.

How we work with customers

No forms. No self-serve.
Just results.

We work directly with data engineering teams and FinOps leads at companies with serious Databricks spend. Every customer gets full platform access from day one.

01 — Discovery

45-minute call

We review your Databricks environment together — workspaces, spend range, team structure, and what's causing the most pain. No commitment required.

No slide decks. Real conversation.
02 — Scoped Pilot

Run against your workspace

We run Brickslasher against your real environment. You see actual findings with dollar figures from your own data. No synthetic demos, no mock data.

First findings in < 5 minutes.
03 — Contract + Access

Full platform, all workspaces

Contract signed. Full access provisioned. Scheduled scans configured. Your team is onboarded the same week — not months later.

Same-week onboarding. No waiting.

Ready to see your real numbers?

Talk to our team. We'll show you exactly what Brickslasher finds in your environment — before you sign anything.

Talk to our team →
Common questions

Everything you need to know

Does Brickslasher store our workspace data?

No. Workspace metadata (cluster configs, job definitions, billing summaries) is processed in-memory during the scan. Finding results (rule IDs, resource names, savings estimates) are stored in your isolated account so you can track remediation — but raw Databricks data is never persisted on our end.

What permissions does Brickslasher need?

Read-only access only. You can use a Personal Access Token or a Service Principal with CAN_VIEW on clusters and jobs, and SELECT on system tables (billing.usage, compute.clusters). We never request write permissions. You can scope the Service Principal to a single workspace if preferred.

How do we get started?

Reach out to sales@brickslasher.org. We'll set up a discovery call, run a scoped pilot against your real environment, and get you fully onboarded the same week the contract is signed. No waiting, no slow rollouts.

Can we see real findings before committing?

Yes — our standard process includes a pilot scan against your real Databricks environment before you commit. You'll see actual findings with dollar figures from your own data, not a synthetic demo. We believe the numbers speak for themselves.

How long does a scan take?

2–5 minutes for most workspaces. We pull cluster and job metadata via the Databricks REST API (Tier 1, ~30 seconds) and billing system tables via SQL warehouse (Tier 2, ~1–2 minutes depending on data volume). Workspaces with Unity Catalog enabled get deeper cost attribution.

Which clouds are supported?

Yes — Brickslasher works with AWS, Azure, and GCP Databricks deployments. The Databricks REST API and system tables are consistent across clouds. Some rules (like spot instance detection) include cloud-specific logic for AWS and Azure. GCP spot detection is in active development.

Can we disable rules or add our own?

Yes. Every rule has a unique ID and can be disabled per workspace. All customers also get access to custom YAML rules — you can write your own detection logic using the same framework as the built-in rules. Disabled rules are remembered across scans.

How long does onboarding take?

Same week as contract signing. Access is provisioned immediately. We help you connect your first workspace, configure scheduled scans, and set up Slack alerts in a single 30-minute session. Your team is running autonomous scans before the week is out.

Find your waste.
Keep the savings.

We show you what Brickslasher finds in your real environment before you sign anything.

Talk to our team → See how it works
sales@brickslasher.org · read-only access · first findings in < 5 minutes