The usage fuse for AI apps

Sell AI subscriptions without eating the usage bill

MeterFuse wraps OpenAI, Anthropic, and Vercel AI SDK calls so every request checks plan limits before it runs, tracks usage after it completes, and sends upgrades through Stripe.

OpenAI wrapper
import OpenAI from "openai";
import { createMeterFuse } from "meterfuse";
import { withOpenMeterFuse } from "meterfuse/openai";

const meterfuse = createMeterFuse({ apiKey: "mf_live_xxx" });

const openai = withOpenMeterFuse(new OpenAI(), {
  meterfuse,
  userId: (request) => request.user,
});

const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  user: "user_123",
  messages: [{ role: "user", content: "Write a launch email." }],
});

Features

A fuse between AI calls and gross margin

The same usage-protection flow works from web backends, mobile apps, JVM services, CLIs, and server-side workers.

Meter provider usage

Parse OpenAI, Anthropic, Google Gemini, OpenRouter, and Vercel AI SDK responses without per-provider billing code.
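The providers listed above report token usage under different field names. A minimal sketch of what this normalization looks like in practice; the field names follow the public OpenAI chat-completions and Anthropic Messages response shapes, but the function names and `Usage` type here are hypothetical illustrations, not MeterFuse internals:

```typescript
// Normalize per-provider usage fields into one shape.
// Hypothetical helpers for illustration — not the MeterFuse API.
type Usage = { inputTokens: number; outputTokens: number };

// OpenAI chat completions report usage.prompt_tokens / usage.completion_tokens.
function usageFromOpenAI(resp: {
  usage: { prompt_tokens: number; completion_tokens: number };
}): Usage {
  return {
    inputTokens: resp.usage.prompt_tokens,
    outputTokens: resp.usage.completion_tokens,
  };
}

// Anthropic Messages report usage.input_tokens / usage.output_tokens.
function usageFromAnthropic(resp: {
  usage: { input_tokens: number; output_tokens: number };
}): Usage {
  return {
    inputTokens: resp.usage.input_tokens,
    outputTokens: resp.usage.output_tokens,
  };
}
```

Once every provider response collapses to the same `Usage` shape, billing logic never branches on the vendor.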

Blow the fuse before margin does

Run a cheap balance check before model calls and block or upsell users before costs exceed your plan limits.

Upgrade through Stripe

Send users to Stripe Checkout and keep one credit ledger for your AI plan limits.

Clients

SDKs for the stack you already run

Click a language to see the exact install, import, and usage shape. TypeScript gets the official OpenAI and Anthropic class wrappers; every client can check before AI work and track after.

TypeScript (npm)

Install

npm install meterfuse

Import

import { withOpenMeterFuse } from "meterfuse/openai";

TypeScript example
import OpenAI from "openai";
import { createMeterFuse } from "meterfuse";
import { withOpenMeterFuse } from "meterfuse/openai";

const meterfuse = createMeterFuse({ apiKey: "mf_live_xxx" });

const openai = withOpenMeterFuse(new OpenAI(), {
  meterfuse,
  userId: (request) => request.user,
});

const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  user: "user_123",
  messages: [{ role: "user", content: "Hello!" }],
});

Flow

Check before the call. Track after the call.

1

Create plans

Define monthly credits, prices, and tier limits in the dashboard.

2

Check first

Call the client before expensive AI work and return an upgrade path when access is blocked.

3

Track after

Send the provider response back to MeterFuse so credits and usage analytics stay current.
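The three steps above can be sketched as one loop: check a ledger, do the work, debit the ledger. This is a self-contained illustration with an in-memory ledger and a stand-in for the model call — the names (`check`, `track`, `meteredCompletion`) are hypothetical, not the MeterFuse SDK:

```typescript
// Minimal check-then-track flow over an in-memory credit ledger.
// All names and credit values here are illustrative assumptions.
const ledger = new Map<string, number>([["user_123", 10]]); // remaining credits

function check(userId: string): boolean {
  // Step 2: cheap balance check before any expensive AI work.
  return (ledger.get(userId) ?? 0) > 0;
}

function track(userId: string, creditsUsed: number): void {
  // Step 3: debit the ledger after the provider call completes.
  ledger.set(userId, (ledger.get(userId) ?? 0) - creditsUsed);
}

function meteredCompletion(userId: string): string {
  if (!check(userId)) {
    throw new Error("plan limit reached: return the upgrade path here");
  }
  // Stand-in for the real provider call and its reported usage.
  const response = { text: "Hello!", creditsUsed: 3 };
  track(userId, response.creditsUsed);
  return response.text;
}
```

In the real flow the wrapper from the examples above performs both steps around each provider call, so application code never touches the ledger directly.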

Pricing

Start free. Scale when you're ready.

No credit card required for the free plan.

Free

$0/mo

Get started
  • Tracked AI spend: $2.5K/mo
  • Projects: 1
  • Dashboard: Basic
  • Webhooks: No
Most popular

Pro

$49/mo

Get started
  • Tracked AI spend: $50K/mo
  • Projects: Unlimited
  • Dashboard: Full
  • Webhooks: Yes

Scale

$49/mo + 1%

Get started
  • Tracked AI spend: Unlimited
  • Projects: Unlimited
  • Dashboard: Full
  • Webhooks: Yes

Integrations

Provider usage in, plan protection out

MeterFuse keeps provider parsing separate from client language support, so teams can mix runtimes and model vendors freely.

Anthropic · OpenAI · OpenRouter · Google Gemini · Vercel AI SDK · Stripe · RevenueCat