meetbot.dev

All comparisons · vs Vexa

meetbot vs Vexa.

Vexa is the only competitor in this set with a fully Apache-2.0 stack: orchestrator, transcription, MCP server, the whole pipeline. If "true OSS today" is your hard requirement, they win, and we tell you so. Where we differ is pricing shape. Their hosted PAYG is $0.30/hr bot + $0.20/hr transcription = $0.50/hr bundled. Ours is $0.30/hr flat for the bot-hour, with transcription as BYOK today (free pass-through; you pay your provider); a hosted Whisper option ships Q3 2026. We also bring more battle-hardened Meet/Teams adapters today. Their hosted dashboard is younger than ours; in other places ours is the younger product. Call it a mixed bag.

last verified 2026-05-09

01 · tl;dr

The short version.

Use Vexa if…

  • You need fully Apache-2.0 source today — every component, no exceptions.
  • You're building an AI agent and want MCP server access to meeting context out of the box.
  • You want to self-host on bare metal / OpenShift / your own K8s today and don't want any closed-source Docker layer.
  • You're price-insensitive and prioritise OSS purity.

Use meetbot if…

  • You want hosted with a flat bot-hour price and you're fine running BYOK transcription on per-speaker audio (Whisper/Deepgram/AssemblyAI) until our hosted Whisper ships Q3.
  • You need a more battle-hardened Meet/Teams adapter (Vexa's per-platform adapters are less proven).
  • You want signed webhooks with helpers and three transports including RTMP.
  • You're indifferent on the bot binary's license today — you want the SDK + samples MIT and the API mature.

02 · spec table

Side by side. No spin.

Numbers verified against the cited source on the date in the page footer. PR a correction if anything has moved.

|                              | meetbot                                                                  | Vexa                                                    |
|------------------------------|--------------------------------------------------------------------------|---------------------------------------------------------|
| OSS license (full stack)     | MIT (SDKs/CLI/samples/spec); bot closed                                  | Apache 2.0 (everything) [1]                             |
| self-host                    | M5 (source-available)                                                    | today (Docker · K8s · OpenShift · bare metal) [2]       |
| hosted bot price             | $0.30 / hr (bot-hour only)                                               | $0.30 / hr [3]                                          |
| hosted transcription         | BYOK today (free pass-through); hosted Whisper +$0.10/hr at GA (Q3 2026) | +$0.20 / hr [4]                                         |
| bundled hosted $/hr          | $0.30 + your transcription bill (BYOK) today; $0.40 at GA                | $0.50                                                   |
| individual plan              | —                                                                        | $12 / mo (1 concurrent bot) [5]                         |
| MCP server                   | not yet (see FAQ)                                                        | yes (built-in) [6]                                      |
| interactive bot (TTS speak)  | M1 (output-audio endpoint)                                               | yes (TTS-driven)                                        |
| platforms                    | Meet, Teams, Zoom                                                        | Meet, Teams, Zoom                                       |
| transports                   | webhook · websocket · RTMP                                               | webhook · websocket                                     |
| data residency (hosted)      | Hetzner Falkenstein (DE)                                                 | self-host: anywhere; hosted: not specified              |
| production maturity          | pre-launch (zero paying customers today; sample apps + daily smoke tests against real meetings) | growing fast, smaller install base than Recall |
  [1] OSS license (full stack): github.com/Vexa-ai/vexa
  [2] self-host: github.com/Vexa-ai/vexa
  [3] hosted bot price: vexa.ai/pricing
  [4] hosted transcription: vexa.ai/pricing
  [5] individual plan: vexa.ai/pricing
  [6] MCP server: github.com/Vexa-ai/vexa

03 · pricing scenarios

The math, three ways.

Three usage points: a hobbyist, a startup, and a scaled company. Formula visible per cell — copy it into a spreadsheet, plug your own numbers in.
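The per-cell formulas condense to two small helpers. Rates are taken from the spec table above; `byokPerHr` is your own transcription provider's rate, an input you supply, not a meetbot fee. A sketch, not billing code:

```typescript
// Monthly hosted cost in dollars for a given usage level.
// BOT_RATE and BUNDLED_RATE come from the spec table on this page;
// byokPerHr is YOUR provider's per-hour transcription rate (0 to skip).
function meetbotMonthly(hours: number, byokPerHr = 0): number {
  const BOT_RATE = 0.3; // $/hr, bot-hour only
  return hours * (BOT_RATE + byokPerHr);
}

function vexaMonthly(hours: number): number {
  const BUNDLED_RATE = 0.5; // $0.30 bot + $0.20 transcription
  return hours * BUNDLED_RATE;
}

// Scenario 1 below: hobbyist, 10 hr/mo, no transcription
console.log(meetbotMonthly(10)); // 3
console.log(vexaMonthly(10));    // 5
```

Plug your own hours and BYOK rate in and the three scenarios below fall out.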

scenario 1

Hobbyist · 10 hr / mo

10 hours of meeting recording per month.

meetbot
10 hr × $0.30 = $3.00 (bot-hour only)
$3.00/mo
Vexa
10 hr × ($0.30 + $0.20) = $5.00
$5.00/mo

Today: meetbot bills the bot-hour only — transcription (if you want it) is BYOK. Or self-host Vexa free on your own box — fixed cost ~$0 + your time. If your time is free, Vexa wins. It rarely is.

scenario 2

Startup · 1,000 hr / mo

1,000 hours of meeting recording per month.

meetbot
1,000 hr × $0.30 = $300 (bot-hour) + your transcription bill (BYOK)
$300+/mo
Vexa
1,000 hr × $0.50 = $500
$500/mo

Today bundled: $300 + ~$100–200 for BYOK transcription via Deepgram/AssemblyAI ≈ $400–500/mo. At GA (hosted Whisper Q3 2026): $400/mo. Self-host Vexa = ~$80/mo Hetzner box + ~10 hr/mo ops time ≈ $280/mo at $20/hr engineer rate.

scenario 3

Scale · 50,000 hr / mo

50,000 hours of meeting recording per month.

meetbot
50,000 hr × $0.30 = $15,000 (bot-hour) + transcription
$15,000+/mo
Vexa
50,000 hr × $0.50 = $25,000
$25,000/mo

At this scale BYOK transcription approaches its hardware floor (~$200–400/mo for self-hosted Whisper on a couple of Hetzner GPU boxes). Self-host Vexa at this scale = real ops investment (~1 FTE), plus GPU bills for transcription. Probably cheaper in hardware ($/hr) but expensive in headcount.

04 · where they win

Where Vexa is the better choice.

We include this section because the alternative — pretending we win everywhere — is dishonest, and dishonest comparison pages are the reason most of them aren't worth reading.

  • 01 · Apache-2.0 across the entire stack today. Our bot binary is closed; theirs isn't. If "no closed-source layers" is a hard procurement criterion, they win.
  • 02 · MCP server out of the box. AI-agent builders get meeting context via a standard protocol; we don't expose this yet.
  • 03 · TTS-driven interactive bots are first-class: the bot can speak in the meeting. Ours ships in M1 (output-audio endpoint) but theirs is more developed today.
  • 04 · Self-host paths to OpenShift and bare metal today. Our self-host is M5 work and not committed beyond source-available release.
  • 05 · Pricing for very low concurrency: $12/mo for 1 concurrent bot is hard to beat for solo developers who only ever run one meeting at a time.

05 · where we win

Where meetbot wins.

Each line links to the doc page that proves it. Numbers, not adjectives. Sourced against Vexa's public surface as of the date below.

  • 01 · Cheaper hosted bot-hour. $0.30/hr (us, bot-hour) vs $0.50/hr (Vexa, bundled bot + transcription). At GA, once our hosted Whisper lands Q3, the bundled rate is $0.40/hr (us) vs $0.50/hr (them). Today the gap is BYOK on our side: per-speaker audio piped into Whisper/Deepgram/AssemblyAI on your key, zero meetbot fee on that leg.
    proof: /pricing
  • 02 · More battle-tested Meet/Teams adapters. We've shipped through Google's April-2026 dual-queue admit screen change with a bot pool model that actually survives it.
    proof: /docs/meet
  • 03 · Three transports per endpoint (webhook, WebSocket, RTMP). Vexa ships webhook + WebSocket today; RTMP is a self-host-only escape hatch.
    proof: /docs/transports
  • 04 · Per-minute billing and a customer dashboard with signed-receipt invoices, API key rotation, and retention controls per bot.
    proof: /account
  • 05 · First hour free, no card. Vexa requires a paid Individual plan for hosted use.
    proof: /pricing
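The three-transport claim can be sketched as a single delivery config. Only the webhook entry's shape is confirmed by the migration snippet in section 06; the websocket and rtmp field names here are illustrative assumptions, so check /docs/transports for the real schema:

```typescript
// Illustrative delivery config. The webhook shape mirrors the
// section-06 migration example; the websocket and rtmp field names
// are ASSUMPTIONS -- confirm against /docs/transports.
type Delivery =
  | { transport: "webhook"; url: string }
  | { transport: "websocket"; url: string }
  | { transport: "rtmp"; url: string; streamKey?: string };

const delivery: Delivery[] = [
  { transport: "webhook", url: "https://example.com/hooks/meetbot" },
  { transport: "websocket", url: "wss://example.com/live" },
  { transport: "rtmp", url: "rtmp://live.example.com/app", streamKey: "demo" },
];

console.log(delivery.map((d) => d.transport).join(" · "));
// webhook · websocket · rtmp
```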

06 · migration

The whole switch. One request swapped.

Same shape, same fields, different host. Replace your Vexa bot-dispatch call with a meetbot one. Webhook payloads land in the same JSON shape your handler already parses.

Vexa (before)

```ts
// Vexa (hosted)
const res = await fetch("https://gateway.vexa.ai/bots", {
  method: "POST",
  headers: {
    "X-API-Key": process.env.VEXA_KEY!,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    platform: "google_meet",
    native_meeting_id: "abc-defg-hij",
    bot_name: "notes",
    language: "en",
  }),
});
```

meetbot (after)

```ts
// meetbot: transcription is BYOK today (hosted Whisper Q3 2026)
const res = await fetch("https://api.meetbot.dev/api/v1/bot", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.MEETBOT_KEY!}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    meeting_url: "https://meet.google.com/abc-defg-hij",
    bot_name: "notes",
    // per-speaker audio in your bucket; pipe to your provider
    delivery: [{ transport: "webhook", url: WEBHOOK_URL }],
  }),
});
```

07 · faq

The questions we actually get.

Q. If Vexa is fully OSS, why use you?
If you're going to self-host, Vexa is a defensible choice today and we say so on this page. Most teams don't end up self-hosting: the operational burden of keeping bot adapters working through monthly Meet/Teams DOM changes is real, and most prefer to pay someone to do it. Hosted, we're cheaper once transcription is counted ($0.30/hr bot-only vs their $0.50/hr bundled), with what we believe is a more battle-tested Meet/Teams adapter today; you should run a real proof-of-concept against both before committing.
Q. When will meetbot be fully OSS?
The SDKs (JS, Python, Go, Rust), CLI, sample apps, and OpenAPI spec are MIT today. The bot binary is on the M5 roadmap for source-available release under a permissive license; not committed to Apache yet because the anti-competing-host clause question is genuinely unresolved. If you need full Apache today, Vexa.
Q. Do you support MCP for AI-agent workflows?
Not yet. MCP support is something we want to ship, likely as a separate @meetbot/mcp-server package, but it's not on the M1–M3 roadmap. If MCP is a buying criterion today, Vexa is the right pick.
Q. Can your bot speak in the meeting?
M1 ships POST /api/v1/bots/:id/output-audio, initially with a pre-recorded WAV upload-and-play, streaming TTS later. Vexa's interactive bot surface is more developed today; we'll close this gap by Q3.
Q. What's the realistic self-host TCO for Vexa?
A 32 GB Hetzner CCX23 ($60–80/mo) handles a few concurrent bots. Add a GPU box if you want hosted transcription (~$200/mo for an RTX 4090). Then add ops time: adapter breakage, container updates, monitoring. At low scale this is cheap; at scale you're either saving real money or hiring a dedicated SRE.
Q. Same wire shape on webhooks?
Mostly. Both deliver a recording manifest URL and HMAC-signed payloads. Field names differ (Vexa uses snake_case and its own event taxonomy). The migration-guide diff is one switch statement in your handler.
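The two moving parts of that answer, sketched in Node. Both services sign payloads with an HMAC, but the header name, hex encoding, and the event names in the switch below are illustrative assumptions; check each provider's webhook docs for the real scheme:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Constant-time HMAC-SHA256 verification over the raw request body.
// Hex encoding is an ASSUMPTION; confirm each provider's signature format.
function verifySignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  return received.length === expected.length && timingSafeEqual(received, expected);
}

// The "one switch statement" migration: map the other taxonomy's
// event names onto the ones your handler already knows.
// Both names here are HYPOTHETICAL examples, not documented events.
function normalizeEvent(event: string): string {
  switch (event) {
    case "transcript.finalized": // hypothetical Vexa-style name
      return "recording.ready";  // hypothetical meetbot-style name
    default:
      return event; // pass through anything already in your taxonomy
  }
}
```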

Last verified 2026-05-09 against Vexa's public surface. Spotted an error? Fix it on GitHub.