LLM providers

The node types that look LLM-shaped are:

  • Question — a deterministic prompt-and-wait for a typed reply. No model involved.
  • Condition — evaluates a stored variable with fixed operators (equals, contains, not_equals, numeric comparisons). No model involved.
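A minimal sketch of that fixed-operator evaluation (the `gt`/`lt` operator names and the `evaluate_condition` helper are assumptions for illustration, not Voxa's actual worker code):

```python
# Hypothetical sketch of a Condition node's fixed operators.
# Everything is coerced to str (or float for numeric comparisons),
# matching the "stored variable + fixed operator" description.
OPERATORS = {
    "equals":     lambda value, target: str(value) == str(target),
    "not_equals": lambda value, target: str(value) != str(target),
    "contains":   lambda value, target: str(target) in str(value),
    "gt":         lambda value, target: float(value) > float(target),
    "lt":         lambda value, target: float(value) < float(target),
}

def evaluate_condition(variables: dict, name: str, op: str, target) -> bool:
    """Look up a stored flow variable and apply one fixed operator to it."""
    return OPERATORS[op](variables.get(name, ""), target)

print(evaluate_condition({"age": "42"}, "age", "gt", 18))  # True
```

No model call anywhere: the branch is fully determined by the stored variable and the operator.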

If a flow needs to generate content dynamically — summarising an answer, paraphrasing, classifying intent — that is currently a gap.

Two node types are in the flow_nodes.type enum but have no worker logic yet:

  • api_call — intended to call an external HTTP endpoint (which could be any LLM provider).
  • set_variable — intended to store a value derived at flow-time.

Together these would be enough to build a “call OpenAI, stash the response, branch on it” pattern, but neither is implemented. Treat the enum entries as a roadmap marker, not a feature.
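To make the gap concrete, here is what that unimplemented pattern might decompose into. Everything below is a hypothetical sketch: `build_llm_request`, `run_pattern`, the payload shape, and the branch names are illustrative, not the worker's real interface (the OpenAI endpoint URL and request body are the provider's documented chat format):

```python
import json
import urllib.request

def build_llm_request(prompt: str, api_key: str) -> urllib.request.Request:
    """api_call step: an HTTP request to an LLM provider (OpenAI shown here)."""
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

def run_pattern(flow_vars: dict, model_reply: str) -> str:
    """set_variable stashes the response; a condition node branches on it."""
    flow_vars["intent"] = model_reply            # set_variable
    if "refund" in flow_vars["intent"]:          # condition: contains
        return "refund_branch"
    return "default_branch"
```

Sending the request (`urllib.request.urlopen(build_llm_request(...))`) and parsing `choices[0].message.content` out of the response is what the api_call worker would own.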

What a shipped LLM integration will look like

When we wire it, the design we’re aiming at is:

  • Bring your own key — Voxa won’t resell tokens; you pay your LLM provider directly.
  • Per-tenant provider configuration — store the API key encrypted (same pattern as Meta credentials), let admins pick the model per flow.
  • LLM node types: llm_generate (produces text), llm_classify (picks from a fixed label set), llm_extract (pulls named fields from free text).
  • No training — Voxa will never forward your data for model training; the Privacy Policy §4 is explicit.
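One design detail worth noting for llm_classify: models rarely return a bare label, so the node would need to coerce the raw reply onto the fixed label set. A sketch of that post-processing, assuming a fallback label (the function and its behaviour are hypothetical, not a shipped API):

```python
def llm_classify(model_reply: str, labels: list[str], default: str) -> str:
    """Coerce a free-text model reply onto a fixed label set.

    Hypothetical post-processing for an llm_classify node: exact match
    first, then substring match, then a caller-supplied default.
    """
    cleaned = model_reply.strip().strip(".").lower()
    for label in labels:
        if label.lower() == cleaned:
            return label
    for label in labels:
        if label.lower() in cleaned:
            return label
    return default

print(llm_classify("Refund.", ["refund", "support"], "other"))  # refund
```

Guaranteeing the output is always one of the fixed labels is what makes the node safe to feed straight into a Condition branch.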

If your flow needs LLM behaviour right now:

  1. Run your own web service that wraps the LLM provider.
  2. Call it from your own server, not from Voxa, because api_call isn’t wired.
  3. POST the classified / generated result back to Voxa via the contact’s WhatsApp number (i.e., you simulate being the contact).
  4. Voxa processes it as a normal inbound message.
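Step 3 might look like the sketch below. The endpoint URL and payload shape are assumptions only (check the actual schema your Voxa inbound webhook expects); the point is that your server impersonates the contact's WhatsApp number so Voxa treats the result as a normal inbound message:

```python
import json
import urllib.request

# Hypothetical: your Voxa inbound webhook endpoint.
VOXA_WEBHOOK = "https://example.com/voxa/inbound"

def post_result_as_contact(wa_number: str, text: str) -> urllib.request.Request:
    """Wrap an LLM result as if it were an inbound message from the contact.

    The {"from": ..., "body": ...} payload is illustrative, not Voxa's
    documented schema. Send with urllib.request.urlopen(...) in practice.
    """
    return urllib.request.Request(
        VOXA_WEBHOOK,
        data=json.dumps({"from": wa_number, "body": text}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

From Voxa's side the POST is indistinguishable from the contact typing the text themselves, which is exactly why step 4 needs no special handling.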

This is clunky. We know. The real fix is wiring api_call + the LLM node types — on the roadmap.