# LLM providers
## What exists in the flow engine today

The node types that look LLM-shaped are:
- Question — a deterministic prompt-and-wait for a typed reply. No model involved.
- Condition — evaluates a stored variable with fixed operators (equals, contains, not_equals, numeric comparisons). No model involved.
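The deterministic Condition evaluation described above can be sketched as follows. The operator names `equals`, `contains`, and `not_equals` come from this doc; the names used for the numeric comparisons (`greater_than`, `less_than`) and everything else here are assumptions, not the engine's actual implementation.

```python
from typing import Any

# Fixed operator set -- deterministic, no model involved.
OPERATORS = {
    "equals": lambda stored, expected: stored == expected,
    "not_equals": lambda stored, expected: stored != expected,
    "contains": lambda stored, expected: str(expected) in str(stored),
    "greater_than": lambda stored, expected: float(stored) > float(expected),  # assumed name
    "less_than": lambda stored, expected: float(stored) < float(expected),     # assumed name
}

def evaluate_condition(variables: dict[str, Any], name: str, op: str, expected: Any) -> bool:
    """Evaluate a stored variable against a fixed operator."""
    if op not in OPERATORS:
        raise ValueError(f"unsupported operator: {op}")
    return OPERATORS[op](variables.get(name), expected)
```

Because the operator set is closed, a flow author can only branch on comparisons the engine already knows how to evaluate, which is exactly why generative behaviour is out of reach here.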
If a flow needs to generate content dynamically — summarising an answer, paraphrasing, classifying intent — that is currently a gap.
## What’s stubbed but not wired

Two node types are in the `flow_nodes.type` enum but have no worker logic yet:
- `api_call` — intended to call an external HTTP endpoint (which could be any LLM provider).
- `set_variable` — intended to store a value derived at flow-time.
Together these would be enough to build a “call OpenAI, stash the response, branch on it” pattern, but neither is implemented. Treat the enum entries as a roadmap marker, not a feature.
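If both node types were implemented, the “call OpenAI, stash the response, branch on it” pattern might be declared roughly like this. Every node shape and field name below is invented for illustration; nothing here is a real Voxa flow schema.

```python
# Hypothetical flow definition -- api_call and set_variable exist only
# as enum entries today, so this would not execute in the current engine.
flow = {
    "nodes": [
        {
            "id": "classify",
            "type": "api_call",          # stubbed: no worker logic yet
            "url": "https://api.openai.com/v1/chat/completions",
            "save_response_as": "intent_raw",
        },
        {
            "id": "stash",
            "type": "set_variable",      # stubbed: no worker logic yet
            "name": "intent",
            "value": "{{intent_raw.choices.0.message.content}}",
        },
        {
            "id": "branch",
            "type": "condition",         # exists today, fixed operators only
            "variable": "intent",
            "operator": "equals",
            "value": "complaint",
        },
    ]
}
```

The point of the sketch is the shape of the chain: an external call, a variable write, then an existing Condition node branching on the stored value.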
## What a shipped LLM integration will look like

When we wire it, the design we’re aiming at is:
- Bring your own key — Voxa won’t resell tokens; you pay your LLM provider directly.
- Per-tenant provider configuration — store the API key encrypted (same pattern as Meta credentials), let admins pick the model per flow.
- LLM node types — `llm_generate` (produces text), `llm_classify` (picks from a fixed label set), `llm_extract` (pulls named fields from free text).
- No training — Voxa will never forward your data for model training; the Privacy Policy §4 is explicit.
## Work you can do today

If your flow needs LLM behaviour right now:
- Run your own web service that wraps the LLM provider.
- Hit it from a server of yours, not from Voxa, because `api_call` isn’t wired.
- POST the classified / generated result back to Voxa via the contact’s WhatsApp number (i.e., you simulate being the contact).
- Voxa processes it as a normal inbound message.
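The send-the-result-back step above can be sketched with the standard library alone. The webhook URL and payload shape are assumptions; check your own inbound endpoint’s actual contract before copying this.

```python
import json
import urllib.request

def build_inbound_payload(wa_number: str, result: str) -> dict:
    """Shape the LLM result as if the contact had typed it themselves.

    Field names ("from", "body") are assumed, not a documented Voxa schema.
    """
    return {"from": wa_number, "body": result}

def post_result(webhook_url: str, wa_number: str, result: str) -> None:
    """POST the generated/classified text to the (assumed) inbound webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(build_inbound_payload(wa_number, result)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()
```

Keeping the payload builder separate from the HTTP call makes the simulated-contact shape easy to unit-test without a live endpoint.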
This is clunky. We know. The real fix is wiring `api_call` plus the LLM node types — both are on the roadmap.
## Related

- Integrations overview
- Working with prompts — prompt patterns that apply whether the “prompt” is a WhatsApp message or an LLM system prompt.