Generate Grounded Knowledge Base Answers
Answers questions using your Needle Collection, providing human-readable responses, inline citations, coverage scores, and a structured payload for evaluation.
Knowledge Base · Question Answering · Data Governance · Hallucination Checking
This workflow answers natural language questions using a Needle Collection as a knowledge base. It returns a human-readable answer with inline citations and a machine-readable JSON bundle ready to feed into an evaluator or hallucination checker.
It does three things:
- Accepts a user question via a manual trigger.
- Searches your Needle Collection for relevant passages and composes a grounded, cited answer with a knowledge base coverage rating.
- Outputs a structured evaluator payload that can be passed directly to a downstream evaluation workflow.
What you need
- A Needle account.
- A Needle Collection with documents uploaded (your knowledge base).
- The collection ID, which you set once in the COLLECTION_ID workflow variable.
How the flow works
| Node | Description |
|---|---|
| Manual Trigger | Collects the user's question and an optional maximum results parameter (defaults to 5). This is the only runtime input; the knowledge base is configured once via the COLLECTION_ID variable. |
| AI Q&A Agent | Receives the question and uses the collection search tool to retrieve relevant passages. The agent is instructed to answer strictly from retrieved context, never fabricate facts, and cite source documents inline. It classifies coverage as HIGH, MED, LOW, or NONE, and prepends a governance header. The AI model is pinned to a low temperature to ensure stable structured outputs. |
| Output Normalizer | Takes the AI agent's flat string outputs and assembles the final structured format. It splits source names and snippets, pairs them into a sources array, and builds the evaluator payload object. |
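The Output Normalizer step can be sketched roughly as below. This is a minimal illustration, not the workflow's actual code: the newline delimiter, parameter names, and the `context_docs` formatting are all assumptions.

```typescript
interface Source {
  name: string;
  snippet: string;
}

interface NormalizedOutput {
  answer_text: string;
  kb_coverage: string;
  sources: Source[];
  evaluator_payload: {
    prompt: string;
    context_docs: string[];
    model_answer: string;
    kb_coverage: string;
  };
}

// Assemble the final structured object from the agent's flat string outputs.
// Delimiters and field names here are illustrative assumptions.
function normalize(
  question: string,
  answerRaw: string,
  coverageRaw: string,
  sourceNamesRaw: string,
  snippetsRaw: string,
): NormalizedOutput {
  // Split the flat strings into parallel arrays, dropping empty entries.
  const names = sourceNamesRaw.split("\n").filter(Boolean);
  const snippets = snippetsRaw.split("\n").filter(Boolean);

  // Pair names with snippets positionally; a missing snippet becomes "".
  const sources: Source[] = names.map((name, i) => ({
    name,
    snippet: snippets[i] ?? "",
  }));

  const coverage = coverageRaw.trim().toUpperCase();

  return {
    answer_text: answerRaw,
    kb_coverage: coverage,
    sources,
    evaluator_payload: {
      prompt: question,
      context_docs: sources.map((s) => `${s.name}: ${s.snippet}`),
      model_answer: answerRaw,
      kb_coverage: coverage,
    },
  };
}
```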
Output
The workflow produces a single JSON object with this exact shape:
| Field | Type | Description |
|---|---|---|
| answer_text | String | The full answer, starting with a governance header line followed by the cited answer body. |
| kb_coverage | String | One of HIGH, MED, LOW, or NONE, indicating how well the knowledge base covered the question. |
| sources | Array | Each element contains the source name and snippet from the retrieved documents. |
| evaluator_payload | Object | Contains the original prompt, context docs, model answer, and coverage score. This object is designed to plug directly into an output evaluator workflow. |
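For orientation, a populated output might look like the following. The document name, snippet, and governance header wording are made-up examples; only the field names and shape come from the table above.

```typescript
// Illustrative instance of the output shape; all values are hypothetical.
const example = {
  answer_text:
    "[KB-GROUNDED | coverage: HIGH] Refund requests are accepted within 30 days [policy.pdf].",
  kb_coverage: "HIGH",
  sources: [
    { name: "policy.pdf", snippet: "Refund requests are accepted within 30 days of purchase." },
  ],
  evaluator_payload: {
    prompt: "What is the refund window?",
    context_docs: ["policy.pdf: Refund requests are accepted within 30 days of purchase."],
    model_answer: "Refund requests are accepted within 30 days [policy.pdf].",
    kb_coverage: "HIGH",
  },
};
```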
Notes
- The COLLECTION_ID is a workflow variable, not a runtime input. This is by design because knowledge base Q&A workflows typically target one collection. Set it once when you configure the workflow.
- The failure semantics are explicit: NONE means no relevant documents were found, LOW or MED means partial coverage, and HIGH means full coverage. Your downstream evaluator can use this coverage score to set different review thresholds.
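One way a downstream evaluator could act on the coverage score is a simple policy map. The action names and thresholds below are assumptions for illustration, not part of the workflow:

```typescript
type Coverage = "HIGH" | "MED" | "LOW" | "NONE";

// Hypothetical review policy keyed by coverage score.
const reviewAction: Record<Coverage, string> = {
  HIGH: "auto-approve",  // full coverage: answer can pass with light checks
  MED: "spot-check",     // partial coverage: sample for human review
  LOW: "human-review",   // weak grounding: always route to a reviewer
  NONE: "reject",        // no relevant documents: do not surface the answer
};
```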
- The AI agent can search multiple times and rephrase queries for better retrieval coverage.
- All intermediate fields between the AI and Code node use a specific raw suffix to make run logs self-documenting and distinguish raw AI output from the final structured response.
