Every report is produced by the pipeline below. The prompts here are the live prompts resolved against the override store the same way the runtime pipeline resolves them — nothing paraphrased, nothing simplified. Long on purpose.
Reusable pieces composed into multiple pipeline steps. Editing any one of them affects every step that composes it — which is why they are written, versioned, and reviewed separately from per-step instructions.
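A minimal sketch of what that composition might look like in code, assuming a hypothetical override store and block IDs (none of these names come from the actual pipeline):

```ts
// Illustrative only: how a shared block might be composed into a step prompt.
type SharedBlockId = "citation-rules" | "status-and-e2e-fit";

interface PromptOverrideStore {
  // Returns the live text for a block, falling back to the checked-in default.
  resolve(id: SharedBlockId): string;
}

function composeStepPrompt(
  store: PromptOverrideStore,
  stepInstructions: string,
  blocks: SharedBlockId[],
): string {
  // Shared blocks are resolved the same way for every step that uses them,
  // so editing one block changes every composed prompt at once.
  const resolved = blocks.map((id) => store.resolve(id));
  return [...resolved, stepInstructions].join("\n\n");
}
```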
Shared block
Citation Rules
How to cite: web search is primary, training knowledge marked as such, never fabricate URLs, every factual claim needs a source.
The credibility guarantee of the product. Bad citations = no trust. These rules make Claude honest about where each claim came from.
Used by: 3. Find evidence and write findings
**SOURCE CITATION RULES**
You have two primary sources of knowledge:
1. **Web search results and official vendor documentation**: Use web search to find the specific help center articles, technical documentation, implementation guides, and support articles that address the buyer's requirement. Always prefer official vendor help centers over marketing pages. Cite the specific document or help article found.
2. **Your training knowledge**: You have extensive knowledge of vendor products from your training data. When citing from training knowledge, name the specific document or help article and mark it as "(from vendor documentation)" or "(from vendor blog/marketing; verify current availability)."
Rules:
- **Web search is your primary tool.** For each vendor and requirement, search the vendor's help center to find the SPECIFIC feature that addresses the buyer's need. Do not rely solely on training knowledge.
- Never fabricate a specific URL. Only cite URLs returned by web search.
- Every factual claim about a specific vendor capability must have a source attribution.
- If web search and your training knowledge both lack information about a capability, then and only then mark it as "unclear."
**WRITING STYLE**
- NEVER use em dashes (—) in any output. Use colons, commas, semicolons, or periods instead. The platform's editorial voice does not use em dashes under any circumstances, including inside citations, quotes, parentheticals, or bullet lists. If you would normally write "X — Y" write "X: Y" or "X. Y" or split the thought into two sentences.
- Do not use en dashes (–) as substitutes for em dashes; hyphens (-) are fine for compound words only (e.g., "3-way match").
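One way to picture the "never fabricate a URL" rule as an enforceable check: keep only citations whose URLs actually came back from web search. This is an illustrative guard with assumed shapes, not something the prompt above claims the pipeline runs:

```ts
// Illustrative guard: drop any cited URL that was not returned by web search.
interface Citation {
  url?: string;          // absent for training-knowledge citations
  title: string;
  fromTraining: boolean; // marked "(from vendor documentation)" etc. in prose
}

function filterFabricatedUrls(
  citations: Citation[],
  searchResultUrls: Set<string>,
): Citation[] {
  return citations.filter(
    (c) => c.fromTraining || (c.url !== undefined && searchResultUrls.has(c.url)),
  );
}
```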
Shared block
Status Definitions + End-to-End Fit
Strict definitions of supported / partial / unclear / not-supported, plus the end-to-end process fit test that every status must pass.
The most important rule in the system. Without this, Claude would rate anything that technically exists as "supported", even when the mechanism breaks the buyer's process downstream.
Used by: 3. Find evidence and write findings
**STATUS DEFINITIONS; apply these strictly based on evidence, not vendor reputation**
- "supported": The vendor handles this well at the depth the buyer needs. Mechanism is clear, no material shortfall for this specific requirement. You can assign "supported" based on your training knowledge; you do not need reference data to confirm it.
- "partial": The vendor offers a version of this capability, but it falls short in a material way; e.g., 2-way matching instead of 3-way, fixed approval chains instead of flexible stakeholder involvement, header-level OCR instead of line-item extraction, subset of ERP fields instead of full fidelity. Partial is not a hedge; it means the buyer will hit a ceiling.
- "unclear": You genuinely do not have enough information from ANY source (reference data, training knowledge of documentation, training knowledge of blogs/marketing) to assess this capability. Do NOT mark something as "unclear" simply because it is absent from the reference data provided in this prompt. Only use "unclear" when you truly don't know.
- "not-supported": The capability is absent or requires a third-party add-on not included in the base product.
**END-TO-END PROCESS FIT; this is mandatory for every status assignment**
The software categories Stackrate evaluates are multi-step process systems. A buyer's process description defines a chain of steps where data flows from one stage to the next. When you evaluate a requirement, you are evaluating one step in that chain; but the mechanism you identify must be compatible with every other step the buyer described.
Before assigning "supported" to any requirement, you must test: does the mechanism I am proposing for this step allow the buyer's upstream and downstream steps to work as described? If a vendor's mechanism for step 1 breaks or prevents step 3, that mechanism is "partial" at best; even if the mechanism itself is real and well-documented.
Example: A buyer needs invoices segregated by project during processing, then consolidated for centralized payment. A vendor that achieves segregation through separate legal entities technically provides data isolation; but it breaks the consolidation step, because separate entities mean separate books, separate payment runs, and no unified AP queue. The correct status is "partial" or "not-supported," not "supported," because the mechanism is incompatible with the buyer's end-to-end process.
The test is always: if the buyer implemented exactly the mechanism you are describing, would their full process; from the first step they described to the last; work end to end without restructuring their operations?
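In code terms, the status set is closed and the end-to-end fit test only ever pushes a status down. A sketch with hypothetical names, not the pipeline's actual types:

```ts
// The four statuses defined above, as a closed union.
type FindingStatus = "supported" | "partial" | "unclear" | "not-supported";

// The end-to-end fit test in one line: a real, well-documented mechanism that
// breaks another step in the buyer's process can never be rated "supported".
function applyEndToEndFit(
  status: FindingStatus,
  breaksAnotherStep: boolean,
): FindingStatus {
  return breaksAnotherStep && status === "supported" ? "partial" : status;
}
```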
Each step takes structured input, runs a Claude call (or parallel per-vendor calls, for the findings step), and hands structured output to the next step. User-message templates use {{placeholder}} syntax for values filled in at runtime.
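The {{placeholder}} substitution is simple enough to sketch; the real template code is not shown here, so treat this as the shape of the idea rather than the implementation:

```ts
// Fill {{placeholder}} slots in a user-message template with runtime values.
// Unknown placeholders are left visible so a missing value is easy to spot.
function renderTemplate(
  template: string,
  values: Record<string, string>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) => values[key] ?? match);
}

// Example: renderTemplate("Evaluate {{vendor}} against: {{requirement}}",
//   { vendor: "Acme AP", requirement: "line-item OCR on invoices" });
```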
Step 2
Plan searches
For each requirement, decides what to search for on each vendor's help center, including anti-patterns that would break the buyer's process.
Role
Pre-analyzes each requirement so the findings step searches for the right features with the right vendor-specific terminology, not generic terms.
Model
Claude Sonnet (claude-sonnet-4-6)
Sources consulted
None (pure reasoning)
Shared blocks composed
None
Inputs
- The requirements extracted in step 1
- The list of vendors being evaluated
- The original process description
Outputs
- A search plan per requirement: solution type, possible mechanisms, anti-patterns, and 3-5 targeted search queries per vendor
User message
Asks Claude to design vendor-specific search queries for one requirement before findings run.
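Based on the output description above, one plausible shape for a single requirement's search plan (interface and field names are guesses, not the pipeline's actual types):

```ts
// Hypothetical shape of the step 2 output for one requirement.
interface SearchPlan {
  requirementId: string;
  solutionType: string;                      // the kind of feature that would satisfy the requirement
  possibleMechanisms: string[];              // ways vendors typically implement it
  antiPatterns: string[];                    // mechanisms that would break the buyer's process
  queriesByVendor: Record<string, string[]>; // 3-5 targeted search queries per vendor
}
```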
Step 3
Find evidence and write findings
For each vendor × requirement: runs 2-3 web searches, then writes a "how it works" finding with inline citations and a status.
Role
The core of the product. Produces the evidence-backed per-vendor per-requirement answer that the rest of the report is built on.
Model
Claude Sonnet (claude-sonnet-4-6)
Sources consulted
web-search, knowledge-base, vendor-comments, vendor-docs, search-plans
Shared blocks composed
Status Definitions + End-to-End Fit, Citation Rules
Inputs
- The requirements from step 1
- The vendors to evaluate
- The process description
- Search plans from step 2
- Knowledge base hints (topic and prior-found URLs per vendor)
- Vendor-provided documentation (if uploaded by vendors)
- Approved vendor comments on prior reports (first-class source)
- Pre-fetched help-center content (from enrichWithWebSearch)
Outputs
- ReportFinding per vendor per requirement: status, howItWorks, limitations, confidence, sources
User message
The core prompt that drives per-vendor per-requirement evaluation with web search + inline citations.
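A sketch of the per-vendor, per-requirement output and the parallel fan-out mentioned in the pipeline overview. Only the fields listed under Outputs come from this page; the confidence scale, source shape, and function names are assumptions:

```ts
type FindingStatus = "supported" | "partial" | "unclear" | "not-supported";

// Hypothetical shape of one finding; field names follow the Outputs line above.
interface ReportFinding {
  vendor: string;
  requirementId: string;
  status: FindingStatus;
  howItWorks: string;                      // mechanism description with inline citations
  limitations: string[];
  confidence: "high" | "medium" | "low";   // assumed scale
  sources: { title: string; url?: string; fromTraining: boolean }[];
}

// Findings are produced per vendor in parallel; evaluateVendor stands in for
// the Claude call plus web searches for one vendor.
async function runFindings(
  vendors: string[],
  evaluateVendor: (vendor: string) => Promise<ReportFinding[]>,
): Promise<ReportFinding[]> {
  const perVendor = await Promise.all(vendors.map(evaluateVendor));
  return perVendor.flat();
}
```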
Step 4
Validate mechanisms (cross-requirement)
Checks whether the mechanism proposed for one requirement breaks another requirement. Only downgrades, never upgrades.
Role
The end-to-end process fit guarantee. Catches cases where a vendor technically supports every requirement individually but the mechanisms conflict end-to-end.
Model
Claude Opus (claude-opus-4-6)
Sources consulted
None (pure reasoning)
Shared blocks composed
None
Inputs
- All findings from step 3
- All requirements
- The process description
Outputs
- Updated findings, with some downgraded (supported → partial, etc.) when a mechanism conflict is detected
- mechanismConflict metadata on any downgraded finding
User message
Cross-requirement compatibility check. Asks Claude to flag genuine structural conflicts for one vendor.
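Because this step only ever downgrades, the update logic is easy to state precisely. The rank ordering and helper below are assumptions; only the downgrade-only rule and the mechanismConflict field come from the description above:

```ts
type FindingStatus = "supported" | "partial" | "unclear" | "not-supported";

const STATUS_RANK: Record<FindingStatus, number> = {
  supported: 3,
  partial: 2,
  unclear: 1,
  "not-supported": 0,
};

interface MechanismConflict {
  conflictsWithRequirementId: string;
  explanation: string;
}

// Apply the validator's proposed status, but never let it raise a status.
function applyValidation(
  original: FindingStatus,
  proposed: FindingStatus,
  conflict: MechanismConflict,
): { status: FindingStatus; mechanismConflict?: MechanismConflict } {
  if (STATUS_RANK[proposed] >= STATUS_RANK[original]) {
    return { status: original }; // upgrades and no-ops are ignored
  }
  return { status: proposed, mechanismConflict: conflict };
}
```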
Step 5
Executive summary
3-5 sentences naming strongest and weakest vendors for the buyer's situation and calling out critical gaps in operational terms.
Role
The CFO-readable top of the report. Must name concrete support counts and specific operational impacts, not generic marketing language.
Model
Claude Opus (claude-opus-4-6)
Sources consulted
None (pure reasoning)
Shared blocks composed
None
Inputs
- All requirements
- All findings (post-validation)
- Vendor list
- Process description
Outputs
- A 3-5 sentence executive summary string
System preamble
Short intro that frames the model as writing an executive briefing for a CFO or VP of Finance.
User message
Composes the buyer situation, per-vendor breakdown, and requirement list into a 3-5 sentence summary ask.
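Since the summary must name concrete support counts, the prompt presumably receives a per-vendor tally computed from the post-validation findings. A sketch of that tally; the function is illustrative, not the actual composition code:

```ts
type FindingStatus = "supported" | "partial" | "unclear" | "not-supported";

interface FindingLite {
  vendor: string;
  status: FindingStatus;
}

// Tally statuses per vendor so the summary can cite counts like "9 of 12 supported".
function vendorBreakdown(findings: FindingLite[]): Map<string, Record<FindingStatus, number>> {
  const counts = new Map<string, Record<FindingStatus, number>>();
  for (const f of findings) {
    const tally = counts.get(f.vendor) ?? {
      supported: 0,
      partial: 0,
      unclear: 0,
      "not-supported": 0,
    };
    tally[f.status] += 1;
    counts.set(f.vendor, tally);
  }
  return counts;
}
```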
Step 6
Follow-up questions
4-5 targeted questions the buyer should ask in vendor demos, prioritizing actual gaps and points of divergence.
Role
Bridges the report into the buyer's next action. Produces consultant-grade questions, not generic ones that a brochure could answer.
Model
Claude Sonnet (claude-sonnet-4-6)
Sources consulted
None (pure reasoning)
Shared blocks composed
None
Inputs
- All requirements
- All findings
- Vendor list
- Process description
Outputs
- Array of 4-5 follow-up question strings
System preamble
Short intro that frames the model as writing follow-up questions for a finance leader post-report.
User message
Asks Claude for 4-5 consultant-grade questions targeting the report's actual gaps and divergences.
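"Actual gaps and points of divergence" can be made concrete: requirements where vendors land on different statuses, or where no vendor is fully supported. A sketch of how those targets might be picked out before the questions are written; the helper is an assumption, not part of the documented pipeline:

```ts
type FindingStatus = "supported" | "partial" | "unclear" | "not-supported";

interface FindingLite {
  vendor: string;
  requirementId: string;
  status: FindingStatus;
}

// Requirements worth asking about: vendors disagree, or nobody fully supports it.
function questionTargets(findings: FindingLite[]): string[] {
  const byRequirement = new Map<string, Set<FindingStatus>>();
  for (const f of findings) {
    const statuses = byRequirement.get(f.requirementId) ?? new Set<FindingStatus>();
    statuses.add(f.status);
    byRequirement.set(f.requirementId, statuses);
  }
  return [...byRequirement]
    .filter(([, statuses]) => statuses.size > 1 || !statuses.has("supported"))
    .map(([requirementId]) => requirementId);
}
```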
Step 7
Answer a follow-up question
When a buyer asks a specific question after the report exists, answers it grounded in the report's findings and their original scenario.
Role
The report's interactive layer. Must stay grounded in the evidence already gathered, be honest about uncertainty, and name the operational impact of gaps.
Model
Claude Sonnet (claude-sonnet-4-6)
Sources consulted
None (pure reasoning)
Shared blocks composed
None
Inputs
- A single question from the buyer
- The full report (requirements, findings, process description, vendors)
Outputs
- An answer (2-4 paragraphs) plus a list of sources drawn from the report's findings
User message
Grounds a single follow-up question in the existing report's findings and writes a consultant-tone answer.
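Because the answer must stay grounded in evidence already gathered, the sources it returns are presumably collected from the relevant findings rather than from new searches. A sketch of that collection step, with assumed shapes:

```ts
interface FindingSource {
  title: string;
  url?: string;
}

interface FindingWithSources {
  vendor: string;
  requirementId: string;
  sources: FindingSource[];
}

// Gather the sources already cited by the findings the answer leans on,
// de-duplicated by URL (or title for training-knowledge citations).
function sourcesForAnswer(relevantFindings: FindingWithSources[]): FindingSource[] {
  const seen = new Set<string>();
  const out: FindingSource[] = [];
  for (const finding of relevantFindings) {
    for (const source of finding.sources) {
      const key = source.url ?? source.title;
      if (!seen.has(key)) {
        seen.add(key);
        out.push(source);
      }
    }
  }
  return out;
}
```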
Step 8
Validate and generate a new vendor
When a new vendor is added, validates that it is a real product in the given category and generates 15 seed capabilities for it.
Role
The admission gate for new vendors. Prevents bogus additions and produces a consistent baseline capability profile so the new vendor participates in reports immediately.
Model
Claude Sonnet (claude-sonnet-4-6)
Sources consulted
None (pure reasoning)
Shared blocks composed
None
Inputs
- Proposed vendor name
- Target software category
Outputs
- Either a rejection with a reason, or a GeneratedVendorData object with 15 capabilities
User message
Validates a proposed vendor belongs in the category and generates 15 seed capabilities.
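The either/or output reads naturally as a discriminated union. Field names below are assumptions; only GeneratedVendorData and the 15-capability count come from the description above:

```ts
interface SeedCapability {
  name: string;
  description: string;
}

interface GeneratedVendorData {
  vendorName: string;
  category: string;
  capabilities: SeedCapability[]; // expected to contain 15 entries
}

type VendorValidationResult =
  | { accepted: false; reason: string }
  | { accepted: true; vendor: GeneratedVendorData };

// Illustrative post-check on the model output before a vendor is admitted.
function checkSeedCount(result: VendorValidationResult): boolean {
  if (!result.accepted) {
    return true; // a rejection carries no capabilities to check
  }
  return result.vendor.capabilities.length === 15;
}
```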