Deceptively simple. Three fields, one button. But the simplicity is the point — it hides a significant amount of intelligence behind a workflow that any QA engineer or product manager can operate without writing a single line of test script.
Here’s what each piece of this interface is actually doing:
1. Document upload — PDF, DOCX, or PPTX
The AI accepts your requirements document in whatever format it exists in. No pre-processing, no reformatting required. A Confluence export, a product spec deck, or a raw BRD — it handles all three. The engine parses natural language, not structured markup.

2. Jira integration — create user stories directly from PRD
This checkbox is quietly powerful. With it enabled, the AI doesn’t just extract test cases — it creates the corresponding Jira user stories first, then generates tests against them. Your PRD populates your backlog and your test suite in a single pass. For teams running agile cycles, this closes a loop that usually requires two separate manual steps.

3. Test cases per user story — configurable coverage depth
Set to 5 by default, this control lets teams balance thoroughness against noise. A critical payment flow might warrant 8–10 test cases covering edge conditions; a low-risk configuration page might only need 3. The number you set here shapes how exhaustively the AI covers each extracted requirement.

4. Application URL — environment targeting
The AI doesn’t generate tests in a vacuum. By knowing the target application URL, it can orient generated scenarios to the actual environment — staging, pre-prod, or production — without manual reconfiguration between runs.

5. Advanced settings — load profiles, thresholds, and more
For teams that want finer control, the collapsible Advanced Settings panel exposes concurrency parameters, ramp-up duration, SLA thresholds, and assertion configurations. Default values are intelligently inferred from the PRD; advanced settings let engineers override them where judgment calls are needed.
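To make the inferred-defaults-plus-overrides behavior concrete, here is a minimal sketch. The setting names, values, and merge logic are assumptions for illustration, not Cavisson's actual configuration schema:

```python
# Hypothetical advanced-settings sketch; keys and defaults are
# illustrative, not Cavisson's actual schema.
defaults_from_prd = {
    "concurrency": 500,        # inferred from "500 concurrent users" in a PRD
    "ramp_up_seconds": 120,    # inferred ramp-up duration
    "sla_p95_ms": 300,         # inferred from "must not exceed 300 ms"
    "assertions": ["status == 200", "p95 <= sla_p95_ms"],
}

engineer_overrides = {
    "ramp_up_seconds": 300,    # judgment call: gentler ramp for staging
}

# Engineer-supplied values win; inferred defaults fill in everything else.
effective = {**defaults_from_prd, **engineer_overrides}
print(effective["ramp_up_seconds"], effective["concurrency"])  # → 300 500
```

The point of the merge order is that an engineer never has to restate settings the PRD already implies — only the ones where human judgment disagrees with the inference.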
What the AI does under the hood
Clicking “Generate Test Cases” kicks off a pipeline that does considerably more than keyword matching. Cavisson’s AI processes the uploaded document in three distinct stages:
1. Requirement extraction: The engine scans for performance intent — latency targets, concurrency figures, SLA language, throughput expectations. It distinguishes between hard requirements (“must not exceed”) and soft targets (“should aim for”), treating each differently in the test output.
2. Scenario mapping: Each extracted requirement is mapped to a user journey or endpoint. The AI groups related requirements into coherent test scenarios, avoiding the fragmentation that comes from treating every sentence in isolation.
3. Test case generation: For each scenario, the AI produces the configured number of test cases — covering happy paths, boundary conditions, and failure modes. Thresholds are tied directly to the requirement language, not guessed. The output is a review-ready test suite, not a draft that needs to be rebuilt from scratch.
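A minimal sketch of how such a three-stage pipeline could work, assuming simple regex heuristics. Every function name, keyword list, and classification rule here is an illustrative assumption — Cavisson's actual engine is not public:

```python
import re
from dataclasses import dataclass

# Hypothetical three-stage pipeline; heuristics are illustrative only.

@dataclass
class Requirement:
    text: str
    kind: str       # "hard" ("must not exceed") vs. "soft" ("should aim for")
    endpoint: str

def extract_requirements(prd_text):
    """Stage 1: scan sentences for performance intent and classify them."""
    reqs = []
    for sentence in re.split(r"(?<=[.!?])\s+", prd_text):
        if not re.search(r"\b(latency|throughput|concurrent|SLA|response time)\b",
                         sentence, re.I):
            continue
        kind = "hard" if re.search(r"\bmust\b", sentence, re.I) else "soft"
        m = re.search(r"/[\w/-]+", sentence)  # crude endpoint detection
        reqs.append(Requirement(sentence, kind, m.group(0) if m else "general"))
    return reqs

def map_scenarios(reqs):
    """Stage 2: group related requirements by endpoint into scenarios."""
    scenarios = {}
    for r in reqs:
        scenarios.setdefault(r.endpoint, []).append(r)
    return scenarios

def generate_test_cases(scenarios, per_story=5):
    """Stage 3: emit per_story cases per scenario across three modes."""
    modes = ["happy path", "boundary", "failure mode"]
    suite = []
    for endpoint, reqs in scenarios.items():
        hard = sum(r.kind == "hard" for r in reqs)
        for i in range(per_story):
            suite.append(f"{endpoint}: {modes[i % len(modes)]} case {i + 1} "
                         f"({len(reqs)} requirement(s), {hard} hard)")
    return suite

prd = ("Checkout latency must not exceed 300 ms at 500 concurrent users. "
       "The /search endpoint should aim for 200 ms median response time.")
cases = generate_test_cases(map_scenarios(extract_requirements(prd)), per_story=3)
print(len(cases))  # → 6 (two scenarios, three cases each)
```

Even this toy version shows why stage 2 matters: without grouping, the two sentences about checkout would become disconnected fragments instead of one coherent scenario.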
Before and after: what the workflow actually looks like
| Without AI | With Cavisson AI |
| --- | --- |
| The engineer reads the full PRD manually | PRD uploaded, parsed in minutes |
| Performance criteria extracted from memory | All testable assertions auto-extracted |
| Test scripts built from scratch per flow | Scenarios generated with load profiles attached |
| Thresholds based on estimates or guesswork | Thresholds tied directly to stated SLA language |
| 2–3 days to full coverage on a standard release | Same coverage achieved in hours, not days |
| Requirements drift goes undetected | PRD re-parse flags new or changed conditions |
“The best performance test is the one that actually reflects what the business promised — not what the engineer remembered.”