This workflow is designed for growth marketers, marketing analysts, campaign managers, conversion specialists, and marketing leads.
Prepare the Required Inputs listed in the Workflow Prompt. Use as much detail as necessary.
1. Copy the Workflow Prompt.
2. Paste it into your AI tool.
3. Replace the "Required Inputs" placeholders with your own details.
4. Run the prompt.
Use this workflow to prioritise a list of A/B test ideas and identify which tests should run first.
### Required Inputs
- Business Goal: [State the outcome the tests should support. Example: increase free trial starts by 15% this quarter]
- Test Ideas: [List each test idea. Example: new hero headline, shorter form, social proof near CTA, pricing FAQ expansion]
- Page or Funnel Area: [Describe where the tests apply. Example: homepage hero, checkout step, lead magnet landing page]
- Target Audience: [Describe the segment. Example: finance leaders at mid-market companies]
- Current Performance Data: [Share available baseline metrics. Example: 22% CTA click rate, 3.4% signup rate, 41% form abandonment]
- Traffic Volume: [Provide approximate sessions or conversions. Example: 12,000 visits and 420 signups per month]
- Constraints: [List limits. Example: limited design support, no backend changes this month, legal approval required]
- Risk Tolerance: [State low, medium, or high. Example: low risk because page supports paid acquisition]
### Input Validation
Review all required inputs before prioritising. If test ideas are too vague, data is missing, or the goal is unclear, ask targeted clarification questions and pause. Do not create the prioritisation until the inputs are usable.
### Instructions
Evaluate each A/B test idea as a practical experiment. Prioritise tests that are likely to improve the stated business goal, can be implemented realistically, and produce useful learning.
Score each idea from 1 to 5 for:
- Impact: likely effect on the primary goal
- Confidence: strength of evidence or rationale
- Effort: implementation difficulty, where 5 means low effort and 1 means high effort
- Risk Control: lower brand, revenue, compliance, or user experience risk scores higher
- Learning Value: how much the test teaches about the audience or funnel
Calculate a total score out of 25. If traffic volume appears too low for reliable testing, flag this and recommend a safer validation approach.
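The scoring model above can be sketched in a few lines of code. This is a minimal illustration of the 1-to-5 rubric and the total out of 25; the test names and individual scores are hypothetical examples, not figures from the workflow.

```python
def total_score(impact, confidence, effort, risk_control, learning_value):
    """Sum the five criteria. Each is scored 1-5, so the total is out of 25.
    Effort is already reverse-scored (5 = low effort, 1 = high effort)."""
    return impact + confidence + effort + risk_control + learning_value

# Hypothetical example scores for three test ideas
ideas = {
    "Shorter demo form": total_score(5, 4, 4, 4, 3),
    "New hero headline": total_score(3, 3, 5, 4, 2),
    "Social proof near CTA": total_score(3, 2, 4, 5, 3),
}

# Rank by total score, highest first, to produce the priority order
ranking = sorted(ideas.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, score) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {score}/25")
```

Because all five criteria carry equal weight, a low-effort, low-risk test can outrank a higher-impact but riskier one; adjust the weighting if that does not match your priorities.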
### Output
Return the prioritisation in this structure:
1. Prioritisation Summary
- State the recommended first test and why
- Note any traffic, risk, or measurement concerns
2. Test Scoring Table
Create a table with these columns:
- Test Idea
- Impact
- Confidence
- Effort
- Risk Control
- Learning Value
- Total Score
- Priority Rank
3. Recommended Test Roadmap
Group tests into:
- Run First
- Run Next
- Backlog
- Do Not Run Yet
4. First Test Brief
For the top-ranked test, provide:
- Hypothesis
- Control
- Variant
- Primary metric
- Secondary metrics
- Audience or traffic segment
- Minimum run guidance
- Decision rule
5. Notes on Weak Test Ideas
Explain which ideas need more evidence, clearer scope, or a different validation method.
6. Next Actions
List the immediate steps to prepare the first test.
Add a conservative version of the roadmap that prioritises low-risk tests only.
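The "Minimum run guidance" and low-traffic flag above usually rest on a sample-size estimate. Below is a minimal sketch using the standard two-proportion z-test approximation at roughly 95% confidence and 80% power; the baseline rate and target lift are illustrative inputs, not outputs of the workflow.

```python
from math import ceil, sqrt

def sample_size_per_variant(baseline, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect the given
    relative lift over the baseline conversion rate (two-sided test,
    ~95% confidence, ~80% power)."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: a 3.4% signup rate, aiming to detect a 15% relative lift
n = sample_size_per_variant(0.034, 0.15)
print(f"~{n} visitors per variant")
```

At low baseline rates and modest lifts, the required sample can exceed a month of traffic for a site with 12,000 visits per month, which is exactly the situation the low-traffic flag is meant to catch.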
### Example Output
The first recommended test for BrightDesk is a shorter demo request form. It has strong relevance to the business goal, clear drop-off evidence, and low implementation complexity.
Hypothesis: Reducing form fields from eight to four will increase completed demo requests because visitors face less perceived effort before speaking with sales.
Primary metric: Demo form completion rate.