How RoastGuard works, what it can and can't do, and everything in between.
RoastGuard runs a multi-step AI pipeline. First, it analyzes your campaign description (and image, if provided) to identify the key dimensions along which different consumers are likely to disagree — things like cultural sensitivity, price perception, authenticity, or body image. It then samples 32 synthetic consumer personas that each occupy a distinct position across those dimensions. Finally, each persona reasons through your campaign using a structured "logic of appropriateness" framework — asking what kind of situation this is, what kind of person they are, and what a person like them would do — before generating a realistic social media reaction.
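The three steps can be pictured as a simple orchestration loop. Everything below is a minimal sketch: the function names, the fixed persona count constant, and the placeholder return values are illustrative stand-ins for RoastGuard's actual AI calls, not its real code.

```python
from dataclasses import dataclass

N_PERSONAS = 32

@dataclass
class Reaction:
    persona_id: int
    comment: str

def extract_dimensions(campaign: str) -> list[str]:
    # Step 1 (placeholder): an AI call would identify the campaign-specific
    # axes of disagreement; a fixed example set stands in here.
    return ["Environmental Values", "Price & Class Perception"]

def sample_personas(dimensions: list[str], n: int = N_PERSONAS) -> list[dict]:
    # Step 2 (placeholder): each persona occupies a distinct position
    # (0.0 to 1.0) on every dimension.
    return [{"id": i, "positions": {d: i / (n - 1) for d in dimensions}}
            for i in range(n)]

def react(persona: dict, campaign: str) -> Reaction:
    # Step 3 (placeholder): the per-persona "logic of appropriateness"
    # reasoning and comment generation would be an AI call.
    return Reaction(persona["id"], f"persona {persona['id']} on: {campaign}")

def run_simulation(campaign: str) -> list[Reaction]:
    dims = extract_dimensions(campaign)
    personas = sample_personas(dims)
    return [react(p, campaign) for p in personas]

reactions = run_simulation("Eco-friendly sneaker launch")
print(len(reactions))  # 32
```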
Dimensions are the axes of consumer diversity most relevant to your specific campaign. For a sustainability-forward campaign, one key dimension might be Environmental Values — spanning consumers who actively seek eco-certified products to those who dismiss green claims as greenwashing. For a luxury collaboration, Price & Class Perception becomes central. RoastGuard identifies 4–6 campaign-specific dimensions per simulation, plus two hidden structural dimensions (geographic spread and occupation) to ensure the 32 personas aren't all from the same demographic slice.
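A dimension can be pictured as a named axis with two poles. The dataclass and the example pole descriptions below are illustrative, not RoastGuard's internal schema; the `hidden` flag marks the two structural dimensions that don't surface in the report.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dimension:
    name: str
    low_pole: str         # one end of the axis
    high_pole: str        # the other end
    hidden: bool = False  # structural dimensions aren't shown in the report

# Example set for a sustainability-forward campaign (pole wording invented):
dims = [
    Dimension("Environmental Values",
              "dismisses green claims as greenwashing",
              "actively seeks eco-certified products"),
    Dimension("Price & Class Perception",
              "sees premium pricing as exclusionary",
              "reads premium pricing as a quality signal"),
    # Two hidden structural dimensions keep the pool demographically spread.
    Dimension("Geographic Spread", "rural", "urban", hidden=True),
    Dimension("Occupation", "blue-collar", "white-collar", hidden=True),
]

print(sum(d.hidden for d in dims))  # 2
```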
The 32 personas are sampled using a quasi-random technique (Sobol sequences) that ensures even coverage across all dimension combinations — avoiding clustering and redundancy. Each persona is then fleshed out by an AI model with full demographics, life context, values, brand relationship style, and a psychological trigger specific to your campaign. The goal is 32 genuinely distinct people, not 32 variations on the same archetype.
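The even-coverage sampling can be reproduced with SciPy's Sobol generator (`scipy.stats.qmc.Sobol`). Since 32 = 2^5, `random_base2(m=5)` yields exactly 32 points spread evenly over the unit hypercube; the dimension count and seed below are arbitrary choices for the sketch.

```python
from scipy.stats import qmc

d = 6  # e.g. 4 campaign-specific dimensions + 2 hidden structural ones
sampler = qmc.Sobol(d=d, scramble=True, seed=42)
points = sampler.random_base2(m=5)  # 2**5 = 32 points in [0, 1)^d

print(points.shape)  # (32, 6)
```

Each row then maps to one persona: column *j* is that persona's position on dimension *j*, and the low-discrepancy spacing is what prevents clustering and redundancy.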
Each persona independently evaluates your campaign and generates a social media comment with associated risk signals — sentiment tone, likelihood the comment spreads, escalation risk, and trigger keywords. A final synthesis pass reads all 32 reactions to produce your risk summary: an overall risk level (Low / Medium / High), the top 2–4 risk themes with representative quotes, and concrete mitigation suggestions.
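One way to picture the per-persona risk signals and the synthesis pass is the sketch below. The field names and thresholds are hypothetical; in RoastGuard the final summary is an AI synthesis step, not a fixed formula.

```python
from dataclasses import dataclass, field

@dataclass
class RiskSignals:
    sentiment: float          # -1.0 (hostile) .. +1.0 (positive)
    spread_likelihood: float  # 0.0 .. 1.0
    escalation_risk: float    # 0.0 .. 1.0
    trigger_keywords: list[str] = field(default_factory=list)

def overall_risk(signals: list[RiskSignals]) -> str:
    # Illustrative rule: share of personas that are both negative and
    # likely to escalate. Thresholds are invented for this sketch.
    hot = sum(1 for s in signals if s.sentiment < 0 and s.escalation_risk > 0.5)
    share = hot / len(signals)
    if share > 0.25:
        return "High"
    if share > 0.10:
        return "Medium"
    return "Low"

signals = ([RiskSignals(-0.8, 0.9, 0.9, ["tone-deaf"])] * 9
           + [RiskSignals(0.4, 0.2, 0.1)] * 23)
print(overall_risk(signals))  # High  (9/32 > 0.25)
```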
RoastGuard is a simulation tool, not a prediction engine. The personas are synthetic constructs, not real people. The value lies in systematic coverage — surfacing risk vectors your team may not have considered, across a wider demographic range than a typical internal review. Treat the output as a structured stress-test, not a market research study. It is most useful for catching blind spots and framing pre-launch conversations, not for projecting precise sentiment percentages.
Yes. AI models can misread cultural nuance, produce overly homogeneous reactions, or surface risks that are unlikely in practice. Results are most reliable for campaigns with clear cultural or social content and less reliable for highly technical or niche B2B messaging. We recommend using RoastGuard alongside — not instead of — qualitative human review for high-stakes campaigns.
Yes. When you upload a visual asset, RoastGuard runs a dedicated vision analysis step that extracts a detailed description of the image — colors, composition, subjects, implied lifestyle, cultural references, and any potentially sensitive visual elements. This description is passed into every downstream step, so both the dimension extraction and persona reactions account for what the campaign actually looks like, not just what the brief says.
RoastGuard currently focuses on the US market. All analysis, persona generation, and comments are produced in English to ensure consistency and quality. International market support is on our roadmap.
Your campaign description and any uploaded images are sent to third-party AI providers (including OpenAI) for processing. We store your campaign data and simulation results in our database so you can revisit them. We do not use your campaign content to train AI models or share it with other users. See our Privacy Policy for full details.
Yes. All reports are private by default. Sharing is opt-in — you can generate a public link for any completed report, which allows anyone with the link to view the results without logging in. You can disable sharing at any time.
One simulation is one full pipeline run on one campaign — from dimension extraction through persona generation, comment generation, and risk summary. Re-running a failed campaign or resuming an interrupted pipeline does not consume an additional simulation credit.
Free accounts get 3 simulations total (no expiry). Pro accounts get 10 simulations per day; the daily quota resets at midnight UTC. Pro also unlocks the ability to generate shareable public report links.
Yes, you can cancel at any time from your account settings. You will retain Pro access until the end of your current billing period. We do not offer refunds for partial periods.