
Wiz PoC plan: what success criteria should we set for exploitable risk prioritization, attack paths, and remediation workflow adoption?


Most security teams walk into a Wiz PoC with a vague goal like “see more risk” or “reduce noise.” That’s not enough. To prove Wiz can become your code-to-cloud operating model, you need concrete, measurable success criteria across three axes: exploitable risk prioritization, attack path visibility, and remediation workflow adoption.

Below is the PoC plan I’d set as a former Deputy CISO who rebuilt prioritization around exploitability, internet exposure, identity paths, and blast radius—and who refused to sign off on any tool that ended in a spreadsheet.


The Quick Overview

  • What It Is: A practical Wiz PoC success framework that defines how to measure exploitable risk prioritization, attack path coverage, and remediation workflow adoption in weeks—not months.
  • Who It Is For: Cloud security, product security, and platform/DevOps leaders evaluating Wiz as their core CNAPP and security graph.
  • Core Problem Solved: Moving from “more findings” to proof that Wiz can focus your teams on truly exploitable paths, route fixes to the right owners, and drive code-level remediation that engineers actually adopt.

How It Works

You frame the Wiz PoC around three outcomes, each with specific KPIs:

  1. Exploitable risk prioritization: Show that Wiz can reduce noise and surface the few risks that actually create viable attack paths.
  2. Attack path discovery and validation: Prove Wiz can model how attackers move—initial access, lateral movement, privilege escalation, data access chains.
  3. Remediation workflow adoption: Demonstrate that engineering teams will work from Wiz-driven queues (Jira/ServiceNow/PRs) and hit realistic SLAs.

You then instrument each outcome with time-bound, data-backed goals over a 4–6 week PoC.


1. Success Criteria for Exploitable Risk Prioritization

Traditional PoCs fixate on “number of findings discovered.” That’s how you end up with a 3,500-row export and no progress. With Wiz, you should measure how well it pinpoints exploitable risk—reachable, impactful, with clear blast radius.

1.1 Define “Exploitable” for Your Environment

Align with your stakeholders up front. In the PoC, an “exploitable” risk should combine at least:

  • Exposure:
    • Internet-reachable or reachable from a compromised asset (effective internet-exposure, not just a public IP).
  • Exploitability:
    • Known exploitable vulnerabilities (e.g., weaponized CVEs, Wiz Threat Research intel).
    • Weak identity paths (over-privileged roles, path to admin).
  • Blast radius:
    • Access to sensitive data, crown jewel workloads, or high-privilege identities.
  • Runtime relevance (if in scope):
    • Observed activity in runtime against the asset (via Wiz eBPF Runtime Sensor and logs).

Wiz’s security graph correlates code, cloud resources, identities, network, and runtime so you can build this definition into filters, not manual Excel magic.
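
To make the definition concrete, here is a minimal sketch of that "exploitable" test as a filter predicate. The field names are hypothetical placeholders, not Wiz's schema; in practice you would encode this as saved filters or graph queries rather than code, but the logic is the same: exposure, exploitability, and blast radius must all line up on the same asset.

```python
# Hypothetical PoC filter: an issue counts as "exploitable" only when
# exposure, exploitability, and blast radius intersect on one asset.
# All field names below are illustrative, not a real Wiz schema.

def is_exploitable(asset: dict) -> bool:
    """Return True when all three risk dimensions are present."""
    exposed = asset["internet_reachable"] or asset["reachable_from_compromised"]
    exploitable = asset["weaponized_cve"] or asset["path_to_admin"]
    blast = asset["touches_sensitive_data"] or asset["high_privilege_identity"]
    return exposed and exploitable and blast

asset = {
    "internet_reachable": True,          # effective exposure, not just a public IP
    "reachable_from_compromised": False,
    "weaponized_cve": True,              # known-exploitable, per threat intel
    "path_to_admin": False,
    "touches_sensitive_data": True,      # crown jewel blast radius
    "high_privilege_identity": False,
}
print(is_exploitable(asset))  # True
```

The AND across dimensions is the whole point: a weaponized CVE on an unreachable dev box, or an exposed service with nothing behind it, should fall out of the top of your queue.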

1.2 Core KPIs to Track

Set these as explicit PoC success criteria:

  1. Noise reduction vs. legacy tools

    • Metric: % reduction in “top priority” items after applying Wiz’s context (exposure, exploitability, blast radius, identity paths).
    • Target: At least a 50% reduction, and ideally closer to 80%, in the issues you actively track, compared to your current CVSS-only or siloed tools.
    • How to measure:
      • Take a representative cloud account / app.
      • Compare “critical/high” items from your existing tools to Wiz’s prioritized list of exploitable risks and attack paths for that same scope.
  2. Coverage of truly critical issues

    • Metric: % of your known “must-fix” issues that Wiz flags as top priority (e.g., legacy Log4j exposures, crown jewel misconfigurations).
    • Target: 100% of previously known criticals show up in Wiz’s top-risk list.
    • Why it matters: Proves Wiz doesn’t miss the important stuff while de-prioritizing theoretical noise.
  3. Time-to-first-prioritized-view

    • Metric: Time from connecting Wiz to your cloud accounts to getting a prioritized view of exploitable risks.
    • Target: Within 60 minutes of connecting a representative set of accounts, you should see a meaningful prioritized list.
    • Why it matters: If you need weeks of tuning for anything useful, it won’t work in a real incident.
  4. Business-context alignment

    • Metric: % of top 20–50 Wiz-prioritized risks that match what your team and business consider “actually scary.”
    • Target: ≥ 80% alignment after one iteration of tuning tags (prod vs dev, crown jewels, environment tiers).
    • How to measure:
      • Bring security + engineering + product owners into a working session.
      • Walk through Wiz’s top attack paths / risks.
      • Score each as “correctly high,” “should be lower,” or “should be higher.”
      • Tune Wiz filters and tags; re-evaluate.
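
KPIs 1 and 2 above reduce to simple set arithmetic once you export finding identifiers from each tool. A minimal sketch, assuming you can pull comparable finding IDs from both your legacy tools and Wiz for the same scope (the ID format below is made up):

```python
# Hypothetical KPI calculator for the PoC. Finding IDs and export
# format are assumptions -- adapt to whatever your tools actually emit.

def noise_reduction(legacy_top: set, wiz_top: set) -> float:
    """Percent reduction in actively tracked 'top priority' items (KPI 1)."""
    if not legacy_top:
        return 0.0
    return 100 * (len(legacy_top) - len(wiz_top)) / len(legacy_top)

def critical_coverage(known_criticals: set, wiz_top: set) -> float:
    """Percent of known must-fix issues present in Wiz's top-risk list (KPI 2)."""
    if not known_criticals:
        return 100.0
    return 100 * len(known_criticals & wiz_top) / len(known_criticals)

legacy = {f"VULN-{i}" for i in range(1, 501)}  # 500 'critical/high' from legacy tools
wiz = {f"VULN-{i}" for i in range(1, 101)}     # 100 prioritized exploitable risks
known = {"VULN-7", "VULN-42", "VULN-99"}       # previously known must-fix issues

print(f"Noise reduction: {noise_reduction(legacy, wiz):.0f}%")     # 80%
print(f"Critical coverage: {critical_coverage(known, wiz):.0f}%")  # 100%
```

Against the targets above, this sample run passes both gates: an 80% reduction in tracked items with zero known criticals dropped.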

2. Success Criteria for Attack Path Discovery

Attackers don’t think in CVEs; they think in paths. Your Wiz PoC should prove that the security graph can map complete attack paths—from external exposure through lateral movement and privilege escalation to data access.

2.1 What “Good” Looks Like

In the PoC, Wiz should:

  • Attack surface scanning: Discover externally reachable assets and model effective internet-exposure, not just “public = bad.”
  • Deep internal analysis: Connect code, cloud, identities, network, and runtime to model:
    • Lateral movement
    • Privilege escalation
    • Data access chains
  • Contextual attack paths: Present these as end-to-end, prioritized paths, not disjointed misconfigurations.

2.2 Core KPIs to Track

  1. Number of meaningful attack paths discovered

    • Metric: Count of unique, high-impact attack paths into or involving:
      • Internet-exposed services
      • Privileged identities (admin, break-glass, CI/CD roles)
      • Sensitive data stores
    • Target:
      • Identify at least 5–10 high-fidelity paths across the PoC scope, including at least one path to a crown jewel asset.
    • Why it matters: Proves Wiz is surfacing real kill chains, not just standalone misconfigs.
  2. Path completeness

    • Metric: For a sample of attack paths, measure how many hops the path includes and whether it matches how your red team / threat modelers think.
    • Target:
      • Paths should include initial access → lateral movement → privilege escalation → data access where applicable.
      • ≥ 80% of reviewed paths should be judged “realistic” by your offensive / architecture teams.
    • How to measure:
      • Have a senior engineer or red teamer validate 5–10 Wiz attack paths.
      • Confirm that each step is technically plausible given your environment.
  3. Effective internet-exposure clarity

    • Metric: Ability to differentiate between:
      • Truly internet-reachable services with exploitable flaws.
      • “Public” resources with no viable access path or compensating controls.
    • Target:
      • For a sample of “public” assets, Wiz’s classification of effective internet-exposure is accurate in ≥ 90% of cases.
    • Why it matters: This is where many tools either over-alert or miss real exposure entirely.
  4. Prioritized attack path list vs. flat vulnerability queue

    • Metric: Whether Wiz can consolidate hundreds/thousands of issues into a small number of attack paths and critical exposures.
    • Target:
      • PoC should demonstrate a shift from thousands of isolated issues to a manageable list (e.g., 20–50) of high-risk paths and correlated weaknesses.
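
KPIs 3 and 4 are also easy to score by hand. A small sketch, assuming you record your team's manual exposure validation alongside the tool's classification (the asset names and labels are illustrative, not real Wiz output):

```python
# Hypothetical scoring helpers for KPIs 3 and 4. Sample data is
# illustrative; "reachable" vs "not-reachable" stand in for whatever
# effective-exposure labels your validation exercise produces.

def exposure_accuracy(tool_labels: dict, validated: dict) -> float:
    """% of sampled 'public' assets where the tool's effective
    internet-exposure call matches manual validation (target: >= 90%)."""
    correct = sum(tool_labels.get(a) == label for a, label in validated.items())
    return 100 * correct / len(validated)

def consolidation_ratio(total_issues: int, attack_paths: int) -> float:
    """How many isolated issues each correlated attack path absorbs (KPI 4)."""
    return total_issues / attack_paths

validated = {"lb-prod": "reachable", "s3-logs": "not-reachable", "vm-web": "reachable"}
tool = {"lb-prod": "reachable", "s3-logs": "not-reachable", "vm-web": "not-reachable"}

print(f"Exposure accuracy: {exposure_accuracy(tool, validated):.0f}%")
print(f"Consolidation: {consolidation_ratio(3200, 40):.0f} issues per path")
```

In this toy sample the tool misclassifies one of three assets (67%), which would fail the ≥ 90% gate and trigger a tuning pass before sign-off.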

3. Success Criteria for Remediation Workflow Adoption

The real test of Wiz is whether engineers work from Wiz-originated tasks without debate. That means ownership mapping, ticketing/PR workflows, and measurable SLA adherence.

3.1 Define Your “Happy Path” Workflow

Before you start the PoC, document the target flow—for example:

  1. Wiz Security Graph identifies an exploitable risk / attack path.
  2. Wiz maps it to the right owner:
    • Team
    • Service
    • Repo
  3. Wiz automatically creates:
    • Jira or ServiceNow tickets, and/or
    • PRs in the appropriate repo (via Wiz Green agent where available).
  4. Engineering teams:
    • Accept and triage the work.
    • Implement a fix (preferably in code/IaC).
    • Close the issue within the agreed SLA.
  5. Wiz validates that:
    • The risk is actually closed (no configuration drift / re-open).
    • The attack path is broken end-to-end.

Your PoC criteria should measure whether each step actually happens, not just whether it could.
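
For a pilot, step 3 of the flow above can be sketched as a routing function that maps a finding to the owning team's Jira project and builds a standard Jira create-issue payload (`POST /rest/api/2/issue`). Wiz's native integrations handle this for you; this sketch only illustrates the shape of the ownership-mapping logic, and everything except Jira's standard payload fields (the owner map, finding fields, project keys) is hypothetical.

```python
# Hypothetical routing sketch for the ticket-creation step of the
# happy-path workflow. Only the Jira create-issue payload shape is
# real; owner_map, finding fields, and project keys are made up.

def build_jira_payload(finding: dict, owner_map: dict) -> dict:
    """Map a finding to the owning team's project and build the
    standard Jira create-issue payload."""
    project = owner_map.get(finding["service"], "SEC")  # fall back to the security queue
    return {
        "fields": {
            "project": {"key": project},
            "summary": f"[Wiz] {finding['title']} on {finding['service']}",
            "description": finding["attack_path"],
            "issuetype": {"name": "Bug"},
            "labels": ["wiz-poc", finding["severity"]],
        }
    }

owner_map = {"payments-api": "PAY", "data-platform": "DATA"}  # tuned during the PoC
finding = {
    "service": "payments-api",
    "title": "Internet-exposed VM with weaponized CVE and path to admin",
    "severity": "critical",
    "attack_path": "External LB -> VM (CVE) -> instance role -> admin -> prod DB",
}
payload = build_jira_payload(finding, owner_map)
print(payload["fields"]["project"]["key"])  # PAY
```

The fallback project matters: anything the owner map can't route lands in a security-owned queue instead of disappearing, and the rate of fallbacks is itself a measure of ownership-mapping accuracy.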

3.2 Core KPIs to Track

  1. Ownership mapping accuracy

    • Metric: % of Wiz-identified risks that are auto-mapped to the correct team/repo/service.
    • Target: ≥ 85–90% accuracy after one tuning pass.
    • How to measure:
      • Sample 20–30 issues from different stacks (microservices, data platform, shared infra).
      • Ask engineering leaders if the mapped owner is correct.
  2. Workflow integration adoption

    • Metric: Actual usage of integrated workflows:
      • Number of Wiz-originated Jira/ServiceNow tickets created.
      • Number of Wiz-originated PRs opened by Wiz Green agent (if in scope).
    • Target (during PoC):
      • At least 2–3 teams receiving Wiz-originated work.
      • At least 10–20 tickets or PRs created from Wiz.
    • Why it matters: If you can’t prove cross-team adoption during the PoC, it won’t magically happen in production.
  3. Remediation SLA performance

    • Metric: Time from issue creation (ticket/PR) to:
      • Fix merged.
      • Risk verified as closed in Wiz.
    • Target (PoC-scale):
      • ≥ 70% of PoC-scope exploitable risks fixed within your agreed SLA (e.g., 7–14 days).
    • Why it matters: You want to see that your teams can maintain or improve velocity—similar to customers reaching “0 criticals” without breaching SLAs.
  4. Code-first fixes vs. point-in-time patches

    • Metric: % of remediations done:
      • In code/IaC (source-of-truth changes).
      • As ad-hoc cloud console changes or runtime-only mitigations.
    • Target: ≥ 60–70% of PoC fixes should be code/IaC changes.
    • Why it matters: Confirms Wiz is supporting “FIX AT SCALE IN CODE,” not just burn-down exercises.
  5. Engineer sentiment and self-remediation

    • Metric: Qualitative feedback from engineering teams on:
      • Clarity of Wiz findings.
      • Reproducibility of suggested fixes.
      • Whether they can self-remediate without security hand-holding.
    • Target:
      • Engineers describe Wiz tickets/PRs as “actionable” and “clear.”
      • At least one team explicitly agrees to keep using Wiz-driven workflows post-PoC.
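
KPI 3 (SLA performance) is worth computing the same way every week of the pilot so the trend is visible. A minimal sketch, assuming you can export open/close timestamps and a closed-and-verified flag from your ticketing system (the record shape is an assumption):

```python
# Hypothetical SLA calculator for KPI 3. Ticket records are
# illustrative; only fixes verified as closed in the scanner count.
from datetime import date

def sla_performance(tickets: list, sla_days: int) -> float:
    """% of verified-closed fixes completed within the agreed SLA."""
    done = [t for t in tickets if t.get("verified_closed")]
    if not done:
        return 0.0
    within = sum((t["closed"] - t["opened"]).days <= sla_days for t in done)
    return 100 * within / len(done)

tickets = [
    {"opened": date(2024, 5, 1), "closed": date(2024, 5, 6), "verified_closed": True},
    {"opened": date(2024, 5, 1), "closed": date(2024, 5, 20), "verified_closed": True},
    {"opened": date(2024, 5, 2), "closed": date(2024, 5, 9), "verified_closed": True},
]
print(f"Within 14-day SLA: {sla_performance(tickets, 14):.0f}%")  # 67%
```

Note the design choice: the clock stops at verified closure, not at ticket closure, so a console fix that drifts back open never counts toward the SLA number.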

4. Suggested PoC Structure & Milestones

A 4–6 week PoC is usually enough if it’s structured.

Week 0–1: Scope & Onboarding

  • Connect Wiz agentlessly to:
    • 1–3 cloud providers or representative accounts/subscriptions.
    • A small set of high-value apps / services.
  • Define:
    • “Exploitable risk” criteria.
    • Crown jewels and environment tags (prod, dev, staging).
  • Success checkpoint:
    • Prioritized exploitable risk list live within 60 minutes of first connection.
    • Draft KPI dashboard (even if it’s manual at first).

Week 2: Attack Surface & Attack Paths

  • Use Wiz attack surface scanning to:
    • Identify all internet-reachable assets in scope.
    • Validate effective internet-exposure.
  • Review top attack paths with:
    • Security architecture
    • Red team or senior engineers
  • Success checkpoint:
    • 5–10 realistic attack paths identified and validated.
    • At least one path into a high-value environment or data store.

Week 3–4: Remediation Workflow Pilot

  • Turn on integrations:
    • Jira/ServiceNow routing with ownership mapping.
    • PR generation where relevant (Wiz Green agent).
  • Pick 2–3 teams to pilot:
    • One service team.
    • One platform/infra team.
    • Optional: One data team.
  • Success checkpoint:
    • 10–20 Wiz-originated tickets/PRs created.
    • First wave of fixes merged and validated in Wiz.

Week 5–6: Measure Outcomes & Decide

  • Review KPI performance across the three pillars:
    1. Exploitable risk prioritization
    2. Attack path discovery
    3. Remediation workflow adoption
  • Map results to business outcomes:
    • Noise reduction.
    • Improved SLA adherence.
    • Faster incident investigation paths.
  • Success checkpoint:
    • Exec-ready summary demonstrating:
      • Less noise, more exploitable risks.
      • Concrete attack paths identified and broken.
      • Engineers actively using Wiz-driven workflows.

Summary

A successful Wiz PoC doesn’t just show more risk; it shows less wasted effort.

You’ll know you have the right success criteria when you can say:

  • Exploitable risk prioritization: “We went from thousands of ‘critical’ issues to a small, focused set of attack paths, prioritized by exposure, exploitability, blast radius, and identity paths.”
  • Attack paths: “We can see how an attacker would move from initial access to data, and we’ve used Wiz to break those chains.”
  • Remediation workflows: “Engineers receive clear, owned work items from Wiz and can fix issues in code within our SLAs—without endless meetings or spreadsheets.”

If your PoC can validate those three pillars, you’re not just buying a CNAPP. You’re committing to a security operating model that connects code, cloud, and runtime into a single, shared context—and finally lets defenders move at the same speed as attackers, without sacrificing precision.

