Policy Testing

Overview

Policy testing allows you to validate governance policies before deploying them to production. Using dry-run evaluation, you can test how policies will behave against real or simulated action scenarios without affecting live operations.

Key Capabilities

  • Dry-Run Evaluation: Test policies without enforcement
  • Scenario Testing: Evaluate policies against predefined scenarios
  • Bulk Testing: Test multiple scenarios in batch
  • Conflict Detection: Identify overlapping or conflicting policies
  • Impact Analysis: Understand how policy changes affect operations
  • Audit Comparison: Compare test results with historical decisions

How It Works

Testing Workflow

┌─────────────────────────────────────────────────────────────────────────────┐
│                           POLICY TESTING WORKFLOW                           │
└─────────────────────────────────────────────────────────────────────────────┘

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│ 1. Create or    │────>│ 2. Define Test  │────>│ 3. Run Dry-Run  │
│    Modify Policy│     │    Scenarios    │     │    Evaluation   │
└─────────────────┘     └─────────────────┘     └────────┬────────┘
                                                         │
        ┌────────────────────────────────────────────────┘
        │
        v
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│ 4. Review       │────>│ 5. Fix Issues   │────>│ 6. Deploy to    │
│    Results      │     │    (if needed)  │     │    Production   │
└─────────────────┘     └─────────────────┘     └─────────────────┘

Dry-Run Evaluation

Dry-run mode evaluates actions against policies without enforcing decisions:

┌─────────────────────────────────────────────────────────────────────────────┐
│                             DRY-RUN EVALUATION                              │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  INPUT:                                                                     │
│  ┌─────────────────────────────────────────────────────────────────────┐    │
│  │ Action: database_update                                             │    │
│  │ Resource: production.customers                                      │    │
│  │ User: analyst@company.com                                           │    │
│  │ Environment: production                                             │    │
│  └─────────────────────────────────────────────────────────────────────┘    │
│                                                                             │
│  EVALUATION (dry-run):                                                      │
│  ┌─────────────────────────────────────────────────────────────────────┐    │
│  │ Policy Engine                                                       │    │
│  │ ├── Policy 1: production-database-protection → MATCH                │    │
│  │ │    └── Decision: REQUIRE_APPROVAL (Level 3)                       │    │
│  │ ├── Policy 2: pii-data-protection → NO MATCH                        │    │
│  │ └── Policy 3: after-hours-restriction → NO MATCH                    │    │
│  │                                                                     │    │
│  │ Risk Score: 65 (HIGH)                                               │    │
│  │ ├── Security: 70                                                    │    │
│  │ ├── Data: 60                                                        │    │
│  │ ├── Compliance: 65                                                  │    │
│  │ └── Financial: 55                                                   │    │
│  └─────────────────────────────────────────────────────────────────────┘    │
│                                                                             │
│  OUTPUT:                                                                    │
│  ┌─────────────────────────────────────────────────────────────────────┐    │
│  │ Result: WOULD_REQUIRE_APPROVAL                                      │    │
│  │ Approval Level: 3                                                   │    │
│  │ Matched Policy: production-database-protection                      │    │
│  │ Risk Score: 65                                                      │    │
│  │ Mode: DRY_RUN (not enforced)                                        │    │
│  └─────────────────────────────────────────────────────────────────────┘    │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

Configuration

Test Scenario Schema

{
  "scenario_name": "string (required)",
  "description": "string (optional)",
  "action": {
    "action_type": "string (required)",
    "resource": "string (required)",
    "namespace": "string (optional)",
    "parameters": {}
  },
  "context": {
    "user_id": "string",
    "user_email": "string",
    "user_role": "string",
    "environment": "string",
    "client_ip": "string"
  },
  "expected_result": {
    "decision": "ALLOW | DENY | REQUIRE_APPROVAL | ESCALATE",
    "risk_level": "LOW | MEDIUM | HIGH | CRITICAL",
    "matched_policy": "string (optional)"
  }
}
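For example, a complete scenario following this schema might look like the following (the values are illustrative, drawn from the dry-run example above):

{
  "scenario_name": "analyst_production_write",
  "description": "Analyst updating a production table should require approval",
  "action": {
    "action_type": "database_update",
    "resource": "production.customers",
    "namespace": "database",
    "parameters": {}
  },
  "context": {
    "user_id": "user-123",
    "user_email": "analyst@company.com",
    "user_role": "analyst",
    "environment": "production",
    "client_ip": "192.168.1.100"
  },
  "expected_result": {
    "decision": "REQUIRE_APPROVAL",
    "risk_level": "HIGH",
    "matched_policy": "production-database-protection"
  }
}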

Dry-Run Options

Option                    Default   Description
dry_run                   false     Enable dry-run mode
include_risk_details      true      Include full risk breakdown
include_policy_matches    true      List all matching policies
include_recommendations   true      Include AI recommendations
compare_with_production   false     Compare with current production policies
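As a sketch, these options travel in the evaluation request body alongside the action fields. The field names below match the table above; their exact placement in the payload is an assumption based on the cURL examples in this page:

{
  "dry_run": true,
  "include_risk_details": true,
  "include_policy_matches": true,
  "include_recommendations": false,
  "compare_with_production": false,
  "action_type": "database_update",
  "resource": "production.customers"
}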

Usage Examples

Single Scenario Test (Python SDK)

from ascend import AscendClient

client = AscendClient(api_key="your-api-key")

# Run a dry-run evaluation
result = client.policies.evaluate_dry_run(
    action_type="database_update",
    resource="production.customers",
    namespace="database",
    user_id="user-123",
    user_email="analyst@company.com",
    user_role="analyst",
    environment="production",
    client_ip="192.168.1.100"
)

print(f"Decision: {result.decision} (dry-run)")
print(f"Risk Score: {result.risk_score.total_score}")
print(f"Risk Level: {result.risk_score.risk_level}")
print(f"Would require approval level: {result.approval_level}")

# Matched policies
for policy in result.matched_policies:
    print(f"  Matched: {policy.policy_name}")
    print(f"    Confidence: {policy.confidence}")
    print(f"    Decision: {policy.decision}")

Single Scenario Test (cURL)

curl -X POST https://api.ascend.security/api/policies/evaluate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "dry_run": true,
    "action_type": "database_update",
    "resource": "production.customers",
    "namespace": "database",
    "context": {
      "user_id": "user-123",
      "user_email": "analyst@company.com",
      "user_role": "analyst",
      "environment": "production"
    }
  }'

# Response:
# {
#   "dry_run": true,
#   "evaluation_id": "eval_abc123",
#   "decision": "REQUIRE_APPROVAL",
#   "risk_score": {
#     "total_score": 65,
#     "risk_level": "HIGH",
#     "category_scores": {
#       "security": 70,
#       "data": 60,
#       "compliance": 65,
#       "financial": 55
#     }
#   },
#   "matched_policies": [
#     {
#       "policy_id": "pol_xyz789",
#       "policy_name": "production-database-protection",
#       "matched": true,
#       "confidence": 0.95,
#       "decision": "REQUIRE_APPROVAL"
#     }
#   ],
#   "approval_level": 3,
#   "evaluation_time_ms": 45
# }

Bulk Scenario Testing

from ascend import AscendClient

client = AscendClient(api_key="your-api-key")

# Define test scenarios
scenarios = [
    {
        "name": "admin_production_write",
        "action": {
            "action_type": "database_update",
            "resource": "production.customers",
            "namespace": "database"
        },
        "context": {
            "user_role": "admin",
            "environment": "production"
        },
        "expected": {
            "decision": "REQUIRE_APPROVAL",
            "risk_level": "HIGH"
        }
    },
    {
        "name": "analyst_staging_read",
        "action": {
            "action_type": "database_query",
            "resource": "staging.customers",
            "namespace": "database"
        },
        "context": {
            "user_role": "analyst",
            "environment": "staging"
        },
        "expected": {
            "decision": "ALLOW",
            "risk_level": "LOW"
        }
    },
    {
        "name": "analyst_pii_access",
        "action": {
            "action_type": "read",
            "resource": "production.customer_pii",
            "namespace": "database"
        },
        "context": {
            "user_role": "analyst",
            "environment": "production"
        },
        "expected": {
            "decision": "DENY",
            "risk_level": "HIGH"
        }
    }
]

# Run bulk test
results = client.policies.test_scenarios(scenarios)

# Analyze results
print(f"Total Scenarios: {results.total}")
print(f"Passed: {results.passed}")
print(f"Failed: {results.failed}")

for scenario_result in results.results:
    status = "PASS" if scenario_result.passed else "FAIL"
    print(f"\n{status}: {scenario_result.scenario_name}")
    if not scenario_result.passed:
        print(f"  Expected: {scenario_result.expected_decision}")
        print(f"  Actual: {scenario_result.actual_decision}")
        print(f"  Reason: {scenario_result.failure_reason}")

Bulk Testing (cURL)

curl -X POST https://api.ascend.security/api/policies/test/bulk \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "scenarios": [
      {
        "name": "production_write_test",
        "action": {"action_type": "update", "resource": "prod.data"},
        "context": {"environment": "production"},
        "expected": {"decision": "REQUIRE_APPROVAL"}
      },
      {
        "name": "staging_read_test",
        "action": {"action_type": "read", "resource": "staging.data"},
        "context": {"environment": "staging"},
        "expected": {"decision": "ALLOW"}
      }
    ]
  }'

Test Specific Policy

from ascend import AscendClient

client = AscendClient(api_key="your-api-key")

# Test a specific policy (not yet deployed)
policy_definition = {
    "policy_name": "new-pii-policy",
    "namespace_patterns": ["database"],
    "resource_patterns": ["*pii*", "*personal*"],
    "verb_patterns": ["read", "write"],
    "actions": "REQUIRE_APPROVAL",
    "action_params": {"approval_level": 3}
}

# Test against scenarios
test_result = client.policies.test_policy(
    policy=policy_definition,
    scenarios=[
        {
            "action": {"action_type": "read", "resource": "customer_pii"},
            "expected": {"decision": "REQUIRE_APPROVAL"}
        },
        {
            "action": {"action_type": "read", "resource": "product_catalog"},
            "expected": {"decision": "NO_MATCH"}  # Policy shouldn't match
        }
    ]
)

print(f"Policy Test Results: {test_result.passed}/{test_result.total} passed")

Conflict Detection

from ascend import AscendClient

client = AscendClient(api_key="your-api-key")

# Check for conflicts with existing policies
conflicts = client.policies.detect_conflicts(
    policy={
        "policy_name": "new-database-policy",
        "namespace_patterns": ["database"],
        "resource_patterns": ["production.*"],
        "actions": "ALLOW"
    }
)

if conflicts.has_conflicts:
    print("Conflicts detected:")
    for conflict in conflicts.conflicts:
        print(f"\nConflict with: {conflict.conflicting_policy}")
        print(f"  Type: {conflict.conflict_type}")
        print(f"  Description: {conflict.description}")
        print(f"  Resolution: {conflict.suggested_resolution}")

Impact Analysis

from ascend import AscendClient

client = AscendClient(api_key="your-api-key")

# Analyze impact of a policy change
impact = client.policies.analyze_impact(
    policy_id="pol_abc123",
    changes={
        "action": "DENY",  # Changing from REQUIRE_APPROVAL to DENY
        "priority": 25     # Increasing priority
    }
)

print("Impact Analysis for policy change:")
print(f"  Historical actions affected: {impact.affected_count}")
print(f"  Would have been DENIED: {impact.would_deny_count}")
print(f"  Would have been ALLOWED: {impact.would_allow_count}")
print(f"  Risk level change: {impact.risk_assessment}")

# Show sample affected actions
for action in impact.sample_affected[:5]:
    print(f"\n  Action: {action.action_type} on {action.resource}")
    print(f"  Previous decision: {action.previous_decision}")
    print(f"  New decision: {action.new_decision}")
Compare with Production

from ascend import AscendClient

client = AscendClient(api_key="your-api-key")

# Test policy changes against recent production actions
comparison = client.policies.compare_with_production(
    policy_changes=[
        {
            "policy_id": "pol_abc123",
            "changes": {"approval_level": 4}
        }
    ],
    time_range_hours=24
)

print("Production Comparison (last 24 hours):")
print(f"  Total actions evaluated: {comparison.total_actions}")
print(f"  Decisions unchanged: {comparison.unchanged_count}")
print(f"  Decisions changed: {comparison.changed_count}")

for change in comparison.decision_changes[:10]:
    print(f"\n  Action: {change.action_type}")
    print(f"  Production decision: {change.production_decision}")
    print(f"  Test decision: {change.test_decision}")

Best Practices

Testing Strategy

  1. Create Comprehensive Scenarios: Cover all expected use cases
  2. Include Edge Cases: Test boundary conditions and unusual inputs
  3. Test Both Positive and Negative: Verify both allows and denies
  4. Use Real Data Patterns: Base scenarios on actual usage patterns
  5. Automate Testing: Integrate tests into CI/CD pipeline

Scenario Design

┌─────────────────────────────────────────────────────────────────────────────┐
│                          SCENARIO DESIGN CHECKLIST                          │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  Coverage Areas:                                                            │
│  ☑ Normal operations (happy path)                                           │
│  ☑ High-risk operations (should require approval)                           │
│  ☑ Blocked operations (should be denied)                                    │
│  ☑ Edge cases (boundary conditions)                                         │
│  ☑ Different user roles                                                     │
│  ☑ Different environments                                                   │
│  ☑ Time-based conditions                                                    │
│  ☑ Data classification scenarios                                            │
│                                                                             │
│  For Each Scenario:                                                         │
│  ☑ Clear name and description                                               │
│  ☑ Complete action definition                                               │
│  ☑ Realistic context                                                        │
│  ☑ Expected result with reasoning                                           │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

Pre-Deployment Checklist

Before deploying a policy to production:

  • All test scenarios pass
  • No unexpected conflicts detected
  • Impact analysis reviewed
  • Production comparison shows acceptable changes
  • Edge cases tested
  • Team review completed
  • Documentation updated
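The automatable items on this checklist can be combined into a simple deployment gate. The sketch below is illustrative, not part of the SDK: it assumes plain dicts shaped like the results returned by `test_scenarios`, `detect_conflicts`, and `analyze_impact` above, and the helper name `ready_to_deploy` and the 5% threshold are our own choices.

```python
def ready_to_deploy(scenario_results, conflicts, impact, max_changed_ratio=0.05):
    """Return (ok, reasons): ok is True only when the automatable checks pass.

    scenario_results: {"total": int, "passed": int}
    conflicts:        {"has_conflicts": bool}
    impact:           {"affected_count": int, "would_deny_count": int}
    max_changed_ratio: tolerated share of historical actions whose decision flips.
    """
    reasons = []

    # All test scenarios must pass.
    if scenario_results["passed"] < scenario_results["total"]:
        failed = scenario_results["total"] - scenario_results["passed"]
        reasons.append(f"{failed} scenario(s) failed")

    # No unexpected conflicts with existing policies.
    if conflicts["has_conflicts"]:
        reasons.append("policy conflicts detected")

    # Impact must stay within the tolerated blast radius.
    affected = impact["affected_count"]
    if affected and impact["would_deny_count"] / affected > max_changed_ratio:
        reasons.append("too many historical actions would now be denied")

    return (not reasons, reasons)


ok, why = ready_to_deploy(
    scenario_results={"total": 12, "passed": 12},
    conflicts={"has_conflicts": False},
    impact={"affected_count": 200, "would_deny_count": 4},
)
print(ok)  # True: 4/200 = 2% flipped decisions, under the 5% threshold
```

A gate like this fits naturally at the end of a CI job: feed it the SDK results, and fail the pipeline (and block deployment) whenever it returns reasons.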

Testing Automation

# Example CI/CD integration
import os

import pytest

from ascend import AscendClient

client = AscendClient(api_key=os.environ["ASCEND_API_KEY"])

@pytest.fixture
def policy_client():
    return client.policies

class TestProductionProtectionPolicy:
    def test_production_write_requires_approval(self, policy_client):
        result = policy_client.evaluate_dry_run(
            action_type="update",
            resource="production.data",
            environment="production"
        )
        assert result.decision == "REQUIRE_APPROVAL"
        assert result.approval_level >= 3

    def test_staging_write_allowed(self, policy_client):
        result = policy_client.evaluate_dry_run(
            action_type="update",
            resource="staging.data",
            environment="staging"
        )
        assert result.decision in ["ALLOW", "REQUIRE_APPROVAL"]
        assert result.risk_score.risk_level != "CRITICAL"

    def test_production_delete_denied(self, policy_client):
        result = policy_client.evaluate_dry_run(
            action_type="delete",
            resource="production.critical_data",
            environment="production"
        )
        assert result.decision == "DENY"

Common Testing Mistakes

Mistake                       Impact                   Solution
Testing only happy path       Missed vulnerabilities   Include negative tests
Ignoring context variations   Inconsistent behavior    Test multiple contexts
Not testing conflicts         Policy collisions        Run conflict detection
Skipping impact analysis      Unexpected disruptions   Always analyze impact
Not automating tests          Regression risks         Integrate with CI/CD

Compliance

Policy testing supports compliance with:

  • SOC 2 CC8.1: Change management testing
  • PCI-DSS 6.4: Change control procedures
  • NIST 800-53 CM-3: Configuration change control
  • ISO 27001 A.12.1.2: Change management