SAP Security Audit with AI: Detect Role Conflicts Before They Become Breaches

I spent three weeks last quarter performing a full SAP GRC audit for a mid-sized manufacturing firm in the DACH region. What I found in the first 48 hours would have taken their internal team months to uncover manually — and in two cases, what I found had already been exploited. This article documents the methodology, the tooling, and the AI-powered workflow that I now consider non-negotiable on every engagement.

1. Why SAP Security Audits Miss Role Conflicts

The uncomfortable truth about SAP security auditing in 2026 is that most organisations are still operating with tooling and processes designed for a world with 200 users and 50 roles. The average enterprise SAP landscape today has between 8,000 and 40,000 active users, tens of thousands of roles, and authorization objects that cascade across composite roles, derived roles, and profile aggregations in ways that no human auditor can reliably trace during a point-in-time engagement.

There are three structural reasons audits miss role conflicts, and they compound each other.

The SU53 Illusion

SU53 is the first tool every BASIS admin reaches for when a user reports an authorization failure. It shows the last failed authorization check — which is useful for troubleshooting, but actively misleading for security auditing. SU53 only records the most recent denied check. It tells you nothing about what the user can do, only what they just tried and could not do. A user with a catastrophic SoD conflict — say, the ability to both create a vendor and approve a payment run — will never appear in SU53 because those authorizations are not failing. They are working exactly as configured. That is the problem.

Role Explosion and Composite Role Opacity

In a mature SAP system, a single user may have 30 to 80 roles assigned directly or through position-based inheritance from HCM or GRC. Each composite role contains child roles. Each child role contains profiles. Each profile contains authorization objects. Each authorization object has field-level values. To truly understand what a given user can do, you must traverse a tree that can have several thousand leaf nodes per user. Multiply that by 10,000 users and you have a combinatorial problem that breaks spreadsheet-based auditing within the first hour.

I have seen BASIS teams spend six weeks manually extracting data from AGR_USERS, AGR_1251, and USR10 to produce a role matrix that was already stale by the time the Excel pivot table was finished.
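The traversal itself is mechanically simple; the problem is scale. A minimal sketch of the composite-role flattening step, assuming a child-role map built from a table such as AGR_AGRS (the exact field names vary by release, so treat the map construction as an exercise for your system):

```python
from collections import deque

def flatten_composite_roles(direct_roles, composite_map):
    """Resolve composite roles down to the full set of single roles.

    direct_roles: roles assigned to the user (composite or single).
    composite_map: dict of composite role -> list of child roles,
    e.g. built from an AGR_AGRS extract.
    """
    resolved, queue = set(), deque(direct_roles)
    while queue:
        role = queue.popleft()
        if role in resolved:
            continue  # guard against cycles and duplicate assignments
        resolved.add(role)
        queue.extend(composite_map.get(role, []))
    return resolved

# Toy example: one composite role expands to two single roles
composite_map = {"Z_FI_CLERK_COMP": ["Z_FI_AP_ENTRY", "Z_FI_AP_DISPLAY"]}
print(flatten_composite_roles(["Z_FI_CLERK_COMP", "Z_MM_BUYER"], composite_map))
```

Multiply this by tens of thousands of users and field-level authorization values per leaf role, and the reason spreadsheets collapse becomes obvious.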

Point-in-Time Audit Blindness

External audits happen once a year. Internal audits, if they happen at all, are quarterly. Between audits, users accumulate temporary roles that never get removed, emergency access that becomes permanent by inertia, and role modifications that bypass the change management process entirely. The breach window is not the one week per year an auditor is on-site. The breach window is the other 51 weeks.

2. Anatomy of a Real SAP Breach Caused by SoD Conflict

The following case study is drawn from a real engagement. Company names, system IDs, and personally identifying details have been changed or removed at the client's request. The technical specifics are accurate.

The Setup

The client is a process manufacturing company with approximately €800M in annual revenue. Their SAP ECC 6.0 environment (not yet migrated to S/4HANA) had been in production for 11 years. During that period, the original BASIS team had turned over three times. Role documentation was partial. The role naming convention had changed twice. There were 14,200 active roles in the system, of which the current team could accurately describe the purpose of fewer than 3,000.

The Conflict

A user in the accounts payable department — I will call her User A — had the following authorizations, accumulated across six separately assigned roles over four years of role additions without corresponding removals:

  • Transaction FK01 (Create Vendor) with full authorization on company code and account group
  • Transaction F110 (Automatic Payment Run) with authorization to define payment parameters and execute the run
  • Transaction FB60 (Enter Vendor Invoice) with full posting authorization
  • Transaction FK02 (Change Vendor) including bank account fields
  • Access to the RFFOUS_T payment medium program with unrestricted variant creation

In plain language: User A could create a fictitious vendor, enter invoices against that vendor, change the bank account on the vendor master to an account she controlled, and execute the payment run herself. No second pair of eyes was required at any step. The SoD ruleset in SAP GRC had a rule for vendor creation versus payment execution, but it had been set to mitigating control status because a manager had approved a now-expired exception four years earlier. The exception flag was never cleared.
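Even a static pairwise check would have caught this combination. A minimal sketch, with a hypothetical three-rule set drawn from the transactions above (a real ruleset would of course check authorization objects and field values, not just transaction codes):

```python
# Hypothetical minimal ruleset: transaction pairs that together form a
# complete payment-fraud cycle. Illustrative only — not a GRC ruleset.
SOD_PAIRS = {
    frozenset({"FK01", "F110"}): "Create vendor + execute payment run",
    frozenset({"FK02", "F110"}): "Change vendor bank data + execute payment run",
    frozenset({"FB60", "F110"}): "Enter vendor invoice + execute payment run",
}

def find_sod_conflicts(user_tcodes: set[str]) -> list[str]:
    """Return a description of every SoD pair fully contained in the
    user's transaction set."""
    return [desc for pair, desc in SOD_PAIRS.items() if pair <= user_tcodes]

user_a = {"FK01", "FK02", "FB60", "F110"}
for conflict in find_sod_conflicts(user_a):
    print("CONFLICT:", conflict)
```

The check is trivial; the failure was that the rule existed but sat behind a stale mitigation flag, as the next section shows.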

The Breach

Over 18 months, User A redirected €340,000 in payments to a shell company. The fraud was discovered not by internal audit, not by SAP GRC, but by a supplier who called to complain that their invoice had not been paid despite appearing as settled in the system. A manual bank reconciliation — performed because the supplier escalated to the CFO — revealed the discrepancy.

The breach happened because the SoD conflict existed silently. GRC had it flagged, but as mitigated, not as an active risk requiring review. No one had validated that the mitigation was still in place or still appropriate. Point-in-time audits had seen the mitigating control flag and moved on.

What AI Detection Would Have Changed

An AI-augmented continuous monitoring system, querying the authorization tables nightly, would have flagged three things that the static ruleset missed: the expired mitigation timestamp, the combination of vendor bank account change plus payment execution in the same user's authorization profile (a second-order SoD not in the standard ruleset), and an anomaly in User A's transaction frequency on F110 that deviated three standard deviations from her peer group baseline.

3. How AI Changes Security Auditing

The shift from manual and rule-based auditing to AI-augmented auditing is not merely about speed. It changes what is detectable at a fundamental level.

Pattern Recognition Across 50,000+ Permissions

A well-trained classification model can evaluate the full authorization profile of every user in a system — all authorization objects, all field values, all role hierarchies — and produce a risk score in minutes. More importantly, it can identify emergent SoD conflicts: combinations of authorizations that individually appear harmless but together create a risk path that no static ruleset anticipated.

In the manufacturing client case, the combination of F_LFA1_BUK (vendor master change by company code) and F_PAYR_BUK (payment authorization) was not in the GRC standard ruleset. A model trained on historical fraud patterns across anonymised SAP environments flagged it immediately as a high-probability financial manipulation vector.

Anomaly Detection: Behaviour Versus Authorization

Authorization analysis tells you what a user can do. Behaviour analysis, powered by SIEM integration and SAP audit log mining, tells you what they are doing. The gap between the two is your real risk surface. AI models can maintain per-user, per-role-group behavioural baselines and alert when activity deviates. A user who creates three vendors per quarter suddenly creating 40 in a week is an anomaly. A user who has never executed F110 before running it at 11:47 PM on a Friday is an anomaly. Rule-based systems require someone to write that rule explicitly. Anomaly detection finds it without a predefined rule.
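The peer-group deviation test itself is statistically simple. A sketch of the 3σ check, assuming weekly transaction counts per user against a peer-group baseline:

```python
import statistics

def is_anomalous(user_weekly_count: int, peer_counts: list[int],
                 sigma_threshold: float = 3.0) -> bool:
    """Flag a user whose weekly transaction count deviates more than
    `sigma_threshold` standard deviations from the peer-group baseline."""
    mean = statistics.mean(peer_counts)
    stdev = statistics.stdev(peer_counts)
    if stdev == 0:
        return user_weekly_count != mean  # no variance in peer group
    return abs(user_weekly_count - mean) / stdev > sigma_threshold

# Peer group creates roughly 3 vendors per week; 40 in one week is flagged
peers = [2, 3, 4, 3, 2, 3, 4, 3]
print(is_anomalous(40, peers))  # True
```

Production systems use more robust baselines (rolling windows, seasonality adjustment, per-role-group segmentation), but the principle is exactly this: measure behaviour, not just entitlement.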

Continuous Monitoring vs Point-in-Time Audit

The most significant architectural change AI enables is the shift from audit as an event to audit as a state. Instead of a six-week engagement producing a report that is obsolete before it is signed, AI-powered continuous monitoring produces a live risk dashboard that updates nightly or in real time. Deviations are flagged as they occur. Remediation is triggered within hours, not months.

4. Tools Comparison: SAP GRC vs AI-Augmented Options

The market in 2026 offers several tiers of tooling for SAP access risk management. Here is an honest comparison based on hands-on experience across multiple client environments.

| Tool | Approach | SoD Detection | Continuous Monitoring | AI/ML Capability | Typical Setup Time | Licensing Model |
|---|---|---|---|---|---|---|
| SAP GRC Access Control | Rule-based SoD matrix | Strong (standard ruleset) | Partial (batch jobs) | None native | 6–18 months | Per-named-user |
| Pathlock (formerly Greenlight) | Rule-based + risk analytics | Strong | Yes (near real-time) | Risk scoring ML | 3–6 months | Per-monitored-user |
| SecurityBridge | SIEM + behavioural | Moderate | Yes (real-time) | Anomaly detection | 4–8 weeks | Per-system |
| Xiting XAMS | Role design + SoD | Strong | Limited | Minimal | 2–4 months | Per-named-user |
| Custom LLM Script (open source) | RFC extraction + LLM classification | High (emergent conflicts) | Yes (scheduled) | Full LLM reasoning | 2–4 weeks (with expertise) | Infrastructure cost only |

SAP GRC Access Control remains the governance-process backbone for most large enterprises. It handles workflow, remediation ticketing, firefighter access, and audit logging in ways that external tools cannot fully replicate. The criticism is not that it does not work, but that it works only as well as the ruleset it is given, and maintaining that ruleset at enterprise scale requires dedicated GRC administrators who are increasingly hard to hire.

SecurityBridge is the tool I most often recommend as a complement to GRC, not a replacement. Its strength is in detecting what is happening rather than what is permitted. When integrated with a SIEM like Splunk or Microsoft Sentinel, it creates a detection layer that catches runtime misuse that static access analysis misses entirely.

The custom LLM approach is where the most interesting technical development is happening right now, which is why the bulk of this article focuses there.

5. Building an AI-Powered Role Conflict Detector

What follows is a step-by-step walkthrough of a lightweight but production-capable role conflict detection pipeline. It uses Python to extract authorization data from SAP via RFC, then passes the extracted profiles to an LLM for risk classification. I have deployed variations of this at three clients in the past 12 months.

Step 1: Extract Authorization Data via RFC

Install pyrfc (SAP's official Python RFC connector) and configure a technical user in SAP with display access to the relevant authorization tables. The user needs S_RFC with function group SRFC and read access to tables AGR_USERS, AGR_1251, USR10, UST04, and USOBT_C.

import pyrfc
import pandas as pd
from datetime import datetime

SAP_CONN = {
    "ashost": "10.0.1.45",
    "sysnr": "00",
    "client": "100",
    "user": "AUDIT_RFC_USR",
    "passwd": "REDACTED",
    "lang": "EN"
}

def extract_user_role_assignments(conn, system_id: str) -> pd.DataFrame:
    """
    Extract all user-to-role assignments from AGR_USERS.
    Returns a DataFrame with columns: USER_ID, AGR_NAME, FROM_DATE, TO_DATE.
    """
    # Note: RFC_READ_TABLE truncates each row at 512 bytes and returns
    # the full result set in one call; on very large tables, page the
    # call with the ROWCOUNT / ROWSKIPS parameters.
    result = conn.call(
        "RFC_READ_TABLE",
        QUERY_TABLE="AGR_USERS",
        DELIMITER="|",
        FIELDS=[
            {"FIELDNAME": "UNAME"},
            {"FIELDNAME": "AGR_NAME"},
            {"FIELDNAME": "FROM_DAT"},
            {"FIELDNAME": "TO_DAT"},
        ]
    )
    rows = []
    for entry in result["DATA"]:
        parts = entry["WA"].split("|")
        rows.append({
            "USER_ID": parts[0].strip(),
            "AGR_NAME": parts[1].strip(),
            "FROM_DATE": parts[2].strip(),
            "TO_DATE": parts[3].strip(),
            "SYSTEM": system_id,
            "EXTRACTED_AT": datetime.utcnow().isoformat()
        })
    return pd.DataFrame(rows)


def extract_role_authorization_objects(conn, role_name: str) -> pd.DataFrame:
    """
    Extract authorization objects and field values for a given role from AGR_1251.
    """
    result = conn.call(
        "RFC_READ_TABLE",
        QUERY_TABLE="AGR_1251",
        DELIMITER="|",
        OPTIONS=[{"TEXT": f"AGR_NAME = '{role_name}'"}],
        FIELDS=[
            {"FIELDNAME": "AGR_NAME"},
            {"FIELDNAME": "OBJECT"},
            {"FIELDNAME": "AUTH"},
            {"FIELDNAME": "FIELD"},
            {"FIELDNAME": "LOW"},
            {"FIELDNAME": "HIGH"},
        ]
    )
    rows = []
    for entry in result["DATA"]:
        parts = entry["WA"].split("|")
        rows.append({
            "AGR_NAME": parts[0].strip(),
            "OBJECT": parts[1].strip(),
            "AUTH": parts[2].strip(),
            "FIELD": parts[3].strip(),
            "VALUE_LOW": parts[4].strip(),
            "VALUE_HIGH": parts[5].strip() if len(parts) > 5 else ""
        })
    return pd.DataFrame(rows)


# Main extraction loop
import os

os.makedirs("/tmp/sap_audit", exist_ok=True)  # ensure output directory exists

with pyrfc.Connection(**SAP_CONN) as conn:
    print("Connected to SAP. Extracting role assignments...")
    user_roles = extract_user_role_assignments(conn, system_id="PRD")

    # Get unique roles to avoid redundant lookups
    unique_roles = user_roles["AGR_NAME"].unique()
    print(f"Found {len(user_roles)} assignments across {len(unique_roles)} unique roles.")

    auth_objects = pd.concat([
        extract_role_authorization_objects(conn, role)
        for role in unique_roles
    ], ignore_index=True)

    user_roles.to_parquet("/tmp/sap_audit/user_roles.parquet")
    auth_objects.to_parquet("/tmp/sap_audit/auth_objects.parquet")
    print("Extraction complete.")

Step 2: Build User Authorization Profiles

Join the two datasets to produce a per-user flattened list of authorization objects and their effective values. This is the input the LLM will reason over.

def build_user_profiles(user_roles: pd.DataFrame, auth_objects: pd.DataFrame) -> dict:
    """
    Produce a dict keyed by USER_ID where each value is a list of
    (OBJECT, FIELD, VALUE_LOW, VALUE_HIGH) tuples representing the user's
    aggregated authorization profile across all assigned roles.
    """
    merged = user_roles.merge(auth_objects, on="AGR_NAME", how="left")

    profiles = {}
    for user_id, group in merged.groupby("USER_ID"):
        auth_list = group[["OBJECT", "FIELD", "VALUE_LOW", "VALUE_HIGH"]].drop_duplicates()
        profiles[user_id] = auth_list.to_dict(orient="records")

    return profiles


def summarise_profile_for_llm(user_id: str, profile: list) -> str:
    """
    Produce a compact text summary of a user's authorization profile
    suitable for passing to an LLM context window.
    """
    # Group by authorization object
    by_object = {}
    for entry in profile:
        obj = entry["OBJECT"]
        if obj not in by_object:
            by_object[obj] = []
        by_object[obj].append(
            f"  {entry['FIELD']}: {entry['VALUE_LOW']}"
            + (f" to {entry['VALUE_HIGH']}" if entry['VALUE_HIGH'] else "")
        )

    lines = [f"User: {user_id}", "Authorization Objects:"]
    for obj, fields in sorted(by_object.items()):
        lines.append(f"  [{obj}]")
        lines.extend(fields)

    return "\n".join(lines)

Step 3: LLM Risk Classification

Pass each user's summarised profile to an LLM with a structured prompt asking it to identify SoD conflicts, rate their severity, and explain the risk in business terms. In production I use a self-hosted model (Mistral 7B or LLaMA 3.1 8B) to keep authorization data off third-party infrastructure. For proof-of-concept work in a sandboxed environment, a commercial API is faster to get started with.

import anthropic
import json

client = anthropic.Anthropic()  # Reads ANTHROPIC_API_KEY from environment

SOD_CLASSIFICATION_PROMPT = """
You are an SAP security expert specialising in Segregation of Duties (SoD) analysis.

Below is the authorization profile for a single SAP user, extracted from the production system.
Analyze it and identify:

1. Any Segregation of Duties conflicts — combinations of authorization objects that, together,
   give this user the ability to perform a complete financial or data manipulation cycle
   without requiring another person's approval.

2. For each conflict found, provide:
   - Conflict name (short)
   - Authorization objects involved
   - Business risk in plain English (one sentence)
   - Severity: CRITICAL / HIGH / MEDIUM / LOW
   - Recommended action

Return your response as valid JSON in this exact structure:
{
  "user_id": "...",
  "conflicts": [
    {
      "name": "...",
      "objects": ["...", "..."],
      "risk": "...",
      "severity": "CRITICAL|HIGH|MEDIUM|LOW",
      "recommendation": "..."
    }
  ],
  "overall_risk": "CRITICAL|HIGH|MEDIUM|LOW|CLEAN",
  "summary": "..."
}

If no conflicts are found, return an empty conflicts array and overall_risk of CLEAN.

USER AUTHORIZATION PROFILE:
{profile_text}
"""

def classify_user_risk(user_id: str, profile_summary: str) -> dict:
    # str.format() would trip over the literal JSON braces in the template,
    # so substitute the placeholder directly instead.
    prompt = SOD_CLASSIFICATION_PROMPT.replace("{profile_text}", profile_summary)

    message = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    )

    response_text = message.content[0].text
    try:
        return json.loads(response_text)
    except json.JSONDecodeError:
        # Fallback: extract JSON block if model added surrounding text
        import re
        match = re.search(r'\{.*\}', response_text, re.DOTALL)
        if match:
            return json.loads(match.group())
        return {"user_id": user_id, "error": "parse_failed", "raw": response_text}
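For the self-hosted variant mentioned above, the same prompt can be sent to a local model through Ollama's REST API instead of a commercial endpoint. A sketch, assuming the default Ollama endpoint and a locally pulled `mistral:7b` tag (adjust both for your deployment):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama default endpoint

def build_ollama_payload(prompt: str, model: str = "mistral:7b") -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,   # return one complete response object
        "format": "json",  # constrain the model's output to valid JSON
    }

def classify_user_risk_local(prompt: str, model: str = "mistral:7b") -> dict:
    """Run the classification against a self-hosted model so that
    authorization data never leaves the bastion host."""
    data = json.dumps(build_ollama_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=300) as resp:
        body = json.loads(resp.read())
    return json.loads(body["response"])  # "response" carries the model text
```

The `format: json` option reduces, but does not eliminate, the need for the parse-failure fallback shown above; keep it in place for the local model too.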

Step 4: Generate the Audit Report

Aggregate all risk classifications and produce an executive-ready report. The ABAP report below can be run in parallel to generate a native SAP audit trail that satisfies GRC workflow requirements.

*&---------------------------------------------------------------------*
*& Report  ZAIS_SOD_CONFLICT_REPORT
*& AI-Augmented SoD Conflict Summary — for use alongside AI pipeline output
*&---------------------------------------------------------------------*
REPORT zais_sod_conflict_report.

TABLES: agr_users, agr_1251.

TYPES: BEGIN OF ty_conflict,
         uname    TYPE xubname,
         agr_name TYPE agr_agr_name,
         object   TYPE xuobject,
         field    TYPE xufeld,
         low      TYPE xuval,  " name must match AGR_1251-LOW for
       END OF ty_conflict.     " CORRESPONDING FIELDS to map it

DATA: lt_conflicts TYPE TABLE OF ty_conflict,
      lv_count     TYPE i.

" Select all role assignments still valid today, restricted to the
" authorization objects most relevant for financial SoD analysis
SELECT agr_users~uname agr_users~agr_name
       agr_1251~object agr_1251~field agr_1251~low
  INTO CORRESPONDING FIELDS OF TABLE lt_conflicts
  FROM agr_users
  INNER JOIN agr_1251 ON agr_users~agr_name = agr_1251~agr_name
  WHERE agr_users~to_dat >= sy-datum
    AND agr_1251~object IN ('F_BKPF_BUK', 'F_LFA1_BUK',
                            'F_PAYR_BUK', 'F_KNA1_BUK',
                            'P_ORGIN', 'S_DEVELOP').

DESCRIBE TABLE lt_conflicts LINES lv_count.
WRITE: / 'Total authorization entries scanned:', lv_count.
WRITE: / 'Report generated:', sy-datum, sy-uzeit.
WRITE: / 'System:', sy-sysid, '/', sy-mandt.
WRITE: / '-------------------------------------------'.
WRITE: / 'Export this output and cross-reference with AI pipeline results.'.
WRITE: / 'File: /tmp/sap_audit/ai_risk_report.json'.

Step 5: Operationalise as a Nightly Job

Schedule the Python pipeline as a cron job on a bastion host with RFC access to the SAP landscape. Configure alerting so that any user whose risk score changes from CLEAN to HIGH or CRITICAL overnight triggers an immediate Slack or Teams notification to the SAP security team. Log all results to a time-series database for trend analysis.

# /etc/cron.d/sap-ai-audit
# Run nightly at 02:15 to avoid overlap with batch jobs
15 2 * * * sap-audit-svc /opt/sap-audit/run_pipeline.sh >> /var/log/sap-audit/nightly.log 2>&1

# run_pipeline.sh
#!/bin/bash
set -euo pipefail
cd /opt/sap-audit
source .venv/bin/activate
python extract.py --system PRD --output /tmp/sap_audit/
python classify.py --input /tmp/sap_audit/ --output /tmp/sap_audit/ai_risk_report.json
python alert.py --report /tmp/sap_audit/ai_risk_report.json --threshold HIGH
python archive.py --date $(date +%Y-%m-%d)
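The alert step referenced in the script reduces to a severity filter over the classification output. A minimal sketch of the core logic (the report structure follows the JSON schema from Step 3; the surrounding CLI handling and Slack/Teams webhook call are omitted, and this is illustrative rather than the exact script):

```python
# Severity levels in ascending order of urgency
SEVERITY_ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def findings_at_or_above(report: list[dict], threshold: str) -> list[dict]:
    """Filter per-user classification results down to those whose
    overall_risk meets or exceeds the alert threshold. CLEAN and
    parse-failure entries are excluded by the membership check."""
    floor = SEVERITY_ORDER.index(threshold)
    return [r for r in report
            if r.get("overall_risk") in SEVERITY_ORDER
            and SEVERITY_ORDER.index(r["overall_risk"]) >= floor]

report = [
    {"user_id": "JSMITH", "overall_risk": "CRITICAL"},
    {"user_id": "MWEBER", "overall_risk": "CLEAN"},
    {"user_id": "AKUMAR", "overall_risk": "HIGH"},
]
print([r["user_id"] for r in findings_at_or_above(report, "HIGH")])
# ['JSMITH', 'AKUMAR']
```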

6. Before / After Metrics

Across three client deployments of this pipeline in 2025–2026, the following improvements were observed and verified by the respective internal audit functions.

| Metric | Before (Manual / GRC Only) | After (AI-Augmented Pipeline) | Improvement |
|---|---|---|---|
| Full role conflict audit duration | 6–8 weeks per system | 4–12 hours | >95% reduction |
| SoD conflicts identified | 120–400 (GRC ruleset matches) | 600–1,800 (including emergent) | 3–5× more findings |
| False positive rate | 40–60% (requires manual review) | 12–18% (LLM pre-filtered) | ~70% reduction |
| Mean time to detect a new conflict (after role change) | 90–365 days (next audit) | <24 hours (nightly pipeline) | >99% reduction |
| Audit team hours per engagement | 320–480 person-hours | 40–80 person-hours | ~85% reduction |
| Expired mitigating controls flagged | Inconsistent (manual check) | 100% (automated timestamp check) | Complete coverage |
| Anomalous transaction patterns detected | 0 (not in scope for GRC) | Baseline + 3σ alerting | New capability |

The most operationally significant metric is the detection latency. The manufacturing client breach described earlier persisted for 18 months because no automated system was looking for role changes in the gap between audits. A nightly pipeline would have flagged the expired mitigation on the first run after its expiry date. At a typical external audit engagement cost of €25,000–€60,000 per cycle, the infrastructure cost of a self-hosted pipeline (roughly €300–€800 per month in compute) is trivially justified.

7. Compliance Implications: SOX, GDPR, and ISO 27001

One of the most common questions I get from CISOs is whether AI-generated audit findings satisfy external auditors and regulatory examiners. The short answer, based on engagements at SOX-scoped companies and GDPR-regulated entities in the EU, is yes — but only if you design the audit trail correctly from the start.

SOX Section 404: IT General Controls

SOX auditors examining SAP access controls are looking for evidence of three things: that access controls exist, that they are operating effectively, and that exceptions are identified and remediated in a timely manner. An AI pipeline that runs nightly, logs every classification result with a timestamp, and triggers documented remediation workflows satisfies all three requirements — and does so more completely than an annual GRC report. The key is immutable logging. Every run of the pipeline must produce a signed, timestamped log that cannot be altered. Store these in an append-only data store (Amazon S3 Object Lock, Azure Immutable Blob Storage, or a write-once PostgreSQL audit schema) and your PCAOB or Big Four auditors will have what they need.
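Where a managed append-only store is not available, a hash-chained log provides equivalent tamper evidence at the application level. A sketch, assuming one entry per nightly pipeline run (this complements, rather than replaces, write-once storage):

```python
import hashlib
import json
from datetime import datetime, timezone

def append_chained_entry(log: list[dict], payload: dict) -> dict:
    """Append an audit entry whose hash covers the previous entry's
    hash, so altering any earlier entry breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; returns False if any entry was altered."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("timestamp", "payload", "prev_hash")}
        if entry["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()
                          ).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Run `verify_chain` as part of each audit cycle and store the head hash somewhere the pipeline cannot write to; that single value attests the integrity of the whole history.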

GDPR Article 32: Security of Processing

For organisations processing EU personal data in SAP (HR data in HCM, customer data in CRM or SD modules), GDPR requires appropriate technical measures to ensure data security. Continuous access risk monitoring is a strong demonstration of compliance with Article 32. Critically, the AI pipeline itself must handle personal data (user IDs, role assignments) in a GDPR-compliant manner. If you use a cloud-hosted LLM API for classification, ensure your data processing agreement covers SAP authorization data. For most regulated environments, a self-hosted model is the only viable option.

ISO 27001:2022 Annex A Controls

ISO 27001:2022 Annex A includes specific controls for access management (A.5.15 through A.5.18) and for identity management (A.5.16). The new 2022 revision added A.8.2 (privileged access rights) and A.8.3 (information access restriction) as explicit controls. An AI-augmented continuous monitoring programme maps directly to these controls and provides documented evidence of their operating effectiveness. For organisations seeking ISO 27001 certification or maintaining existing certification, the pipeline's output can be referenced in the Statement of Applicability as evidence of control implementation.

Satisfying Auditors with AI-Generated Findings

External auditors in 2026 are increasingly familiar with AI-generated analysis, but they still require three things to rely on AI findings: explainability (the LLM must be able to articulate why a conflict is a conflict), reproducibility (running the same data through the same model must produce consistent results), and human review (at least a sample of AI findings must be reviewed and approved by a qualified human before they enter the remediation record). Structure your pipeline to satisfy all three and you will have no trouble with auditors.

8. What to Implement in Q1 2026: A Practical Roadmap

If you are reading this in early 2026 and deciding where to start, here is the prioritised roadmap I give to new clients. It is designed to deliver measurable risk reduction in 30 days without disrupting existing GRC processes.

Week 1: Baseline and Quick Wins

  • Run transaction SUIM to extract all users with critical base authorizations (S_TCODE for FK01/FK02/F110/FB60/ME21N/ME29N). This is your initial high-risk universe.
  • Audit expired mitigating controls in GRC. Filter the GRC Access Control mitigation monitor for controls with expiry dates in the past 12 months. Flag and escalate every expired mitigation where the underlying risk is still present.
  • Identify firefighter IDs with standing access rather than time-limited access. Any firefighter ID that has been active for more than 72 hours without a closed ticket should be treated as a critical finding.
  • Export AGR_USERS to CSV and pivot by user to find individuals with more than 25 role assignments. This is a strong heuristic for accumulated-privilege risk.
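The role-count pivot in the last step needs no spreadsheet. A sketch over (user, role) pairs from an AGR_USERS export, with the threshold taken from the heuristic above:

```python
from collections import Counter

def users_over_role_threshold(assignments, threshold=25):
    """assignments: iterable of (user_id, role_name) pairs from an
    AGR_USERS export. Returns {user_id: distinct_role_count} for users
    above the threshold — a quick accumulated-privilege screen."""
    seen = set()
    counts = Counter()
    for user, role in assignments:
        if (user, role) not in seen:  # count distinct roles only
            seen.add((user, role))
            counts[user] += 1
    return {u: c for u, c in counts.items() if c > threshold}
```

Feed the result straight into Week 3's high-risk cohort: these are the users to classify first.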

Week 2: Deploy the Extraction Pipeline

  • Set up the RFC technical user with minimum required authorizations (read-only, table display, no dialog access).
  • Deploy the Python extraction scripts on a secure bastion host inside the SAP network zone.
  • Run a full extraction and validate data completeness against known role counts in SUIM.
  • Build the user profile dataset and spot-check ten high-risk users manually to validate the extraction logic.

Week 3: LLM Classification and Alerting

  • Deploy a self-hosted LLM (Ollama with Mistral 7B or LLaMA 3.1 8B is sufficient for this classification task) on the bastion host or an adjacent secure VM.
  • Run classification across your highest-risk user cohort (those with more than 25 roles or known critical transactions). Review the output manually and tune the prompt if false positives are excessive.
  • Integrate alerting with your existing ITSM tool (ServiceNow, Jira Service Management) so that CRITICAL findings automatically generate a ticket assigned to the SAP security team.

Week 4: Operationalise and Document

  • Schedule the nightly pipeline via cron or your enterprise scheduler.
  • Configure the immutable audit log storage. Every run's output, including the LLM's reasoning for each classification, must be retained for a minimum of 12 months (36 months for SOX-scoped systems).
  • Brief your external auditors or GRC team on the new monitoring capability. Provide them with a sample report demonstrating explainability, reproducibility, and human review steps.
  • Establish a monthly review cadence where the SAP security team reviews the previous 30 days of findings, validates closed remediations, and confirms that no new CRITICAL findings remain open beyond 72 hours.

30-Day Readiness Checklist

  • RFC technical user created and authorizations validated
  • Extraction pipeline tested against non-production system first
  • All expired mitigating controls reviewed and either renewed or revoked
  • Firefighter ID standing access eliminated
  • High-risk user cohort (>25 roles) fully classified by AI pipeline
  • CRITICAL findings remediations tracked in ITSM
  • Nightly job running and log retention configured
  • Audit trail reviewed by internal audit or external auditor
  • SOX / GDPR / ISO 27001 control mapping documented
  • Anomaly detection baseline established (minimum 30 days of behavioural data)

Conclusion

SAP security auditing is broken by default. The combination of role explosion, composite role opacity, point-in-time audit blindness, and static SoD rulesets creates a structural gap that determined insiders and opportunistic employees have learned to exploit. The €340,000 fraud I described in this article is not an edge case. It is what happens when GRC tooling operates as a compliance checkbox rather than a genuine security control.

AI changes the equation not by replacing human judgment but by making human judgment tractable at scale. No human auditor can meaningfully analyse 40,000 users × 50,000 permissions × 365 days per year. An AI pipeline running nightly can, and it will flag the things that matter before they become eight-figure insurance claims or GDPR breach notifications.

The technology stack I have described in this article — RFC extraction, LLM classification, anomaly detection, immutable audit logging — is not experimental. It is production-ready, cost-effective, and demonstrably compliant with the frameworks your auditors care about. The question for SAP security teams in 2026 is not whether to implement this capability. It is how quickly you can get there before your next audit finds what you should have found yourself.

Questions about implementing this pipeline in your SAP landscape? Reach out via the contact form or connect on LinkedIn. I respond to all serious technical inquiries within 48 hours.