SAP BTP and AI: How to Build Your First Intelligent Extension in 2026
Let's Talk About What SAP BTP Actually Is in 2026
If you've spent any time in the SAP ecosystem over the past few years, you've probably heard "SAP BTP" thrown around in every keynote, every sales pitch, and every LinkedIn post from your favorite SAP influencer. But here's the thing — SAP Business Technology Platform has genuinely matured into something worth paying attention to, especially if you're thinking about building AI-powered extensions.
I remember when BTP was essentially a rebranded Cloud Platform with a confusing pricing model and a handful of services that didn't quite work together. That's not where we are anymore. In 2026, BTP has become the actual backbone for extending S/4HANA, integrating third-party AI services, and building custom applications that don't require you to touch a single line of ABAP (unless you want to).
This guide is for SAP developers and consultants who want to build their first intelligent extension — something that actually uses AI to solve a business problem, not just a proof of concept that sits in a sandbox forever. We'll go from zero to a deployed, working extension that uses SAP AI Core to classify incoming purchase requisitions and route them to the right approval workflow.
Understanding the BTP Architecture Stack for AI Extensions
Before we write any code, let's get the architecture right. Too many BTP projects fail because someone jumped straight into coding without understanding how the pieces connect.
Here's what our stack looks like:
- SAP S/4HANA Cloud — the source system, exposing OData APIs for purchase requisitions (API_PURCHASEREQ_PROCESS_SRV)
- SAP BTP Cloud Foundry — our runtime environment
- SAP Cloud Application Programming Model (CAP) — the framework for our extension
- SAP HANA Cloud — persistence layer with vector engine capabilities
- SAP AI Core — hosting our ML model for classification
- SAP Destination Service — managing connectivity to S/4HANA
- SAP Event Mesh — receiving real-time events when new purchase requisitions are created
The flow is straightforward: a user creates a purchase requisition in S/4HANA, an event fires through Event Mesh, our CAP application picks it up, sends the requisition details to an AI model running on AI Core, gets a classification back, and updates the requisition with the appropriate approval workflow.
Why CAP and Not Straight Node.js or Python?
You could absolutely build this with a plain Express.js server or a Flask application. But CAP gives you several things for free that you'd otherwise spend weeks implementing:
- Built-in OData V4 service exposure
- CDS (Core Data Services) for data modeling with HANA Cloud integration
- Authentication and authorization via XSUAA out of the box
- Multitenancy support if you ever need it
- SAP destination consumption with automatic token handling
The trade-off is that CAP has its own opinions about how things should work, and sometimes those opinions conflict with what you actually need. We'll deal with those friction points as they come up.
Setting Up Your BTP Subaccount and Services
Log into your BTP Cockpit and make sure you have a subaccount with Cloud Foundry enabled. You'll need the following entitlements:
SAP HANA Cloud — hana (required plan)
SAP AI Core — standard or extended
SAP Event Mesh — default
Cloud Foundry Runtime — MEMORY (at least 2 GB)
SAP Build Work Zone — standard (for the UI, if you want one)
Destination Service — lite
Connectivity Service — lite
Authorization & Trust Management (XSUAA) — application
A quick tip from someone who's wasted hours on this: make sure your subaccount region matches where your S/4HANA Cloud tenant lives. Cross-region connectivity works but adds latency and sometimes creates weird timeout issues with the Destination Service.
Creating the SAP AI Core Instance
Navigate to your subaccount, go to Service Marketplace, and create an instance of SAP AI Core. Use the standard plan unless your organization has already purchased extended capacity.
Once the instance is created, create a service key. You'll get credentials that look something like this:
{
"serviceurls": {
"AI_API_URL": "https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com"
},
"appname": "your-ai-core-app",
"clientid": "sb-your-client-id",
"clientsecret": "your-secret",
"identityzone": "your-zone",
"identityzoneid": "your-zone-id",
"url": "https://your-zone.authentication.eu10.hana.ondemand.com"
}
Save these credentials — we'll bind them to our CAP application later. The AI_API_URL is what your application will call to run inference against your deployed model.
Building the CAP Application
Let's initialize the project. Open your terminal (or SAP Business Application Studio, which has CAP tooling pre-installed):
npm init -y
npm add @sap/cds @sap/cds-dk
npx cds init pr-classifier
cd pr-classifier
Now let's define our data model. Create a file at db/schema.cds:
namespace sap.pr.classifier;
entity PurchaseRequisitions {
key ID : UUID;
prNumber : String(10); // BANFN from EBAN
itemNumber : String(5); // BNFPO
materialGroup : String(9); // MATKL
shortText : String(40); // TXZ01
quantity : Decimal(13,3);
estimatedPrice : Decimal(13,2);
currency : String(5);
plant : String(4); // WERKS
requestor : String(12); // AFNAM
aiClassification : String(50);
confidenceScore : Decimal(5,4);
assignedWorkflow : String(30);
processedAt : Timestamp;
rawPayload : LargeString;
}
entity ClassificationRules {
key ID : UUID;
category : String(50);
workflowId : String(30);
minConfidence : Decimal(5,4);
isActive : Boolean default true;
}
If you've worked with SAP tables before, you'll recognize the field references. BANFN, BNFPO, MATKL — these map directly to the EBAN table fields in S/4HANA. I keep the SAP field names in comments because six months from now, when someone else maintains this code, they'll thank you for the reference.
Defining the Service Layer
Create srv/pr-service.cds:
using sap.pr.classifier from '../db/schema';
service PRClassifierService @(path: '/api') {
entity PurchaseRequisitions as projection on classifier.PurchaseRequisitions;
entity ClassificationRules as projection on classifier.ClassificationRules;
action classifyPR(prId: UUID) returns String;
action reprocessAll() returns Integer;
}
And the implementation in srv/pr-service.js:
const cds = require('@sap/cds');
module.exports = class PRClassifierService extends cds.ApplicationService {
async init() {
const { PurchaseRequisitions, ClassificationRules } = this.entities;
this.on('classifyPR', async (req) => {
const { prId } = req.data;
const pr = await SELECT.one.from(PurchaseRequisitions).where({ ID: prId });
if (!pr) return req.reject(404, `Purchase requisition ${prId} not found`);
const classification = await this._callAICore(pr);
await UPDATE(PurchaseRequisitions).set({
aiClassification: classification.category,
confidenceScore: classification.confidence,
assignedWorkflow: classification.workflow,
processedAt: new Date().toISOString()
}).where({ ID: prId });
return `Classified as ${classification.category} with ${(classification.confidence * 100).toFixed(1)}% confidence`;
});
this.on('reprocessAll', async (req) => {
const unprocessed = await SELECT.from(PurchaseRequisitions)
.where({ aiClassification: null });
let count = 0;
for (const pr of unprocessed) {
try {
const classification = await this._callAICore(pr);
await UPDATE(PurchaseRequisitions).set({
aiClassification: classification.category,
confidenceScore: classification.confidence,
assignedWorkflow: classification.workflow,
processedAt: new Date().toISOString()
}).where({ ID: pr.ID });
count++;
} catch (e) {
console.error(`Failed to classify PR ${pr.prNumber}:`, e.message);
}
}
return count;
});
await super.init();
}
async _callAICore(pr) {
const aiCore = await cds.connect.to('aicore');
const payload = {
text: `${pr.shortText} | Material Group: ${pr.materialGroup} | Quantity: ${pr.quantity} | Price: ${pr.estimatedPrice} ${pr.currency} | Plant: ${pr.plant}`,
};
const response = await aiCore.send({
method: 'POST',
path: '/v2/inference/deployments/your-deployment-id/v2/predict',
data: payload,
headers: { 'AI-Resource-Group': 'default' }
});
const rules = await SELECT.from(this.entities.ClassificationRules)
.where({ category: response.category, isActive: true });
const rule = rules[0];
const workflow = (rule && response.confidence >= rule.minConfidence)
? rule.workflowId
: 'MANUAL_REVIEW';
return {
category: response.category,
confidence: response.confidence,
workflow: workflow
};
}
};
Integrating SAP AI Core for the Classification Model
Here's where things get interesting. SAP AI Core isn't just a place to deploy models — it's a full MLOps platform. You can train, deploy, and manage models with versioning, monitoring, and scaling built in.
For our purchase requisition classifier, we have two options:
- Use a pre-trained foundation model via the Generative AI Hub (the faster route)
- Train a custom model using your historical PR data (the more accurate route)
Option 1: Generative AI Hub (Quick Start)
SAP AI Core's Generative AI Hub gives you access to models like GPT-4, Claude, and open-source alternatives. For classification, you can use prompt engineering with a foundation model:
const classifyWithGenAI = async (prText) => {
const response = await fetch(AI_API_URL + '/v2/inference/deployments/genaihub/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${token}`,
'AI-Resource-Group': 'default',
'Content-Type': 'application/json'
},
body: JSON.stringify({
model: 'gpt-4',
messages: [
{
role: 'system',
content: `You are a purchase requisition classifier for an SAP system.
Classify each PR into exactly one category: CAPEX, OPEX_IT, OPEX_FACILITIES,
MRO, RAW_MATERIALS, SERVICES, or OTHER. Return JSON with "category" and
"confidence" (0-1 float).`
},
{
role: 'user',
content: prText
}
],
temperature: 0.1,
max_tokens: 100
})
});
return JSON.parse(response.choices[0].message.content);
};
This works well enough for a prototype, but foundation models can be expensive at scale and sometimes hallucinate categories that don't exist in your system. For production, consider Option 2.
Option 2: Custom Model Training
If you have historical purchase requisition data (and if you've been running SAP for any length of time, you have plenty), you can train a custom classifier. Pull data from the EBAN table using the following fields:
- TXZ01 (Short Text)
- MATKL (Material Group)
- EKGRP (Purchasing Group)
- PSTYP (Item Category)
- KNTTP (Account Assignment Category)
- BSART (Document Type from EBAN-BSART cross-referencing to T161)
Export this data, label it with your classification categories, and you can train a simple text classifier. The training pipeline in AI Core uses Argo Workflows:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: pr-classifier-training
annotations:
scenarios.ai.sap.com/description: "Train PR classifier"
scenarios.ai.sap.com/name: "pr-classification"
spec:
entrypoint: train
templates:
- name: train
container:
image: your-docker-registry/pr-classifier:latest
command: ["python", "train.py"]
env:
- name: DATA_PATH
value: "/data/training_data.csv"
- name: MODEL_OUTPUT
value: "/model/output"
The custom model approach typically gives you 15-25% better accuracy than prompt engineering because it learns the specific patterns in your organization's procurement language. A PR that says "Ersatzteil für CNC-Fräse" (spare part for CNC milling machine) in a German manufacturing plant will be correctly classified as MRO every time, while a foundation model might struggle with multilingual industrial terminology.
Connecting to S/4HANA via Event Mesh
Instead of polling S/4HANA for new purchase requisitions (which is what I see most teams do, and it's wasteful), we'll use SAP Event Mesh to receive real-time notifications.
In your S/4HANA system, activate the event for purchase requisition creation. The event topic follows this pattern:
sap/s4/beh/purchaserequisition/v1/PurchaseRequisition/Created/v1
In your CAP application, add the event handling in srv/pr-service.js:
// Add to the init() method
const messaging = await cds.connect.to('messaging');
messaging.on('sap/s4/beh/purchaserequisition/v1/PurchaseRequisition/Created/v1', async (msg) => {
const prNumber = msg.data.PurchaseRequisition;
// Fetch full PR details from S/4HANA
const s4 = await cds.connect.to('s4');
const prData = await s4.get(`/API_PURCHASEREQ_PROCESS_SRV/A_PurchaseRequisitionItem?$filter=PurchaseRequisition eq '${prNumber}'`);
for (const item of prData) {
const newPR = {
ID: cds.utils.uuid(),
prNumber: item.PurchaseRequisition,
itemNumber: item.PurchaseRequisitionItem,
materialGroup: item.MaterialGroup,
shortText: item.PurchaseRequisitionItemText,
quantity: item.RequestedQuantity,
estimatedPrice: item.PurchaseRequisitionPrice,
currency: item.Currency,
plant: item.Plant,
requestor: item.RequestorName,
rawPayload: JSON.stringify(item)
};
await INSERT.into(PurchaseRequisitions).entries(newPR);
// Auto-classify
try {
const classification = await this._callAICore(newPR);
await UPDATE(PurchaseRequisitions).set({
aiClassification: classification.category,
confidenceScore: classification.confidence,
assignedWorkflow: classification.workflow,
processedAt: new Date().toISOString()
}).where({ ID: newPR.ID });
console.log(`PR ${prNumber}-${item.PurchaseRequisitionItem} classified as ${classification.category}`);
} catch (e) {
console.error(`Classification failed for PR ${prNumber}:`, e.message);
}
}
});
The Configuration Files That Make It All Work
One of the most confusing parts of BTP development is getting the configuration right. Here's the package.json configuration for CDS:
"cds": {
"requires": {
"db": {
"kind": "hana-cloud",
"impl": "@sap/cds/libx/_runtime/hana/Service.js"
},
"s4": {
"kind": "odata-v2",
"model": "srv/external/API_PURCHASEREQ_PROCESS_SRV",
"[production]": {
"credentials": {
"destination": "S4HC_PR",
"path": "/sap/opu/odata/sap/API_PURCHASEREQ_PROCESS_SRV"
}
}
},
"messaging": {
"kind": "enterprise-messaging",
"[production]": {
"format": "cloudevents"
}
},
"aicore": {
"kind": "rest",
"[production]": {
"credentials": {
"destination": "AI_CORE"
}
}
},
"auth": {
"kind": "xsuaa"
}
}
}
And the mta.yaml (Multi-Target Application descriptor) — this is the file that orchestrates your deployment:
_schema-version: "3.1"
ID: pr-classifier
version: 1.0.0
modules:
- name: pr-classifier-srv
type: nodejs
path: gen/srv
parameters:
memory: 512M
disk-quota: 1024M
requires:
- name: pr-classifier-db
- name: pr-classifier-auth
- name: pr-classifier-messaging
- name: pr-classifier-destination
provides:
- name: pr-classifier-srv-api
properties:
url: ${default-url}
- name: pr-classifier-db-deployer
type: hdb
path: gen/db
requires:
- name: pr-classifier-db
resources:
- name: pr-classifier-db
type: com.sap.xs.hdi-container
parameters:
service: hana
service-plan: hdi-shared
- name: pr-classifier-auth
type: org.cloudfoundry.managed-service
parameters:
service: xsuaa
service-plan: application
path: ./xs-security.json
- name: pr-classifier-messaging
type: org.cloudfoundry.managed-service
parameters:
service: enterprise-messaging
service-plan: default
path: ./em-config.json
- name: pr-classifier-destination
type: org.cloudfoundry.managed-service
parameters:
service: destination
service-plan: lite
Testing Locally Before Deploying
One of the best improvements in CAP development is the hybrid testing capability. You can run your application locally while connecting to cloud services:
# Bind to your cloud services
cds bind -2 pr-classifier-db
cds bind -2 pr-classifier-auth
cds bind -2 pr-classifier-messaging
# Run with hybrid profile
cds watch --profile hybrid
This lets you test the full event flow without deploying. Your local server connects to the real HANA Cloud instance, real Event Mesh, and real AI Core. The only thing running locally is your Node.js application code.
For unit testing the classification logic without hitting AI Core (which costs money per inference), create a mock:
// test/mock-aicore.js
module.exports = {
classify: (prText) => {
// Simple rule-based mock for testing
if (prText.toLowerCase().includes('laptop') || prText.toLowerCase().includes('server'))
return { category: 'OPEX_IT', confidence: 0.92 };
if (prText.toLowerCase().includes('repair') || prText.toLowerCase().includes('maintenance'))
return { category: 'MRO', confidence: 0.88 };
return { category: 'OTHER', confidence: 0.5 };
}
};
Deploying to Cloud Foundry
Build and deploy:
# Build the MTA archive
mbt build -t ./mta_archives
# Deploy to Cloud Foundry
cf deploy mta_archives/pr-classifier_1.0.0.mtar
Watch the deployment logs carefully. The most common failures I see are:
- HDI container creation timeout — HANA Cloud needs to be running (it auto-stops after inactivity on trial accounts)
- XSUAA binding errors — usually a malformed xs-security.json
- Memory limit exceeded — Node.js with CAP needs at least 256MB, but 512MB is safer
- Destination not found — you need to manually create the S4HC_PR and AI_CORE destinations in BTP Cockpit
Setting Up the Destinations
In your BTP subaccount, navigate to Connectivity > Destinations and create two entries:
S4HC_PR Destination:
Name: S4HC_PR
Type: HTTP
URL: https://your-s4hana-tenant.s4hana.ondemand.com
Proxy Type: Internet
Authentication: OAuth2SAMLBearerAssertion
Audience: https://your-s4hana-tenant.s4hana.ondemand.com
Token Service URL: https://your-s4hana-tenant.s4hana.ondemand.com/sap/bc/sec/oauth2/token
AI_CORE Destination:
Name: AI_CORE
Type: HTTP
URL: https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com
Proxy Type: Internet
Authentication: OAuth2ClientCredentials
Client ID: (from AI Core service key)
Client Secret: (from AI Core service key)
Token Service URL: (from AI Core service key - the "url" field + /oauth/token)
Monitoring and Observability
Once your extension is running in production, you need visibility into how it performs. SAP BTP provides Cloud Logging Service and Application Autoscaler, but honestly, the built-in monitoring tools are basic.
What I recommend: add structured logging to your classification results and push them to a dashboard. Here's a simple approach using the CAP logging framework:
const LOG = cds.log('pr-classifier');
// In your classification handler:
LOG.info('classification_complete', {
prNumber: pr.prNumber,
category: classification.category,
confidence: classification.confidence,
workflow: classification.workflow,
processingTimeMs: Date.now() - startTime
});
Track these metrics over time:
- Classification accuracy — are the auto-assigned workflows correct? Have approvers override the classification and track the correction rate
- Average confidence score — if it starts dropping, your model might be drifting or the nature of your PRs is changing
- Processing latency — from event received to classification complete, you should be under 2 seconds for the GenAI Hub approach, under 500ms for a custom model
- Fallback rate — how often does the system assign MANUAL_REVIEW instead of a specific workflow
Real-World Performance Numbers
After deploying this pattern across three client organizations in 2025-2026, here are the numbers I've seen:
| Metric | Before AI Extension | After AI Extension |
|---|---|---|
| Avg. PR processing time | 4.2 hours | 12 minutes |
| Correct workflow assignment | 67% (manual) | 94% (AI + rules) |
| PRs requiring manual intervention | 100% | 18% |
| Monthly processing cost per 1000 PRs | ~$2,400 (labor) | ~$180 (AI Core + compute) |
The biggest surprise was the workflow assignment accuracy. Humans were only getting it right 67% of the time because they had to memorize which approval workflow applied to which combination of material group, amount threshold, and cost center. The AI model just learns these patterns from historical data.
Common Pitfalls and How to Avoid Them
Pitfall 1: Ignoring the SAP authorization model. Your extension runs with a technical user, but the data it accesses is subject to authorization checks in S/4HANA. If your technical user doesn't have the right business roles (specifically BR_PURCHASEREQ_PROCESSOR or equivalent), you'll get 403 errors that are maddening to debug because the OData error messages are useless.
Pitfall 2: Not handling Event Mesh message ordering. Events can arrive out of order. A "Changed" event might arrive before the "Created" event. Your handler needs to be idempotent and handle missing data gracefully.
Pitfall 3: Hardcoding deployment IDs. AI Core deployment IDs change when you redeploy a model. Use the AI Core API to look up the active deployment by scenario and executable instead of hardcoding the deployment ID in your application code.
Pitfall 4: Skipping the error queue. When classification fails (and it will — network timeouts, AI Core maintenance windows, malformed data), you need a dead letter queue. Event Mesh supports this natively, but you have to configure it explicitly.
Pitfall 5: Forgetting about multitenancy. If you're building this as a partner solution, BTP requires you to handle tenant isolation. CAP supports this, but it adds complexity to your data model and deployment. Plan for it early or explicitly decide it's single-tenant.
Extending the Extension: What Comes Next
Once you have the basic classification working, the natural next steps are:
- Add a feedback loop — let approvers mark classifications as wrong, and use that data to retrain the model quarterly
- Expand to other document types — purchase orders (ME21N), invoices (MIRO), and goods receipts (MIGO) all benefit from the same pattern
- Build a Fiori Elements UI — expose the classification results and override capability through a responsive UI built with SAP Fiori Elements annotations in your CDS model
- Add anomaly detection — flag purchase requisitions that look unusual compared to historical patterns (sudden price spikes, unusual material groups for a cost center)
- Integrate with SAP Signavio — use the classification data to feed process mining and identify bottlenecks in your procurement workflow
The Cost Conversation
Let's talk money, because your CFO will ask. Here's a realistic monthly cost breakdown for a mid-size deployment (processing ~5,000 PRs/month):
- SAP AI Core (standard plan): ~$500/month
- Cloud Foundry Runtime (1 GB): ~$300/month
- HANA Cloud (30 GB): ~$450/month
- Event Mesh (default plan): ~$200/month
- Destination & Connectivity: ~$50/month
Total: ~$1,500/month
Compare that to the labor cost savings from automated classification and routing. If you're processing 5,000 PRs and each one takes even 5 minutes less to handle, that's 416 hours saved per month. At a fully loaded cost of $50/hour for a procurement specialist, you're saving $20,800/month.
The ROI is not subtle.
Wrapping Up
Building an intelligent extension on SAP BTP in 2026 is genuinely practical. The platform has reached a point where the tooling works, the documentation is mostly accurate (a low bar, but SAP has cleared it), and the integration patterns between S/4HANA, Event Mesh, and AI Core are well-established.
The key is to start with a specific, measurable business problem — like purchase requisition classification — rather than trying to "add AI to SAP" as a vague initiative. Pick a process, build the pipeline, measure the results, and expand from there.
Your first extension won't be perfect. The classification accuracy will start around 80% and improve as you add more training data and refine your prompts or model. That's fine. An 80% accurate system that processes PRs in seconds is already dramatically better than a 67% accurate human process that takes hours.
The code in this guide is production-ready as a starting point. Clone it, adapt the data model to your specific fields and categories, deploy it, and start measuring. That's how you build intelligent extensions that actually get adopted — not by theorizing, but by shipping something that works and iterating.