OpenClaw Deep Dive Series EP.3 — In EP.2 we took a deep dive into Skills and built a custom Skill. Now it's time to go deeper into the "Kernel Module" — the core of OpenClaw that controls everything from incoming/outgoing messages to Tool Execution. If you've already read the Kernel Module theory article, EP.3 takes you hands-on: writing a real Module from the first line to a fully working implementation!
In short — what you will learn in this article:
- What is a Kernel Module — The component that sits between the AI Brain (LLM) and the outside world
- Module Lifecycle — The lifecycle: init → register → process → cleanup
- 5 API Hooks — onMessage, onToolCall, onToolResult, onResponse, onError
- Write Your First Module — An Audit Logger that logs every incoming/outgoing message
- Real-world Modules — Content Filter and Rate Limiter with working examples
- Testing & Debugging — How to test your Module before deploying
Kernel Module vs Skills — What's the Difference?
Before writing a Module, you need to clearly understand the difference between Skills and Kernel Modules, because they operate at different levels — if a Skill is like an employee who handles specialized tasks, a Kernel Module is like a manager who oversees the entire system.
| Comparison | Skills | Kernel Module |
|---|---|---|
| Scope | Handles specialized trigger-based tasks (e.g., "check stock") | Controls the entire system, intercepts every message |
| When It Runs | Only when a Trigger matches | On every message passing through the system |
| Access Level | Can only access the data sent to it | Has access to every message, every tool call, every response |
| Primary Use Case | Add specialized capabilities (fetch data, execute commands) | Audit logging, content filtering, rate limiting, permission control |
| Analogy | An employee who follows orders | A manager who oversees the entire system |
A Kernel Module sits between the AI Brain (LLM) and the outside world, serving as an intercept and modify layer for all data flowing in and out — including input processing, output formatting, tool execution, permission control, and logging.
Module Lifecycle — The 4 Phases of a Module
A Kernel Module operates through a 4-phase lifecycle, from initialization to shutdown:
```
init() → registerHooks() → [Message Loop: onMessage → onToolCall → onToolResult → onResponse] → cleanup()
   │              │                               │                                                │
 Start the    Register the          Process every message                                      Shutdown:
 Module       desired Hooks         (loops until the Agent stops)                              clean up resources
```
| Phase | Function | Purpose | Example |
|---|---|---|---|
| 1. Initialize | init(config) | Load config, connect to database, prepare resources | Open log file, connect to Redis |
| 2. Register | registerHooks() | Register the desired Hooks (onMessage, onResponse, etc.) | Tell the system "I will intercept every message" |
| 3. Process | onMessage() etc. | Process every message that passes through — loops continuously | Log every message, filter sensitive data |
| 4. Cleanup | cleanup() | Close connections, save final data, clean up resources | Close log file, disconnect Redis |
API Hooks — 5 Critical Integration Points
The heart of a Kernel Module is the API Hooks — points where you can "inject" your code into OpenClaw's processing pipeline. There are 5 Hooks in total:
| Hook | When It Runs | Data Received | What You Can Do |
|---|---|---|---|
| onMessage | Before sending the message to the LLM | User message, user ID, timestamp | Modify/filter message, log, block request |
| onToolCall | Before calling a Tool (Skill) | Tool name, parameters, context | Allow/deny, modify parameters, log |
| onToolResult | After a Tool finishes executing | Tool result, execution time, status | Modify/filter results, log, alert on anomalies |
| onResponse | Before sending the response back to the user | LLM response, tokens used, model info | Modify response, add disclaimer, log |
| onError | When an error occurs in the system | Error type, message, stack trace | Log error, send alert, fallback response |
Each Hook can modify data before passing it along or block the operation entirely. This is the power that gives Kernel Modules control over everything the AI Agent does.
Write Your First Kernel Module — Audit Logger
Let's build a Module that logs every incoming/outgoing message as an Audit Trail — this Module will record who sent what, when, and how the AI responded. It's ideal for organizations that need risk management and compliance.
Step 1: Create the Module Folder
```bash
# Create the folder structure
mkdir -p modules/audit-logger

# File structure
modules/audit-logger/
├── manifest.json   # Module metadata
├── index.js        # Main code (hooks)
└── logs/           # Folder for log files
```
Step 2: Write manifest.json
```json
{
  "name": "audit-logger",
  "version": "1.0.0",
  "description": "Log every incoming/outgoing message as an Audit Trail",
  "author": "Your Organization",
  "hooks": ["onMessage", "onResponse", "onError"],
  "config": {
    "logDir": "./modules/audit-logger/logs",
    "logFormat": "json",
    "maxFileSize": "10MB",
    "retentionDays": 90
  }
}
```
Step 3: Write index.js — Hook Logic
```js
// modules/audit-logger/index.js
const fs = require('fs');
const path = require('path');

let logStream;

module.exports = {
  // Phase 1: Initialize
  async init(config) {
    const logDir = config.logDir || './logs';
    if (!fs.existsSync(logDir)) fs.mkdirSync(logDir, { recursive: true });
    const logFile = path.join(logDir, `audit-${new Date().toISOString().slice(0, 10)}.jsonl`);
    logStream = fs.createWriteStream(logFile, { flags: 'a' });
    console.log(`[AuditLogger] Initialized. Logging to ${logFile}`);
  },

  // Phase 2: Register Hooks
  registerHooks() {
    return ['onMessage', 'onResponse', 'onError'];
  },

  // Hook: Before sending to LLM — log user message
  async onMessage(data) {
    const entry = {
      timestamp: new Date().toISOString(),
      type: 'user_message',
      userId: data.userId,
      message: data.message,
      sessionId: data.sessionId
    };
    logStream.write(JSON.stringify(entry) + '\n');
    // Pass message through without modification
    return data;
  },

  // Hook: Before sending back to user — log AI response
  async onResponse(data) {
    const entry = {
      timestamp: new Date().toISOString(),
      type: 'ai_response',
      userId: data.userId,
      response: data.response.substring(0, 500),
      tokensUsed: data.tokensUsed,
      model: data.model
    };
    logStream.write(JSON.stringify(entry) + '\n');
    return data;
  },

  // Hook: When an error occurs
  async onError(error) {
    const entry = {
      timestamp: new Date().toISOString(),
      type: 'error',
      errorType: error.type,
      message: error.message,
      stack: error.stack
    };
    logStream.write(JSON.stringify(entry) + '\n');
  },

  // Phase 4: Cleanup
  async cleanup() {
    if (logStream) logStream.end();
    console.log('[AuditLogger] Cleanup complete.');
  }
};
```
Step 4: Register the Module in Config
```yaml
# config/modules.yaml — Register Kernel Modules
modules:
  - path: "modules/audit-logger"
    enabled: true
    config:
      logDir: "./logs/audit"
      retentionDays: 90

  # Add more Modules as needed
  # - path: "modules/content-filter"
  #   enabled: true
```
Step 5: Test the Module
```bash
# Run the Agent with the Module
npm start -- --agent agents/my-agent.yaml

# Send a test message
> Hello, what reports are available today?

# Check the log file
cat logs/audit/audit-2026-04-03.jsonl

# Output:
{"timestamp":"2026-04-03T09:15:30.123Z","type":"user_message","userId":"user-001","message":"Hello, what reports are available today?","sessionId":"sess-abc"}
{"timestamp":"2026-04-03T09:15:32.456Z","type":"ai_response","userId":"user-001","response":"Hello! Today's reports include...","tokensUsed":245,"model":"gpt-4"}
```
In just five steps you have a working Audit Logger Module! Every message passing through the system is recorded as JSON Lines — easy to search and to analyze with standard tools.
Real-World Example — Content Filter Module
A Module that filters sensitive data before sending it to the LLM — preventing national ID numbers, bank account numbers, or passwords from reaching the external AI. Ideal for organizations that require high-level security.
```js
// modules/content-filter/index.js
const SENSITIVE_PATTERNS = [
  { name: 'Thai ID Card', regex: /\b\d{1}-\d{4}-\d{5}-\d{2}-\d{1}\b/g, mask: '[REDACTED-ID]' },
  { name: 'Bank Account', regex: /\b\d{3}-\d{1}-\d{5}-\d{1}\b/g, mask: '[REDACTED-BANK]' },
  { name: 'Credit Card', regex: /\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b/g, mask: '[REDACTED-CC]' },
  { name: 'Email Password', regex: /(?:password|pwd)[\s:=]+\S+/gi, mask: '[REDACTED-PWD]' },
  { name: 'Phone Number', regex: /\b0[689]\d{1}-\d{3}-\d{4}\b/g, mask: '[REDACTED-PHONE]' }
];

module.exports = {
  registerHooks() { return ['onMessage']; },

  async onMessage(data) {
    let filtered = data.message;
    let redactedCount = 0;
    for (const pattern of SENSITIVE_PATTERNS) {
      const matches = filtered.match(pattern.regex);
      if (matches) {
        redactedCount += matches.length;
        filtered = filtered.replace(pattern.regex, pattern.mask);
      }
    }
    if (redactedCount > 0) {
      console.log(`[ContentFilter] Redacted ${redactedCount} sensitive items`);
    }
    return { ...data, message: filtered };
  }
};
```
This Module works at the onMessage Hook — before the message is sent to the LLM. Any data matching a pattern is masked immediately, so the AI never sees the actual data.
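You can sanity-check the masking logic outside the Agent by extracting it into a plain function. Here is the credit-card pattern from the module above, trimmed down for a standalone test:

```js
// Standalone check of the masking logic, trimmed to one pattern
// (the regex and mask are copied from the Content Filter above).
const CC = { regex: /\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b/g, mask: '[REDACTED-CC]' };

function redact(text) {
  return text.replace(CC.regex, CC.mask);
}

console.log(redact('Charge card 4111-1111-1111-1111 please'));
// → Charge card [REDACTED-CC] please
```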
Real-World Example — Rate Limiter Module
A Module that limits requests per user per minute — preventing API abuse and controlling LLM costs. Ideal for systems with multiple users.
```js
// modules/rate-limiter/index.js
const userCounters = new Map();

module.exports = {
  async init(config) {
    this.maxRequests = config.max_requests_per_minute || 10;
    this.cooldownMsg = config.cooldown_message || 'You are sending messages too fast. Please wait a moment.';
    // Reset counters every minute
    setInterval(() => userCounters.clear(), 60 * 1000);
  },

  registerHooks() { return ['onMessage']; },

  async onMessage(data) {
    const userId = data.userId;
    const count = (userCounters.get(userId) || 0) + 1;
    userCounters.set(userId, count);
    if (count > this.maxRequests) {
      console.log(`[RateLimiter] User ${userId} exceeded limit (${count}/${this.maxRequests})`);
      // Block request — send response immediately without going through LLM
      return { ...data, blocked: true, blockReason: this.cooldownMsg };
    }
    return data;
  }
};
```
| Config Option | Default Value | Description |
|---|---|---|
| max_requests_per_minute | 10 | Maximum requests per user per minute |
| cooldown_message | "You are sending messages too fast..." | Message displayed when the limit is exceeded |
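The counting logic is easy to unit-test if you extract it from the hook into a pure function. A sketch of the same per-user counter (names here are illustrative, not part of OpenClaw):

```js
// The Rate Limiter's counting logic extracted into a factory,
// so it can be tested without a running Agent.
function makeLimiter(maxRequests) {
  const counters = new Map();
  return {
    // Returns true if this request is still within the limit
    hit(userId) {
      const count = (counters.get(userId) || 0) + 1;
      counters.set(userId, count);
      return count <= maxRequests;
    },
    // Call once per minute, as the module's setInterval does
    reset() { counters.clear(); }
  };
}

const limiter = makeLimiter(3);
console.log(limiter.hit('user-001')); // true  (1/3)
console.log(limiter.hit('user-001')); // true  (2/3)
console.log(limiter.hit('user-001')); // true  (3/3)
console.log(limiter.hit('user-001')); // false (over the limit)
```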
Module State — Persistent Data Storage
Kernel Modules have a State that can persist across restarts — allowing a Module to remember data across sessions, such as usage statistics, user preferences, or conversation history.
```js
// Example: Module that tracks usage statistics
module.exports = {
  async init(config) {
    // Load state from file (persists across restarts)
    this.state = (await this.getState()) || {
      totalMessages: 0,
      totalTokens: 0,
      userStats: {},
      startDate: new Date().toISOString()
    };
  },

  // Phase 2: register both hooks this Module needs
  registerHooks() {
    return ['onMessage', 'onResponse'];
  },

  async onMessage(data) {
    this.state.totalMessages++;
    this.state.userStats[data.userId] = (this.state.userStats[data.userId] || 0) + 1;
    await this.setState(this.state); // Save immediately
    return data;
  },

  async onResponse(data) {
    this.state.totalTokens += data.tokensUsed || 0;
    await this.setState(this.state);
    return data;
  },

  // State can be accessed from other Skills or Modules
  getUsageReport() {
    return {
      totalMessages: this.state.totalMessages,
      totalTokens: this.state.totalTokens,
      topUsers: Object.entries(this.state.userStats)
        .sort((a, b) => b[1] - a[1])
        .slice(0, 10)
    };
  }
};
```
getState() and setState() are OpenClaw APIs that automatically save data to disk — no need to handle file I/O yourself. The Module will remember its data even after an Agent restart.
Important Considerations When Writing Kernel Modules:
- Modules have access to every message — write carefully and never expose user data unnecessarily.
- Don't block Hooks for too long — if a Hook exceeds the timeout, the system will skip it, causing the Module to work incompletely.
- Test Modules separately before deploying — never deploy an untested Module to production, as a broken Module can cause the entire Agent to stop working.
- Enable 2FA — for Modules that access sensitive data, add extra authentication before deploying.
Saeree ERP + Kernel Module:
With Kernel Modules, organizations can build comprehensive audit trails, content filters that prevent data leaks, and automated compliance checks — Saeree ERP is developing an AI Assistant that will use these Kernel Modules as security layers for the system. Interested in an ERP system ready for AI in the future? Consult with our team for free.
OpenClaw Deep Dive Series — Read More
OpenClaw Deep Dive Series — 6 Episodes of In-Depth AI Agent Exploration:
- EP.1: Install OpenClaw and Build Your First AI Agent
- EP.2: OpenClaw Skills Deep Dive — Build Custom Skills from Scratch
- EP.3: OpenClaw Kernel Module Hands-On — Write Your Own Module Step by Step (this article)
- EP.4: Build Multi-Agent Workflows with OpenClaw
- EP.5: Deploy OpenClaw to Production — Security, Monitoring, Scaling
- EP.6: Connect OpenClaw to ERP — Build an AI Assistant for Your Organization
"Kernel Modules are the hidden power of OpenClaw — giving you control over everything the AI Agent does, from the first message to the final response."
- Saeree ERP Team



