Three Production Apps, Zero Code: What Keip Actually Looks Like in Practice#
In the past month I shipped three integration apps to my home cluster: a Spanish translator that replies in audio, a health tracking bot I can query from any room in my house, and a camera alert system that decides for itself what’s worth telling me about. Each one took under an hour to get running. None of them required me to write a single line of application code.
That last part is the actual story. The apps are configuration, end to end. There is no glue script. An AI assistant produced most of that configuration in the time it would have taken me to scaffold a project and write the first class.
A Quick Recap#
I wrote about Keip and Enterprise Integration Patterns a few weeks ago. The short version: Keip is an open source platform built on Spring Integration that turns Enterprise Integration Patterns into Kubernetes resources. Instead of writing a Spring Boot application to wire services together, I write an XML route config and Keip deploys it as a pod.
apiVersion: keip.codice.org/v1alpha2
kind: IntegrationRoute
metadata:
name: my-route
namespace: keip
spec:
replicas: 1
routeConfigMap: my-route-xml
The route config lives in a ConfigMap. There is no Dockerfile, no application entrypoint, and no build step. Keip handles all of that.
Installing Keip on an existing cluster is a single command:
kubectl apply -f https://github.com/codice/keip/releases/latest/download/install.yaml
That installs the operator, CRDs, and controller. From there, deploying a route is just a kubectl apply on a ConfigMap and an IntegrationRoute resource.
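For concreteness, here is a minimal sketch of the ConfigMap half of that pairing. The data key and the route contents are illustrative; the Keip operator defines the exact key it expects, so treat this as a shape, not a spec:

```yaml
# Hypothetical sketch of the ConfigMap an IntegrationRoute points at.
# The key name under `data` is illustrative, not authoritative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-route-xml
  namespace: keip
data:
  route.xml: |
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:int="http://www.springframework.org/schema/integration">
      <int:channel id="input"/>
      <!-- route wiring goes here -->
    </beans>
```

Apply the ConfigMap and the IntegrationRoute, and the operator takes it from there.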
What I’ve been building on top of this is keip-connect, a library of protocol adapters that connect integration routes to the services I actually use: Matrix chat, ntfy push notifications, and an Anthropic Claude chat model. Each adapter follows the same Spring Integration channel adapter pattern: an inbound adapter that produces messages onto a channel, or an outbound gateway that sends them somewhere and optionally waits for a reply. (keip-connect is not yet publicly released.)
With those pieces in place, here’s what I built.
App 1: Personal Translator App#
My wife and I are visiting Mexico soon. My Spanish is nearly nonexistent, and I wanted something faster than stopping to open a separate app. I also wanted it to respond in audio when translating to Spanish, so I could hear how things should sound rather than just reading them.
The obvious alternative is Google Translate. The reason I didn’t use it is the same one that runs through everything else in this stack: I don’t want my conversations in someone else’s logs. A translation request is a conversation fragment, and sending it to a third-party API means it leaves my network. The local model is fast and the text stays on my hardware.
There’s a practical reason too. Matrix is already the interface I use for everything: talking to my AI assistant, checking health data, getting camera alerts. The translator is just another room in the same app I already have open. There is no separate tool to install and no context switching.
The route listens on a private Matrix room. Messages arrive as text or voice. It detects the language, translates in the appropriate direction, and when the output is Spanish, generates a voice reply using a local TTS model running on my own hardware. English output stays as text.
The XML that does all of this:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:int="http://www.springframework.org/schema/integration"
xmlns:int-http="http://www.springframework.org/schema/integration/http"
xmlns:matrix="http://cruver.ai/schema/keip-connect/matrix">
<!-- Receive messages from the translation room -->
<matrix:inbound-channel-adapter
client-factory-ref="matrixClient"
channel="rawInput"
room-ids="!MpzjvTEceOoscPvapJ:matrix.cruver.network"/>
<!-- Audio: transcribe via Whisper before translating -->
<int:filter input-channel="rawInput" output-channel="audioInput"
discard-channel="textInput"
expression="headers['matrix_content_type'] == 'm.audio'"/>
<int-http:outbound-gateway
request-channel="audioInput" reply-channel="textInput"
url="${whisper.url}/v1/audio/transcriptions"
http-method="POST" expected-response-type="java.lang.String"/>
<!-- Text input arrives directly -->
<int:channel id="textInput"/>
<!-- Call LLM: detect language and translate; returns JSON
{"direction":"en→es|es→en","translation":"..."} -->
<int:header-enricher input-channel="textInput" output-channel="llmCall">
<int:header name="Content-Type" value="application/json"/>
</int:header-enricher>
<int-http:outbound-gateway
request-channel="llmCall" reply-channel="llmResponse"
url="${llm.url}/v1/chat/completions"
http-method="POST" expected-response-type="java.lang.String"
mapped-request-headers="Content-Type"/>
<!-- Parse direction from LLM response -->
<int:router input-channel="llmResponse"
expression="new com.fasterxml.jackson.databind.ObjectMapper()
.readTree(payload).get('direction').asText()
.startsWith('en') ? 'toSpanishAudio' : 'textReply'"/>
<!-- ES→EN: return translated text directly -->
<int:channel id="textReply"/>
<matrix:outbound-gateway client-factory-ref="matrixClient"
request-channel="textReply"/>
<!-- EN→ES: call XTTS for audio, then send WAV to room -->
<int:channel id="toSpanishAudio"/>
<int-http:outbound-gateway
request-channel="toSpanishAudio" reply-channel="audioReply"
url="${xtts.url}/v1/audio/speech"
http-method="POST" expected-response-type="byte[]"/>
<matrix:outbound-gateway client-factory-ref="matrixClient"
request-channel="audioReply"/>
</beans>
The services it talks to (Whisper for transcription, a local LLM for translation, XTTS for speech) all run on my own hardware. Nothing leaves my network. The route is the plumbing, and the services are the intelligence.
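That router expression is the densest line in the file: it instantiates a Jackson ObjectMapper in SpEL, parses the LLM’s JSON reply, and branches on the direction field. The same decision in plain Python, with hypothetical payloads, just to make the logic explicit:

```python
import json

def pick_reply_channel(llm_payload: str) -> str:
    """Mirrors the route's SpEL router: English-to-Spanish output goes
    to the audio path, Spanish-to-English stays as text."""
    direction = json.loads(llm_payload)["direction"]
    return "toSpanishAudio" if direction.startswith("en") else "textReply"

# Hypothetical LLM replies in the shape the route expects
print(pick_reply_channel('{"direction": "en→es", "translation": "hola"}'))   # toSpanishAudio
print(pick_reply_channel('{"direction": "es→en", "translation": "hello"}'))  # textReply
```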
From “I want a translator bot” to a working app in the Matrix room took about 45 minutes. Most of that was getting the Matrix room configured and E2E encryption keys sorted. The route itself took maybe 15 minutes, largely an AI assistant drafting the XML while I reviewed it.
The translator isn’t the point. It’s just one thing the pattern makes easy. The next two examples are completely different problems, and the approach is the same.
App 2: Health Tracker#
I track glucose, ketones, weight, and blood pressure, among other things, in InfluxDB. The data is useful, but querying it has always required either a Grafana dashboard or writing a Flux query by hand. I wanted to ask questions in plain language and get answers.
The route listens on a dedicated Matrix room and forwards messages to a local LLM with context about what data is available. The LLM generates a Flux query, the route executes it against InfluxDB, and the result comes back as a natural language summary.
I can send messages like “How have my ketones been this week?” or “What was my average glucose yesterday?” and get a direct answer.
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:int="http://www.springframework.org/schema/integration"
xmlns:int-http="http://www.springframework.org/schema/integration/http"
xmlns:matrix="http://cruver.ai/schema/keip-connect/matrix">
<matrix:inbound-channel-adapter
client-factory-ref="matrixClient"
channel="healthInput"
room-ids="!yZRhRvDISqwNnBokVM:matrix.cruver.network"/>
<!-- Route: slash commands log data; plain questions query it -->
<int:router input-channel="healthInput"
expression="payload.startsWith('/') ? 'logChannel' : 'queryChannel'"/>
<!-- Log path: LLM converts slash command to InfluxDB line protocol -->
<int:header-enricher input-channel="logChannel" output-channel="llmLogCall">
<int:header name="Content-Type" value="application/json"/>
</int:header-enricher>
<int-http:outbound-gateway
request-channel="llmLogCall" reply-channel="lineProtocol"
url="${llm.url}/v1/chat/completions"
http-method="POST" expected-response-type="java.lang.String"/>
<!-- Write line protocol to InfluxDB -->
<int:header-enricher input-channel="lineProtocol" output-channel="influxWrite">
<int:header name="Authorization" expression="'Token ' + '${influx.token}'"/>
</int:header-enricher>
<int-http:outbound-channel-adapter
channel="influxWrite"
url="${influx.url}/api/v2/write?org=${influx.org}&amp;bucket=${influx.bucket}"
http-method="POST" mapped-request-headers="Authorization"/>
<!-- Query path: fetch last 7 days from InfluxDB, summarize with LLM -->
<int:header-enricher input-channel="queryChannel" output-channel="influxQuery">
<int:header name="Authorization" expression="'Token ' + '${influx.token}'"/>
</int:header-enricher>
<int-http:outbound-gateway
request-channel="influxQuery" reply-channel="rawData"
url="${influx.url}/api/v2/query?org=${influx.org}"
http-method="POST" expected-response-type="java.lang.String"/>
<!-- LLM summarizes raw CSV into plain language -->
<int-http:outbound-gateway
request-channel="rawData" reply-channel="replies"
url="${llm.url}/v1/chat/completions"
http-method="POST" expected-response-type="java.lang.String"/>
<matrix:outbound-gateway client-factory-ref="matrixClient"
request-channel="replies"/>
</beans>
This one took about 40 minutes end to end. The hardest part was writing a clear system prompt that reliably produces valid Flux syntax. The route itself was straightforward, and again, AI drafted it.
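The log path depends entirely on the LLM emitting valid InfluxDB line protocol from a terse slash command. A minimal Python stand-in for that conversion, with hypothetical measurement names, shows the target format (real line protocol can also carry tags and timestamps):

```python
def to_line_protocol(command: str) -> str:
    """Illustrative stand-in for the LLM's job: turn a slash command
    like '/glucose 92' into minimal InfluxDB line protocol."""
    name, value = command.lstrip("/").split()
    return f"{name} value={float(value)}"

print(to_line_protocol("/glucose 92"))    # glucose value=92.0
print(to_line_protocol("/ketones 1.4"))   # ketones value=1.4
```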
App 3: Intelligent Camera Alerts#
I have six cameras running through Frigate, my local NVR. Frigate does motion detection and object recognition with a Google Coral TPU. What it doesn’t do is decide whether a detection is actually interesting.
A car parked in the street shouldn’t page me. A delivery truck at the door should. A person I don’t recognize in the backyard at 2 AM definitely should. Making that distinction requires judgment, and that’s what the integration layer handles.
The route subscribes to Frigate’s MQTT event stream. When a detection comes in, it grabs the camera snapshot, sends it to a local vision model for analysis, and acts on the verdict. Routine activity is dropped silently. Anything worth knowing about triggers a push notification through ntfy with a description of what the model saw.
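Frigate’s actual event payload nests the detection in before/after objects with far more fields than any route needs; the expressions in the route assume a flattened view. A trimmed, hypothetical sketch of the shape being consumed:

```json
{
  "camera": "backyard",
  "label": "person",
  "score": 0.87,
  "entered_zones": ["yard"]
}
```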
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:int="http://www.springframework.org/schema/integration"
xmlns:int-mqtt="http://www.springframework.org/schema/integration/mqtt"
xmlns:ntfy="http://cruver.ai/schema/keip-connect/ntfy">
<!-- Subscribe to Frigate detection events -->
<int-mqtt:message-driven-channel-adapter
client-ref="mqttClient"
topics="frigate/events"
channel="rawEvents"/>
<!-- Filter to high-confidence detections only -->
<int:filter input-channel="rawEvents" output-channel="significantEvents"
expression="new com.fasterxml.jackson.databind.ObjectMapper()
.readTree(payload).get('score').asDouble() >= 0.65"/>
<!-- Fetch camera snapshot from Frigate API -->
<int-http:outbound-gateway
request-channel="significantEvents" reply-channel="withSnapshot"
url="${frigate.url}/api/{camera}/latest.jpg?h=720"
http-method="GET" expected-response-type="byte[]"
uri-variables-expression="headers"/>
<!-- Ask vision model: is this worth reporting?
Returns JSON {"verdict":"ALERT|ROUTINE","description":"..."} -->
<int:header-enricher input-channel="withSnapshot" output-channel="visionCall">
<int:header name="Content-Type" value="application/json"/>
</int:header-enricher>
<int-http:outbound-gateway
request-channel="visionCall" reply-channel="analyzed"
url="${vision.url}/v1/chat/completions"
http-method="POST" expected-response-type="java.lang.String"
mapped-request-headers="Content-Type"/>
<!-- Filter out ROUTINE judgments -->
<int:filter input-channel="analyzed" output-channel="alerts"
expression="new com.fasterxml.jackson.databind.ObjectMapper()
.readTree(payload).get('verdict').asText() == 'ALERT'"/>
<!-- Send push notification -->
<ntfy:outbound-gateway
client-factory-ref="ntfyClient"
request-channel="alerts"
default-topic="camera-analysis"/>
</beans>
The vision model runs locally on the same GPU stack as everything else. The push notification goes to my phone via a self-hosted ntfy server. The route has no idea what’s in the images; it moves them to the right place and acts on the verdict.
What This Means in Practice#
Each of these apps has real logic: language detection, LLM prompting, vision model inference, database queries. None of that logic lives in the route. It lives in the services the route calls: a Whisper server, a local LLM, a vision model endpoint, an InfluxDB instance. The integration layer connects those services to each other and to the world, deciding what triggers what, what data flows where, and what happens when something fails.
The routes are readable. Looking at any of those XML configs, the structure is clear in thirty seconds, because the primitives are named for what they do: filter, transform, route, split, aggregate. There’s no application framework to understand, no dependency injection container to trace through, no build system to run.
Every service those routes call runs on my hardware. The translations, the health queries, the camera analysis: none of it touches an external API. When the internet goes down, the translator still works. When a provider changes its terms or pricing, nothing breaks. That’s the practical side of building on infrastructure I own.
The Flywheel Effect#
All of this has produced a flywheel effect. Camera alerts go to ntfy, and that’s useful by itself. But ntfy topics are just another event source, and any other route can subscribe to the same topics and build on them. It would be trivial to create a route that logs alert frequency to InfluxDB, one that aggregates overnight detections into a morning digest delivered to Matrix, or one that silences notifications during a window set from a Matrix message. None of those exist yet, but every piece needed to build them is already running. Adding them is just a matter of creating the IntegrationRoutes in k8s.
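As a sketch of how small such a follow-on route would be: assuming keip-connect eventually grows an ntfy inbound adapter to mirror its outbound gateway (it doesn’t exist today; the element name below is invented), the morning-digest idea is roughly an aggregator between two adapters:

```xml
<!-- Hypothetical digest route. The ntfy inbound adapter is invented
     for illustration; the aggregator batches whatever arrives and
     releases a partial group after eight hours. -->
<ntfy:inbound-channel-adapter
    client-factory-ref="ntfyClient"
    topics="camera-analysis"
    channel="overnightAlerts"/>
<int:aggregator input-channel="overnightAlerts" output-channel="digest"
    correlation-strategy-expression="'overnight'"
    release-strategy-expression="false"
    group-timeout="28800000"
    send-partial-result-on-expiry="true"
    expire-groups-upon-timeout="true"/>
<matrix:outbound-gateway client-factory-ref="matrixClient"
    request-channel="digest"/>
```

Everything it depends on (the ntfy topic, the Matrix client, the cluster) is already there.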
The same pattern holds across all three apps. The health tracker produces InfluxDB data. The translator produces Matrix messages. The camera watcher produces ntfy events. Each output is something another route can consume. The routes compound on each other.
What this means in practice is that the ceiling is not complexity or integration friction; it’s compute. Adding a new route costs nothing in subscription fees or API quotas, and gives up nothing in data sovereignty. On a cluster with local GPU inference, the only real question becomes whether I want the thing, not whether building it is worth the overhead. And the more I build, the easier it gets to add something new. The Matrix client, the LLM endpoint, and the ntfy connection are already running; a new route inherits all of it.
The next step is making the routes themselves even easier to create. That’s a topic for a future post.