Configure what your cameras detect — without building a surveillance system. No face recognition. No tracking. No biometrics. By architecture, not settings.
This is the output. All of it.
Notice there are no images on this site. That's intentional. The system outputs semantic events — text that describes what happened. The surveillance data you'd expect from a camera product? That code doesn't exist. Not disabled. Not hidden. Missing.
Your robot vacuum needs to see. Your doorbell needs to see. Your elder care system needs to see. None of them need to identify, track, or remember faces.
The machines of the future will have eyes — they just don't need to surveil.
Privacy by absence, not by setting
Most camera software can be configured for privacy. This software lacks the code to violate it. There's no setting to turn off — the capability was never written.
Motion & object events
"Package detected at front door"
Face recognition
No code for this exists
Cross-camera tracking
No code for this exists
Solo video export
Requires multiple trustees to approve
Three intentional constraints
This isn't a feature gap — it's architecture. Every limitation is a deliberate choice.
Events only, not video
The output is a log of what happened. There's no API to stream or export continuous video.
No biometric code
Face, license plate, and gait recognition aren't disabled — the code was never written.
Multi-party video access
Raw footage requires approval from multiple trustees. No one person can extract video alone.
Why build it this way?
None of this is accidental. Each constraint answers a specific threat.
🔒 Settings can be changed
Every camera system has a "privacy mode." Every one can be turned off by someone with admin access — an employer, a landlord, a government, a hacker.
If the code exists, it can be enabled. We removed the code.
🤖 Machines are getting eyes
Robot vacuums, delivery drones, autonomous vehicles, elder care systems — they all need to see. That doesn't mean they need to identify.
Seeing and surveilling are different capabilities. We built only the first.
⚖️ Today's data is tomorrow's evidence
Video captured for "security" becomes evidence in lawsuits, divorce proceedings, immigration cases, insurance disputes. Footage intended to protect can be weaponized.
The safest data is data that doesn't exist. We output events, not footage.
🏛️ No single point of failure
When one person can export video, you have one person to bribe, coerce, or compromise. Multi-party approval means no single actor — not even the system owner — can extract footage alone.
Trust is distributed, not concentrated.
Same camera. Same question. Different futures.
Watch how "is the coffee ready?" becomes something else entirely.
Six months later, a policy changes...
"We need to identify who's been using the break room excessively."
HR requests usage report
"Generate a list of employees who spend more than 15 minutes in the break room daily."
Legal requests footage
"Lawsuit filed. Preserve all break room footage showing employee interactions."
New vendor integration
"Our wellness platform wants camera data to track employee stress patterns."
Retroactive analysis request
"Apply our new 'productivity scoring' algorithm to the last 6 months of footage."
The first webcam just watched a coffee pot.
In 1991, Cambridge researchers pointed a camera at their break room coffee maker. The only question: is it ready yet?
Thirty years later, that same question requires facial recognition, behavioral tracking, and cloud storage. We think the original idea had it right.
Different tools for different needs
This isn't a replacement for existing systems — it's a different category.
HomeKit Secure Video
Consumer cloud video with E2E encryption
Frigate NVR
Open-source local NVR for Home Assistant
witness-kernel
Semantic events only, no identification
Different goals, not better or worse
HomeKit and Frigate are excellent at what they do. If you need face recognition, plate reading, or easy video export — use them. Witness-kernel exists for cases where those capabilities are liabilities, not features.
How privacy is enforced
Six mechanisms that make surveillance architecturally impossible.
Frame data is private
Raw pixels are inaccessible. Analysis modules only receive abstract representations.
Export requires multiple approvals
No single user can extract video. A configurable quorum of trustees must authorize.
Timestamps are coarse
Events are bucketed to 10-minute windows. Precise timing isn't stored.
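Bucketing to a 10-minute window is just integer arithmetic on the timestamp. A minimal sketch (the function name and exact representation are assumed here, not taken from the source):

```rust
/// Round a Unix timestamp (seconds) down to the start of its
/// 10-minute window. Only the window start is ever stored, so
/// precise timing is discarded before an event is written.
fn coarsen(ts_secs: u64) -> u64 {
    const WINDOW_SECS: u64 = 600; // 10 minutes
    ts_secs - (ts_secs % WINDOW_SECS)
}

fn main() {
    // 12:07:31 (43,651s into the day) falls in the 12:00 window.
    assert_eq!(coarsen(43_651), 43_200);
    println!("window start: {}", coarsen(43_651));
}
```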
Log is tamper-evident
Each event includes a hash of the previous. Modifications break the chain.
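The chaining idea can be sketched in a few lines. This is an illustration, not the project's code: a real log would use a cryptographic hash such as SHA-256, whereas this dependency-free sketch uses the standard library hasher, and the event/log types are assumed.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash an event together with the previous entry's hash.
/// (Illustrative only: DefaultHasher is not cryptographic.)
fn link(prev_hash: u64, event: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev_hash.hash(&mut h);
    event.hash(&mut h);
    h.finish()
}

/// Recompute the chain from the start and check every stored hash.
/// Any edited, inserted, or deleted entry breaks all later links.
fn verify(log: &[(String, u64)]) -> bool {
    let mut prev = 0u64;
    for (event, stored) in log {
        let expected = link(prev, event);
        if expected != *stored {
            return false;
        }
        prev = expected;
    }
    true
}

fn main() {
    let mut log = Vec::new();
    let mut prev = 0u64;
    for e in ["motion at door", "package detected"] {
        prev = link(prev, e);
        log.push((e.to_string(), prev));
    }
    assert!(verify(&log));
    log[0].0 = "EDITED".to_string(); // tampering breaks the chain
    assert!(!verify(&log));
}
```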
Access tokens are single-use
Emergency access tokens work once, for one time window, then expire.
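One way to get single-use semantics in Rust is to make redemption consume the token by value, so a second use is rejected by the compiler rather than at runtime. A sketch with assumed names, not the project's actual API:

```rust
/// Hypothetical emergency access token: valid for exactly one
/// coarse time window, usable exactly once.
struct AccessToken {
    window_start: u64, // the one window this token unlocks
}

impl AccessToken {
    /// Consumes the token; returns whether the requested window matches.
    /// Calling `redeem` twice on the same token does not compile.
    fn redeem(self, requested_window: u64) -> bool {
        self.window_start == requested_window
    }
}

fn main() {
    let t = AccessToken { window_start: 43_200 };
    assert!(t.redeem(43_200));
    // t.redeem(43_200); // error[E0382]: use of moved value: `t`
}
```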
Rules apply forward only
New detection rules cannot be applied to historical data. No retroactive surveillance.
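A forward-only rule can simply record its creation time and refuse to match anything earlier. A sketch under assumed names:

```rust
/// Hypothetical detection rule: it carries the timestamp of its own
/// creation and never matches events from before that moment, so a
/// newly added detector cannot be run over historical data.
struct Rule {
    effective_from: u64, // Unix seconds at rule creation
}

impl Rule {
    fn applies_to(&self, event_ts: u64) -> bool {
        event_ts >= self.effective_from
    }
}

fn main() {
    let rule = Rule { effective_from: 1_700_000_000 };
    assert!(!rule.applies_to(1_699_999_999)); // historical: excluded
    assert!(rule.applies_to(1_700_000_500)); // after creation: in scope
}
```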
Data flow
Camera ──▶ RawFrame ──▶ InferenceView ──▶ Module ──▶ SealedEvent
           (private)    (no pixels)                  (hash-chained)
              │
              ▼
          FrameBuffer ──▶ BreakGlass ──▶ VaultEnvelope
          (30s max)       (N-of-M)       (if authorized)
Don't trust us — check the code
Every claim on this page can be verified by reading the source.
Verify the log can't be tampered with
Run the verification tool to confirm the hash chain is intact.
cargo run --bin log_verify -- --db witness.db
Verify frame data is inaccessible
In src/frame.rs, confirm the raw bytes have no public getter.
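The pattern being claimed looks roughly like this (type and method names here are assumed for illustration, not copied from src/frame.rs): the raw bytes sit in a private field with no getter, so only the abstract view can escape the module.

```rust
mod frame {
    /// Raw pixels. The field is private and there is no public
    /// getter, so code outside this module cannot read the bytes.
    pub struct RawFrame {
        bytes: Vec<u8>,
    }

    /// Abstract representation handed to analysis modules: no pixels,
    /// only derived scalars.
    pub struct InferenceView {
        pub width: u32,
        pub height: u32,
        pub luma_mean: f32,
    }

    impl RawFrame {
        pub fn new(bytes: Vec<u8>) -> Self {
            Self { bytes }
        }

        /// The only way out: an aggregate, not the frame itself.
        pub fn inference_view(&self, width: u32, height: u32) -> InferenceView {
            let sum: u64 = self.bytes.iter().map(|&b| b as u64).sum();
            InferenceView {
                width,
                height,
                luma_mean: sum as f32 / self.bytes.len().max(1) as f32,
            }
        }
    }
}

fn main() {
    let f = frame::RawFrame::new(vec![10, 20, 30]);
    let v = f.inference_view(1, 3);
    // let _ = f.bytes; // error[E0616]: field `bytes` is private
    println!("{}x{}, mean luma {}", v.width, v.height, v.luma_mean);
}
```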
Verify export requires quorum
In src/break_glass.rs, confirm approval counting before token issuance.
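The counting logic might look roughly like this (a sketch with assumed names; the real code in src/break_glass.rs may differ). The important details: approvals are deduplicated per trustee, and no token exists until the quorum is met.

```rust
use std::collections::HashSet;

/// Hypothetical N-of-M break-glass request: a token is issued only
/// once `quorum` distinct trustees have approved.
struct BreakGlass {
    quorum: usize,
    approvals: HashSet<String>, // a set, so duplicate approvals don't count
}

impl BreakGlass {
    fn new(quorum: usize) -> Self {
        Self { quorum, approvals: HashSet::new() }
    }

    fn approve(&mut self, trustee: &str) {
        self.approvals.insert(trustee.to_string());
    }

    /// Some(token) only after the quorum is reached.
    fn token(&self) -> Option<&'static str> {
        (self.approvals.len() >= self.quorum).then(|| "export-token")
    }
}

fn main() {
    let mut bg = BreakGlass::new(2);
    bg.approve("alice");
    assert!(bg.token().is_none()); // one approval is not enough
    bg.approve("alice"); // repeated approval by the same trustee
    assert!(bg.token().is_none()); // still only one distinct approver
    bg.approve("bob");
    assert!(bg.token().is_some()); // quorum reached
}
```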
What's built, what's in progress
This is prototype software. Here's where we are.
Frame isolation types
Hash-chained event log
Break-glass quorum
Event contract enforcement
Cryptographic signatures
RTSP video ingestion
WASM module sandboxing
Encrypted vault envelopes
Read the source. Verify the claims.
Everything on this page is checkable. We're not asking for trust — we're asking for review.
View on GitHub