use cameras to:
detect events
not people

Configure what your cameras detect — without building a surveillance system. No face recognition. No tracking. No biometrics. By architecture, not settings.

This is the output. All of it.

Notice there are no images on this site. That's intentional. The system outputs semantic events — text that describes what happened. The surveillance data on the right? That code doesn't exist. Not disabled. Not hidden. Missing.

Event Output
HOME
Not Captured — No Code Exists
facial_identity:███████
race:███████
gender:███████
age_estimate:███████
body_description:███████
license_plate:███████
gait_signature:███████
tracking_id:███████

Your robot vacuum needs to see. Your doorbell needs to see. Your elder care system needs to see. None of them need to identify, track, or remember faces.

The cameras of the future will have eyes — they just don't need to surveil.

Privacy by absence, not by setting

Most camera software can be configured for privacy. This software lacks the code to violate it. There's no setting to turn off — the capability was never written.

Motion & object events

"Package detected at front door"

Face recognition

No code for this exists

Cross-camera tracking

No code for this exists

Solo video export

Requires multiple trustees to approve

Three intentional constraints

This isn't a feature gap — it's architecture. Every limitation is a deliberate choice.

Events only, not video

The output is a log of what happened. There's no API to stream or export continuous video.

No biometric code

Face, license plate, and gait recognition aren't disabled — the code was never written.

Multi-party video access

Raw footage requires approval from multiple trustees. No one person can extract video alone.

Why build it this way?

None of this is accidental. Here's the reasoning behind each constraint.

🔒 Settings can be changed

Every camera system has a "privacy mode." Every one can be turned off by someone with admin access — an employer, a landlord, a government, a hacker.

If the code exists, it can be enabled. We removed the code.

🤖 Machines are getting eyes

Robot vacuums, delivery drones, autonomous vehicles, elder care systems — they all need to see. That doesn't mean they need to identify.

Seeing and surveilling are different capabilities. We built only the first.

⚖️ Today's data is tomorrow's evidence

Video captured for "security" becomes evidence in lawsuits, divorce proceedings, immigration cases, insurance disputes. Footage intended to protect can be weaponized.

The safest data is data that doesn't exist. We output events, not footage.

🏛️ No single point of failure

When one person can export video, you have one person to bribe, coerce, or compromise. Multi-party approval means no single actor — not even the system owner — can extract footage alone.

Trust is distributed, not concentrated.

Same camera. Same question. Different futures.

Watch how "is the coffee ready?" becomes something else entirely.

Traditional System
Full Capture
Break Room Camera
System Output
[09:14] motion_detected zone:coffee_area
[09:14] face_identified: Sarah Chen (94%)
[09:14] badge_correlation: EMP-4821
[09:15] coffee_pot_lifted duration:12s
[09:15] dwell_time: 47s added_to_profile
30-day Storage
847 face events, 12.4 GB of video
witness-kernel
Events Only
Break Room Camera
System Output
[09:10] coffee_pot: EMPTY
[09:10] motion: zone_A activity
[09:20] coffee_pot: BREWING
[09:30] coffee_pot: READY
[09:30] notification: sent_to_channel
30-day Storage
2,341 events, 4.2 MB total

Six months later, a policy changes...

"We need to identify who's been using the break room excessively."

Day 1

HR requests usage report

"Generate a list of employees who spend more than 15 minutes in the break room daily."

Traditional
Report generated: 23 employees flagged, with timestamps, durations, and photos
witness-kernel
Cannot comply. No identity data exists to query.
Day 30

Legal requests footage

"Lawsuit filed. Preserve all break room footage showing employee interactions."

Traditional
6 months of footage preserved. All faces, conversations, behaviors now legal evidence.
witness-kernel
Event log preserved. Shows coffee/motion patterns. No footage to subpoena.
Day 90

New vendor integration

"Our wellness platform wants camera data to track employee stress patterns."

Traditional
API enabled. Facial expressions, posture, interaction frequency now shared with third party.
witness-kernel
Nothing to share. System outputs "coffee ready" — not biometrics.
Day 180

Retroactive analysis request

"Apply our new 'productivity scoring' algorithm to the last 6 months of footage."

Traditional
Analysis complete. Employees scored and ranked by break room behavior. Used in performance reviews.
witness-kernel
Impossible. New rules cannot be applied to historical data. Forward-only by design.
☕ Did you know?

The first webcam just watched a coffee pot.

In 1991, Cambridge researchers pointed a camera at their break room coffee maker. The only question: is it ready yet?

Thirty years later, that same question requires facial recognition, behavioral tracking, and cloud storage. We think the original idea had it right.

Different tools for different needs

This isn't a replacement for existing systems — it's a different category.

Consumer cloud video with E2E encryption

Face recognition: Yes
Object detection: Yes
Video export: Single user
Hash-chained log: No
Open source: No
Strong encryption, but identifies faces. Any account holder can export clips.

Open-source local NVR for Home Assistant

Face recognition: Via add-ons
Object detection: Yes
Video export: Single user
Hash-chained log: No
Open source: Yes
Excellent local-first NVR. Anyone with access can export full video.

Different goals, not better or worse

HomeKit and Frigate are excellent at what they do. If you need face recognition, plate reading, or easy video export — use them. Witness-kernel exists for cases where those capabilities are liabilities, not features.

How privacy is enforced

Six mechanisms that make surveillance architecturally impossible.

I

Frame data is private

Raw pixels are inaccessible. Analysis modules only receive abstract representations.
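
As a sketch of how this can be enforced at the type level: the names RawFrame and InferenceView match the data-flow diagram further down, but the fields and methods here are illustrative, not the actual API.

```rust
// Sketch: pixel data lives in a private field, so code outside this
// module cannot read it. The only way out of a RawFrame is a pixel-free
// view. Fields and methods are illustrative, not the real witness-kernel API.

pub struct RawFrame {
    bytes: Vec<u8>, // private: no getter, no Deref, not serializable
    width: u32,
    height: u32,
}

/// What analysis modules receive: abstract detections, never pixels.
pub struct InferenceView {
    pub detections: Vec<String>, // e.g. "coffee_pot: READY"
}

impl RawFrame {
    /// The one exit from a RawFrame: run inference, discard the pixels.
    pub fn infer(&self) -> InferenceView {
        // (a real module would run a vision model over self.bytes here;
        // this sketch returns a fixed label)
        debug_assert_eq!(self.bytes.len() as u32, self.width * self.height);
        InferenceView { detections: vec!["motion: zone_A activity".into()] }
    }
}

fn main() {
    let frame = RawFrame { bytes: vec![0; 16], width: 4, height: 4 };
    // Outside this module, `frame.bytes` would not compile: the field is private.
    let view = frame.infer();
    println!("{:?}", view.detections);
}
```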

II

Export requires multiple approvals

No single user can extract video. A configurable quorum of trustees must authorize.
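
A minimal sketch of N-of-M approval counting; the BreakGlass name matches the data-flow diagram, but the fields and method names are assumptions, not the code in src/break_glass.rs.

```rust
use std::collections::HashSet;

/// Sketch of N-of-M quorum counting. Names are illustrative.
struct BreakGlass {
    quorum: usize,              // N: approvals required
    trustees: HashSet<String>,  // M: who is allowed to approve
    approvals: HashSet<String>, // distinct trustees who have approved
}

impl BreakGlass {
    fn approve(&mut self, trustee: &str) {
        // Only known trustees count, and duplicates are ignored:
        // HashSet::insert deduplicates repeat approvals.
        if self.trustees.contains(trustee) {
            self.approvals.insert(trustee.to_string());
        }
    }

    /// An export token can be issued only once the quorum is met.
    fn token_available(&self) -> bool {
        self.approvals.len() >= self.quorum
    }
}

fn main() {
    let mut bg = BreakGlass {
        quorum: 2,
        trustees: ["alice", "bob", "carol"].iter().map(|s| s.to_string()).collect(),
        approvals: HashSet::new(),
    };
    bg.approve("alice");
    bg.approve("alice"); // duplicate: still only one approval
    assert!(!bg.token_available());
    bg.approve("bob");
    assert!(bg.token_available());
    println!("quorum met: {}", bg.token_available());
}
```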

III

Timestamps are coarse

Events are bucketed to 10-minute windows. Precise timing isn't stored.
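
The bucketing itself is one line of arithmetic. A sketch, assuming Unix-second inputs and the 10-minute window described above:

```rust
/// Round a timestamp down to its 10-minute window. The window size
/// matches the description above; the function name is illustrative.
fn bucket(unix_secs: u64) -> u64 {
    const WINDOW: u64 = 600; // 10 minutes
    unix_secs - (unix_secs % WINDOW)
}

fn main() {
    // 09:14:37 and 09:18:02 (seconds since midnight) land in the same
    // 09:10 bucket: the precise times are never stored.
    assert_eq!(bucket(33_277), bucket(33_482));
    println!("bucket: {}", bucket(33_277));
}
```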

IV

Log is tamper-evident

Each event includes a hash of the previous. Modifications break the chain.
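
A minimal sketch of the chaining, using the SealedEvent name from the data-flow diagram. A real log would use a cryptographic hash such as SHA-256; std's DefaultHasher stands in here only so the sketch compiles without dependencies.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash-chain sketch. DefaultHasher is NOT cryptographic; it stands in
/// for SHA-256 so this example has no external dependencies.
struct SealedEvent {
    payload: String,
    prev_hash: u64,
    hash: u64,
}

fn seal(payload: &str, prev_hash: u64) -> SealedEvent {
    // Each event's hash covers the previous hash plus its own payload.
    let mut h = DefaultHasher::new();
    prev_hash.hash(&mut h);
    payload.hash(&mut h);
    SealedEvent { payload: payload.to_string(), prev_hash, hash: h.finish() }
}

/// Re-derive every hash; editing any payload breaks the chain from there on.
fn verify(log: &[SealedEvent]) -> bool {
    let mut prev = 0u64;
    for e in log {
        if e.prev_hash != prev || e.hash != seal(&e.payload, prev).hash {
            return false;
        }
        prev = e.hash;
    }
    true
}

fn main() {
    let a = seal("coffee_pot: BREWING", 0);
    let b = seal("coffee_pot: READY", a.hash);
    let mut log = vec![a, b];
    assert!(verify(&log));
    log[0].payload = "coffee_pot: EMPTY".into(); // tamper with history
    assert!(!verify(&log));
    println!("tampering detected");
}
```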

V

Access tokens are single-use

Emergency access tokens work once, for one time window, then expire.
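
Rust's ownership rules can enforce "once" at compile time. A sketch with an illustrative AccessToken type (not the actual API):

```rust
/// Consume-once token sketch: redeeming takes `self` by value, so the
/// type system itself prevents reuse. Window bounds are illustrative.
struct AccessToken {
    window_start: u64, // Unix seconds, inclusive
    window_end: u64,   // Unix seconds, exclusive
}

impl AccessToken {
    /// `self` by value: after one call the token no longer exists.
    fn redeem(self, at: u64) -> bool {
        at >= self.window_start && at < self.window_end
    }
}

fn main() {
    let token = AccessToken { window_start: 100, window_end: 200 };
    assert!(token.redeem(150));
    // token.redeem(160); // would not compile: token was moved (consumed)
    println!("token consumed");
}
```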

VI

Rules apply forward only

New detection rules cannot be applied to historical data. No retroactive surveillance.
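
A sketch of forward-only matching, assuming rules carry an activation time and events carry a coarse time bucket as described in mechanism III (all names illustrative):

```rust
/// Forward-only sketch: a rule records when it was created, and events
/// observed before that moment are simply out of scope for it.
struct Rule {
    name: String,
    active_from: u64, // Unix seconds at rule creation
}

struct Event {
    bucket: u64, // coarse timestamp bucket
    label: String,
}

fn applies(rule: &Rule, event: &Event) -> bool {
    // No retroactive surveillance: historical events never match new rules.
    event.bucket >= rule.active_from
}

fn main() {
    let rule = Rule { name: "loitering".into(), active_from: 1_000 };
    let old = Event { bucket: 400, label: "motion: zone_A activity".into() };
    let new = Event { bucket: 1_200, label: "motion: zone_A activity".into() };
    assert!(!applies(&rule, &old));
    assert!(applies(&rule, &new));
    println!("{}: skips '{}', matches '{}'", rule.name, old.label, new.label);
}
```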

Data flow

Data Flow
Camera ──▶ RawFrame ──▶ InferenceView ──▶ Module ──▶ SealedEvent
            (private)    (no pixels)               (hash-chained)
               │
               ▼
         FrameBuffer ──▶ BreakGlass ──▶ VaultEnvelope
           (30s max)       (N-of-M)       (if authorized)

Don't trust us — check the code

Every claim on this page can be verified by reading the source.

1

Verify the log can't be tampered with

Run the verification tool to confirm the hash chain is intact.

cargo run --bin log_verify -- --db witness.db
2

Verify frame data is inaccessible

In src/frame.rs, confirm the raw bytes have no public getter.

3

Verify export requires quorum

In src/break_glass.rs, confirm approval counting before token issuance.

What's built, what's in progress

This is prototype software. Here's where we are.

Frame isolation types

Hash-chained event log

Break-glass quorum

Event contract enforcement

Cryptographic signatures

RTSP video ingestion

WASM module sandboxing

Encrypted vault envelopes

Read the source. Verify the claims.

Everything on this page is checkable. We're not asking for trust — we're asking for review.

View on GitHub