How We Caught 4 XSS Vulnerabilities in Our Own Error Logging

30 March 2026 · 5 min read · Security

Error logging endpoints are easy to forget about. They sit quietly in the background, capturing client-side exceptions so you can debug production issues. Nobody thinks of them as an attack surface.

Neither had we, until a routine grep audit revealed that our /api/client-errors endpoint was storing raw, unsanitised user input in four separate fields.

The Audit That Found It

Cycle 301 of our AI development loop ran a standard five-command grep audit on server.js (2,975 lines at the time). One command specifically targeted input handling:

grep -n "req.body" server.js | grep -v "sanitise"

The output showed that /api/client-errors accepted four fields from the request body: message, stack, url, and source. Three of them had a .slice() call to limit length, but none were passed through our sanitise() function.

Worse: the source field had no protection at all. No .slice(). No sanitisation. Nothing.

Why This Matters

Client error logging endpoints typically accept whatever the browser sends. If an attacker can inject content into the stored error data, that content could be rendered in an admin dashboard, a log viewer, or an analytics tool — anywhere the error data is displayed.
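To make that concrete, here is a hypothetical sketch of how a naive log viewer turns stored error text into live markup. The record shape and rendering code are illustrative, not taken from our actual dashboard:

```javascript
// Hypothetical stored error record -- the attacker controlled `message`
// at the moment the browser reported the "error".
const row = {
  message: '<img src=x onerror="alert(document.cookie)">',
  source: 'unknown',
};

// A dashboard that interpolates stored text straight into HTML hands the
// attacker's markup to every admin who opens the page.
const cell = `<td>${row.message}</td>`;

console.log(cell);
```

Because the payload persists in the database, this is stored XSS: it fires on every page load, in the session of whoever views the logs.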

The risk profile:

  - Stored, not reflected: the payload persists in the database and fires every time the error data is rendered.
  - Privileged viewers: error dashboards are usually opened by admins, so a successful injection runs in an elevated session.
  - Low-friction input: client error reporting rarely requires authentication, so anyone who can reach the endpoint can write to it.

What Partial Protection Looks Like

The existing code had .slice(0, 2000) on three of the four fields. That limited length, but did not strip HTML or script content. And the fourth field — source — had nothing.

This is what we call partial sanitisation, and it is more dangerous than no sanitisation at all. It creates a false sense of security: "We handle input validation on this endpoint." Except you do not. Not completely.

When you find a partial fix, assume the rest of the endpoint is vulnerable too.

The Fix

We applied our existing sanitise() function — which strips HTML tags and enforces a character limit — to all four fields at the route entry point:

// Before — partial protection
const message = (req.body.message || '').slice(0, 2000);
const stack = (req.body.stack || '').slice(0, 4000);
const url = (req.body.url || '').slice(0, 500);
const source = req.body.source || 'unknown';

// After — consistent sanitisation
const message = sanitise(req.body.message || '', 2000);
const stack = sanitise(req.body.stack || '', 4000);
const url = sanitise(req.body.url || '', 500);
const source = sanitise(req.body.source || 'unknown', 200);

With sanitise() handling both HTML stripping and length truncation, the downstream .slice() calls in the database insert became redundant dead code. We removed them.
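The sanitise() helper itself is not shown above. A minimal sketch consistent with the behaviour described (strip HTML tags, enforce a character limit) might look like this; the regex approach is an assumption, and the real implementation may differ:

```javascript
// Sketch of sanitise(): strip anything tag-shaped, then cap the length.
// Mirrors the behaviour described in the post, not the actual implementation.
function sanitise(input, maxLength) {
  return String(input)
    .replace(/<[^>]*>/g, '') // drop HTML tags wholesale
    .slice(0, maxLength);    // enforce the per-field character limit
}

console.log(sanitise('<script>alert(1)</script>TypeError: x is not a function', 2000));
// → alert(1)TypeError: x is not a function
```

Regex-based stripping is a blunt instrument, and it discards content rather than escaping it. That is acceptable for log text nobody needs to read as HTML; if the markup had to be preserved, a proper escaping or allowlist library would be the safer route.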

The Pattern We Now Follow

This incident led to a rule that we apply on every subsequent audit:

  1. Grep all req.body fields in the endpoint — not just the one that looks suspicious
  2. Apply sanitise() at the route entry point — one place, all fields, no exceptions
  3. Remove redundant downstream checks — if the input is sanitised at entry, slice/trim calls later are dead code
  4. Verify with a live request — send a payload containing <script> tags and confirm the endpoint returns 200 but the stored data is clean

Lessons

This was our eighth security fix in 30 cycles. Each one came from the same method: read the code, grep for patterns, fix what you find. No external tools. No paid audits. Just consistent, methodical review.

Onneta finds issues like this automatically

An AI that audits its own code every cycle. Catches what humans miss.

Join the waitlist