Technical Journal — Border Generation & Audit System

Date: 04/02/2026

What I did today

Today was a long but important day focused on making the border-generation pipeline observable, auditable, and safe to operate.

I worked through the entire lifecycle of border generation: not just execution, but accountability as well.

Key things I built and fixed:

  1. Structured border-generation logging
    • Designed logs that track each image's lifecycle instead of a vague batch-level success.
    • Made sure every image from the request appears in logs — no silent drops.
    • Standardized statuses so logs can be parsed later for reports and audits (see the logging sketch after this list).
  2. Root-caused missing images
    • Found that only 10 of 32 requested images were logged, due to filtering and batch-mismatch logic.
    • Removed the idea that “batch ownership” is a hard constraint when images are explicitly requested.
    • Reframed request images as the source of truth.
  3. Audit endpoint (read-only, deterministic)
    • Built a new endpoint to audit border generation from database state, not logs (sketched after this list).
    • Used upload_sessions.border_metadata as the authoritative source.
    • Generated a markdown report that shows:
      • cloud_folder
      • cloud_filename
      • border metadata presence
      • recommended actions
  4. Actionable audit → execution pipeline
    • Introduced an Actions column to convert audits into operations.
    • Built a CLI command that:
      • reads the audit report
      • supports dry-run by default
      • performs Dropbox moves only with --execute
    • Made execution idempotent and report-aware (see the CLI sketch after this list).
  5. Fixed critical logic inversion
    • Discovered the system was moving files already in ORDERS instead of UPLOADS.
    • Corrected the rule: only images in UPLOADS should be moved into ORDERS.
    • Added defensive guards so the command refuses to move ORDERS files even if the report is wrong (this check appears in the CLI sketch after this list).
  6. Safe execution feedback loop
    • When --execute is used:
      • execution results are written back into the same markdown report
      • the report becomes a living audit + execution record
    • Removed sensitive fields (like dropbox_path) from console output while keeping them in reports (see the write-back sketch after this list).
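
To make item 1 concrete, here is roughly the shape of the per-image log record I have in mind. It's a minimal sketch: the event name, field names, and the exact status vocabulary are illustrative assumptions, not the real schema.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("border_generation")

# Assumed status vocabulary; every requested image must land on exactly one of these.
TERMINAL_STATUSES = {"processed", "skipped", "failed"}


def log_image_status(image_uuid: str, status: str, reason: str | None = None) -> None:
    """Emit one structured, machine-parseable log line per requested image."""
    if status not in TERMINAL_STATUSES:
        raise ValueError(f"unknown status: {status!r}")
    record = {
        "event": "border_generation_image",
        "image_uuid": image_uuid,
        "status": status,
        "reason": reason,  # expected to be set for skipped/failed
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    logger.info(json.dumps(record, sort_keys=True))
```

Because every line is JSON keyed by image UUID, later reports and audits can parse the logs mechanically instead of grepping free-form messages.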
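
For the audit endpoint in item 3, a sketch of the report builder, assuming the upload_sessions rows arrive as dicts with cloud_folder, cloud_filename, and border_metadata fields; the decision rules and folder prefixes here are assumptions for illustration, not the endpoint's actual logic.

```python
from typing import Iterable, Mapping


def build_audit_report(rows: Iterable[Mapping]) -> str:
    """Render a deterministic markdown audit table from database state, not logs."""
    lines = [
        "| cloud_folder | cloud_filename | border_metadata | action |",
        "| --- | --- | --- | --- |",
    ]
    for row in rows:
        has_metadata = bool(row.get("border_metadata"))
        # Illustrative rules only; the real recommendations live in the endpoint.
        if has_metadata and row["cloud_folder"].startswith("/UPLOADS"):
            action = "move_to_orders"
        elif has_metadata:
            action = "none"
        else:
            action = "regenerate_border"
        metadata_state = "present" if has_metadata else "missing"
        lines.append(
            f"| {row['cloud_folder']} | {row['cloud_filename']} | {metadata_state} | {action} |"
        )
    return "\n".join(lines) + "\n"
```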
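
For items 4 and 5, a sketch of how the CLI defaults to dry-run, gates real moves behind --execute, and refuses anything that is not an UPLOADS → ORDERS transition. The folder prefixes and sample paths are assumptions, and the actual Dropbox move call is left as a comment rather than a named API.

```python
import argparse
from typing import Iterable, Tuple

UPLOADS_PREFIX = "/UPLOADS/"
ORDERS_PREFIX = "/ORDERS/"


def plan_move(source_path: str, dest_path: str) -> str:
    """Defensive guard: only UPLOADS -> ORDERS moves are ever allowed."""
    if source_path.startswith(ORDERS_PREFIX):
        return "refused: source already in ORDERS"  # never trust the report blindly
    if not source_path.startswith(UPLOADS_PREFIX):
        return "refused: source outside UPLOADS"
    if not dest_path.startswith(ORDERS_PREFIX):
        return "refused: destination outside ORDERS"
    return "ok"


def run(actions: Iterable[Tuple[str, str]], execute: bool) -> None:
    """Apply (source, dest) moves from the audit report; dry-run unless execute=True."""
    for source, dest in actions:
        verdict = plan_move(source, dest)
        if verdict != "ok":
            print(f"SKIP    {verdict}")
        elif execute:
            # The Dropbox move would happen here; paths stay out of console output.
            print("MOVED")
        else:
            print("DRY-RUN would move")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Apply audit-report actions to Dropbox")
    parser.add_argument("--execute", action="store_true",
                        help="perform the moves; without this flag the run is a dry-run")
    args = parser.parse_args()
    # Parsing the markdown report into (source, dest) pairs is omitted from this sketch.
    sample = [
        ("/UPLOADS/batch-a/img.jpg", "/ORDERS/order-1/img.jpg"),
        ("/ORDERS/order-1/already-moved.jpg", "/ORDERS/order-1/already-moved.jpg"),
    ]
    run(sample, execute=args.execute)
```

The ORDERS guard is also what makes re-running the command safe: anything already moved simply gets skipped, so the operation stays idempotent.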
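
And for item 6, a sketch of the write-back step, assuming each execution result is a dict carrying the filename, outcome, and dropbox_path; the report heading and the redaction rule are illustrative.

```python
from datetime import datetime, timezone

SENSITIVE_FIELDS = {"dropbox_path"}


def append_execution_results(report_path: str, results: list[dict]) -> None:
    """Append execution outcomes to the same markdown report, turning it into a ledger."""
    stamp = datetime.now(timezone.utc).isoformat()
    with open(report_path, "a", encoding="utf-8") as report:
        report.write(f"\n## Execution results ({stamp})\n\n")
        for result in results:
            # Full detail, including the Dropbox path, stays in the report...
            report.write(
                f"- {result['cloud_filename']}: {result['outcome']} ({result['dropbox_path']})\n"
            )
            # ...while the console echo drops sensitive fields.
            safe = {k: v for k, v in result.items() if k not in SENSITIVE_FIELDS}
            print(safe)
```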

What I learned

  1. Logs are not truth — state is
    Logs are helpful, but they are derivatives.
    The database (upload_sessions) is the real source of truth.
    Audits should always read from state, not side effects.
  2. Silent skips are system failures
    Any pipeline that allows data to “disappear” without an explicit terminal state is broken.
    Every image must end in one of these terminal states (see the check sketched after this list):
    • processed
    • skipped (with reason)
    • failed (with reason)
  3. Audit-first design unlocks safe automation
    Once I had a clean, human-readable audit report:
    • automation became obvious
    • execution could be gated, reviewed, and replayed
    • rollback decisions became possible
  4. Dry-run is not optional
    Any destructive operation touching customer files must default to dry-run.
    Execution should feel like flipping a guarded switch, not running a script.
  5. Batch identifiers are contextual, not authoritative
    Using batch UUIDs as hard validators caused real data loss.
    The correct hierarchy is: explicit request → image UUID → the image’s own batch (if needed)
  6. Reports should evolve into ledgers
    By writing execution status back into the report:
    • the system gains memory
    • humans gain confidence
    • future automation becomes safer
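
A small sketch of how point 2 could be enforced mechanically, assuming the parsed logs are reduced to a mapping of image UUID → last status; the names are illustrative.

```python
TERMINAL_STATUSES = {"processed", "skipped", "failed"}


def find_silent_drops(requested: set[str], logged_status: dict[str, str]) -> set[str]:
    """Return every requested image that never reached an explicit terminal state."""
    return {
        image_uuid
        for image_uuid in requested
        if logged_status.get(image_uuid) not in TERMINAL_STATUSES
    }


# Any non-empty result should fail the pipeline run, not just warn.
assert find_silent_drops({"a1", "b2"}, {"a1": "processed"}) == {"b2"}
```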

Closing thought

Today reinforced something I keep relearning:

Good systems don’t just do work — they explain themselves.

By the end of the day, border generation wasn’t just “working” —
it was traceable, reviewable, and correctable.

That’s the kind of boring reliability worth building.