Local Privacy Firewall - Blocks PII and Secrets Before LLMs See Them
Published: Dec 9, 2025 16:10 · 1 min read · Hacker News
Analysis
This Hacker News post describes a Chrome extension that protects user privacy when interacting with large language models (LLMs) such as ChatGPT and Claude. The extension acts as local middleware, scrubbing personally identifiable information (PII) and secrets from prompts before they are sent to the LLM. Detection combines regex matching with a local BERT model served by a Python FastAPI backend, so prompt text never leaves the machine during the check. The project is an early prototype, and the developer is seeking feedback on UX, detection quality, and the local-agent approach. The roadmap includes potentially moving inference into the browser via WASM to improve performance and reduce setup friction.
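The post itself contains no code, but the two-stage detection it describes is easy to picture. Below is a minimal sketch of that idea in Python; the regex patterns, placeholder labels, and the dslim/bert-base-NER model are illustrative assumptions, not the project's actual choices.

```python
# Sketch of two-stage PII/secret scrubbing: regex for structured values,
# a local BERT NER model for unstructured ones. All patterns and the model
# name are assumptions for illustration, not the extension's real config.
import re

from transformers import pipeline

# Stage 1: regex for secrets and PII with predictable shapes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "AWS_ACCESS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_with_regex(text: str) -> str:
    """Replace every regex match with a typed placeholder, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Stage 2: a local NER model catches unstructured PII (names, organizations,
# locations) that regex cannot describe.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

def scrub_with_ner(text: str) -> str:
    """Replace each detected entity span with its entity-group label."""
    # Replace from the end of the string so earlier offsets stay valid.
    for ent in sorted(ner(text), key=lambda e: e["start"], reverse=True):
        text = text[:ent["start"]] + f"[{ent['entity_group']}]" + text[ent["end"]:]
    return text

print(scrub_with_ner(scrub_with_regex(
    "Email john@acme.com, key AKIAABCDEFGHIJKLMNOP, ask for Jane Smith."
)))
```

Running regex first is the cheap pass; the BERT pass then only has to handle the fuzzier, free-text entities.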
Key Takeaways
- A Chrome extension that acts as a local privacy firewall.
- Intercepts prompts and scrubs PII and secrets before they are sent to LLMs.
- Uses regex plus a local BERT model for detection.
- Runs entirely locally: prompts are checked against a local FastAPI agent (sketched below), and no data is sent to a remote server.
- Early prototype; the developer is seeking feedback on UX and detection quality.
- Roadmap includes moving inference into the browser using WASM.
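For the local-agent side, here is a hedged sketch of what such a FastAPI endpoint might look like. The /scrub route, port, and request schema are assumptions, and it reuses the scrub helpers from the sketch above; the post does not specify the actual API.

```python
# Hypothetical local-agent endpoint the extension could call before a
# prompt leaves the browser. Route name, port, and schema are assumed.
from fastapi import FastAPI
from pydantic import BaseModel

# scrub_with_regex / scrub_with_ner are the helpers sketched earlier.

app = FastAPI()

class Prompt(BaseModel):
    text: str

@app.post("/scrub")
def scrub(prompt: Prompt) -> dict:
    # Cheap regex pass first, then the BERT pass on what remains.
    cleaned = scrub_with_ner(scrub_with_regex(prompt.text))
    return {"text": cleaned}

# Run with: uvicorn agent:app --port 8787
# The extension would POST each draft prompt to http://127.0.0.1:8787/scrub
# and substitute the returned text before the request reaches the LLM.
```

Keeping detection behind a localhost HTTP call is what the WASM roadmap item would eliminate: moving the model into the browser removes the separate Python process and its setup friction.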
Reference
“The Problem: I need the reasoning capabilities of cloud models (GPT/Claude/Gemini), but I can't trust myself not to accidentally leak PII or secrets.”