Free Web Apps · Built on Foundry Local

Run AI Web Apps
On Your Own Device

Three fully functional AI applications that run entirely on your hardware — no cloud, no API keys, no subscriptions. Just download, install Foundry Local, and launch.

3 Apps Available
$0 Running Cost
100% Data Stays Local
1 cmd To Start

Available Apps

App 01 of 03
📷
POE Photo Processor
PRIVACY · FACE BLUR · EXIF OVERLAY
Face Detection · Privacy Protection · EXIF Metadata · AI Captions · Batch Processing · Foundry Local

A privacy-first event photo processor that automatically blurs all faces in your photos — except yours. Upload your selfie once, drop your event photos, and get processed images with date/time/location overlays. Your photos never leave your machine.

  • Upload your selfie — your face is automatically kept sharp, all others blurred
  • Adjustable similarity threshold, blur strength & face padding
  • EXIF date, time & GPS location overlaid on each photo automatically
  • AI-generated captions via Foundry Local (Phi-4-Mini, Qwen2.5)
  • Batch process entire event galleries — download as ZIP
  • Supports JPG, PNG, HEIC — all processing runs in-browser
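The "keep your face sharp" decision above can be sketched as an embedding comparison: the app compares each detected face against your selfie and blurs only faces that fall below the similarity slider. This is a hypothetical sketch (the app's actual detector, embedding model, and default threshold are not documented here):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def should_blur(face_emb, selfie_emb, threshold=0.6):
    """Blur a detected face only if it is NOT similar enough to your selfie.
    The 0.6 default stands in for the app's adjustable threshold slider."""
    return cosine_similarity(face_emb, selfie_emb) < threshold

print(should_blur([1.0, 0.0], [1.0, 0.0]))  # → False (your face, kept sharp)
print(should_blur([1.0, 0.0], [0.0, 1.0]))  # → True (someone else, blurred)
```

Raising the threshold makes matching stricter (more faces blurred); lowering it is more permissive.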
[App preview: selfie upload ("✓ Will NOT be blurred"), Threshold and Blur Strength sliders, Foundry AI model phi-4-mini-npu, before/after face-blur comparison, ⚡ Process All and ⬇ Download ZIP buttons]
App 02 of 03
✍️
LocalPen — Private AI Writer
WRITING ASSISTANT · 100% LOCAL · ZERO TELEMETRY
Writing Assistant · 100% Local · Privacy First · Streaming · Foundry Local

A distraction-free writing assistant that uses your locally running AI model to improve, expand, shorten, or formalize any selected text. Select text in the editor, pick a mode, hit Run — suggestions start streaming back in under 50 ms. Zero cloud, zero telemetry.

  • Select any text and apply AI modes: Improve, Expand, Shorten, Formalize, Custom
  • Suggestions stream back in real-time — no waiting for full response
  • Configurable endpoint, model alias, and max output tokens
  • Privacy indicator — shows connection status and confirms all calls stay on localhost
  • Accept suggestion directly into editor or regenerate in one click
  • Works with any Foundry Local model (Phi-4-Mini NPU recommended)
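The real-time streaming behavior follows the standard OpenAI streaming wire format that Foundry Local emits (server-sent `data:` lines ending with `data: [DONE]`). A minimal parser for those chunks, sketched here as an illustration rather than LocalPen's actual code:

```python
import json

def parse_sse_chunks(raw: str):
    """Yield text deltas from OpenAI-style streaming lines (`data: {...}`)."""
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data: ") or line == "data: [DONE]":
            continue
        delta = json.loads(line[len("data: "):])["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

sample = (
    'data: {"choices":[{"delta":{"content":"Hel"}}]}\n'
    'data: {"choices":[{"delta":{"content":"lo"}}]}\n'
    'data: [DONE]\n'
)
print("".join(parse_sse_chunks(sample)))  # → Hello
```

Because each delta is yielded as it arrives, the editor can render tokens immediately instead of waiting for the full response.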
[App preview: editor with ● LOCAL indicator streaming sample text about Foundry Local, mode buttons ✦ Improve · → Expand · ← Shorten · ↑ Formalize · ✎ Custom, endpoint localhost:60084]
App 03 of 03
🎨
Atelier — Local Image Studio
STABLE DIFFUSION · AUTOMATIC1111 · IMAGE GENERATION
Stable Diffusion · Juggernaut Reborn · DreamShaper 8 · Hires. Fix · Prompt Chips · Batch Download

A polished local image generation studio powered by Automatic1111 and Stable Diffusion. Switch between Juggernaut Reborn (photorealism) and DreamShaper 8 (artistic), build prompts with style chips, and control every parameter — all images generated on your hardware at zero cost per image.

  • Juggernaut Reborn for photorealistic portraits, landscapes, and product shots
  • DreamShaper 8 for artistic, fantasy, and concept art styles
  • Quick-style chips: RAW photo, cinematic, oil painting, cyberpunk, golden hour & more
  • Full sampler control: DPM++ 2M Karras, SDE Karras, Euler a, DDIM, UniPC
  • Resolution presets, CFG scale, steps, seed, clip skip controls
  • Hires. fix (2× upscale) and face restore (CodeFormer) in one toggle
[App preview: ⬡ LOCAL badge, model switcher (Juggernaut, photorealistic / DreamShaper 8), Steps 28, prompt with style chips (RAW photo · cinematic · oil painting), ⚡ Generate (Ctrl+Enter), served at 127.0.0.1:7860]
Step-by-Step Setup Guide
Get any app running on your device in under 5 minutes
🕐 ~5 min setup
📷
POE Photo Processor
Face blur · Privacy · EXIF overlay
✍️
LocalPen AI Writer
Writing assistant · Streaming · Local
🎨
Atelier
Image generation · Stable Diffusion
1
Install Foundry Local

Foundry Local is Microsoft's runtime for running AI models on your own hardware. It installs a background service that exposes an OpenAI-compatible API on your machine.

# Windows (PowerShell or Terminal as normal user, no admin needed)
winget install Microsoft.FoundryLocal

# macOS (Terminal)
brew tap microsoft/foundry
brew install foundry-local
Verify: foundry --version should print the version number
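Because the service speaks the OpenAI wire format, any HTTP client can talk to it. A minimal sketch of building a chat-completions request — port 5272 and the phi-4-mini alias are the defaults mentioned in this guide, so verify yours with foundry service status and foundry model list:

```python
import json

def build_chat_request(prompt: str, model: str = "phi-4-mini",
                       port: int = 5272, max_tokens: int = 256):
    """Return (url, body) for a Foundry Local chat completion.
    POST the body with any HTTP client (urllib, requests, fetch...)."""
    url = f"http://localhost:{port}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })
    return url, body

url, body = build_chat_request("Write a one-line photo caption.")
print(url)  # → http://localhost:5272/v1/chat/completions
```

Swap in your machine's actual port and model alias before sending the request.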
2
Download & Run a Model

POE Photo Processor uses Foundry Local for AI caption generation. The Phi-4-Mini NPU model is recommended — small, fast, and runs on the dedicated NPU in Copilot+ PCs. If you don't have an NPU, it falls back to CPU automatically.

# Download and start the recommended model
foundry model run phi-4-mini

# Or list all available models first
foundry model list
You'll see: ✓ Server ready on localhost:5272 — the AI endpoint is now live
⚠️
Note: AI captions are optional in the POE Photo Processor. Face detection and blurring work without Foundry Local — they run entirely in your browser using WebAssembly. You only need Foundry Local if you want AI-generated captions on your photos.
3
Download the App File

The POE Photo Processor is a single self-contained HTML file. No installation, no server needed — just open it in your browser.

📄
Single HTML File
Everything bundled in one file — no dependencies
🔌
No Server Needed
Open directly in Chrome, Edge, Firefox, or Safari
🔒
Fully Offline
Photos never leave your device — all processing in-browser
⬇ Download POE Photo Processor
4
Open the App in Your Browser

Double-click the downloaded HTML file — it opens directly in your default browser. No localhost server required for the app itself.

# Windows
# Option A — double-click the file in Explorer
# Option B — drag & drop onto your browser window
# Option C — right-click → Open with → Chrome / Edge

# Or open from terminal
start poe_photo_processor_v2_22.html

# macOS
# Option A — double-click the file in Finder
# Option B — drag & drop onto your browser
# Option C — right-click → Open With → Chrome / Safari

# Or open from terminal
open poe_photo_processor_v2_22.html
The app opens with a photo drop zone, sidebar controls, and "Loading models…" briefly as face detection loads
5
Use the App

Three steps to process your event photos:

Step 1 — Upload your selfie in the left sidebar
→ Your face will be recognized and KEPT sharp in all photos

Step 2 — Drop your event photos into the main area
→ JPG, PNG, HEIC all supported · EXIF data read automatically

Step 3 — Click ⚡ Process All
→ All faces blurred except yours · date/time/GPS overlaid
→ Click ⬇ Download All to save as ZIP
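The date/time/GPS overlay stamped on each processed photo can be sketched as simple string formatting over the EXIF fields the app reads. The field names and separator below are assumptions for illustration, not the app's actual internals:

```python
def overlay_text(exif: dict) -> str:
    """Format the overlay line from EXIF-derived fields.
    Keys ("date", "time", "gps") are hypothetical; missing fields are skipped."""
    parts = []
    if exif.get("date"):
        parts.append(exif["date"])
    if exif.get("time"):
        parts.append(exif["time"])
    if exif.get("gps"):
        lat, lon = exif["gps"]
        parts.append(f"{lat:.4f}, {lon:.4f}")
    return " · ".join(parts)

print(overlay_text({"date": "2024-06-01", "time": "18:32",
                    "gps": (47.6062, -122.3321)}))
# → 2024-06-01 · 18:32 · 47.6062, -122.3321
```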
⚠️
AI Captions: If you want AI captions, make sure Foundry Local is running (foundry model run phi-4-mini) before clicking Process All. The sidebar shows connection status.
Ready to try it?
Open the app directly in your browser — the face blur features work immediately, even without Foundry Local running.
🚀 Launch App →
1
Install Foundry Local

LocalPen requires Foundry Local to be running — all AI text processing calls your local model. Nothing is ever sent to the cloud.

# Windows (PowerShell or Terminal)
winget install Microsoft.FoundryLocal

# macOS (Terminal)
brew tap microsoft/foundry
brew install foundry-local
Verify: foundry --version should print the installed version
2
Start a Model

LocalPen works best with Phi-4-Mini NPU for fast, responsive writing suggestions. On a Copilot+ PC this runs on the dedicated NPU — near-instant responses. On any other device it uses the GPU or CPU.

# Start the recommended model for writing tasks
foundry model run phi-4-mini
✓ Downloading phi-4-mini (INT4, 2.5GB)...
✓ Optimizing for your hardware...
✓ Server ready on localhost:5272
⚠️
Important: LocalPen connects to port 60084 by default (the Foundry Local NPU port). If your model started on port 5272, open Settings in the app and change the port to 5272.
# Check which port Foundry Local is using
foundry service status
# Note the port number shown — use it in the app settings
3
Download the App File

LocalPen is a single HTML file. Download it and open in any modern browser. No installation, no server, no build step.

⬇ Download local-ai-writer.html
Chrome / Edge
Best experience — streaming works perfectly
Firefox
Fully supported
Safari
Works on macOS 14+ and iOS 17+
4
Open & Configure the App

Double-click the HTML file to open it. Then configure the connection settings to match your running Foundry Local instance.

# Windows
start local-ai-writer.html
# Or double-click in File Explorer

# macOS
open local-ai-writer.html
# Or double-click in Finder

In the app, click ⚙ Settings (top right) and configure:

Port:  5272    # or 60084 if using NPU endpoint
Model: phi-4-mini
       (copy exact alias from foundry model list)
Max tokens: 512   # increase for longer text expansions
After clicking Save & Test, the status indicator in the toolbar turns green and shows "Connected"
5
Write & Use AI Assist

Start writing in the editor. Select any text you want AI to work on, then choose a mode from the sidebar and click Run.

1. Type or paste your text in the editor

2. Select the text you want AI to improve
   (click & drag, or Ctrl+A for all)

3. Choose a mode in the right sidebar:
   ✦ Improve  — clarity and flow
   → Expand   — add detail and context
   ← Shorten  — remove filler, be concise
   ↑ Formalize — professional / business tone
   ✎ Custom   — describe what you want

4. The suggestion streams into the panel below
   → Apply   replaces your selection in the editor
   ↺ Retry   generates a new suggestion
Everything stays on your machine
The privacy indicator in the toolbar confirms every AI request goes to localhost — never the cloud. Your writing is never shared.
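The kind of check behind such a privacy indicator can be sketched as validating that the configured endpoint points at the local machine before any request is sent. A hypothetical sketch, not LocalPen's actual implementation:

```python
from urllib.parse import urlparse

# Hostnames that resolve to this machine; anything else fails the check.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def is_local(endpoint: str) -> bool:
    """True only if the endpoint URL targets the local machine."""
    return urlparse(endpoint).hostname in LOCAL_HOSTS

print(is_local("http://localhost:5272/v1"))    # → True
print(is_local("https://api.example.com/v1"))  # → False
```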
🚀 Launch App →
1
Install Prerequisites — Python & Git

Automatic1111 needs Python 3.10.6 and Git. Install them before running the startup script.

# Windows — download and install:
# Python 3.10.6 (check "Add Python to PATH")
https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe
# Git
https://git-scm.com/download/win

# Verify in a new PowerShell window:
python --version
git --version

# macOS
brew install python@3.10 git
2
Download the App & Startup Scripts

Download all three files into the same folder (e.g. C:\Atelier\). The script clones Automatic1111 into a subfolder automatically on first run.

Your folder should look like:
Atelier/
├── atelier.html
├── START_A1111.bat     ← Windows
└── start_a1111.sh      ← Mac / Linux
3
Download the Models from Civitai

Download both models and copy the .safetensors files to stable-diffusion-webui/models/Stable-diffusion/.

📸
Juggernaut Reborn
🎨
DreamShaper 8
💾
10 GB disk free
Models + A1111 installation
4
Start the A1111 Server

Run the startup script. First run takes 5–15 min — it clones Automatic1111, installs Python deps, and starts with API + CORS enabled. Subsequent runs take ~30 sec.

# Windows
cd C:\Atelier
.\START_A1111.bat
# SmartScreen warning → More info → Run anyway

# macOS / Linux
chmod +x ~/atelier/start_a1111.sh
bash ~/atelier/start_a1111.sh
Wait for: Running on local URL: http://127.0.0.1:7860 — keep the terminal open
5
Open Atelier & Generate

Double-click atelier.html or open it from terminal. It connects automatically to 127.0.0.1:7860.

1. Select model — Juggernaut Reborn or DreamShaper 8
2. Type or click prompt chips to build your prompt
3. Adjust steps, CFG, resolution as needed
4. Press ⚡ Generate (or Ctrl+Enter)
→ Images appear in the gallery · ↓ Save all as ZIP
Status bar shows "Connected · juggernaut_reborn.safetensors" — you're ready
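Under the hood, a client like Atelier talks to Automatic1111's txt2img API (available because the startup script enables the API). A sketch of the request body for POST http://127.0.0.1:7860/sdapi/v1/txt2img, using example values from this guide — tune steps and cfg_scale per model:

```python
import json

def txt2img_payload(prompt: str, steps: int = 28, cfg_scale: float = 7.0,
                    width: int = 768, height: int = 768,
                    sampler_name: str = "DPM++ 2M Karras",
                    seed: int = -1) -> str:
    """Serialize a txt2img request body; seed=-1 asks A1111 for a random seed."""
    return json.dumps({
        "prompt": prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "width": width,
        "height": height,
        "sampler_name": sampler_name,
        "seed": seed,
    })

body = txt2img_payload("RAW photo, portrait, cinematic lighting, sharp focus")
```

The response contains base64-encoded images, which a client decodes and renders in its gallery.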
Full setup guide
For advanced options, GPU VRAM requirements, and troubleshooting see the dedicated Atelier guide.
Full Guide →

Troubleshooting

Common Issues & Fixes

Most issues are caused by the model not running or a port mismatch. Here's how to fix them.

❌ App shows "Not connected"

Foundry Local is not running or the port is wrong.

# Start Foundry Local
foundry model run phi-4-mini
# Check the port in app Settings
# Default: 5272 or 60084 (NPU)
❌ "CORS error" in browser console

Open the file via a local server instead of double-clicking.

# Python (any OS)
python -m http.server 8080
# Then open: http://localhost:8080/app.html
❌ Face detection not loading

The face detection model requires internet for first load (downloads ~5MB WebAssembly). After first load, it caches in your browser.

# Check browser console for errors
# Try: Chrome or Edge (best WebAssembly support)
# Ensure you're not in Private/Incognito mode
❌ Slow AI responses

Large model loaded, or running on CPU. Switch to Phi-4-Mini for fast responses.

# Use the smallest fast model
foundry model run phi-4-mini
# Copilot+ PC NPU = fastest
# GPU = fast · CPU = slower but works
❌ Model not found error

The model alias in app settings doesn't match what's loaded.

# List loaded models
foundry model list
# Copy the exact alias shown
# Paste into app Settings → Model
❌ winget not found (Windows)

winget requires Windows 10 (1809+) or Windows 11. Update Windows or install from the Microsoft Store.

# Check Windows version
winver
# Or install App Installer from Microsoft Store
# Search: "App Installer" in Microsoft Store