Prompt Forge Studio: A Prompt Engineering IDE + PaaS Built on Gemini
If you’ve worked with LLMs, you already know the truth:
Most failures aren’t model problems.
They’re prompt problems.
Prompts are often vague, missing constraints, or poorly structured.
In production, that means inconsistent outputs, wasted tokens, and higher latency.
So I built Prompt Forge Studio.
Live:
Note: This is the result of my vibe coding and prompt engineering skills.
What It Is
Prompt Forge Studio is an Advanced Development Environment (ADE) for prompt engineering.
Instead of sending raw text to Gemini, the system:
- Analyzes intent
- Injects structure and constraints
- Routes to the optimal model
- Caches deterministic outputs
- Logs execution telemetry
The goal is simple:
Treat prompts like infrastructure.
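To make the pipeline concrete, here is a rough TypeScript sketch of those stages. All names, heuristics, and model IDs here are illustrative, not the actual Prompt Forge implementation:

```typescript
// Illustrative sketch of the execution pipeline; every identifier is hypothetical.
type Intent = "reasoning" | "generation" | "extraction";

interface ForgedPrompt {
  intent: Intent;
  structured: string; // prompt with injected structure and constraints
  model: string;      // model chosen by the router
}

function analyzeIntent(raw: string): Intent {
  // Toy keyword heuristic standing in for the real intent analyzer.
  if (/why|prove|step[- ]by[- ]step/i.test(raw)) return "reasoning";
  if (/extract|list|parse/i.test(raw)) return "extraction";
  return "generation";
}

function forge(raw: string): ForgedPrompt {
  const intent = analyzeIntent(raw);
  // Inject structure and constraints around the raw prompt.
  const structured = [
    `## Task (${intent})`,
    raw.trim(),
    "## Constraints",
    "- Answer concisely.",
    "- Use the requested output format.",
  ].join("\n");
  // Route reasoning-heavy prompts to a higher-tier model, the rest to a fast one.
  const model = intent === "reasoning" ? "higher-tier-model" : "fast-model";
  return { intent, structured, model };
}
```

Each stage is a pure function over the prompt, which is what makes caching and telemetry straightforward to bolt on afterwards.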
Key Features
• Cognitive Depth Control
Adjust how deeply the system expands and structures your prompt.
• AI Prompt Auditor
Critiques your prompt before execution to improve clarity and reduce token waste.
• Automatic Model Routing
Long or reasoning-heavy prompts → higher-tier model
Simple prompts → fast, low-latency model
• Exact-Match Redis Caching
Repeated requests return in <50ms without calling the LLM.
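For exact-match caching, the interesting part is key derivation: two requests must produce the same key if and only if they are the same request. A minimal sketch, assuming the key is a hash over the version ID plus canonicalized variables (the function name and key format are mine, not the service's):

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: derive an exact-match cache key. A store like Upstash
// Redis would hold the LLM result under this key, so a repeated request can
// return in milliseconds without calling the model.
function cacheKey(versionId: string, variables: Record<string, string>): string {
  // Sort variable names so { a, b } and { b, a } canonicalize identically.
  const canonical = JSON.stringify(
    Object.fromEntries(
      Object.entries(variables).sort(([a], [b]) => a.localeCompare(b))
    )
  );
  return "pf:" + createHash("sha256").update(`${versionId}|${canonical}`).digest("hex");
}
```

Hashing keeps keys short and fixed-length regardless of prompt size, and sorting the variables prevents spurious cache misses from JSON key ordering.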
V2: Prompt as a Service
The platform now exposes:
POST /api/v1/execute
You send:
{
  "version_id": "UUID",
  "variables": { … }
}
The engine validates, checks cache, routes intelligently, executes, and logs performance — all cleanly separated from the web app.
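From the client side, calling the endpoint is a single POST. Here is a hedged TypeScript sketch; the path comes from the post above, but the helper name and base-URL handling are assumptions:

```typescript
// Hypothetical client helper for POST /api/v1/execute.
interface ExecuteRequest {
  version_id: string;
  variables: Record<string, string>;
}

function buildExecuteRequest(body: ExecuteRequest): { url: string; init: RequestInit } {
  return {
    url: "/api/v1/execute",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    },
  };
}

// Usage (assumes a deployed base URL, a real version UUID, and auth headers as required):
// const { url, init } = buildExecuteRequest({ version_id: "UUID", variables: { name: "Ada" } });
// const res = await fetch(base + url, init);
```

Keeping request construction in one helper makes it easy to version the payload shape alongside the prompt versions themselves.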
Built With
- Next.js 15
- React 19 + TypeScript
- Supabase (Postgres)
- Upstash Redis
- Google Gemini SDK
- Clerk Auth
We’re moving toward a world where:
Prompts need versioning.
Prompts need performance optimization.
Prompts need auditability.
I’m building tooling for that future.
Would you use a Prompt PaaS in production?
Feedback welcome.
What’s Next
I’m currently working on:
A dedicated CLI for it. It's almost done; I'm just making it more stable. If you want to try the beta, check it out here:
