Full-Stack AI Application • SaaS • Serverless
Copy Coach is a full-stack AI branding application that transforms a single brand description into production-ready marketing copy and keyword sets — powered by OpenAI GPT, FastAPI, and a serverless AWS Lambda backend. Built before the mainstream explosion of generative AI tooling, the platform demonstrated how LLM APIs could be abstracted into an accessible, single-prompt UX. The Next.js frontend communicates with a Python FastAPI inference layer deployed as an AWS Lambda function, returning structured copy and keywords in real time via API Gateway.
Integrating large language models into a consumer-facing product in 2022 presented distinct engineering challenges before the modern AI tooling ecosystem existed:
Python Lambda functions with large ML dependencies suffered from significant cold start penalties. Initialising the FastAPI app, loading OpenAI client libraries, and establishing API connections on first invocation required careful packaging to stay within acceptable user-facing response times.
Returning consistently structured marketing copy and keyword arrays from a raw LLM required deterministic prompt design. Without modern function-calling APIs, the prompt itself had to enforce output format — requiring iterative refinement to ensure parseable, usable responses across diverse brand inputs.
Bridging a JavaScript/TypeScript frontend with a Python inference backend via AWS API Gateway introduced CORS configuration, request serialisation, and error boundary challenges that needed robust handling across both runtimes.
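The Lambda side of that bridge can be sketched as a small helper that wraps every response in API Gateway's proxy format with explicit CORS headers (a minimal illustration, not the production code; `ALLOWED_ORIGIN` and the function names are assumptions):

```python
import json

# In production this would be the deployed Next.js origin; "*" is shown for brevity.
ALLOWED_ORIGIN = "*"

def proxy_response(status_code: int, payload: dict) -> dict:
    """Wrap a payload in the API Gateway Lambda proxy format with CORS headers."""
    return {
        "statusCode": status_code,
        "headers": {
            "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
            "Access-Control-Allow-Methods": "POST, OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type",
            "Content-Type": "application/json",
        },
        "body": json.dumps(payload),
    }

def handle_preflight() -> dict:
    """Answer the browser's OPTIONS preflight before the real POST is sent."""
    return proxy_response(204, {})
```

Centralising the headers in one helper means the frontend sees consistent CORS behaviour on both success and error paths, which is where cross-runtime integrations usually break.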
Early GPT API pricing was consumption-based with no built-in rate limiting. Input validation, token estimation, and enforcing a 32-character brand input cap (visible in the UI) were critical to controlling API costs while maintaining output quality.
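A minimal sketch of that cost guard, mirroring the 32-character cap server-side and using a coarse characters-per-token heuristic (the function names and the 4-chars-per-token figure are illustrative, not the production values):

```python
MAX_BRAND_CHARS = 32   # mirrors the client-side input cap
CHARS_PER_TOKEN = 4    # coarse heuristic for English text

def validate_brand_input(raw: str) -> str:
    """Reject over-length or empty input before any paid API call is made."""
    brand = raw.strip()
    if not brand:
        raise ValueError("Brand description is required")
    if len(brand) > MAX_BRAND_CHARS:
        raise ValueError(f"Brand description exceeds {MAX_BRAND_CHARS} characters")
    return brand

def estimate_tokens(prompt: str) -> int:
    """Cheap pre-flight token estimate used to budget each completion call."""
    return max(1, len(prompt) // CHARS_PER_TOKEN)
```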
Why Next.js? Server-side rendering for initial page load performance combined with React's component model for a reactive input/output interface. The app's file-based routing and API routes provided a clean separation between UI and any intermediary Node.js logic before handing off to the Lambda function.
Why TailwindCSS? Utility-first styling enabled a polished, dark-themed UI with gradient accents (teal-to-blue brand palette) built rapidly without a component library overhead — keeping the bundle lean for a single-page tool.
Why FastAPI? The async-first Python framework provided automatic OpenAPI documentation, Pydantic request validation, and a thin, performant HTTP layer ideal for wrapping OpenAI API calls. Its low overhead made it a natural fit for the Lambda execution model.
Why AWS Lambda + API Gateway? Zero server management, automatic scaling, and pay-per-invocation pricing suited a tool with variable usage. Lambda's concurrency model absorbed traffic bursts without pre-provisioned infrastructure.
Why structured prompts over raw completion? Before function-calling was available, deterministic output required embedding the output schema directly in the prompt — instructing the model to return copy in a JSON-compatible format with labelled fields (headline, tagline, keywords). This ensured the Node.js/Python parsing layer could reliably extract and present the data to the frontend.
The prompt template was designed to return three distinct output types from a single completion call: a headline, a tagline, and a keyword set.
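An illustrative reconstruction of what such a schema-in-prompt template might look like, using the labelled fields (headline, tagline, keywords) described above; the exact wording is an assumption, not the original template:

```python
# Doubled braces escape the literal JSON shape inside str.format().
PROMPT_TEMPLATE = """You are a branding copywriter.
Given the brand description below, respond with ONLY a JSON object in this
exact shape, with no extra text before or after it:

{{"headline": "<short headline>", "tagline": "<one-line tagline>", "keywords": ["<kw1>", "<kw2>", "<kw3>"]}}

Brand description: {brand}"""

def build_prompt(brand: str) -> str:
    """Embed the output schema directly in the prompt sent to the model."""
    return PROMPT_TEMPLATE.format(brand=brand)
```

Because the schema lives in the prompt itself, any change to the output fields is a change to this template, which is why the project treated the prompt as versioned code.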
Dependencies were carefully layered to minimise cold start impact:
| Approach | Impact |
|---|---|
| Lambda Layers | FastAPI + Pydantic separated from function code — cached independently |
| Slim base image | python3.9 runtime with only required packages — no full ML stack |
| Provisioned Concurrency | Pre-warmed instances for production traffic bursts |
| Async OpenAI client | Non-blocking API calls — Lambda execution time minimised |

| Metric | Result |
|---|---|
| End-to-End Response | < 2s (warm Lambda + GPT completion) |
| Cold Start Overhead | ~800ms (mitigated via Lambda Layers) |
| Input Validation | Client-side 32-char cap + server-side Pydantic schema |
| Output Consistency | Structured prompt → parseable response on 95%+ of calls |
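The parsing side of that consistency figure can be illustrated with a tolerant extractor that pulls the JSON object out of a raw completion and rejects anything missing the labelled fields (a sketch under those assumptions, not the original code):

```python
import json
import re
from typing import Optional

def parse_completion(text: str) -> Optional[dict]:
    """Extract the JSON object from a raw completion, tolerating stray prose.

    Returns None when no usable object is found, so the caller can retry or
    surface a friendly error instead of crashing on a malformed response.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)  # grab the outermost braces
    if not match:
        return None
    try:
        data = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    # Only accept responses carrying every labelled field.
    if all(k in data for k in ("headline", "tagline", "keywords")):
        return data
    return None
```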

| Aspect | Detail |
|---|---|
| Frontend Deploy | Vercel — automatic CI/CD from Git push |
| Backend Deploy | AWS Lambda + API Gateway (serverless) |
| Servers Managed | Zero — fully managed infrastructure |
| Scaling | Automatic Lambda concurrency — no pre-provisioning required |
Built in 2022 — ahead of the mainstream generative AI tooling wave. Copy Coach demonstrated the viability of LLM-powered SaaS before ChatGPT was publicly released.
Lambda's pay-per-invocation model and automatic scaling made it a natural fit for AI inference — cost tracks usage with zero idle overhead.
Before function-calling APIs, achieving consistent structured output required treating the prompt as code — versioned, tested, and iterated like any other system component.
FastAPI's async handlers, automatic validation, and minimal footprint made it an optimal framework for a Lambda function — handling request validation, OpenAI calls, and response serialisation in a single clean layer.
Reducing the entire brand generation workflow to a single 32-character input removed the friction barrier that more complex AI tools introduced — validating the single-prompt paradigm that became industry standard post-2023.
File-based routing, server-side rendering, and zero-config deployment kept development focus entirely on the AI integration, with no infrastructure overhead on the frontend side.
Copy Coach demonstrates early-mover full-stack AI engineering — integrating OpenAI GPT into a production SaaS product before generative AI tooling became mainstream. By combining a serverless FastAPI inference layer on AWS Lambda with a Next.js frontend deployed on Vercel, the platform achieved zero-server-management scalability with sub-2-second AI response times. The structured prompt engineering approach — enforcing parseable output without function-calling APIs — showcases deep understanding of LLM behaviour and practical AI system design.
Built for: AI-Powered SaaS • Branding & Copywriting • Serverless Architecture
Live Platform: copycoach.app
Technologies: Next.js, React, FastAPI, Python, AWS Lambda, API Gateway, OpenAI GPT, TailwindCSS, Vercel