Protocol + Language Agnostic Tooling Yielding Proxy-like Universal Semantics
just write your methods...then call them

From the same method surface, plat can create:

  • 🌐 An API with a full OpenAPI spec and Swagger/ReDoc playground
  • ⌨️ A CLI with full help tooling
  • 🤖 MCP and tool definitions for AI systems
  • 🧩 TypeScript/JavaScript proxies you can call directly
  • 🐍 Python proxies with the same method surface
  • 🛶 Browser-hosted client-side servers for fun live demos like this one
⚠️ WARNING: this demo is probably weirder than you think

What Plat Actually Is

The sacred part

What methods exist, what they accept, and what they return.

The flexible part

Transport, runtime, language, and generated caller surfaces.

The promise

Call it like you’re there, whether you’re in TS, Python, CLI, or an AI tool-call loop.

One Definition, Many Outputs

Server
Where the methods live

Run plat inside a normal Node app and expose your methods over HTTP without changing the class shape.

import { createServer } from "@modularizer/plat"

createServer({}, OrdersApi).listen()

Use the same method-first model in Python, with Python-native validation and server tooling underneath.

from plat import create_server

create_server({}, OrdersApi).listen()

This is the browser-hosted variant. It is great for demos, experiments, and peer-to-peer browser scenarios.

import { serveClientSideServer } from "@modularizer/plat/client-server"

export default serveClientSideServer("browser-math", [DemoApi])

Other runtimes and transports can plug into the same model over time.

Interface
The shared contract

Plat can describe your method surface as a full OpenAPI document, including routes, schemas, summaries, and response shapes.

The same contract can power familiar interactive docs and exploration, so humans get a useful playground from the same source.

Docstrings, examples, and schemas become shared interface metadata instead of one-off docs that drift from the code.
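As an illustration of that projection, here is a hypothetical sketch of how one method's docstring and types might land in an OpenAPI 3 operation object. The field names are standard OpenAPI; the exact literal is illustrative, not plat's verbatim output.

```typescript
// Hypothetical sketch: one method projected into an OpenAPI 3 operation.
const createOrderOperation = {
  operationId: "createOrder",
  summary: "Create a new order", // taken from the method's docstring
  requestBody: {
    content: {
      "application/json": {
        schema: {
          type: "object",
          properties: {
            itemId: { type: "string" },
            qty: { type: "number" },
          },
          required: ["itemId", "qty"],
        },
      },
    },
  },
}

console.log(createOrderOperation.operationId) // "createOrder"
```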

Client
How callers meet it

Call methods from browser or Node code with a proxy that feels local even when the implementation lives elsewhere.

const order = await client.createOrder({ itemId: "sku_123", qty: 2 })

Keep the same method surface in Python callers so TS and Python apps can both speak the same contract.

Expose the same methods as AI tools instead of re-describing capabilities by hand for each model provider.

tools = client.tools  # hand to your AI provider
# then call client.createOrder(...)

Turn the same methods into a command-line interface with built-in help, argument parsing, and discoverable commands.

Core Philosophy

One flat method surface

What you should be thinking about is:

  • what methods exist
  • what input each method accepts
  • what result each method returns

That is the part plat treats as sacred.

await client.getOrder({ id: "ord_123" })
await client.createOrder({ itemId: "sku_123", qty: 2 })

Canonical routes:

  • GET /getOrder
  • POST /createOrder

Why this matters:

  • method names are globally unique and become the canonical route names
  • controllers organize code and docs, not URL hierarchies
  • proxies stay obvious and easy to call
  • CLI commands stay easy to expose and remember
  • MCP servers and AI tools get simple, stable tool names
  • AI systems are less likely to get confused by deep route hierarchies or duplicated naming schemes

Not nested route trees. Not manually synchronized operation names. Just a flat, obvious method surface.
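The naming rule above is simple enough to sketch directly: the method name itself is the canonical route, so there is nothing extra to remember or keep in sync. This is an illustrative sketch of the mapping, not plat's internal code.

```typescript
// Illustrative sketch of the flat-surface naming rule:
// the method name becomes the canonical route, verbatim.
type HttpVerb = "GET" | "POST"

function canonicalRoute(verb: HttpVerb, method: string): string {
  return `${verb} /${method}`
}

console.log(canonicalRoute("GET", "getOrder"))     // "GET /getOrder"
console.log(canonicalRoute("POST", "createOrder")) // "POST /createOrder"
```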

One input object, one result

Each operation should feel tool-shaped:

  • one input object in
  • one result out
  • no path-param-heavy call shape to remember

const order = await client.createOrder({ itemId: "sku_123", qty: 2 })

Why this matters:

  • it maps naturally to JSON schemas and OpenAPI
  • tool calling systems want a single structured input object
  • generated clients stay predictable across languages
  • you pass an object that matches the method, instead of mentally juggling route params, query params, and body params as separate things

That shape works unusually well across HTTP APIs, CLIs, generated clients, MCP tools, and AI tool-calling systems.
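Concretely, a one-input-object method translates into a single JSON schema, which is the shape most AI tool-calling APIs expect. The literal below is a hypothetical sketch of such a tool definition, not plat's generated output.

```typescript
// Hypothetical sketch: a tool definition derived from a one-input-object method.
// One method, one input schema, one tool.
const createOrderTool = {
  name: "createOrder",
  description: "Create a new order",
  parameters: {
    type: "object",
    properties: {
      itemId: { type: "string" },
      qty: { type: "number" },
    },
    required: ["itemId", "qty"],
  },
}

console.log(createOrderTool.parameters.required.join(", ")) // "itemId, qty"
```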

Transport should not change the call

The same method call should stay usable even when the carrier changes:

await httpClient.createOrder({ itemId: "sku_123", qty: 2 })
await rpcClient.createOrder({ itemId: "sku_123", qty: 2 })
await fileClient.createOrder({ itemId: "sku_123", qty: 2 })

You should not have to think:

  • which HTTP method do I use here?
  • is this route encoded in the path or the body?
  • which client library do I need for this protocol?

The carrier can be any of:

  • HTTP
  • WebSockets
  • File queues
  • WebRTC
  • literally anything that can carry a JSON envelope

Why this matters:

  • you can change infrastructure without rewriting your callers
  • the same method stays usable from browsers, servers, CLIs, and workers
  • teams can experiment with transports without changing the mental model
  • the client call stays client.createOrder(...), not client.post('/createOrder', ...)

Carrier and plugin details are for transport authors. Normal users should still feel like they are just calling methods.
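The reason any carrier works can be sketched in a few lines: if a transport can move a JSON envelope, it can carry the same method call. The envelope shape below is an assumption for illustration, not plat's actual wire format.

```typescript
// Hypothetical sketch: a transport-agnostic call envelope.
// Any carrier that can deliver this string can serve the same method call.
interface RpcEnvelope {
  method: string
  input: unknown
}

function encodeCall(method: string, input: unknown): string {
  const envelope: RpcEnvelope = { method, input }
  return JSON.stringify(envelope)
}

// The same string could travel over HTTP, a WebSocket, a file queue, or WebRTC.
const wire = encodeCall("createOrder", { itemId: "sku_123", qty: 2 })
console.log(wire)
```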

Generated surfaces should stay honest

Docs, clients, CLIs, and tool definitions should all come from the same underlying interface, not from separate handwritten copies.

  • OpenAPI stays aligned with the code
  • CLI commands stay aligned with the method names
  • SDK methods stay aligned with the operation IDs
  • Tool definitions stay aligned with the input schema

Why this matters: once these surfaces drift apart, users stop trusting the generated docs and clients. Honest generation keeps the whole stack coherent.
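One consequence of honest generation is that alignment is mechanically checkable: every surface should expose the same operation names. The arrays below are hypothetical stand-ins for names read from each generated artifact.

```typescript
// Illustrative sketch: verifying that generated surfaces expose the same names.
function surfacesAligned(...surfaces: string[][]): boolean {
  const canonical = JSON.stringify([...surfaces[0]].sort())
  return surfaces.every((s) => JSON.stringify([...s].sort()) === canonical)
}

// Hypothetical name lists, as if read from the class, the OpenAPI doc, and the CLI.
const fromClass = ["getOrder", "createOrder"]
const fromOpenApi = ["createOrder", "getOrder"]
const fromCli = ["getOrder", "createOrder"]

console.log(surfacesAligned(fromClass, fromOpenApi, fromCli)) // true
```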

Transport details stay hidden

For normal plat users, the important thing is:

  • methods stay flat
  • typing stays strong
  • clients feel direct
  • transport details stay hidden
  • provider complexity stays hidden too

Why this matters: most users do not want to think about envelopes, retries, brokers, or provider-specific SDK quirks every time they make a call.

For plugin authors, transport and protocol mechanics are the extension story. For everyone else, it should just feel like calling methods.

Quickstart

Step 1 Install plat

Start with the package and the CLI. The rest of the flow builds from the same class methods.

npm i modularizer-plat

Step 2 Define a server

Write a controller method, give it input and output types, and start the server. That single method surface becomes the source of truth.

import { Controller, POST, createServer, type RouteContext } from "plat"

@Controller()
class OrdersApi {
  @POST()
  async createOrder(
    input: { itemId: string; qty: number },
    ctx: RouteContext,
  ): Promise<{ orderId: string; status: string }> {
    return { orderId: "ord_123", status: "pending" }
  }
}

const server = createServer({ port: 3000 }, OrdersApi)
server.listen()

Step 3 Serve it and inspect the docs

Run the CLI to launch the server, then open the generated docs and OpenAPI description in the browser.

plat serve
open http://localhost:3000/

Step 4 Generate a typed client

Point plat at the running OpenAPI spec and generate a client you can import directly into your app.

plat gen client http://localhost:3000/ --dst client.ts

import { createClient } from "./client"

const client = createClient("http://localhost:3000")
const order = await client.createOrder({ itemId: "sku_123", qty: 2 })
console.log(order)

Step 5 Generate a CLI

The same method surface can become a real command-line interface with flags, help text, and argument parsing.

plat gen cli http://localhost:3000/ --dst cli.ts
npx tsx cli.ts createOrder --itemId=sku_123 --qty=2

Step 6 Hand tools to AI

You can also build an OpenAPI client at runtime and hand its tool definitions to an AI provider without changing your underlying methods.

import { OpenAPIClient } from "plat"

const spec = await fetch("http://localhost:3000/openapi.json").then((r) => r.json())
const client = new OpenAPIClient(spec, { baseUrl: "http://localhost:3000" })

const tools = client.tools
// hand `tools` to your AI provider
// then call back into `client.createOrder(...)`

Frequently Asked Questions

Is plat just an OpenAPI generator?

No. OpenAPI is one of the important outputs, but not the whole idea.

  • APIs are one surface
  • CLIs are another
  • MCP and tool definitions are another
  • generated TS/JS and Python clients are another

The core promise is: define useful methods once, then let clients, CLIs, docs, and AI tools all see the same surface.

Why is plat so opinionated about a flat method surface?

Because flat method names make every generated surface simpler.

  • easy for humans to remember
  • easy for CLIs to expose
  • easy for generated clients to mirror exactly
  • easy for AI agents and MCP tools to understand

Deep route hierarchies and duplicated naming schemes create confusion fast, especially once you add SDKs, CLIs, and tool calling.

Do I have to think about HTTP methods and route shapes when calling?

No. The calling experience is supposed to feel like this:

const order = await client.createOrder({ itemId: "sku_123", qty: 2 })

Not like this:

  • choosing between different client libraries
  • hand-authoring RPC envelopes
  • thinking about HTTP vs WS on every call
  • manually syncing route names and SDK method names

The carrier is important for plugin authors. For normal users, the method call should stay direct.

Does the same plat model work in Python too?

Yes. Plat supports Python servers and clients too.

  • write Python controllers
  • generate OpenAPI from Python sources
  • generate Python clients from OpenAPI
  • use sync, async, and promise-style Python clients

The language can change. The method surface is the point.

What about long-running jobs, progress, and async workflows?

Plat tries to keep the mental model the same even when a method is slow.

await client.importCatalog(
  { source: "s3://bucket/catalog.csv" },
  {
    onRpcEvent(event) {
      console.log(event.event, event.data)
    },
  },
)

You can still think in terms of the same method, while getting:

  • progress updates
  • logs
  • chunks/messages
  • deferred handles and cancellation
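A single handler can route those event kinds as they arrive. This is a hypothetical sketch: the event names below are assumptions for illustration, not plat's documented event set.

```typescript
// Hypothetical sketch: routing the kinds of events a slow method may emit.
// Event names ("progress", "log") are assumptions, not plat's actual API.
type RpcEvent = { event: string; data: unknown }

function describeRpcEvent(e: RpcEvent): string {
  switch (e.event) {
    case "progress":
      return `progress: ${JSON.stringify(e.data)}`
    case "log":
      return `log: ${JSON.stringify(e.data)}`
    default:
      return `${e.event}: ${JSON.stringify(e.data)}`
  }
}

console.log(describeRpcEvent({ event: "progress", data: { pct: 40 } }))
```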

What is the weird browser demo actually proving?

It proves plat is not tied to a traditional backend process.

  • the “server” can live in a browser tab
  • the client can still discover and use a real OpenAPI surface
  • the same method-first model still works over odd transports like MQTT-signaled WebRTC

That demo is intentionally strange. Plat also works perfectly normally as a real TypeScript or Python server.