DSchau 5 hours ago

Postman is already Postman for MCP. We launched MCP support several weeks ago, both generating MCP servers from the public APIs on our network (over 100k) and an MCP client that can test, debug, and validate MCP servers, with full support for the Streamable HTTP, SSE, and stdio transports and for capabilities (tools, prompts, resources).

Check it out! https://postman.com/downloads

As well as our MCP catalog:

https://getmcp.dev

  • andes314 2 hours ago

    My platform goes beyond being an automatic wrapper of an API and lets you specify to the model, in natural language, how it should parse inputs and outputs. I find LLMs are very responsive to this type of specification, and to the best of my knowledge no one is trying this yet.

    You also don’t seem to offer a simple chat-based client.

notpushkin 5 hours ago

Please note that posting the same thing multiple times in succession is frowned upon here (but congrats getting to the first page!)

I’m also confused by the title – why “Postman”? I do know about Postman the HTTP client, but I don’t get the parallel here.

phildenhoff 4 hours ago

andes314, can you expand on how you see this as Postman for MCP?

  • andes314 2 hours ago

    It is a platform to create MCP servers from API endpoints, and then chat with them without having to use Claude’s clunky integration process. It is simple and complete.

andes314 6 hours ago

If it gets hugged to death, please try the Claude client to test it! Anthropic only allows me 30 req/s at the moment.

  • fcarraldo 4 hours ago

    How much are you spending on tokens right now?

    • andes314 2 hours ago

      Less than you’d think

TZubiri 5 hours ago

"error, please clear and try again: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': 'This request would exceed the rate limit for your organization (67777b05-661c-4183-aa19-ec6e299f95ac) of 50,000 input tokens per minute.'}}"

This is a very bad idea buddy. Maybe try letting users set their API tokens.
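Per-user API keys aside, the 429 above can also be softened client-side with exponential backoff and retry. A minimal sketch; `RateLimitError` here is a hypothetical stand-in for whatever rate-limit exception the SDK you're using actually raises:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for the SDK's rate-limit (HTTP 429) exception."""


def with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on rate-limit errors with exponential backoff.

    Waits base_delay * 2**attempt seconds (plus small jitter) between
    attempts, and re-raises if the last attempt still fails.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            sleep(delay)
```

This only smooths over bursts; with a hard per-minute token quota shared across all users, letting each user supply their own key is still the more robust fix.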