bluehatbrit 3 days ago

The attempts at collaborative tools in Zed were always far more interesting to me than the AI stuff. Don't get me wrong, their AI stuff is nice and works well for me, but it's hardly necessary in an editor given how good Claude Code and others are.

But the times I've used the collaboration tooling in Zed have been really excellent. It just sucks that it's not getting much attention recently. In particular, I'd really like to see some movement on something that works across multiple different editors on this front.

I'm glad to hear they're still thinking about these kinds of features.

  • hresvelgr 3 days ago

    The thing that made me go "oh damn" was finding out the debugger is multiplayer.

    • greazy a day ago

      What does it mean for the debugger to be multiplayer?

  • giancarlostoro 2 days ago

    Yeah, I am also glad that they are not exclusive about how you use AI, which is what makes it better. They need to stop marketing the AI stuff; it puts some people off. They should advertise how versatile they are instead.

pmarreck 3 days ago

The choice to go to WebAssembly is an interesting one.

Wasm 3.0, especially (released just 2 months ago), is really gunning for a more general-purpose "assembly for everywhere" status (not just "compile to web"), and it looks like it's accomplishing that.

I hope they add some POSIXy stuff to it so I can write cross-platform command-line TUIs that do useful things without needing to be recompiled for different OS/chip combos (at the cost of a 10-20% reduction from native compilation, not a critical loss for all but the most important use cases) and that are likely to simply keep working on all future OS/chip combos (assuming you can run the wasm, of course).

  • andyferris 3 days ago

    > I hope they add some POSIXy stuff to it

    Are you aware of WASI? WASI preview 1 provides a portable POSIXy interface, while WASI preview 2 is a more complex platform abstraction beast.

    (Keeping the platform separate from the assembly is normal and good - but having a common denominator platform like POSIX is also useful).
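    A minimal sketch of what that common denominator buys you (my example, not from the WASI spec): plain std Rust with no wasm-specific APIs. Compiled natively it just runs; compiled with `--target wasm32-wasip1`, the same code runs in a WASI runtime such as Wasmtime, because WASI preview 1 supplies the file and environment syscalls. The file name is arbitrary.

    ```rust
    use std::fs;

    // Write a greeting to `path`, read it back, and return its length in bytes.
    // Under WASI, each std call maps onto a host-provided syscall.
    fn write_and_read(path: &str) -> std::io::Result<usize> {
        fs::write(path, "hello from wasm\n")?;
        Ok(fs::read_to_string(path)?.len())
    }

    fn main() {
        let n = write_and_read("notes.txt").expect("file I/O failed");
        println!("round-tripped {} bytes", n);
    }
    ```

    (With Wasmtime you'd grant filesystem access explicitly, e.g. `wasmtime --dir . app.wasm`, since WASI sandboxes the filesystem by default.)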

    • syrusakbary 2 days ago

      I'd go a bit further. If you want full POSIX support, perhaps WASIX is the best alternative. It's WASI preview 1 + many missing features, such as: threads, fork, exec, dlopen, dlsym, longjmp, setjmp, ...

      https://wasix.org/

      • samarthr1 2 days ago

        My understanding of the wasm execution model was that it is fundamentally single-threaded?

        • syrusakbary 3 hours ago

          I don't think that's accurate, although it's true that it needs extra work to work properly in JS-based environments.

          You can already create threads in Wasm environments (we even got fork working in WASIX!). However, there is an upcoming Wasm proposal that adds threads support natively to the spec: https://github.com/WebAssembly/shared-everything-threads
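          A sketch of what that means in practice (my example; the target names are the Rust toolchain's, not anything WASIX-specific): this is ordinary `std::thread` code. Built for a threads-capable wasm target such as `wasm32-wasip1-threads`, or for WASIX, the runtime backs each spawn with a host thread and all threads share one linear memory, so the same code runs unmodified.

          ```rust
          use std::{
              sync::{Arc, Mutex},
              thread,
          };

          // Sum each chunk on its own thread, accumulating into shared state.
          // In a threads-capable wasm runtime, the Mutex-guarded total lives in
          // the shared linear memory visible to every spawned thread.
          fn parallel_sum(chunks: Vec<Vec<u64>>) -> u64 {
              let total = Arc::new(Mutex::new(0u64));
              let handles: Vec<_> = chunks
                  .into_iter()
                  .map(|chunk| {
                      let total = Arc::clone(&total);
                      thread::spawn(move || {
                          let s: u64 = chunk.iter().sum();
                          *total.lock().unwrap() += s;
                      })
                  })
                  .collect();
              for h in handles {
                  h.join().unwrap();
              }
              *total.lock().unwrap()
          }

          fn main() {
              let chunks = vec![vec![1, 2, 3], vec![4, 5], vec![6]];
              println!("sum = {}", parallel_sum(chunks));
          }
          ```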

          • pmarreck 33 minutes ago

            what are the options regarding working with wasix? (compiling to it, running it?)

            is this something that is expected to "one day" be part of WASM proper in some form?

ashishb 3 days ago

How is Rust + Web Assembly + Cloudflare workers in pricing and performance compared to say deploying Rust-based Docker images on Google Cloud Run or AWS Fargate?

  • resonious 2 days ago

    Rust on CF Workers is horrible: a >10x performance hit (compared to JS) for a non-trivial web app. And it's not only a 10x performance hit but 10x the cost, since they charge for CPU time, and that's where the extra time is going.

    Realistically for a low traffic app it's fine, but it really makes you question how badly you want to be writing Rust.

    As far as I can tell, the problem stems from the fact that CF Workers is still V8 - it's just a web browser as a server. A Rust app in this environment has to compile the whole stdlib and include it in the payload, whereas a JS app is just the JS you wrote (and the libs you pulled in). Then the JS gets to use V8's data structures and JSON parsing which is faster than the wasm-compiled Rust equivalents.

    At least this is what I ran into when I tried a serious project on CF Workers with Rust. I tried going full Cloudflare but eventually migrated to AWS Lambda where the Rust code boots fast and runs natively.

    • jarjoura 2 days ago

      I thought WASM was no_std since there's no built in allocator?

      Regardless, I'm not sure why a Rust engineer would choose this path. The whole point of writing a service in Rust is that you trade 10x the build complexity and developer overhead for a service that can run in a low-memory, low-CPU VM. Seems like the wrong tool for the job.

      • ashishb 2 days ago

        > Seems like the wrong tool for the job.

        Thanks for the confirmation. I was confused as well. I always thought that the real use of WASM is to run exotic native binaries in a browser, for example, running Tesseract (for OCR) in the browser.

  • ashwindharne 3 days ago

    I think performance takes a hit due to WASM, and I imagine pricing is worse at big qps numbers (where you can saturate instances), but I've found that deploying on CF workers is great for little-to-no devops burden. Scales up/down arbitrarily, pretty reasonable set of managed services, no cold start times to deal with, etc.

    Only issue is that some of the managed services are still pretty half-baked, and introduce insane latency into things that should not be slow. KV checks/DB queries through their services can be double-to-triple digit ms latencies depending on configs.

    • jarjoura 2 days ago

      I ended up using the container service on Azure for a small Rust project that I built into a Docker container and published to GitHub. GitHub Actions publishes it to the Azure service, and in the 3 years I have been running it, it's basically been almost entirely free.

      • ashishb 2 days ago

        I have a similar experience except I use Go+GitHub Actions+Google Cloud Run.

mariopt 3 days ago

Been using CF Workers with JavaScript and I absolutely love it.

What is the performance overhead when comparing native Rust against Rust compiled to wasm?

Also, I think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to only one vendor is problematic.

  • laktek 3 days ago

    > Also, I think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to only one vendor is problematic.

    Supabase Edge Functions runs on the same V8 isolate primitive as Cloudflare Workers and is fully open-source (https://github.com/supabase/edge-runtime). We use the Deno runtime, which supports Node built-in APIs, npm packages, and WebAssembly (WASM) modules. (disclaimer: I'm the lead for Supabase Edge Functions)

    • mariopt 2 days ago

      It would be interesting if Supabase allowed me to use that runtime without forcing me to use Supabase, as a separate product in its own right.

      Several years ago, I used MeteorJS; it uses Mongo and is somewhat comparable to Supabase. The main issue that burned me (and several projects) was that it was hard or even impossible to bring in different libraries. It was a full-stack solution that did not evolve well: great for prototyping, until it became unsustainable and even hard to onboard new devs, mostly due to the big learning curve of one big framework.

      Having learned from this, I only build apps where I can bring in whatever library I want. I need tools/libraries/frameworks to be as agnostic as possible.

      The thing I love about Cloudflare Workers is that you are not forced to use any other CF service. I have full control of the code, I combine it with Hono, and I can deploy it as a server or serverless.

      About the runtimes: having to choose between Node, Deno, and Bun is something I do not want to do. I'm sticking with Node and hoping the runtimes will remain compatible with standard JavaScript.

      • laktek 2 days ago

        > It would be interesting if Supabase allowed me to use that runtime without forcing me to use Supabase, as a separate product in its own right.

        It's possible for you to self-host Edge Runtime on its own. Check the repo, it has Docker files and an example setup.

        > I have full control of the code, I combine it with Hono, and I can deploy it as a server or serverless.

        Even with Supabase's hosted option, you can choose to run only Edge Functions and opt out of the other services. You can run Hono in Edge Functions, meaning you can easily switch between CF Workers and Supabase Edge Functions (and vice versa): https://supabase.com/docs/guides/functions/routing?queryGrou...

        > About the runtimes: having to choose between Node, Deno, and Bun is something I do not want to do. I'm sticking with Node and hoping the runtimes will remain compatible with standard JavaScript.

        Deno supports most of Node's built-in APIs and npm packages. If your app uses modern Node, it can be deployed on Edge Functions without having to worry about the runtime (having said that, I agree there are quirks, and we are working on native Node support as well).

        • mariopt 2 days ago

          Cool, I'll check it out.

  • kevincox 3 days ago

    It surely depends on your use case. Testing my Ricochet Robots solver (https://ricochetrobots.kevincox.ca/), which is pure computation with effectively no IO, the speed is basically indistinguishable. On some runs the WASM is faster, on others the native is faster. On average the native is definitely faster, but it is surprisingly within the noise.

    Last time I compared (about 8 years ago) WASM was closer to double the runtime. So things have definitely improved. (I had to check a handful of times that I was compiling with the correct optimizations in both cases.)

  • pmarreck 3 days ago

    The stats I've seen show a 10-20% loss in speed relative to natively-compiled, which is effectively noise for all but the most critical paths.

    It may get even closer with Wasm 3.0, released 2 months ago, since it has things like 64-bit address support, more flexible vector instructions, typed references (which remove runtime safety checks), basic GC, etc. https://webassembly.org/news/2025-09-17-wasm-3.0/

    • jsheard 3 days ago

      Unfortunately, 64-bit address support does the opposite: it comes with a non-trivial performance penalty because it breaks the tricks that were used to minimize sandboxing overhead in 32-bit mode.

      https://spidermonkey.dev/blog/2025/01/15/is-memory64-actuall...

      • pmarreck 42 minutes ago

        1) This may be temporary.

        2) The bounds checking argument is a problem, I guess?

        3) This article makes no mention of typed references, which are also a new feature; they move some checks that would normally run at runtime to a one-time check at validation, and this may include bounds-style checks.

  • tomComb 3 days ago

    Workers is a V8 isolates runtime, like Deno. V8 and Deno are both open source, and Deno is used in a variety of platforms, including Supabase and Val Town.

    It is a terrific technology, and it is reasonably portable, but I think you would be better off using it in something like Supabase, where the whole platform is open source and portable, if those are your goals.

  • imron 3 days ago

    In code I’ve worked on, cold starts on AWS lambda for a rust binary that handled nontrivial requests was around 30ms.

    At that point it doesn’t really matter if it’s cold start or not.

  • wmf 3 days ago

    Workerd is already open source so that's a good start.

cedws 3 days ago

Post is a bit sparse on details and seems to be more about the backend than the infra itself. Would be interested to hear more.

kwikiel 3 days ago

I wish Zed would implement support for Jupyter notebooks first. Maybe this is something I can contribute.

  • Palmik 2 days ago

    I migrated to using the # %% syntax in plain .py files.

    For me, it's a superior experience anyway. I also prefer it in editors that support both (like VS code).

    You can run the REPL with a Jupyter kernel as well.

    https://zed.dev/docs/repl#cell-mode

esquire_900 2 days ago

Is the dependency on Cloudflare worth the saved time in infrastructure? Getting a big bare-metal server and deploying a Docker container should go a long way.

This implementation sounds fully dependent on a service that Zed has little say over.

  • hobofan 2 days ago

    FYI: Cloudflare provides an open source version of their Workers runtime[0], so the lock-in isn't as strong as it once was.

    [0]: https://github.com/cloudflare/workerd

    • hoppp 2 days ago

      I think if the end game is to self-host the Workers runtime, they could just as well run something else from the start.

      It's gonna be hard to match the scaling Cloudflare offers if they migrate to their own dedicated infra, but it would of course become much cheaper than paying per request.

orliesaurus 3 days ago

I didn't realize the cloud side of an editor had grown to ~70k lines of Rust already… and this work is laying the foundation for collaborative coding with DeltaDB.

But it's worth noting that WebAssembly still has some performance overhead compared to native; the article chooses convenience and portability over raw speed, which might be fine for an editor backend.