Animats 17 hours ago

"5000 Erlangs" - oh, they meant 5000 instances of some Erlang interpreter. Not Erlang as a unit of measure.[1] One voice call for one hour is one Erlang.

[1] https://en.wikipedia.org/wiki/Erlang_(unit)
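
(Worked example, just to make the unit concrete: 30 calls averaging 6 minutes each during an hour is 30 × 6 / 60 = 3 erlangs, i.e. on average 3 circuits continuously busy.)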

  • QuantumNomad_ 12 hours ago

    Neat! I always thought the name of the Erlang programming language just meant “Ericsson Language”, since this programming language was invented for Ericsson. Never knew there was anything more than that to the name!

    • RF_Savage 6 hours ago

      And it was a pun by Ericsson engineers, as they used Erlang to program telephone switches where the capacity planning was done in Erlangs.

    • cmrdporcupine 12 hours ago

      I believe it's both.

      • RossBencina 8 hours ago

        I believe it's neither:

        "The origin of queueing theory dates back to 1909, when Agner Krarup Erlang (1878–1929) published his fundamental paper on congestion in telephone traffic [for a brief account, see Saaty (1957), and for details on his life and work, see Brockmeyer et al. (1948)]." -- https://www.sciencedirect.com/topics/engineering/queueing-th...

        • Animats 5 hours ago

          In the early days of telephony, system load was measured by how much current was being drawn from the talk power supply. This was done with a watt-hour meter, calibrated in erlangs.[1]

          (It's amazing how little logging went on in the phone system before computerized switching. But that's another subject.)

          [1] https://physicsmuseum.uq.edu.au/erlangmeter

        • 0x69420 7 hours ago

          also the namesake of the unit fwiw

  • bravesoul2 13 hours ago

    Thanks for the rabbit hole!

lifeisstillgood 19 hours ago

So this is something like a 5000 USD machine (https://www.jeffgeerling.com/blog/2024/ampereone-cores-are-n...) and is designed as a cloud provider or telco edge machine (hence the Erlang consultancy).

But if you are looking at a hosted Erlang VM for a capex of about one dollar per VM, then these folks are onto something.

Cores really are the only way to escape the broken Moore's law - and this does look like a real step in the important direction. Fewer LLMs, more tiny cores.

  • ethan_smith 17 hours ago

    The article is about 5000 Erlang nodes (BEAM VMs), not processes - a single BEAM instance can efficiently handle millions of lightweight processes, making this even more impressive from a density perspective.
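
    As a rough sketch of that density (generic Erlang, not from the article, and the 1,000,000 figure is purely illustrative): spawn a million trivial processes on one node and wait for each to report back. Note the default process limit (+P) usually needs raising for counts this high.

      -module(spawn_many).
      -export([run/0]).

      %% Spawn N trivial processes and block until every one has replied.
      %% The default max process count is lower than this, so the node
      %% would need to be started with something like: erl +P 2000000
      run() ->
          N = 1000000,
          Parent = self(),
          [spawn(fun() -> Parent ! done end) || _ <- lists:seq(1, N)],
          wait(N).

      wait(0) -> ok;
      wait(N) -> receive done -> wait(N - 1) end.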

  • alberth 17 hours ago

    While not this exact server, from Hetzner, you can get an 80-core Ampere for just ~$200 per month.

    (And that also includes hosting, egress, power, etc).

    https://www.hetzner.com/dedicated-rootserver/rx170/

    • bravesoul2 13 hours ago

      Is that cheaper? That's $7,200 over 3 years (36 × $200). Obviously more convenient, though, and less capex.

      • alberth 12 hours ago

        Don’t forget the cost of …

        > (And that also includes hosting, egress, power, etc).

        • bravesoul2 11 hours ago

          Yes indeed. I feel like both are probably a similar price, so it's not so much a financial decision (unless you just don't have $5k) as a question of whether you need intense control (buy the server) or prefer less hassle (have them host it).

    • znpy 14 hours ago

      > Product currently not available

      In practice you can't, though.

  • sargun 19 hours ago

    I really like the manycores approach, but we haven’t seen it come to fruition — at least not on general purpose machines. I think a machine that exposes each subset of cores as a NUMA node and doesn’t try to flatten memory across the entire set of cores might be a much more workable approach. Otherwise the interconnect becomes the scaling limit quickly (all cores being able to access all memory at speed).

    Erlang, at least the programming model, lends itself well to this, where each process has a local heap. If that can stay resident to a subsection of the CPU, that might lend itself better to a reasonably priced many core architecture.

    • toast0 17 hours ago

      > think a machine that exposes each subset of cores as a NUMA node and doesn’t try to flatten memory across the entire set of cores might be a much more workable approach. Otherwise the interconnect becomes the scaling limit quickly (all cores being able to access all memory at speed).

      Epyc has a mode where it does 4 NUMA nodes per socket, IIRC. It seems like that should be good if your software is NUMA aware or NUMA friendly.

      But most of the desktop class hardware has all the cores sharing a single memory controller anyway, so if you had separate NUMA nodes, it wouldn't reflect reality.

      Reducing cross core communication (NUMA or not) is the key to getting high performance parallelism. Erlang helps because any cross process communication is explicit, so there's no hidden communication as can sometimes happen in languages with shared memory between threads. (Yes, ets is shared, but it's also explicit communication in my book)
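
      A tiny sketch of that explicitness (generic Erlang, not from this thread): the only way state crosses a process boundary is an explicit send (!) plus a matching receive.

        %% One process echoes whatever it is sent; nothing is shared,
        %% every interaction is a visible message.
        Echo = spawn(fun() ->
                         receive
                             {From, Msg} -> From ! {self(), {echoed, Msg}}
                         end
                     end),
        Echo ! {self(), hello},
        receive
            {Echo, Reply} -> Reply   %% Reply is {echoed, hello}
        end.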

    • zozbot234 18 hours ago

      > Erlang, at least the programming model, lends itself well to this, where each process has a local heap.

      That loosely describes plenty of multithreaded workloads, perhaps even most of them. A thread that doesn't keep its memory writes "local" to itself as much as possible will run into heavy contention with other threads and performance will suffer a lot. It's usual to write multithreaded workloads in a way that minimizes the chance of contention, even though this may not involve a literal "one local heap per core".

    • to11mtm 14 hours ago

      > Erlang, at least the programming model, lends itself well to this, where each process has a local heap. If that can stay resident to a subsection of the CPU, that might lend itself better to a reasonably priced many core architecture.

      I tend to agree.

      Where it gets -really- interesting to think about are concepts like 'core parking' actors of a given type on specific cores; e.g. 'somebusinessprocess' actor code all happens on a specific fixed set of cores and 'account' actors run on a different fixed set, versus having all the cores going back and forth between both.

      Could theoretically get a benefit from the instruction cache staying very consistent per core, thanks to the mechanical sympathy (I think Disruptors also take advantage of this).

      On the other hand, it may not be as big a benefit, in the sense that cross-process writes are cross-core writes, and those tend to lead to their own issues...

      Fun to think about.

      • LtdJorge 3 hours ago

        The BEAM launches a scheduler thread per CPU thread in SMP mode, although I don't know if it moves Erlang processes between them.
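
        For what it's worth, you can see the scheduler count from a shell with the standard erlang:system_info/1 calls (nothing specific to the article):

          %% Typically one scheduler thread per logical CPU, all of them online.
          io:format("~p schedulers, ~p online~n",
                    [erlang:system_info(schedulers),
                     erlang:system_info(schedulers_online)]).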

    • leoc 17 hours ago

      Who knows what will really happen, but there have been rumours of significant core-count bumps in Ryzen 6, which would edge the mainstream significantly closer to manycore.

    • felixgallo 18 hours ago

      Paraphrasing the late great Joe Armstrong, the great thing about Erlang as opposed to just about any other language is that every year the same program gets twice as fast as last year.

      Manycores hasn't succeeded because frankly the programming model of essentially every other language is stuck in 1950. I, the program, am the entire and sole thing running on this computer, and must manually manage resources to match its capabilities. Hence async/await, mutable memory, race checkers, function coloring, all that nonsense. If half the effort spent straining to get the ghost PDP-11 ruling all the programming languages had been spent on cleaning up the (several) warts in the actor model and its few implementations, we'd all be driving Waymos on Jupiter by now.

      • RossBencina 8 hours ago

        I'm curious, which actor model warts are you referring to exactly?

        [The obvious candidates from my point of view are (1) it's an abstract mathematical model with dispersed application/implementations, most of which introduce additional constraints (in other words, there is no central theory of the actor model implementation space), and (2) the message transport semantics are fixed: the model assumes eventual out-of-order delivery of an unbounded stream of messages. I think they should have enumerated the space of transport capabilities including ordered/unordered, reliable/unreliable within the core model. Treatment of bounded queuing in the core model would also be nice, but you can model that as an unreliable intermediate actor that drops messages or implements a backpressure handshake when the queue is full.]

      • ncgl 11 hours ago

        Can you explain the Joe Armstrong quote a bit to someone not familiar with the language?

        • sam-cop-vimes 3 hours ago

          Erlang's runtime system, the BEAM, automatically takes care of scheduling the execution of lightweight Erlang processes across many CPUs/cores. So a well-written Erlang program can be sped up almost linearly by adding more CPUs/cores. And since we are seeing more and more cores crammed into CPUs each year, what Joe meant is that by deploying your code on the latest CPU, you've doubled the performance without touching your code.
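
          A minimal illustration (generic Erlang, written for this comment, not taken from the thread): a naive parallel map where every element gets its own process, and the BEAM spreads those processes over however many schedulers/cores the machine has.

            -module(pmap_sketch).
            -export([pmap/2]).

            %% Naive parallel map: one process per list element. The BEAM
            %% schedules these processes across all available cores, so more
            %% cores means more of them run concurrently, with no code change.
            pmap(F, List) ->
                Parent = self(),
                Refs = [begin
                            Ref = make_ref(),
                            spawn(fun() -> Parent ! {Ref, F(X)} end),
                            Ref
                        end || X <- List],
                [receive {Ref, Result} -> Result end || Ref <- Refs].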

  • hinkley 18 hours ago

    Azul did something like this back in the ‘10s for Java. But it’s one of those products for when you’ve put all your eggs in one basket and you need the biggest basket money can buy. Sort of like early battery-backed storage. It was only fit for WAL writing on mission-critical databases because one cost more than a car.

  • slashdave 11 hours ago

    You mean, with something like "multiprocessing"?

elteto 20 hours ago

> “Underjord is an artisanal consultancy …”

If they don’t weave Erlang threads by hand I’m going to be mildly disappointed.

  • temp0826 19 hours ago

    Single origin, farm-to-bytecode processes with our signature rustic garbage collection and heirloom fault tolerance...

    • antonvs 18 hours ago

      > heirloom fault tolerance...

      In other words, nepobaby fault tolerance

  • bevr1337 19 hours ago

    All process messages written in beautiful calligraphy.

    • hinkley 18 hours ago

      All constants are haiku.

      • thechao 16 hours ago

        Hand computed in the finest morning rays by monks in the Dolomites.

ThinkBeat 11 hours ago

I would be much more interested in seeing 5000 under heavy load.

Just being able to start that many instances is not that exciting until we know what they can do.