lucb1e a day ago

> The Framework Desktop with 64GB RAM + 2TB NVMe is $1,876. To get a Mac Studio with similar specs [...] you'll literally spend nearly twice as much [...] The Framework Desktop is simply a great deal.

Wow, someone managed to beat Apple on price??

I don't know that it logically follows that something is a great deal just because it undercuts Apple. Half sounds about right -- I thought Apple was a bit more competitive these days than 2× actually, but apparently not, especially considering that Framework comes with a (totally fair) niche-vendor premium

  • rafaelmn 2 hours ago

    Others in this thread have already pointed out that Apple is price competitive at the Mac Mini level, and Apple has been extremely price competitive since they switched to their own chips.

    I would love it if the PC were in a better state, but ever since Intel started flopping, everyone in PC land has been praising AMD while ignoring the fact that both of them are underdelivering compared to Apple. You cannot even get a consumer-grade CPU on PC with a >128-bit memory bus other than this Strix Halo chip - and it happened by accident. There is nothing on the mainstream roadmap that will bump it - the industry is years away from adding it. Meanwhile, Apple has been benefiting from this since the M1, and AI workloads just made it an even larger win.

  • scosman 20 hours ago

    He's comparing to a Studio when he should compare to the Mini at this performance level. They are almost the same price at a 64GB RAM + 500GB storage config (CAD).

    - Framework, Max+ 64GB: $2,861.16

    - Apple Mini M4 Pro, 64GB: $2,899.00

    Apple does charge way too much for integrated storage, but Apple is only a 25% premium at 2TB, not double (if you compare to the Mini instead of the Studio). Plug in an NVMe SSD if you're building an Apple desktop.

    • dijit 20 hours ago

      Not to defend Apple here, but it's also a bit apples to oranges (heh) because the power consumption is not easily comparable.

      I would hazard a guess and say: at that spec, if you're looking at 1Y TCO, the Apple could easily be cost competitive performance per dollar.

      Since they're in spitting distance of each other, just get the one you're most comfortable with. Personally I prefer Linux and I'm really happy that non-Apple machines are starting to get closer to the same efficiency and performance!

      • Aurornis 17 hours ago

        > I would hazard a guess and say: at that spec, if you're looking at 1Y TCO, the Apple could easily be cost competitive performance per dollar.

        The two systems aren't that different in power consumption. The Strix Halo part has a 55W default TDP but I would assume Framework customized it to be higher. A comparable M4 Pro Mac Mini can easily pull 80W during an AI workload.

        Apple has a slight edge in power efficiency, but it's not going to make a massive difference.

      • ajross 17 hours ago

        > if you're looking at 1Y TCO, the Apple could easily be cost competitive performance per dollar.

        Only if you're buying artisanal single-source electricity sustainably sourced from trained dolphins.

        Average US electrical power is $0.18/kWh per Google. Figure the desktop draws 300W continuous (which it almost certainly can't), and that's 0.3 kW * 24 hr/day * 365.2425 days/yr * $0.18/kWh == $473/year. So even if the mac was completely free you'd be looking at crossover in 5 years, or longer than the expected life of the device.
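
        A quick sketch of that arithmetic in Python (the 300W continuous draw and $0.18/kWh are the assumptions from above, not measurements):

          rate_usd_per_kwh = 0.18                # rough US average, per above
          draw_kw = 0.3                          # assumed 300 W continuous (pessimistic)
          annual_kwh = draw_kw * 24 * 365.2425   # ~2630 kWh/year
          print(f"${annual_kwh * rate_usd_per_kwh:.0f}/year")   # -> $473/year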

        • scosman 16 minutes ago

          But the price difference is only $38…

        • lucb1e 16 hours ago

          A device with no moving parts, only 5 years of expected life?!

          I understand if you say that high-performance users will want a newer system after 5 years, but I'd be very surprised if this 64GB RAM machine doesn't suffice for 98% of people, including those who want to play then-common games at default settings

            Good to have some concrete figures nonetheless, of course; it's always useful as a starting point

          • ajross 16 hours ago

            First: it's not five years. It's five years if you posit that Macs are magic and use no energy[1]. In practice they draw 40-70% of what a competing desktop does (depends on usage and specific model, yada yada yada). So figure a few decades or thereabouts.

            But even so: I'm not sure I know a single new-device Apple customer who has a single unit older than five years. The comment about power implied that you'd make up the big Mac price tag on power savings, and no, you won't, not nearly, before you hawk it on eBay to buy another.

            [1] And also that you posit that the device is in a compute farm or something and pumped at full load 24/7. Average consumption for real development hardware (that spends 60% of its life in suspend and 90% of the remainder in idle) is much, much, much closer to zero than it is to 300W!
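
            To put rough numbers on that duty-cycle claim (every wattage below is an assumption for a typical dev box, not a measurement):

              # assumed duty cycle from above: 60% suspend, 90% of the remainder idle
              suspend, idle, load = 0.60, 0.40 * 0.90, 0.40 * 0.10
              avg_w = suspend * 1 + idle * 20 + load * 150   # assumed ~1W/~20W/~150W draws
              print(f"average draw ~= {avg_w:.0f} W")        # ~14 W, nowhere near 300 W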

            • BirAdam 14 hours ago

              Well, I know exactly what you're saying, but to be fair to Apple, my 68K Mac that's nearly 40 years old still works. My iMac G4 is fine. PowerMac G3 is fine. First-gen Mac Pro is fine. Meanwhile, the only PCs I know of that survive time quite as well (and I collect old crap) were all nearly equivalent to Apple-level pricing at the time of their introduction.

              While Apple does charge nearly criminal markup for RAM and storage, they at least make some products that last (except for the TouchBar MacBooks' keyboards). I just hope my Mac Studio lasts too.

              • glitchc 13 hours ago

                My 4930K system is over a decade old and it still serves as a NAS. I have a Core 2 Duo kicking around that I'm sure will still boot. There's a perfectly usable T520 sitting on my desk running Windows 11. But I must admit that MacBooks outlast most other laptops.

              • ajross 14 hours ago

                > Meanwhile, the only PCs I know of that survive time quite as well

                Nah, everything works forever, it's just that no one cares. My younger child retired a 3770K Ivy Bridge box last year that I'd bought at or near release in 2012, so ~11 years of steady use.

                People fetishize Apple hardware, but the truth is that modern electronics manufacturing is bulletproof up and down the assembly lines.

                • fc417fc802 10 hours ago

                  The ICs, yeah. Not so much power supplies, screens, etc. I've had the former go up in flames and the latter delaminate - my current laptop has small bubbles around the edges of the screen.

                  I've also had high wattage GPUs inexplicably fail and lost a few SSDs to unexpected catastrophic failure. TBF I haven't lost any halfway decent SSDs within the past 5 years.

                  I don't think I've ever lost a motherboard, CPU, or RAM though. Even secondhand recycled data center stuff. It seems to just keep working forever.

            • prmoustache 16 hours ago

              Also, judging from the state of 5- and 10-year-old Macs on the second-hand market, you quickly realize they aren't very reliable machines.

              • seemaze 13 hours ago

                My 2012 mbp is still going strong, only needed a new battery. Great keyboard, great screen, great trackpad. The second hand market seems to reflect this in my experience if you look at resale prices vs non-mac equivalents.

                • prmoustache 2 hours ago

                  Resale price doesn't mean anything if most units have hardware issues, which was largely the case for most 5+ year old Macs when I looked for them out of curiosity.

                  Granted, this is also the case for many brands, but I found it easier to find old ThinkPad, Fujitsu, and Dell business laptops in good shape than Apple ones.

                  Maybe this is biased and has more to do with professional vs personal use. I guess you're a bit more cautious with a laptop your employer lends you.

            • lucb1e 15 hours ago

              > First: it's not five years. It's five years if you posit that macs are magic and use no energy[1].

              Huh? I'm not "positing" anything, I'm responding to the longevity you stated:

              > you'd be looking at crossover in 5 years, or longer than the expected life of the device.

              As for Apple users not having devices older than five years... ehm, yeah: the brand targets an audience that really likes shiny new toys (either because they're lured into thinking they need it, or because it's a status symbol for them). Not sure how that's relevant here though

        • obmelvin 14 hours ago

          During peak hours (4-9pm daily) in San Diego you can be paying nearly $1/kWh (generation + transmission cost) to SDG&E, so at least in certain areas the running cost is very much relevant, even for consumers.

        • steveBK123 15 hours ago

          Some VHCOL areas have electric rates more than 2x or even nearly 3x that.

      • ramesh31 19 hours ago

        You have to account for resale when it comes to TCO. Resale value for a non-Apple PC is essentially zero - i.e. no one will buy it, or you'll get pennies on the dollar if they do. Whereas there's a strong market for used Apple hardware, and you can easily recover 50% or more.

        • Aurornis 18 hours ago

          This is an exaggeration, or you're not comparing apples to apples.

          The used market for high performance PC gear is quite efficient. You're not going to get a high-end CPU or GPU from the past several years for pennies on the dollar.

          Likewise, the resale market for Apple products doesn't guarantee 50% or more unless you're only looking at resale value of very new hardware. For example, you can pick up an M1 Max MacBook Pro (not that old) for closer to 1/3 of the original price.

        • dismalaf 17 hours ago

          Do people actually buy used computers and sell them? Like, apart from graphics cards...

          Seems odd to me. Most computers I've ever had last for ~10 years, at which point resale value is definitely zero...

          • lucb1e 16 hours ago

            Similar with phones and tablets: yes, people do that, but it's not the common case. I've only ever sold computing devices within the first month of ownership, after realising I made a mistake in purchasing (not a broken device, just one that didn't meet my needs). But on second-hand sites you also see various people offering devices up after 1 or 2 years because they want something shinier and the old system still has value. I bought my current phone from someone like that, and as a student I also bought such a laptop.

          • prmoustache 15 hours ago

            I don't sell them, but I buy exclusively used computers, all of them coming from the business-computer refurbishing market.

            From a purely financial perspective buying a computer or vehicle brand new is a very bad idea.

          • dijit 16 hours ago

            yeah, I buy preowned laptops fairly often to be fair.

        • lucb1e 18 hours ago

          So you recover what you spent extra, per the article?

    • zamadatix 12 hours ago

      One note is that the Mac Mini surprisingly has 3rd-party SSD upgrades now as well. Both gouge the storage price a bit, so it's probably best to just go 3rd party for both unless you're going to stick with the base storage.

    • arp242 18 hours ago

      Framework also charges quite a premium on the SSD. For example, the "WD_BLACK SN7100 NVMe 2TB" is €229, but a more typical price is around ~€140, so that's about €90 extra. Don't know how that compares to Apple.

      • mananaysiempre 18 hours ago

        Framework absolutely does fleece you on the SSD, so don’t buy SSDs from them if you’re price-sensitive. But compared to Apple it’s nowhere close: per Apple’s website, upgrading a current Mac Mini or Mac Studio from 512 GB to 2 TB costs $600. You can buy a WD_BLACK SN850X 8TB for that much money (or a WD_BLACK SN8100 4TB and still have a quarter of it left over, if you really want PCIe 5.0, but there’s no point at the moment).

        • arp242 18 hours ago

          I checked and it's €750 extra to get 2TB here in Ireland (just to compare apples with apples wrt the Framework Desktop). Wahahaha, this is the most ridiculous thing I've seen.

  • kingstnap 20 hours ago

    Apple base models tend to be fairly competitive but they have some of the most extreme margins on RAM and SSDs in the industry.

    They charge $600 CAD to go from 16GB -> 32 GB.

    They charge $900 CAD for 512 GB -> 2 TB SSD.

    • CharlesW 19 hours ago

      Unless I'm misunderstanding their store page, Framework charges $687 CAD ($500 USD) to go from 16GB → 32 GB RAM.

      They do only charge $214 CAD ($156 USD) to go from 512 GB → 2 TB SSD, thanks to it just being an NVMe stick.

      • masterj 18 hours ago

        > Unless I'm misunderstanding their store page, Framework charges $687 CAD ($500 USD) to go from 16GB → 32 GB RAM.

        That doesn't seem accurate for any of their computers? There is a pretty big leap from 32GB -> 64GB for the Desktop, but that is also a different processor.

    • GeekyBear 19 hours ago

      > They charge $900 CAD for 512 GB -> 2 TB SSD.

      The SSD is user replaceable, so you can replace it with a cheaper third party option.

      • kllrnohj 12 hours ago

        The Mac Pro is the only Apple system that has user-replaceable storage. The Mac Mini & Studio both feature slotted storage modules, but Apple firmware-locks them, so they can't be replaced, much less upgraded.

        • skibble 6 hours ago

          This is incorrect. Mac Minis (starting with the M4 model) and Mac Studios simply need to be restored via DFU mode after physical installation of another SSD (be it a larger one or simply a replacement) to function. Third parties have reverse-engineered the design of the PCB and paired it with the same Hynix and SanDisk NAND chips Apple uses.

  • masterj 20 hours ago

    Apple charges an increased premium as you get further away from the base models. It's really hard to find a better deal than the M4 base models.

  • wmf 21 hours ago

    That's purely due to Apple's ridiculous SSD pricing. You can save a lot of money by using an external SSD.

    • alt227 20 hours ago

      Except they keep making it harder and harder to install your own drives into apple machines.

      • stackskipton 20 hours ago

        You don't install it; you run it over Thunderbolt, which is plenty fast.

      • LoganDark 19 hours ago

        Thunderbolt 5 is plenty fast for an external SSD. In fact you'd be hard-pressed to find an external SSD that even goes that fast

    • cmrdporcupine 20 hours ago

      Don't forget how Apple rips you off on RAM. Always.

      • rubyn00bie 17 hours ago

        RAM isn’t cheap but it’s not awful, or as bad as it once was, especially in their laptops. One of the primary things that has kept me from buying a new Apple desktop or laptop is honestly SSD pricing.

        It’s absolutely ridiculous given how cheap 1TB or 2TB drives are. And generally, in my experience, the ones Apple uses have had subpar performance compared to what’s available for the PC market for less than half the price. Not to mention their base configuration machines usually have subtly crappier SSDs than a tier up.

        I haven’t bought a new Mac since Apple released the M-series. I’ve wanted to multiple times, but the value just isn’t there for me personally.

  • Aurornis 17 hours ago

    > Wow, someone managed to beat Apple on price??

    He didn't pick the equivalent Mac to compare to. The closest Mac would be an M4 Pro Mac Mini, not a Mac Studio.

    Right now I see a 64GB M4 Pro Mac Mini with a 2TB SSD for $2600. Still more expensive, but not double the price. The Apple refurbished store has the same model for $2200.

    The M4 Pro Mac Mini probably has the edge in performance, though I can't say by how much. Driver and software support for the Ryzen platform is still early.

    I think these Strix Halo boxes are great, but I also think they're a little high in the hype cycle right now. Once supply and demand evens out I think they should be cheaper, though.

    • motorest 16 hours ago

      > The Apple refurbished store has the same model (...)

      You have to admit this reads as grade A cope.

      Is it that hard to acknowledge that Apple price gouges all their product line?

      • 3836293648 a few seconds ago

        No, the very cheapest (new) one is competitive; everything else is price gouged

  • benreesman 11 hours ago

    Strix Halo is one of the tier-1 out-of-the-box perfect targets for // hypermodern // nixos, so I'm tuning it for the EVO-X2, which was the first desktop one available (and we'll support omarchy's hyprland rice out of the box; it's a nice rice even if I prefer Ono-Sendai).

    That's getting you twice the RAM for the same price. Now the Framework has both intangible cool factor and scope for more upgrades, so if money is no object, get the Framework.

    But I can vouch for the EVO-X2 as the real Strix Halo experience: its thermals are solid, and even under sustained 100W+ it's quieter than the average gaming PC. Obviously an elite ITX build can do better, but it's jaw-dropping at the price point: great ports, plenty of M.2 capacity, stable under extreme load, and a lot cheaper.

  • bestham 20 hours ago

    The point is that before the AMD Ryzen AI Max+ 395 chip, only Apple offered something comparable for the desktop / laptop that could do these AI-related tasks. Where else could you find a GPU with 64-128GB of memory?

    • goosedragons 14 hours ago

      Not as good, but there are loads of devices with the Ryzen AI 9 HX 370 and 64-128GB of RAM.

reactordev a day ago

>The AMD 395+ uses unified memory, like Apple, so nearly all of it is addressable to be used by the GPU.

This is why they went with the “laptop” CPU. While it's slightly slower than dedicated memory, it allows you to run the big models at decent token speeds.

  • beala 13 hours ago

    Geerling benchmarked LLM performance on the Framework Desktop and the results look pretty lackluster to me. First, the software seems really immature: he couldn't get ROCm or the NPU working. When he finally got the iGPU working with Vulkan, he could only generate 5 tok/sec with Llama 3.1 70b (a 40 GB model). That's intolerably slow for anything interactive like coding or chatting imo, but I suppose that's a matter of opinion.

    https://github.com/geerlingguy/ollama-benchmark/issues/21
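
    That 5 tok/sec is about what you'd expect if generation is purely memory-bandwidth-bound, since each token has to stream the whole model through the GPU. A rough sanity check (both numbers are the ones quoted in this thread, not my measurements):

      bandwidth_gb_s = 256   # Strix Halo LPDDR5X bandwidth, per comments below
      model_gb = 40          # Llama 3.1 70b at the quantization Geerling used
      print(bandwidth_gb_s / model_gb)   # ~6.4 tok/s theoretical ceiling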

    • Carstairs 2 hours ago

      Prompt processing speeds are pretty poor too imo. I was interested in one to be able to run some of the 100B MoEs, but since they only give 50-150 tk/s (depending on the model), it would take 5-ish minutes to process a 32k context, which would be unbearably slow for me. I've just looked at the results in that link and it's even worse for the 70Bs: it would be nearly 20 minutes to process a 32k context.
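
      Back-of-the-envelope for those prefill times, using the 50-150 tk/s range above:

        context = 32_000                   # tokens to prefill
        for pp in (50, 100, 150):          # prompt processing speed, tk/s
            print(f"{pp} tk/s -> {context / pp / 60:.1f} min")
        # 50 -> 10.7 min, 100 -> 5.3 min, 150 -> 3.6 min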

  • dajonker 17 hours ago

    I guess that depends on your definition of "decent". For the smaller models that can run on a 16/24/32 GB Nvidia card, the chip is anywhere between 3x and 10x slower compared to, say, a 4080 Super or a 3090, which are relatively cheap used.

    The biggest limitations are the memory bandwidth, which limits token generation, and the fact that it's not a CUDA chip, meaning a longer time to first token for theoretically similar hardware specifications.

    Any model bigger than what fits in 32 GB of VRAM is - in my opinion - currently unusable on "consumer" hardware. Perhaps a tinybox with 144 GB of VRAM and close to 6 TB/s of memory bandwidth will get you a nice experience for consumer-grade hardware, but it's quite the investment (and power draw)

    • rubyn00bie 17 hours ago

      I think it depends on the use case; slow isn't that bad if you're asking questions infrequently. I downloaded a model a few weeks ago that was roughly 80GB in size and ran it on my 3090 just to see how it was… and it was okay. Fast? Nope. But it did it. If the answers were materially better I'd be happy to wait a minute for the output, but they weren't. I'd like to try one of the really large ones, just to see how slow it is, but I need to clear some space to even download it.

  • nottorp 20 hours ago

    unified and soldered :(

    I understand it's faster but still...

    Did they at least do an internal PSU if they went the Apple way or does it come with a power brick twice the size of the case?

    Edit: wait. They do have an internal PSU! Goodness!

    • ivape 15 hours ago

      LPDDR5 - it looks like this is the only RAM type that is going to work with Ryzen AI chips.

      • heavyset_go 14 hours ago

        Framework offers Ryzen AI laptops with replaceable memory.

        • ivape 13 hours ago

          That RAM is slower than soldered-down LPDDR5 at the moment.

  • heavyset_go 14 hours ago

    I'm curious how something like CUDIMM memory would perform under the same workloads.

    I currently avoid machines with soldered memory, but if the memory could be replaced while keeping similar performance, that would change things.

    • reactordev 14 hours ago

      PCI lanes just aren't there yet, which is why they took the soldered-memory approach: to shorten the gap and increase the speed. The memory the Framework Desktop uses (LPDDR5X) doesn't come in slotted form.

      You absolutely can (and should) build your own for slightly cheaper. Just find the fastest DDR5 CUDIMMs you can paired with the fastest memory bus mobo.

  • SV_BubbleTime 18 hours ago

    > it allows you to run the big models, at decent token speeds

    Without CUDA, being an AMD GPU. Big warning depending on the tools you want to use.

    • reactordev 17 hours ago

      There are solutions for that, though.

      https://docs.scale-lang.com/stable/

      https://github.com/vosen/ZLUDA

      It's not perfect but it's a start towards unification. In the end, though, we're at the same crossroads that graphics drivers were at in 2013, with the sunsetting of OpenGL by Apple and the announcement of Vulkan by the Khronos Group. CUDA has been around for a while and only recently has it gotten attention from the other chip makers. Thank goodness for open source and the collective minds that participate.

aomix a day ago

I’ve been agonizing over getting the Framework Desktop for weeks as a dev machine/local LLM box/home server. It checks a lot of boxes but the only reason to look at the Framework Desktop over something like a Minisforum MS-A2 is for the LLM and that seems super janky right now. So I guess I’ll wait a beat and see where we are later in the year.

  • danieldk 21 hours ago

    My main worry about all the Minisforum, Beelink, etc. PCs is the potential lack of UEFI firmware updates (does anyone have experience with how good they are with updates?) and potential backdoors in the UEFI firmware (either intentional or unintentional). A China-aligned/sponsored group has made a UEFI rootkit targeting ASUS/Gigabyte mainboards: https://www.spiceworks.com/it-security/vulnerability-managem... Why not require/compel certain companies to implement backdoors directly?

    • Shadowmist 17 hours ago

      I bought 3 Minisforum machines for a Kubernetes cluster and they didn't make it 11 months. They weren't even powered on most of that time. They just completely freeze with a black screen, and randomly enough that every time I think I've figured out a fix, it just crashes again a day later.

      • d3Xt3r 9 hours ago

        No issues with my UM 780 XTX; it's been running for about two years as a homelab, running k3s and a bunch of random VMs.

        Are you sure you didn't buy an Intel one by any chance? Because Intel is garbage.

      • laurencerowe 15 hours ago

        My Minisforum M780 XTX has been rock solid for 20 months now. Bought it as a remote development box since I needed more RAM than my MacBook Air and didn't feel like shelling out $3K for a 64GB MacBook Pro. Generally prefer the remote development experience since it means the laptop stays cool, just a pain not being able to work at a cafe now and then.

  • laweijfmvo 21 hours ago

    Probably doesn't make sense as a home server unless you need the massive compute. I have a couple of Lenovo mini PCs (m75q, various generations, AMD) that I paid a total of $500 for on eBay. They're so easy to find and they handle most tasks swimmingly.

    • Arn_Thor 18 hours ago

      What kind of tasks do you use it for, if I may ask? Does it include local LLM/AI?

  • oblio 21 hours ago

    How quiet is the Minisforum?

    • porphyra 18 hours ago

      I'm not sure, but you could always just buy the Minisforum BD790i X3D [1]. Then, you'd be able to choose your own fan and case, and you can make it arbitrarily quiet by selecting a good fan. Early BD790i boards had a bug where losing power causes it to reset all BIOS settings, but they fixed that in later batches. I wonder when they will come out with a 9955HX version. Another good thing about this board is that it has two PCIe 5 NVME SSD slots with active cooling, which is a lot better than most other mini ITX boards out there, including the Framework.

      [1] https://store.minisforum.com/products/minisforum-bd790i-x3d

      • bketelsen 14 hours ago

        I did exactly this, put it in a 2U case. Fantastic performance, but even with the best Noctua I could put on it the CPU fan sounds like a Hawker Harrier doing VTOL when it's under load. Don't regret the board, but wish my rack was in another room now.

Kirth a day ago

I was baffled by the comparison to the M4 Max. Does this mean that recent AMD chips will be performing at the same level, and what does that mean for on-device LLMs? .. or am I misunderstanding this whole ordeal?

  • izacus a day ago

    Yes, the Strix series from AMD uses a similar architecture to the M series, with massive memory bandwidth and big caches.

    That results in significantly better performance.

    • sidewndr46 a day ago

      Isn't this the desktop architecture that Torvalds suggested years ago?

      • numpad0 6 hours ago

        A faster and bigger SRAM cache is as complicated a solution as adding moar boosters to your rocket. It works, but it's expensive. The RP2040 spends ~8x more die space on its RAM than on its dual CPU.

      • izacus 17 hours ago

        I don't know, but it's primarily very expensive to manufacture and hard to make expandable. You can see people raging about soldered RAM in this thread.

        There's always tradeoffs and people propose many things. Selling those things as a product is another game entirely.

      • JonChesterfield 17 hours ago

        It basically looks like a games console. It's not a conceptually difficult architecture: "what if the GPU and the CPU had the same memory?" Good things indeed.

      • yonisto 13 hours ago

        This is how the Amiga worked 40 years ago...

    • schmorptron a day ago

      Will we be able to get similar bandwidth from socketed RAM with CAMM / LPCAMM modules in the near future?

      • topspin a day ago

        Maybe, but due to the physics of signal integrity, socketed RAM will always be slower than RAM integrated onto the same PCB as whatever processing element is using it, so by the time CAMM / LPCAMM catches up, some newer integrated RAM solution will be faster yet.

        This is a matter of physics. It can't be "fixed." Signal integrity is why classic GPU cards have GiBs of integrated RAM chips: GPUs with non-upgradeable RAM that people have been happily buying for years now.

        Today, the RAM requirements of GPUs and their applications have become so large that the extra, low-cost, slow, socketed RAM is now a false economy. Naturally, therefore, it's being eliminated as PCs evolve into big GPUs, with one flavor or another of traditional ISA processing elements attached.

        • kbolino 21 hours ago

          Is the problem truly down to physics or is it down to the stovepiped and conservative attitudes of PC part manufacturers and their trade groups like JEDEC? (Not that consumers don't play a role here too).

          The only essential part of sockets vs solder is the metal-metal contacts. The size of the modules and the distance from the CPU/GPU are all adjustable parameters if the will exists to change them.

          • jmb99 12 hours ago

            > The only essential part of sockets vs solder is the metal-metal contacts.

            Yeah... And that’s a pretty damn big difference. A connector is always going to result in worse signal integrity than a high-quality solder joint in the real world.

            • kbolino 11 hours ago

              Is that really the long pole in the tent, though?

              No doubt the most tightly integrated package can outperform a looser collection of components. But if we could shorten the distances, tighten the tolerances, and have the IC companies work on improving the whole landscape instead of just narrow, disjointed pieces slowly one at a time, then would the unsoldered connections still cause a massive performance loss or just a minor one?

              • numpad0 6 hours ago

                Yes. Signal integrity is so finicky at the frequencies DRAM operates at that it starts to matter whether you drill the plated holes that complete the circuit all the way through the board or stop them halfway, because the signal permeates into the stubs of the holes and reflects back into the trace, causing interference. Adding a connector between RAM and CPU is like extending that long pole in the tent by inserting a stack of elephants into what is already shaped like an engine crankshaft found in a crashed wreck of a car.

                Besides, no one strictly needs mid-life-upgradable RAM. You just want to be able to upgrade RAM after purchase because it's cheaper upfront and because it leaves less room for supply-side price gouging. Those aren't technical reasons you couldn't option 2TB of RAM at purchase and be done for 10 years.

                • kbolino an hour ago

                  In the past, at least, RAM upgrades weren't just about filling in the slots you couldn't afford to fill on day one. RAM modules also got denser and faster over time too. This meant you could add more and better RAM to your system after waiting a couple years than it was even physically possible to install upfront.

                  Part of the reason I have doubts about the physical necessity here is because PCI Express (x16) is roughly keeping up with GDDR in terms of bandwidth. Of course they are not completely apples-to-apples comparable, but it proves at least that it's possible to have a high-bandwidth unsoldered interface. I will admit though that what I can find indicates that signal integrity is the biggest issue each new generation of PCIe has to overcome.

                  It's possible that the best solution for discrete PC components will be to move what we today call RAM onto the CPU package (which is also very likely to become a CPU+GPU package) and then keep PCIe x16 around to provide another tier of fast but upgradeable storage.

          • topspin 14 hours ago

            > Is the problem truly down to physics

            Yes. The "conservative attitudes" of JEDEC et al. are a consequence of physics and of the capabilities of every party involved in dealing with it, from the RAM chip fabricators and PCB manufacturers all the way to you, the consumer, and the price you're willing to pay for motherboards, power supplies, memory controllers, and the yield costs incurred in building all of this stuff. The goal is that you can sort by price, mail-order some likely untested combination of affordable components, and stick them together with a fair chance that it will all "work" within the power consumption envelope, thermal envelope, and failure rate you're likely to tolerate. Every iteration of the standards is another attempt to strike the right balance all the way up and down this chain, and at the root of everything is the physics of signal integrity, power consumption, thermals, and component reliability.

            • kbolino 12 hours ago

              As I said, consumers play a part here too. But I don't see the causal line from the physics to the stagnation, stovepiping, artificial market segmentation, and cartelization we see in the computer component industries.

              Soldering RAM has always been around and it has its benefits. I'm not convinced of its necessity however. We're just now getting a new memory socket form factor but the need was emerging a decade ago.

          • throw-qqqqq 20 hours ago

            > The only essential part of sockets vs solder is the metal-metal contacts

            And at GHz speeds that matters more than you may think.

        • cge 21 hours ago

          It's possible that Apple really did a disservice to soldered RAM by making it a key profit-increasing option for them, exploiting buyers' inability to buy RAM elsewhere or upgrade later, and in turn making soldered RAM seem like a scam, when it does have fundamental advantages, as you point out.

          Going from 64 GB to 128 GB of soldered RAM on the Framework Desktop costs €470, which doesn’t seem that much more expensive than fast socketed RAM. Going from 64 GB to 128 GB on a Mac Studio costs €1000.

          • topspin 12 hours ago

            Ask yourself this: what is the correct markup for delivering this nearly four years before everyone else? Because that's what Apple did, and why customers have been eagerly paying the cost.

            Let us all know when you've computed that answer. I'll be interested, because I have no idea how to go about it.

        • gautamcgoel 18 hours ago

          How much higher bandwidth, percentage wise, can one expect from integrated DRAM vs socketed DRAM? 10%?

          • wtallis 18 hours ago

            Intel's Arrow Lake platform, launched in fall 2024, is the first to support CUDIMMs (a clock redriver on each memory module) and as a result is the first desktop platform to officially support 6400MT/s without overclocking (albeit only reaching that speed with single-rank modules and one module per channel). Apple's M1 Pro and M1 Max processors, launched in fall 2021, used 6400MT/s LPDDR5.

            Intel's Lunar Lake low-power laptop processors launched in fall 2024 use on-package LPDDR5x running at 8533MT/s, as do Apple's M4 Pro and M4 Max.

            So at the moment, soldered DRAM offers 33% more bandwidth for the same bus width, and is the only way to get more than a 128-bit bus width in anything smaller than a desktop workstation.

            Smartphones are already moving beyond 9600MT/s for their RAM, in part because they typically only use a 64-bit bus. GPUs are at 30000MT/s with GDDR7 memory.
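
            The bandwidth behind those numbers is just transfers times bus width; a small sketch using the figures above:

              def bw_gb_s(mt_s, bus_bits):
                  return mt_s * bus_bits / 8 / 1000   # GB/s

              print(bw_gb_s(6400, 128))   # 102.4 - desktop dual-channel DDR5
              print(bw_gb_s(8533, 128))   # 136.5 - Lunar Lake LPDDR5x
              print(bw_gb_s(8533, 256))   # 273.1 - a 256-bit bus like the M4 Pro's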

        • cptskippy 18 hours ago

          Perhaps it's time to introduce L4 Cache and a new Slot CPU design where RAM/L4 is incorporated into the CPU package? The original Slot CPUs that Intel and AMD released in the late 90s were to address similar issues with L2 cache.

  • cdavid a day ago

    I was surprised by the earlier comparison on the omarchy website, because Apple M* chips work really well for data science work that doesn't require a GPU.

    It may be explained by integer vs float performance, though I am too lazy to investigate. A weak data point, using the matrix product of an N=6000 matrix with itself in numpy:

      - SER 8 8745, linux: 280 ms -> 1.53 Tflops (single prec)
      - my m2 macbook air: ~180 ms -> ~2.4 Tflops (single prec)
    
    This is 2 mins of benchmarking on the computers I have. It is a bit of an apples-to-oranges comparison (e.g. I use the default numpy BLAS on each platform), but not completely irrelevant to what people will do without much effort. And floating point is what matters for LLMs, not integer computation (which is most likely what the ruby test suite is bottlenecked by)
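
    For anyone who wants to reproduce it, a minimal version of that benchmark (same idea: time a float32 N=6000 matmul and count 2*N^3 flops):

      import time
      import numpy as np

      N = 6000
      a = np.random.rand(N, N).astype(np.float32)
      a @ a                                    # warm-up run
      t0 = time.perf_counter()
      runs = 5
      for _ in range(runs):
          a @ a
      dt = (time.perf_counter() - t0) / runs
      print(f"{dt*1e3:.0f} ms -> {2 * N**3 / dt / 1e12:.2f} Tflops (single prec)")
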
    • Tuna-Fish a day ago

      It's all about the memory bandwidth.

      Apple M chips are slower at computation than AMD chips, but they have soldered, on-package fast RAM with a wide memory interface, which is very useful for workloads that handle lots of data.

      Strix Halo has a 256-bit LPDDR5X interface, twice as wide as the typical desktop chip's, roughly equal to the M4 Pro's and half that of the M4 Max's.

    • jychang a day ago

      You're most likely bottlenecked by memory bandwidth for a LLM.

      The AMD AI MAX 395+ gives you 256GB/sec. The M4 gives you 120GB/s, and the M4 Pro gives you 273GB/s. The M4 Max: 410GB/s (14‑core CPU/32‑core GPU) or 546GB/s (16‑core CPU/40‑core GPU).

      • zargon 20 hours ago

        It’s both. If you’re using any real amount of context, you need compute too.

      • cdavid a day ago

        Yeah, memory bandwidth is often the limitation for floating point operations.

  • Aurornis 17 hours ago

    An M4 Max has double the memory bandwidth and should run away with similarly optimized benchmarks.

    An M4 Pro is the more appropriate comparison. I don't know why he's doing price comparisons to a Mac Studio when you can get a 64GB M4 Pro Mac Mini (the closest price/performance comparison point) for much less.

    • dismalaf 17 hours ago

      > don't know why he's doing price comparisons to a Mac Studio when you can get a 64GB M4 Pro Mac Mini (the closest price/performance comparison point) for much less.

      Where?

      An M4 Pro Mac Mini is priced higher than the Framework here in Canada...

  • biehl a day ago

    I think DHH compares them because they are both the latest top-line chips. I think DHH's benchmarks show that they have different performance characteristics. But DHH's favorite benchmark favors whatever runs native Linux and Docker.

    For local LLMs, the higher memory bandwidth of the M4 Max makes it much more performant.

    Ars Technica has more benchmarks for non-LLM things: https://arstechnica.com/gadgets/2025/08/review-framework-des...

    • rr808 a day ago

      After the App Store fight, DHH's favorite is whatever is not Apple, lol. TBF it just opened his eyes to alternatives; now he's happy off that platform.

      • rramon a day ago

        How long until he clashes with the GPL and discovers the BSDs?

        • dismalaf 21 hours ago

          Why would that happen? The GPL doesn't conflict at all with anything 37Signals does nor the Rails ecosystem...

          • rramon 6 hours ago

          Not now, but after listening to podcasts with him I think he's someone who would tackle hard stuff like drivers or DSP (so-called math-genius-level coding) as soon as it becomes more accessible to him through AI-assisted coding.

          There is a chance to build a real macOS/iOS alternative without a JVM abstraction layer on top like Android has. The reason it hasn't happened yet is the GPL firewall around the Linux kernel, imo.

  • discordance a day ago

    Not in perf/watt but perf, yes.

    • jchw a day ago

      Depends on the benchmark, I think. In this case it's probably close. Apple is cagey when it comes to power draw and clock metrics, but I believe the M4 Max has been seen drawing around 50W in loaded scenarios. Meanwhile, Phoronix clocked the 395+ at an average draw of 91 watts during their benchmarks. If the performance is ~twice as fast, that works out to similar performance per watt. Needless to say, it's at least not the dramatic difference it was when the M1 came out.

      edit: Though the M4 Max may be more power hungry than I'm giving it credit for; it's hard to say because I can't figure out whether some of these power draw figures from random Internet posts actually isolate the M4 itself. It looks like it goes much, much higher when the GPU is loaded.

      https://old.reddit.com/r/macbookpro/comments/1hkhtpp/m4_max_...

  • ekianjo 15 hours ago

    Macs have faster memory access, so no, Macs are faster for LLMs.

  • pengaru a day ago

    It's not baffling once you realize TSMC is the main defining factor for all these chips; Apple Silicon is simply not that special in the grand scheme of things.

    Why do you think TSMC's production being in Taiwan is basically a national security issue for the U.S. at this point?

    • toasterlovin a day ago

      > Apple Silicon is simply not that special in the grand scheme of things

      Apple Silicon might not be that special from an architecture perspective (although treating integrated GPUs as appropriate for workloads other than low end laptops was a break with industry trends), but it’s very special from an economic perspective. The Apple Silicon unit volumes from iPhones have financed TSMC’s rise to semiconductor process dominance and, it would appear, permanently dethroned Intel.

      • MegaDeKay 20 hours ago

        Apple was just the highest bidder for getting the latest TSMC process. They wouldn't have had a problem getting other customers to buy up that capacity. And Intel's missteps counted for a substantial part of the process dominance you refer to. So I'd argue that Apple isn't that special here either.

        • toasterlovin 10 hours ago

          Until Apple forced other chip makers to respond, nobody else was making high end phone processors. And their A series processors are competitive with and have transistor counts comparable to most mobile and desktop PC processors (and have for years). So the alternate reality where Apple isn't a TSMC customer means that TSMC is no longer manufacturing several hundred million high transistor count processors per year. In my opinion, it’s pretty likely TSMC isn’t able to achieve or maintain process dominance without that.

          Update: To give an idea of the scales involved here, Apple had iPhone revenue in 2024 of about $200B. At an average selling price of $1k, we get 200 million units. That's a ballpark estimate; they don't release unit volumes, AFAIK. This link from IDC[1] has the global PC market in 2024 at about 267 million units. Apple also has iPads and Macs, so their unit processor volume is roughly comparable to the entire PC market. But, and this is hugely important: every single processor that Apple ships is comparable in performance (and, thus, transistor counts) to high end PC processors. So their transistor volume probably exceeds the entire PC CPU market. And the majority of it is fabbed on TSMC's leading process node in any given year.

          [1]: https://my.idc.com/getdoc.jsp?containerId=prUS53061925

    • ozgrakkurt 21 hours ago

      I don't think there is a laptop that comes close to the battery life, or the performance while on battery, of the M1 MacBook Pro.

      I hate Apple, but there is obviously something special about it.

      • alternatex 19 hours ago

        I'm pretty sure many of the Windows laptops with the Qualcomm Snapdragon Elite chip have the same or better battery life and comparable performance in a similar form factor. There are many videos online of comparisons.

jamesgill 2 days ago

I like Framework and own one of their laptops. But the desktop seems more a triumph of gimmicky marketing than a desktop that's meaningfully different. And, it seems significantly overpriced.

  • rs186 a day ago

    If you can't find a sufficiently similar alternative at a much better price, it is not overpriced.

    • baby_souffle a day ago

      I am very on board with the framework mission. I can afford the premium just to keep their lights on and doors open. The other Chinese OEMs almost certainly won’t offer quite the support for that ~10% discount…

    • mrbluecoat a day ago

      I guess the original Raspberry Pi team missed the memo on that.

    • esafak a day ago

      For the purposes of running LLM models, a Mac Mini. The PC is cheaper, but it doesn't have MacOS, Apple's service or resale value.

      • jchw a day ago

        Actually the pricing is pretty similar.

        Framework Desktop price with default selections, 32GB of RAM, 500 GB storage: $1,242.00 USD

        Mac Mini with 32GB of RAM, 512 GB storage: $1,199.00

        Post changed a bit since I started replying, so:

        > For the purposes of running LLM models, a Mac Mini

        The M4 Max is the one that actually gives you a shit load of memory bandwidth. If you just get a normal M4 it's not going to be especially good at that.

        > it doesn't have MacOS

        The Mac can't run Windows, which is used by ~75% of all desktop computer users and the main operating system that video games target. I'd say that would be the bigger problem for many.

        > Apple's service

        What advantage does that get you over Framework's service?

        > resale value

        Framework resale value has proven to be excellent by the way. Go to eBay, search "Framework Laptop", go to "Sold Items". Many SKUs seem to be retaining most of their value.

        (Nevermind the ease of repair for the Framework, or the superior expandability. If you want to expand the disk space on an M4 you need to get sketchy parts, possibly solder things, and rescue your Mac with another Mac. For framework devices you plug in another M.2 card.)

        • m_mueller 10 hours ago

          Macs can run Windows just fine through Parallels. It's more efficient at doing so than most ARM-based Windows machines on sale. And I found software compatibility with Windows 11 for ARM to be a non-issue nowadays.

          • jchw 8 hours ago

            I can't tell you how much I disagree with this take.

            Microsoft's AMD64 emulator is slow and buggy compared to Rosetta, and you will need it a lot more, too. Many apps will need to rely on this, including programs many users will immediately try to use, like Visual Studio. Neither Visual Studio nor its compilers support running on an ARM host; it does seem to basically work, but is slow, which is not good considering Visual Studio is already not particularly fast. It will even display a message box warning you that it is not supported during setup, so you couldn't miss it (note that this applies to the Visual Studio compilers needed by Python and Node for installing packages with C extensions). MSys2 gave me a lot of trouble, too; the setup just doesn't seem to work on ARM64. Chocolatey often installs AMD64 or x86 binaries on ARM instead of native ones; sometimes native ones don't exist anyways. Third party thing that needs to load a kernel module? Don't bet on there being an ARM64 build of it; sometimes there is, sometimes there isn't. WinFSP has a build, many hardware drivers I've looked at don't seem to (don't laugh: you can pass through USB devices, there is sense in installing actual hardware drivers.) I just set up a fresh copy of Parallels on an M3 Mac a couple months ago, I'm stopping now to be terse, this paragraph could easily be much longer. It would suffice to say that basic Windows software usage is a lot worse on Parallels than a native Windows AMD64 machine. Very useful, sure. At parity, oh no. Not close.

            That's just the basics though. For GPU, Parallels does do some D3D11 translation which is (a lot!) better than nothing, but you are not getting native performance, you are not getting Vulkan support, and you are certainly not getting ROCm or CUDA or anything equivalent, so actually a lot of apps that practically need GPU acceleration are not going to be usable anyways. Even if a video game would run, anti-piracy and anti-cheat measures in lots of modern games detect and block users using VM software, not that you can expect that all of the games you want to run would even work anyways on ARM; plenty of games are known to be unstable and some don't work at all. There are other side effects of trying to do actual gaming in VMs in general, but I really think this gets the point across: Windows games and multimedia are significantly worse in Parallels than on a native machine.

            Parallels filesystem bridging is impractical: it's not fast enough and it is buggy, i.e. running Bazel on bridged files will not work. This means you need to copy stuff back and forth basically all the time if you want to work on stuff natively but then test on Windows. Maybe this is partly Windows' fault, but in any case it would suffice to say that workflows that involve Windows will be a lot clunkier than they would be on a native Windows machine.

            I think these conclusions, that a native Windows machine would be a lot better for doing Windows things than a Mac running Parallels, is actually pretty obvious and self-evident, but reading what you said might literally give someone the opposite impression, that there is little reason to possibly want to run Windows. This is just misleading. Parallels is a great option as a last resort or to fill a gap, but if you have anything that regularly requires you to use Windows software or test on Windows, Parallels is not a very serious option. It may be cheaper than two machines in fiat currency, but probably not in sanity.

            I don't know you, so I can't and won't, based on a single post, accuse you of being a fanboy. However, this genre of retort is a serious issue with fanboyism. It's easy to say "Windows? just use VMs!", but that's because for some people, actually just using Windows is probably not a serious option they would consider anyways; if the VM didn't work for a use case they'd back up and reconsider almost anything else before they reconsider their choice of OS or hardware vendor, but they probably barely need (if at all) a VM with Windows anyways. If this feels like a personal attack, I'd like to clarify that I am describing myself. I will not use Windows. I don't have a bare metal Windows machine in my house, and I do my best to limit Windows usage in VMs, too. Hell, I will basically only use macOS under duress these days, I'm not a fan of the direction it has gone either.

            Still, I do not go around telling people that they should just go switch to Linux if they don't like Windows, and that Virtualbox or Wine will solve all of their problems, because that's probably not true and it's downright dishonest when deep down I know how well the experience will go. The honest and respectful thing to tell people about Linux is that it will suck, some of the random hardware you use might not work, some of your software won't work well under Wine or VMs, and you might spend more time troubleshooting. If they're still interested even after proper cautioning, chances are they'll actually go through with it and figure it out: people do not need to be sold a romantic vision, if anything they need the opposite, because they may struggle to foresee what kinds of problems they might run into. Telling people that virtual machines are a magic solution and you don't have to care about software compatibility is insane, and I say that with full awareness that Parallels is better than many of the other options in terms of user friendliness and out of the box capabilities.

            I think the same thing is fair to do for macOS. With macOS there is the advantage that the entire experience is nicer as long as everything you want to do fits nicely into Apple's opinionated playbook and you buy into Apple's ecosystem, but I rarely hear people actually mention those latter caveats. I rarely hear people mention that, oh yeah, a lot of cool features only work because I use Apple hardware and services throughout my entire life, and your Android phone might not work as well, especially not with those expensive headphones I think you should get for your Mac. Fanboys of things have a dastardly way of only showing people the compelling side of things and leaving out the caveats. I don't appreciate this, and I think it ultimately has an overtone of thinking you know what someone wants better than they do. If someone is really going to be interested in living the Mac life, they don't need to be misled to be compelled.

        • wtallis 21 hours ago

          > Mac Mini with 32GB of RAM, 512 GB storage: $1,199.00

          You're looking at the wrong Mac Mini. The model with the M4 Pro is the right comparison, on account of also having a 256-bit memory bus giving substantially higher bandwidth than a typical desktop computer. The M4 Pro model doesn't have a 32GB option.

          The M4 Max (not available in a Mac Mini) has an even larger memory bus, giving it far more bandwidth than either the M4 Pro or the AMD Strix Halo part used by Framework.

          • jchw 20 hours ago

            I was just going for a head-to-head comparison, that's the closest you can get in price/performance. The closest M4 Pro Mac Mini is already a lot more expensive than the baseline Framework Desktop.

            The Framework Desktop Max+ 395 with 128 GB of RAM, and a 500 GB SSD costs around $2,147.00 USD before tax. The M4 Pro with the 20-core GPU, 64 GB of RAM, and a 512 GB SSD costs around $2,199.00 USD. That's still short 64 GB of RAM, of course.*

            The lowest-end M4 Max Mac Studio that can support 128 GB of RAM seems to cost $3,499.00 with 128 GB of RAM and a 512 GB SSD. For that you get 546GB/s of maximum memory bandwidth according to Apple, which is definitely a step up from the 256GB/s maximum for the Ryzen AI Max+ 395, but obviously also at a price that is quite higher too.

            Apparently though, 128 GB of RAM is currently the ceiling for the M4 Max right now. So it seems like if you were going for a maximum performance local AI cluster at any price, the M3 Ultra Mac Studios are definitely in the running, though at that point it probably is starting to get to the price where AMD and NVIDIA's data center GPUs start to enter the picture, and AMD Instinct cards measure memory bandwidth in terabytes per second.

            * Regarding GPUs: The Framework Desktop Max+ 395 Radeon 8060S seems to be vastly faster than all of the non-Max M4 SKUs, for anyone that cares a lot about GPU performance. The M4 Max seems to outperform the 8060S by a bit though, and obviously it has some stand-out features like a shit load of video encoding/decoding hardware. This complicates the value comparison a lot. The Radeon core definitely gets a much better value for the performance in any case. I'm really impressed by what they managed to do there.

      • pimeys a day ago

        I count not needing to use macOS as a big plus. Full Linux support out of the box.

      • croes 21 hours ago

        > but it doesn't have MacOS, Apple's service or resale value.

        If the purpose is running LLMs, none of that matters.

        But Linux support is an advantage. Does the M4 have that?

        • esafak 21 hours ago

          Why doesn't it matter? Does your computer magically stop needing to be serviced or eventually sold because you're running LLMs?

          I run Linux containers all the time.

          • croes 19 hours ago

            None of my computers ever needed service.

            The LLM point is that Linux is better suited for most AI tools and their toolchains

      • dismalaf 21 hours ago

        The M4 has half the memory bandwidth of the 395+ and the specs on those models are absolute trash. To get an M4 Pro APU and decent specs you're spending at least as much as the Framework, at least here in Canada.

  • zozbot234 2 days ago

    It's taking a newly released mobile- and mini-PC-focused platform that's usually paired with proprietary technology, and building something that's as close as possible to a standard desktop with it. Seems very much in the Framework spirit once you account for that side of it.

    • mschild a day ago

      Right, but why go with mobile at all? I get the laptops.

      For desktops you already have thousands of choices, though, and repairability, assuming it's not some proprietary Dell/HP desktop, is already as good as it gets without breaking out your soldering iron.

      That said, they'll know more about the market demand than I do and another option won't hurt :)

      • MindSpunk a day ago

        The specific chip powering the Framework Desktop is something very unique in the PC landscape in general, even in desktop. The Strix Halo chip pairs a 16-core CPU with a huge iGPU that performs like a discrete desktop GPU, and 128GB of RAM (accessible to the GPU).

        Strix Halo is almost like having a PS5 or Xbox chip but available for the PC ecosystem. It's a super interesting and unique part for GPU compute, AI, or small form factor gaming.

      • signal11 a day ago

        Quiet desktop PCs with good thermals have been getting increased interest — not everyone needs a tower, for some a Mac Mini-like device would work great, but not everyone wants to get into the Apple ecosystem for various reasons.

        Of course this PC is interesting in that it’s more “workstation class” and I’m not sure how much thermals matter there, but maybe this is an iteration towards a Mac Studio like device.

      • zozbot234 a day ago

        > Right, but why go with mobile at all? I get the laptops.

        Pair a power-efficient mobile chip with a mini-desktop form factor and a good (i.e. probably overengineered, to some extent) cooling solution, and it will give you a kind of sustained performance and reliability over time that you just aren't going to get from the average consumer/enthusiast desktop chip. Great for workstation-like use cases that still don't quite need the raw performance and official support you'd get from a real, honest-to-goodness HEDT.

        • extraisland 5 hours ago

          The most reliable computers I have are the "consumer/enthusiasts" computers I have.

      • timc3 a day ago

        Because it's using a Strix Halo APU, which to some is kinda interesting, and to others is all they need for some time.

      • wiseowise a day ago

        Supporting OSS and repairable hardware?

        • mtzet a day ago

          A normal desktop with non-soldered components is more repairable, cheaper and can also run on stock Linux?

          The only selling point is the form factor and a massive amount of GPU memory, but a dGPU offers more raw compute.

          • bsimpson a day ago

            AMD APUs can run stock Linux.

            All those SteamOS handhelds are on AMD.

          • wiseowise a day ago

            > with non-soldered components is more repairable

            This is literally the limitation of the platform. Why even bring that up? Framework took a part made by AMD and put in their devices.

        • mschild a day ago

          OSS is fair.

          From the product page I don't see how that mainboard is more repairable than a typical ITX one though. As far as I can tell, you also cannot change the CPU on it so even less than a typical desktop mainboard.

          • wiseowise a day ago

            By buying their devices you directly support company and mission that they're on. I'm not a diehard OSS supporter (Mac user here), but I consider buying The Framework Desktop just to support the company over, say, Dell or HP.

            • baby_souffle a day ago

              > but I consider buying The Framework Desktop just to support the company over, say, Dell or HP.

              Exactly. Between those three companies, only one of them is likely to even try to make something like core boot possible on this machine. That’s something I can afford to encourage.

  • komali2 a day ago

    I'm realizing that I may have misunderstood Framework's market. I thought it was tinkerers and environmentally conscious FOSS nerds like me, but I think there may be a huge enterprise segment whose employees in charge of purchasing are like me but answer to much stricter business needs than "Isn't it cool that it comes with a screwdriver in the box?" So for example the underpowered CPU in the FW12 made no sense to me until I found out that it's also designed for mass purchases by schools and for being flung around by angsty teens. The desktop seems to be meant to be strapped to the underside of 40 identical cubicles in an office as much as it's meant to be hauled around by people that want to have CS:GO LAN parties.

    • zozbot234 a day ago

      > So for example the underpowered cpu in the fw12 makes no sense to me until I found out that it's also designed for mass purchases by schools and designed to be flung around by angsty teens.

      I think that might be overstating it a bit. Real "rugged" laptops do exist, and would be quite at home in that kind of use (well, usually you'd worry a lot more about how kids in primary school will treat your hardware than teenagers) but the Framework 12 is not one.

      • FLHerne a day ago

        Real "rugged" laptops are far too expensive for schools to buy by the dozen. Also, while robust against the environment they're not so much against deliberate vandalism or theft. The target market for those seems to be construction/industrial and similar, and of course the military.

        All school laptop fleets I've seen are simply the cheapest thing they can buy in bulk, when it breaks provision a new one.

fmajid 2 days ago

I cancelled my Framework Desktop order and ordered an HP Z2 Mini G1a instead, the goal being to replace my Mac Studio, as I've had it with Apple's arrogance and lousy software quality. The HP is much smaller and has ECC RAM and 10G Ethernet. Significantly more expensive, however.

  • DrBenCarson 2 days ago

    Apple have certainly taken a couple steps back re: overall reliability, but if you think that the grass is greener on the other side…pray tell how that goes

    Plus, you can now deploy [MLX projects on CUDA](https://github.com/ml-explore/mlx/pull/1983)

    • paxys a day ago

      Even if the grass is the same on the other side a 50% discount for the same performance doesn’t seem too bad.

  • mixmastamyk 2 days ago

    Was scoffing at HP, but then you got my attention with ECC RAM. Looks nice as well.

    • sliken a day ago

      Keep scoffing, it's not real end-to-end ECC, just "link" ECC, which only protects data in transit on the chip -> CPU link.

      So it's not full ECC like servers have, with DIMMs using a multiple of 9 chips and ECC protecting everything from the DIMMs to the CPU.

      Keep in mind the RAM is inside the Strix Halo package, not something HP has control over.

      • wtallis 21 hours ago

        > Keep in mind the ram is inside the strix halo package, not something HP has control over.

        It's not in the package, it's on the motherboard spread around the SoC package: https://www.hp.com/content/dam/sites/worldwide/personal-comp...

        The 8 DRAM packages pretty clearly indicate you're not getting the extra capacity for end-to-end ECC as you would on a typical workstation or server memory module.

      • Marsymars a day ago

        Wait is there any actual difference in the RAM between the HP and the Framework Desktop?

        • justincormack a day ago

          No, I don't think so. All DDR5 has some on-die "ECC", but not full ECC, i.e. you can't see corrections. And all the AI Max machines have the same LPDDR5.
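
          (Aside, a hedged way to check on Linux whether visible, reportable ECC is present at all: the EDAC counters only show up when the memory controller actually reports corrections, which on-die or link-only ECC doesn't. Tool names assume the rasdaemon package is installed.)

            ras-mc-ctl --status       # reports whether EDAC drivers are loaded
            ras-mc-ctl --error-count  # per-DIMM corrected/uncorrected counts, if supported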

          • geerlingguy 15 hours ago

            Many mobile devices have on-chip ECC because they have to for signal integrity reasons... technically the Raspberry Pi 5 has ECC too, by that definition!

  • snvzz 12 hours ago

    Lack of ECC is what prevents the Framework Desktop from being anything other than an expensive toy.

    • bpye 9 hours ago

      Apple isn't using ECC either. If your only aim is to run LLMs how much do you actually care about ECC?

      • snvzz 6 hours ago

        >Apple isn't using ECC either.

        And thus their machines are expensive toys as well.

        (I type this on a system I built, a 9800x3d with 96GB (2x 48GB) ECC RAM.)

        >If your only aim is to run LLMs how much do you actually care about ECC?

        That is not me, nor do I know anyone who exclusively uses computers to run LLMs.

  • jeffbee 14 hours ago

    It is also, unfortunately, insanely loud.

  • worthless-trash a day ago

    Do you run Linux on this? If so, does everything work?

    • fmajid 21 hours ago

      Haven't received it yet, but of course I will install Linux on it.

      • worthless-trash 2 hours ago

        Please respond to this with a link to your quick synopsis.

ilaksh a day ago

How does that Framework Desktop compare with the "GMKtec AI Mini Ryzen AI Max+ 395 128GB" mini PC?

I suspect this one is very similar hardware and a slightly better deal if you give up the cool factor of Framework. Although I don't really know.

Anyone compared them head-to-head?

  • Fnoord a day ago

    Hehe, that is just a Chinese company, isn't it? Say bye-bye to warranty, support, and repairability. I am not saying you shouldn't consider it, but describing Framework as mere cool factor is the other extreme.

    • qingcharles 19 hours ago

      I've been turned off GMKtec since I was helping a buddy fresh-install Windows and found their drivers are in a Google Drive folder they don't pay for, which hits its download quota regularly, so you have to go back day after day, play roulette, and hope you get in at the right time. And the drivers are literally nowhere else that I could find, even with driver search tools.

    • makeitdouble a day ago

      For nuance:

      - Framework only sells to specific countries. Warranty won't even be an issue if you can't buy one in the first place.

      - Chinese manufacturers offer support and warranty. In particular GMKtec does[0].

      - Repairability will be at best on par with Framework, but better than a random Western brand. Think HP and their repairability track record.

      "just a Chinese company" feels pretty weird as a qualifier in this day and age, when Chinese companies are on average ahead of the curve.

      [0] https://de.gmktec.com/en/pages/ruckgabe-umtausch

      • lucianbr a day ago

        Feels like in terms of warranty, support, repairability, it's not so much that Chinese brands have advanced, but that the west has seriously regressed. Every company now is looking to lock me out of my own hardware, extract as much information about me, extract as much value possible by degrading support and compatibility and whatever else they can...

        Maybe when we run out of reasons to buy american or european or japanese they will wake up, but I don't see it.

        • cmrdporcupine 20 hours ago

          Doubling down on mediocrity and then protecting it with tariffs is the American way now it seems.

          For me as a Canadian, Framework being an American company was always a problem because their shipping and availability to here were actually inferior to overseas suppliers despite being on the same continent (this is frankly often the case because of American business arrogance and blindness to the Canadian market -- things ship faster from Europe than from American suppliers for me, on the whole).

          But now with the idiotic trade war talk, it's even worse since I'm likely to be hit by a retaliatory tariff situation.

          Some day hopefully all this dollar store economic nationalism will blow over, but in the meantime it's too bad Framework has a good product and isn't European or Asian, because I won't buy it now.

      • Fnoord a day ago

        My post is the nuance, and as a buyer/user of GPD, Minisforum, and Xiaomi products I respectfully disagree.

        Chinese companies are not on par with Western ones. The QA, safety measures, hazard compliance, warranty, and even proper English (they use an online translator service) aren't there. Cha bu duo ("close enough") is an accurate description of Chinese products.

        From the link you sent, they offer a 7-day return policy. In the EU you get 2 weeks, legally enforced. Companies like Amazon offer even a month. Then they have a restocking fee of 15%. This is AFAIK allowed (if proportional to the damage to the product), but it does not seem proportional. Companies like Amazon don't do this. And Amazon isn't great; they have a lot of cheap Chinese dropshipping brands. Then they often lie in China as well: they claim leather, and when you buy it, it is fake leather.

        Cha bu duo can be good enough if you are on a tight budget, or if the product isn't available otherwise (how I ended up with a GPD Pocket 2 back around 2018). But I have personally witnessed how Xiaomi smartphones fucked up a mid-sized non-profit that dealt with very sensitive personal details. They went for budget, and ended up with a support nightmare and something that shouldn't pass as GDPR compliant. Cause yeah, spyware and bloatware are another issue.

        Furthermore, Framework sells to Western countries.

        • throw554552 20 hours ago

          > Cha bu duo is an accurate description of Chinese products.

          Didn’t realize companies like DJI, BYD, CATL, Insta360, and Anker have a fail fast, fail early mentality.

        • makeitdouble 15 hours ago

          > proper English (they use an online translator service)

          Pet peeve: why should I care?

          People take grammatical errors as some ultimate gotcha and indicator of character flaw. I don't pay for the marketing, nor do I value that they asked 3 translators to double-check each other to produce flawless text for the damn first-boot guide of a computer.

          I see it the same as Framework's shaky YouTube presentations: they couldn't be bothered to hire a cameraman for their product presentations. What does that say about their computers? To me, absolutely nothing. I'll still buy one if it sounds nice enough.

          > 7 days return policy.

          It's from delivery.

          For comparison, Framework offers 30 days, but from shipping. Which means if your laptop takes more than a month to get delivered for whatever reason, you virtually have no return window.

          https://knowledgebase.frame.work/what-is-the-framework-retur...

          > Cha bu duo

          I hear you, but this is all relative to a market. There's no maker I can blindly trust whatever the country they operate in, and if we're going for cultural generalizations I'd set Tesla as the poster child of US manufacturers at this moment.

          All in all, I get why we should be wary of Chinese makers. It's just the same reason we should be wary of every other maker, including those who'll screw their customer base on any aspect that won't get picked up by a review (repairability has been one of these; compatibility, durability, vendor lock-in, standard parts, etc. are others).

        • throw090329 a day ago

          Lenovo is a Chinese company.

          • danieldk 21 hours ago

            I would rather say it's a Chinese conglomerate. E.g. Think.* are said to be designed in the US, Taiwan, and Japan. They are often assembled locally (e.g. my ThinkStation was assembled in Hungary). They have offices in many local markets and can be held accountable for warranty, issues, etc.

            I agree with your general point though, 'Chinese' does not mean bad quality, warranty, etc. It's more a property of a bunch of Chinese computer companies selling through AliExpress, Amazon, etc. Their quality and service might improve as they grow.

            As an aside, Lenovo are pretty awesome. For ThinkStations, ThinkPads, etc. they have in-depth guides for supported memory configurations, disassembly, repair etc., often with part numbers. Their hardware also works well with fwupdmgr and they provide their own Linux support (like WWAN FCC unlock scripts).

            • zargon 20 hours ago

              > As an aside, Lenovo are pretty awesome.

              These things would be important if they made products that actually work. My T14s Gen 3 AMD simply doesn’t work. Half the time I go to wake it from sleep, the firmware has crashed and I have to hard-reset it. I spent months trying to get Lenovo to fix this. They did replace the motherboard twice (once on-site, once shipped to them) and eventually replaced the entire laptop with a new one. None of this is useful when they can’t make a laptop that doesn’t crash while it sleeps.

              • danieldk 19 hours ago

                Sorry to hear that! I have a T14 (non-s) Gen 5 AMD and everything works great, also suspend-resume. A few years ago I had a Gen 1 AMD and it was certainly much worse. It would resume, but the trackpad or trackpoint would often not come up. But I'm very happy with the Gen 5.

              • jdswain 18 hours ago

                My company provided Dell has the same issue (Intel CPU). Comes and goes a bit with firmware updates.

  • jychang a day ago

    They're the same price. Both 395 machines (with 128GB of RAM) are $1999.

  • sfjailbird a day ago

    Oh wow, it does indeed have "AI" in its name twice. Can't wait for this shit to blow over.

    • marcusb a day ago

      AI Mini and AI Max. It has everything.

      • sidewndr46 a day ago

        someone min-max'd their marketing strategy!

        • oblio a day ago

          These days they're using spreadsheets for marketing copy :-))

Archit3ch a day ago

RDNA 3.5, which means you don't get Matrix Cores. Those are reserved for RDNA 4, which comes to laptop chips later this year. Desktop RDNA 4 only shipped in 2025.

For comparison, Nvidia brought Tensor Cores to consumer cards back in 2018 with the RTX 2000 series, and Apple has had simdgroup_matrix since 2020!

We are moving towards a world where this hardware is ubiquitous. It's uncertain what that means for non-ML workloads.

  • zozbot234 a day ago

    What do you need Matrix Cores for when you already have an NPU which can access the same memory, and even seems to include more flexible FPGA-style fabric? It's six of one, half a dozen of the other.

    • SomeHacker44 a day ago

      I have the HP ZBook G1a running the same CPU and RAM under HP Ubuntu. I have not seen any OOTB way to use the NPU. I can get ROCm software to run, but it does not use it. No system tools show its activity that I can see. It seems to be a marketing gimmick. Shame.

      • transpute a day ago

        https://news.ycombinator.com/item?id=43671940#43674311

        > The PFB is found in many different application domains such as radio astronomy, wireless communication, radar, ultrasound imaging and quantum computing.. the authors worked on the evaluation of a PFB on the AIE.. [developing] a performant dataflow implementation.. which made us curious about the AMD Ryzen NPU.

        > The [NPU] PFB figure shows.. speedup of circa 9.5x compared to the Ryzen CPU.. TINA allows running a non-NN algorithm on the NPU with just two extra operations or approximately 20 lines of added code.. on [Nvidia] GPUs CUDA memory is a limiting factor.. This limitation is alleviated on the AMD Ryzen NPU since it shares the same memory with the CPU providing up to 64GB of memory.

    • Archit3ch a day ago

      Can you do GPU -> NPU -> GPU for streaming workloads? The GPU can be more flexible than Tensor HW for preprocessing, light branching, etc.

      Also, Strix Halo NPU is 50 TOPS. The desktop RDNA 4 chips are into the 100s.

      As for consumer uses, I mentioned it's an open question. Blender? FFmpeg? Database queries? Audio?

    • bigyabai a day ago

      The NPU is generally pretty weak and not pipelined into the GPU's logic (which is already quite large on-die). It feels like the past 10 years have taught us that if you're going to create tensor-specific hardware then it makes the most sense to put it in your GPU and not a dark-silicon coprocessor.

ashleyn 2 days ago

How is AMD GPU compatibility with leading generative AI workflows? I'm under the impression everything is CUDA.

  • ftvkyo a day ago

    There is a project called SCALE that allows building CUDA code natively for AMD GPUs. It is designed as a drop-in replacement for Nvidia CUDA, and it is free for personal and educational use.

    You can find out more here: https://docs.scale-lang.com/stable/

    There are still many things that need implementing, most important ones being cuDNN and CUDA Graph API, but in my opinion, the list of things that are supported now is already quite impressive (and keeps improving): https://github.com/spectral-compute/scale-validation/tree/ma...

    Disclaimer: I am one of the developers of SCALE.

  • Aeolun a day ago

    All of Ollama and Stable Diffusion based stuff now works on my AMD cards. Maybe it’s different if you want to actually train things, but I have no issues running anything that fits in memory any more.

  • pja 2 days ago

    llama.cpp combined with Mesa’s Vulkan support for AMD GPUs has worked pretty well with everything I’ve thrown it at.

    • throwdbaaway 2 days ago

      https://llm-tracker.info/_TOORG/Strix-Halo has very comprehensive test results for running llama.cpp with Strix Halo. This one is particularly interesting:

      > But when we switch to longer context, we see something interesting happen. WMMA + FA basically loses no performance at this longer context length!

      > Vulkan + FA still has better pp but tg is significantly lower. More data points would be better, but seems like Vulkan performance may continue to decrease as context extends while the HIP+rocWMMA backend should perform better.

      lhl has also been sharing these test results in https://forum.level1techs.com/t/strix-halo-ryzen-ai-max-395-..., and his latest comment provides a great summary of the current state:

      > (What is bad is that basically every single model has a different optimal backend, and most of them have different optimal backends for pp (handling context) vs tg (new text)).

      Anyway, for me, the greatest thing about the Strix Halo + llama.cpp combo is that we can throw one or more egpu into the mix, as echoed by level1tech video (https://youtu.be/ziZDzrDI7AM?t=485), which should help a lot with PP performance.
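
      For anyone wanting to reproduce that backend comparison, here's a hedged sketch of the two llama.cpp builds involved (CMake flag names as of recent llama.cpp; check the build docs for your version):

        # Vulkan backend (runs on Mesa RADV)
        cmake -B build-vulkan -DGGML_VULKAN=ON
        cmake --build build-vulkan --config Release -j

        # HIP backend with rocWMMA flash attention (the "WMMA + FA" numbers above)
        cmake -B build-hip -DGGML_HIP=ON -DGGML_HIP_ROCWMMA_FATTN=ON
        cmake --build build-hip --config Release -j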

  • nh43215rgb a day ago

    In practical generative AI workflows (LLMs), I think AMD Max+ 395 chips with unified memory are as good as Mac Studio or MacBook Pro configurations at handling big models locally, and they support fast inference speeds (however, top-end Apple silicon (M4 Max, Studio Ultra) can reach 546 GB/s memory bandwidth, while the AMD unified memory system is around 256 GB/s). I think for inference either will work fine. For everything else I think the CUDA ecosystem is a better bet (correct me if I'm wrong).

  • sbinnee a day ago

    My impression is the same. To train anything you just need to have CUDA gpus. For inference I think AMD and Apple M chips are getting better and better.

    • jychang a day ago

      For inference, Nvidia/AMD/Intel/Apple are all generally on the same tier now.

      There's a post on GitHub of a madman who got llama.cpp generating tokens for an AI model running on an Intel Arc, an Nvidia 3090, and an AMD GPU at the same time. https://github.com/ggml-org/llama.cpp/pull/5321

  • DiabloD3 2 days ago

    CUDA isn't really used for new code. Its used for legacy codebases.

    In the LLM world, you really only see CUDA being used with Triton and/or PyTorch consumers that haven't moved on to better pastures (mainly because they only know Python and aren't actually programmers).

    That said, AMD can run most CUDA code through ROCm, and AMD officially supports Triton and PyTorch, so even the academics have a way out of Nvidia hell.

    • sexeriy237 a day ago

      If you're not doing machine code by hand, you're not a programmer

      • phanimahesh a day ago

        If you are not winding copper around magnets by hand, you are not a real programmer

        • DiabloD3 a day ago

          I get the joke you two are making, but I've seen what academics have written in Python. Somehow, it's worse than what academics used to write when Java was taught as the only language for CompSci degrees.

          At least Java has types and can be performant. The world was ever so slightly better back then.

          • sevensor a day ago

            There is some truly execrable Python code out there, but it’s there because the barrier to entry is so low. Especially back in the day, Java had so many guardrails that the really bad Java code came from intermediate programmers pushing up against the limitations of the language rather than from novices pasting garbage into a notebook. As a result there was less of it, but I’m not convinced that’s a good thing.

            Edit: my point being that out of a large pool of novices, some of them will get better. Java was always more gate kept.

            Second edit: Java’s intermediate programmer malaise was of course fueled by the Gang of Four’s promise to lead them out of confusion and into the blessed realm of reusable software.

    • smokel 17 hours ago

      > CUDA isn't really used for new code.

      I don't think this is particularly correct, or at least worded a bit too strongly.

      For Nvidia hardware, CUDA just gives the best performance, and there are many optimized libraries that you'd have to replace as well.

      Granted, new ML frameworks tend to be more backend agnostic, but saying that CUDA is no longer being used, seems a bit odd.

    • dgan a day ago

      sooo what's the successor of cuda?

      • DiabloD3 a day ago

        CUDA largely was Nvidias attempt at swaying Khronos and Microsoft's DirectX team. In the end, Khronos went with something based on a blend of AMD's and Nvidia's ideas, and that became Vulkan, and Microsoft just duplicated the effort in a Direct3D-flavored way.

        So, just use Vulkan and stop fucking around with the Nvidia moat.

        • kcb a day ago

          A great thing about CUDA is that it doesn't have to deal with any of the graphics and rendering stuff or shader languages. Vulkan compute is way less dev friendly than CUDA. Not to mention the real benefit of CUDA which is that it's also a massive ecosystem of libraries and tools.

        • apitman 14 hours ago

          As much as I wish it were otherwise, Vulkan is nowhere near a good alternative for CUDA currently. Maybe eventually, but not without additions to both the core API and especially the available libraries.

    • komali2 a day ago

      What are non legacy codebases using, then?

      • DiabloD3 a day ago

        Largely Vulkan. Microsoft internally is a huge consumer of DirectML for specifically the LLM team doing Phi and the Copilot deployment that lives at Azure.

        • 1gn15 2 hours ago

          I'm not sure if it's just the implementation, but I tried using llama.cpp on Vulkan and it is much slower than using it on CUDA.

          • DiabloD3 13 minutes ago

            It is on Nvidia. Nvidia's code generation for Vulkan kind of sucks; it also affects games. llama.cpp is almost as optimal as it can be on the Vulkan target; it uses VK_NV_cooperative_matrix2, and turning that off loses something like 20% performance. AMD does not implement this extension yet, and due to a better matrix ALU design, might not actually benefit from it.

            Game engines that have singular code generator paths that support multiple targets (eg, Vulkan, DX12, Xbone/XSX DX12, and PS4/5 GNM) have virtually identical performance on the DX12 and Vulkan outputs on Windows on AMD, have virtually identical performance on apples-to-apples Xbox to PS comparisons (scaled to their relative hardware specs), and have expected DX12 but not Vulkan performance on Windows on Nvidia.

            Now, obviously, I'm giving a rather broad statement on that. All engines are different; some games on the same engine (especially UE4 and 5) are faster on one vendor or the other, or purely faster on any vendor, and some games are faster on Xbox than on PS, or vice versa, due to edge cases or porting mistakes. I suggest looking at GamersNexus's benchmarks when looking at specific games, or DigitalFoundry's work on benchmarking and analyzing consoles.

            It is in Nvidia's best interest to make Vulkan look bad, but even now they're starting to understand that is a bad strategy, and the compute accelerator market is starting to become a bit crowded, so the Vulkan frontend for their compiler has slowly been getting better.

        • bl0b 11 hours ago

          Such a huge consumer that they deprecated it

    • TiredOfLife a day ago

      ROCm doesn't work on this device

      • geerlingguy a day ago

        You mean the AI Max chips? ROCm works fine there, as long as you're running 6.4.1 or later, no hacks required. I tested on Fedora Rawhide and it was just dnf install rocm.
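
        (For reference, a hedged sketch of that check; the package name follows the parent comment and the gfx target assumes Strix Halo:)

          sudo dnf install rocm    # ROCm runtime on recent Fedora
          rocminfo | grep -i gfx   # the iGPU should show up as gfx1151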

      • DiabloD3 a day ago

        Yes it does. ROCm support for new chips, due to being available for paid support contracts, comes like 1-2 months after the chip comes out (ie, when they're 100% sure it works with the current, also new, driver).

        I'd rather it works and ships late than doesn't work and ships early and then get gaslit about the bugs (lol Nvidia, why are you like this?)

  • trenchpilgrim 2 days ago

    Certain chips can work with useful local models, but compatibility is far behind CUDA.

  • wolfgangK 2 days ago

    Indeed, recent Flash Attention is a pain point for non CUDA.

  • dismalaf 21 hours ago

    > I'm under the impression everything is CUDA

    A very quick Google search would show that pretty much everything also runs on ROCm.

    Torch runs on CUDA and ROCm. Llama.cpp runs on CUDA, ROCm, SYCL, Vulkan and others...
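
    One wrinkle worth knowing, as a hedged aside: ROCm builds of PyTorch expose the AMD GPU through the familiar torch.cuda API, so the same sanity check works on both vendors:

      # Prints True plus the HIP version on a ROCm build;
      # on a CUDA build, torch.version.hip is None.
      python -c "import torch; print(torch.cuda.is_available(), torch.version.hip)"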

kristianp 2 days ago

> There's at least a little flexibility with the graphics card if you move the board into a different case—there's a single PCIe x4 slot on the board that you could put an external GPU into, though many PCIe x16 graphics cards will be bandwidth starved.

https://arstechnica.com/gadgets/2025/08/review-framework-des...

  • monster_truck 2 days ago

    There are no situations where this matters yet. You have to drop down to an 8x slot on PCIe 3.0 to even begin to see any meaningful impact on benchmarks (synthetic or otherwise)

  • wolfgangK 2 days ago

    For LLM inference, I don't think the PCIe bandwidth matters much, and a GPU could greatly improve the prompt processing speed.

    • zozbot234 a day ago

      The Strix Halo iGPU is quite special, like the Apple iGPU it has such good memory bandwidth to system RAM that it manages to improve both prompt processing and token generation compared to pure CPU inference. You really can't say that about the average iGPU or low-end dGPU: usually their memory bandwidth is way too anemic, hence the CPU wins when it comes to emitting tokens.

    • ElectricalUnion 2 days ago

      Only if your entire model fits the GPU VRAM.

      To me this reads like "if you can afford those 256GB VRAM GPUs, you don't need PCIe bandwidth!"

      • jychang a day ago

        No, that's not true. Prompt processing just needs the attention tensors in VRAM; the MLP weights aren't needed for the heavy calculations that a GPU speeds up. (After attention, you only need to pass the activations from GPU to system RAM, which is about ~40KB, so you're not very limited here.)

        That's pretty small.

        Even Deepseek R1 0528 685b only has like ~16GB of attention weights. Kimi K2 with 1T parameters has 6168951472 attention params, which means ~12GB.

        It's pretty easy to do prompt processing for massive models like Deepseek R1, Kimi K2, or Qwen 3 235b with only a single Nvidia 3090 gpu. Just do --n-cpu-moe 99 in llama.cpp or something similar.
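
        A hedged sketch of what that looks like in practice (the model path is illustrative; the flag exists in recent llama.cpp builds):

          # Offload all layers to the GPU, then push the MoE expert weights of
          # up to 99 layers back to system RAM; attention stays in VRAM.
          ./llama-server -m deepseek-r1.gguf -ngl 999 --n-cpu-moe 99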

      • tgma a day ago

        If you can't, your performance will likely be abysmal though, so there's almost no middle ground for the LLM workload.

    • jgalt212 2 days ago

      Yeah, I think so. Once the whole model is on the GPU (potentially slower start-up), there really isn't much traffic between the GPU and the motherboard. That's how I think about it. But mostly saying this as I'm interested in being corrected if I'm wrong.

  • conradev 2 days ago

    You can also use an adapter to repurpose an M.2 slot as PCIe x16, but the bandwidth is the same x4

    • tgma 2 days ago

      That's just called a PCIe x4 [1]. Each PCIe lane is an independent channel. The wider slot will simply have disconnected pins. You can actually do this with regular motherboard PCIe x4 slots by cutting the plastic at the end of the slot so you can insert a wider card and most cards work just fine.

      [1]: It sounds like a nitpick but a PCIe x16 with x4 effective bandwidth can exist and is a different thing: if the actual PCIe interface is x16, but there is an upstream bottleneck (e.g. aggregate bandwidth from chipset to CPU is not enough to handle all peripherals at once at full rate.)

lvl155 a day ago

Amazing thing is this is a laptop-grade chip. Think AMD should make a full-on desktop-grade chip and possibly have two of them on one board.

That’d really drive compute.

  • zozbot234 a day ago

    > possibly have two of them on one board.

    That would involve NUMA, and your memory bandwidth for cross-chip compute would probably suck. Would that even beat a simple cluster in performance?

  • JonChesterfield 15 hours ago

    Sadly not. It appears to be too high-wattage for laptops; there's almost no availability there.

    The full-on, high-power one is the MI300A, which is indeed superb.

  • Marsymars a day ago

    A desktop-grade chip would nerf the APU and memory bandwidth, and would need a discrete GPU for comparable compute, which is a completely different class of machine. (One which would be, IMO, much less interesting.)

cuu508 a day ago

What are more budget friendly options for similar workloads (running web app test suite in parallel)?

My test suite currently runs in ~6 seconds on 9700K. Would be nice to speed it up, but maybe not for $2000 :-) Last I checked 13700K or 13900K looked like the price/performance sweet spot, but perhaps there are better options?

  • SomeoneOnTheWeb a day ago

    Minisforum 790S7/795S7, mini-ITX desktop.

    16 cores, 32 threads, only a bit less powerful than a desktop Ryzen 7950X or a 14900K, but with a comparatively low power usage.

    About 500€ barebones, then you add your own SSD and SO-DIMM RAM.

    • nextos a day ago

      How is the cooling system on that Minisforum?

      Is it noisy? Does it keep up with the 7950X?

  • yencabulator a day ago

    I bought https://store.minisforum.com/products/minisforum-um890pro to compile Rust faster than my laptop, with 96 GB RAM and 2x4 TB NVMe as a ZFS mirror. Back before Framework Desktop existed.

    It has the 8945HS CPU; the article benchmarks against the 8745H, which is a little bit slower. It's a very worthy price point to consider, tiny and very quiet.

    qwen3:30b-a3b runs locally at 23.25 tokens/sec. I know the 395+ chip would approximately double that, but I'm not quite willing to put $2000 into that upgrade.
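
    (If anyone wants to compare numbers: ollama prints its eval rate with the --verbose flag, which is where a figure like the above comes from.)

      ollama run qwen3:30b-a3b --verbose
      # after each reply it reports prompt eval and eval tokens/sec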

  • ohdeargodno a day ago

    >My test suite currently runs in ~6 seconds on 9700K

    Absolutely nothing. 6 seconds is about the time it will take you to tab to your terminal, press up arrow, find your test task and run it. There's no amount of money that makes it go from 6 to 3, and no world in which there's any value to it.

    In addition, upgrading to a 13900K means you're playing the Intel Dance: sockets have (again) changed, in an (again) incompatible manner. So you're looking at, at the very least, a new CPU, a new motherboard, potentially a new cooler, and if you're going too forward with CPUs, new RAM since Intel's Z890 is not DDR4 compatible (and the Z390 was not DDR5 compatible). Or buying an entire new PC.

    Since you're behind a socket wall, the reasonable option for an upgrade would rather be a sizeable one, and most likely abandoning Intel to its stupid decisions for a while and instead going for Zen 5 CPUs, which are going to be socket compatible for a good 7 years at least.

    • christophilus 21 hours ago

      It’s really nice to save and have your tests automatically run and go green (or red) nearly instantly. There is value to that. Performance matters.

      • ohdeargodno 19 hours ago

        That's called not rerunning all the tests in your project and having test harnesses that know of module boundaries.

        In addition, considering "saving" is something that happens on pretty much any non-code interaction, it means your tests are broken half the time when you're working. That's useless noise.

    • cuu508 20 hours ago

      6 seconds is the time it takes for the tests to run, after I've switched to the terminal and ran the command. If I switch from 8 cores to, say, 16 faster cores, IMHO it is not unthinkable the tests could speed up to 3 seconds. How much money to invest for this speedup is a subjective question.

      I'm thinking about a new system, not upgrading the existing one.

babl-yc 17 hours ago

I was in the market for a Linux desktop machine and considered the Framework Desktop. I respect their mission, but ended up purchasing a Geekom mini PC instead.

Given repair isn't practical with soldered DRAM and such, I prioritized small form factor, price, and quick delivery.

The Framework Desktop was a much larger form factor, 3x the price, and delivery wasn't for months.

That said, I still hope the company is successful so they have more competitive offerings in the future.

paolomainardi a day ago

Is there a real alternative, one that doesn't need Windows in any way, to the Remote Device Management baked into firmware the way Apple does with its hardware? This is the biggest missing piece for bringing Linux to enterprises.

  • vaylian a day ago

    > the Remote Device Management baked into firmware the way Apple does with its hardware?

    What do you mean? Linux had SSH (and before that rlogin) for a very long time already.

    • aesh2Xa1 21 hours ago

      Apple devices support MDM. When you purchase the device, the device's firmware is configured to check in with an account owner. The firmware has an integrity feature such that this configuration cannot be removed by a user: https://it-training.apple.com/tutorials/deployment/dm005/

      If OP just meant remote management through a BMC then that's not common except for server hardware, and it would have features like Redfish to configure the hardware itself. Apple devices don't have this.

      You can also buy hardware to act as a remote keyboard/mouse/monitor and power button, and it supports systems whose motherboards have the right headers: https://pikvm.org/

      • wtallis 20 hours ago

        I don't think it's fair to describe MDM as a firmware-level feature. I think it's entirely implemented and enforced within the environment of a booted macOS; the firmware isn't going to be bringing up a whole network stack to phone home.

        If you had Linux on a MDM-enrolled Mac there wouldn't be anything MDM-related running during or after the boot process. But presumably any sane MDM config would prevent the end user from accessing the settings necessary to relax boot security to get Linux installed in the first place.

        • aesh2Xa1 20 hours ago

          Yeah, your point about implementation is correct -- much of the MDM functionality runs within macOS.

          But, eh, I still think it's fair to describe it as a feature of the firmware. The enrollment and prevention of removal have firmware-level components through Apple's Secure Boot and System Integrity Protection. A user can't simply disable MDM because these firmware-level protections prevent tampering with the enrollment.

          Case in point, getting Linux installed in the first place would be blocked by firmware-level boot policies, right? I'm not too knowledgeable about this, and maybe you are more so.

          • wtallis 19 hours ago

            I think it's important to make a distinction between secure boot features that are local-only, and remote management features. The "Remote Device Management baked into firmware" claim above carries with it some pretty important implications that are, as far as I can tell, not actually true.

            It's not too different from scaremongering about Intel ME/AMT which is often maligned even in the context of computers that don't have the necessary Intel NICs for the remote management features.

            • aesh2Xa1 18 hours ago

              I agree with your point about OP's statement regarding "one that doesn't need Windows in any way, to the Remote Device Management baked into firmware the way Apple does with its hardware." I also read that to mean that the firmware solution is self-contained and complete, even though that's pretty misaligned when you consider the meaning of a "remotely" managed device (remotely managed by what?).

              But it's still entirely factual in my own description. When a device checks in during initial setup, the firmware-level boot process can receive policies that block alternative OS installation, and that absolutely is a feature of the firmware.

              Anyway, I tried to interpret OP's meaning, and provided more detail on how Apple's firmware is special.

    • surajrmal 20 hours ago

      Do you think SSH is an equivalent to remote device management? Half the reason Chromebooks do well is that they're really good at remote device management.

    • 7jjjjjjj 21 hours ago

      "Device Management" means "hardware-level backdoor."

      • oblio 20 hours ago

        Only if it's your hardware. If it's corporate hardware, it's their hardware and you're a guest.

politelemon a day ago

I could do with someone explaining to me (sorry, not very advanced knowledge in this area): how did Framework manage this performance at a lower price point? And why can't the average Joe at PCPartPicker do something like this?

  • vaylian a day ago

    The key piece is AMD's high-end AI CPU. The whole desktop is literally built around it, to take full advantage of it.

    AMD reached out to Framework and said "Hey, we have this new CPU, do you want to do something with it?". And the engineers at Framework jumped at the opportunity.

  • 12345hn6789 18 hours ago

    Real answer: it didn't. It's close to M4 Mini performance, which puts the price within spitting distance ($50). Basically identical, price-wise, to equivalent Apple products.

ThinkBeat 2 days ago

Reads like an advertisement to me.

damonll 2 days ago

You need to use OrbStack as the engine on Mac, otherwise it's not a fair fight. It's at least 2x as fast as Docker Desktop.
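
(For anyone trying this: OrbStack registers itself as a Docker context, so switching engines is one command; "orbstack" is the default context name it creates.)

  docker context use orbstack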

alberth 13 hours ago

Physical Size:

  Framework: 4.5L (~5.6x larger than Mac Mini)
  Mac Mini:  0.8L
That's a lot of extra cooling capacity, and sizably more space taken up.
p0w3n3d 8 hours ago

Okay, so why is the RAM non-upgradable again? This was the main reason I was holding on to PC architecture (I consider non-upgradable RAM a method of upselling higher capacities at ridiculous prices).

  • Doxin 7 hours ago

    I assume it's related to the following:

    > The AMD 395+ uses unified memory, like Apple, so nearly all of it is addressable to be used by the GPU.

    There are probably some fairly strict timing and signal-integrity requirements on that. Still, it'd be nice if they figured out a way to have swappable RAM.

syphia 14 hours ago

> The Framework Desktop with 64GB RAM + 2TB NVMe is $1,876

And it's ~$1000 to build a PC with a similar CPU, somewhat larger form factor, and fans. Unless the AI processor is actually useful for AI, and you need that, this is silly.

Framework Desktop dimensions are 20.6 x 9.7 x 22.6 cm (LWH). My IM01 case is 37.2 x 18.5 x 28.7 cm. It won't be going in my bag, but it fits nicely on a desktop.

Pre-builts are so expensive these days...

  • khuey 14 hours ago

    The upcharge for decent specs, especially on RAM, is insane. I just checked Dell and upgrading from the base 16GB of RAM to 64GB of RAM on their generic tower costs $450. You can buy 2x32GB DIMMs of the same speed for less than half of that.

  • vikramkr 13 hours ago

    The on-package memory that's shared between the CPU and GPU is the main thing; for AI stuff it's competing more with GPUs and Apple silicon than with comparable CPUs.

hamandcheese 3 hours ago

> But so what? Docker is an integral part of the workflow for tons of developers. We use it to be able to run different versions of MySQL, Redis, and ElasticSearch for different applications on the same machine at the same time. You can't really do that without Docker.

Nix + Process Compose[0] make a great combo, and runs completely native.
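
A minimal sketch of that setup (the process names and flags here are illustrative, not a recommended config):

  # process-compose.yaml -- process names and flags are illustrative
  processes:
    mysql:
      command: mysqld --port=3307 --datadir=./data/mysql
    redis:
      command: redis-server --port 6380

  # then run both side by side:
  process-compose up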

[0]: https://github.com/F1bonacc1/process-compose

dustingetz 2 days ago

for $2k that is a lot of computer

focusgroup0 15 hours ago

I loved my Framework. And yet, Linux is free if your time is free.

trelane 15 hours ago

Wonder how this compares to a Thelio's performance.

arp242 18 hours ago

I looked at the Framework Desktop a few weeks ago, mainly to get something that has nicer gaming performance than my laptop's iGPU. I'm not a big gamer really, but as of late I've been wanting to play some things, like The Witcher 3 (saw a thread on Reddit celebrating its 10-year anniversary: "that's the new one I still need to play", so that's about how up to date I am with games – some of these "new" games are already classified as "good old game" on gog.com).

My take-away was/is that the Framework Desktop is a very nice machine, but it is expensive IMHO. You can get better performance at a lower price by building your own machine; in this article the 9950X scores lower than the Max 395, and I'm not entirely sure that's accurate – that wasn't my take-away at all (don't have links at the ready). This is also what you'd expect when comparing a 55W laptop chip vs. a 170W desktop chip.

That said, Linux compatibility is a bit of a pain; for example, some MediaTek WiFi/Bluetooth chips that ASUS boards use don't have a Linux driver. In general, figuring out what components to get is a bit time-consuming. In one of the videos Nirav mentioned that "just get the Framework Desktop" is so much easier, and I 100% agree with that.

In the end, I decided to get a USB4/Thunderbolt eGPU, which gives me more than enough gaming performance and is much cheaper. I already have a laptop that's more than performant enough for my needs, which I got last year mainly due to some idiotic JS build process that took forever on my old laptop. On the new machine it's merely "ludicrously slow". Sometimes I think JS people are in cahoots with Intel and/or AMD...

For LLM stuff the considerations might be different. I don't care about that so I didn't look into it.

zyx321 a day ago

There have been some theories floating around that the 128 GB version could be the best value for on-premises LLM inference. The RAM is split between CPU and GPU at a user-configurable ratio.

So this might be the holy grail of "good enough GPU" and "over 100GB of VRAM" if the rest of the system can keep up.

  • yencabulator a day ago

    > The RAM is split between CPU and GPU at a user-configurable ratio.

    I believe the fixed split thing is a historical remnant. These days, the OS can allocate memory for the GPU to use on the fly.

    • geerlingguy 15 hours ago

      Indeed it can be reallocated, needs a reboot though. I've gotten up to around 110 GB before running into OOM issues. I set it at 108 GB to provide a little headroom: https://www.jeffgeerling.com/blog/2025/increasing-vram-alloc...

      • yencabulator 14 hours ago

        Also, from your link:

        > It seems like tools will have to adapt to dynamic VRAM allocation, as none of the monitoring tools I've tested assume VRAM can be increased on the fly.

        amdgpu_top shows VRAM (the old fixed thing) and GTT (dynamic) separately.

      • yencabulator 14 hours ago

        No need for a reboot, echo 9999 >/sys/module/ttm/parameters/pages_limit

        You're talking about an allocator policy for when to allow GTT and when not, not the old firmware-level VRAM split where whatever size the BIOS sets for VRAM is permanently taken away from the CPU.

        The max GTT limit is there to reduce accidental footguns, not a technological limitation; at least earlier, the default policy was to reserve 1/4 of RAM for non-GPU use, and 1/4 * 128 GB = 32 GB is more than enough, so you're looking to adjust the policy. It's just an if statement in the kernel; GTT as a mechanism doesn't limit it, and deallocating a chunk of memory used by the GPU returns it to the general kernel memory pool, where it can be used by the CPU again.
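
        (A hedged sketch of the knobs involved; the parameter names come from the amdgpu/ttm kernel modules, and the values assume a 128 GB machine, so verify them against your kernel's docs:)

          # Persistent: let GTT grow to ~110 GiB. gttsize is in MiB;
          # pages_limit counts 4 KiB pages (110 GiB = 110*1024*1024/4 pages).
          sudo grubby --update-kernel=ALL \
            --args="amdgpu.gttsize=112640 ttm.pages_limit=28835840"

          # Or at runtime, as mentioned above:
          echo 28835840 | sudo tee /sys/module/ttm/parameters/pages_limit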

    • zyx321 a day ago

      It's not a fixed split. I don't know if it's possible live, or if it requires a reboot, but it's not hardwired.

      I want to know if it's possible. 4GB for Linux, a bit of room for the calculations, and then you can load a 122GB model entirely into VRAM.

      How would that perform in real life? Someone please benchmark it!
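
      A hedged starting point for anyone who does: llama.cpp ships a llama-bench tool that reports prompt-processing and generation rates directly (model path illustrative):

        ./llama-bench -m model.gguf -ngl 999 -p 512 -n 128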

      • yencabulator a day ago

        You're still thinking of the old school thing, where you set the split in the firmware and it's fixed for that boot. There's dynamic allocation on top of it these days.

        I have that split set at the minimum 2 GB and I'm giving the GPU a 20 GB model to process.

Jtsummers 3 days ago

> In some ways, the Framework Desktop is a curious machine. Desktop PCs are already very user-repairable! So why is Framework even bringing their talents to this domain? In the laptop realm, they're basically alone with that concept, but in the desktop space, it's rather crowded already. Yet it somehow still makes sense.

And even more curious, the Framework Desktop is deliberately less repairable than their laptops: they soldered on the RAM. Which makes it a very strange entry for a brand marketing itself as the DIY dream manufacturer. They threw away their user-repairable mantra when they made the Desktop; it's less user-repairable than most other desktops you could go out and buy today.

  • sethops1 2 days ago

    The RAM is soldered on all Strix Halo platforms because physics gets in the way. With pluggable DIMMs the memory bandwidth would be halved, at best.

    • onli 2 days ago

      He is still right. It is a desktop PC that is less repairable than all other desktop PCs, from a brand that is known to champion repairability. They had a reason for it, but could've chosen to not create more throwaway things.

      • epistasis 2 days ago

        I have been continuously baffled by the people who think that soldered-on RAM is somehow "throwaway". My last desktop build is eight years old and I have never upgraded the RAM. Never will. My next build will have an entirely new motherboard, RAM, and GPU, and the last set will end up at the e-waste recycler, because who could I find that wants that old hardware?

        Soldered RAM, CPU, and GPU that give space and performance benefits are exactly what I want, and they result in no more e-waste at all. In fact less e-waste, because with a smaller form factor I could justify keeping the older computer around for longer. The size of the thing is a bigger cause of waste for me than the ability to upgrade RAM.

        Not everybody upgrades RAM, and those people deserve computers too. Framework's brand appears to be about offering something that other suppliers are not, rather than expandability. That's a much better brand and niche overall.

        • distances a day ago

          > the last set will end up at the ewaste recycler, because who could I find that wants that old hardware?

          You might be surprised. Living in a large city, everything I have put for sale has found a new owner. Old and seemingly useless computer hardware, HDMI cables that don't support 4K, worn-out cutlery, hairdryer that's missing parts, non-functional amplifier, the list goes on. If the price is right (=very low), someone has always showed up in person to carry these away. And I'm always very upfront about any deficiencies so that they know what they're getting.

          I'd say a common profile for the new owner is young people who have just moved and are on a shoestring budget.

        • v5v3 2 days ago

          >I have been continuously baffled by the people that think that soldered on RAM is somehow "throwaway"

          One of the primary objections to soldered RAM was/is the purchase cost, as the likes of Apple price RAM upgrades at a hefty premium over retail prices.

          • chrismorgan a day ago

            Also, that they often simply don’t sell what you want with enough memory, or pair memory upgrades with other upgrades you don’t need (e.g. more powerful CPU or GPU beyond your needs), or occasionally that you actively don’t want (e.g. iGPU → dGPU may be that). With socketed RAM you can buy the model you want that just lacks RAM, and upgrade that.

            My current laptop (ASUS GA503QM) had 8GB soldered and 8GB socketed. I didn’t want to go for the 16+16 model because it was way more expensive due to shifting from a decent GPU to a top-of-the-line GPU, and a more-expensive-but-barely-faster CPU. (I would have preferred it with no dedicated GPU—it would have been a couple of hundred USD cheaper, a little lighter, probably more reliable, less fiddly, and have supported two external displays under Linux (which I’ve never managed, even with nvidia drivers); but at the time no one was selling a high-DPI laptop with a Ryzen 5800H or similar without dedicated graphics.) So after some time I got a 32GB stick and now I have 40GB of RAM. And I gave my sister the 8GB stick to replace a 4GB stick in her laptop, and that improved it significantly for her.

          • epistasis 2 days ago

            I can see that objection too, and it seems far more reasonable than assuming that soldered RAM automatically means a reduced lifespan machine.

            But are Framework's RAM prices unreasonable? $400 for 64GB more of LPDDR5x seems OK. I haven't seen anybody object to Framework's RAM on those grounds.

            • beeflet 2 days ago

              With modular RAM, someone can buy old boards and RAM and use it for high-RAM applications down the line.

              • wtallis 20 hours ago

                The workloads that people care most about today that need high RAM capacity are workloads that also need very high memory bandwidth. Old server hardware from eBay doesn't do a good job of satisfying the bandwidth side of things.

        • pabs3 a day ago

          I will take your old builds, because my current PC is from a dumpster and was made in 2013. I can't afford to buy hardware.

        • onli 2 days ago

          > Not everybody upgrades RAM, and those people deserve computers too.

          No. It's end of the line with consumerism and we either start repairing and recycling or we die. Framework catered to people who agree with that, and this product is not in line.

          I have no idea why you would not upgrade your memory; I have done so in every PC and laptop I've ever owned, and it's a very common (and cheap) upgrade. It reduces waste because people can then use their systems longer, which means less garbage over the lifetime of a person. And as was already commented, it is not only about upgrades but also about repairs. RAM breaks rather often.

          • epistasis 2 days ago

            I have had the system for eight years and at no point would upgrading RAM have increased performance.

            Upgrading the RAM would have created more waste than properly sizing the RAM-to-CPU proportion from the beginning.

            It is very odd to encounter someone who has such a narrow view of computing that they cannot imagine someone not upgrading their RAM.

            I have not once, literally not once, had RAM break either. I have been part of the management of clusters of hundreds of compute nodes that would occasionally each have their failures, but not once was RAM the cause. I'm fairly shocked to hear that anybody's RAM has failed, honestly, unless it's been overclocked or something.

            • oblio 18 hours ago

              I'm with you on this one. I've had.. 6? PCs. Basically every time I thought that they were falling behind performance wise, I realized that they generally had stopped selling RAM for them and even if I only wanted to upgrade the RAM, it wasn't enough anymore. The CPU was also falling behind and a new one needed a new socket and motherboard.

            • onli 2 days ago

              > It is very odd to encounter someone who has such a narrow view of computing that they cannot imagine someone not upgrading their RAM.

              Uncalled for and means the end of the discussion after this reaction. Ofc I can imagine that, it's just usually a dumb decision.

                That you did not have to upgrade the RAM means one of two things: you either had completely linear workloads, so unlike me you did not switch to a compiled programming language or experiment with local LLMs, etc.; or you bought a lot of RAM in the beginning, i.e. 8 years ago, at a hefty premium.

              Changes nothing about the fundamental disagreement with the existence of such machines. Especially from a company that knows better. I do not expect ethical behaviour from a bottom of the barrel company like Apple, but it was completely reasonable to expect better from framework.

      • trenchpilgrim 2 days ago

        The 128GB version wouldn't be throwaway, since that's the maximum the platform as a whole supports anyway; more memory than that would require a new mainboard and CPU at the same time.

        • nemomarx 2 days ago

          Not for upgrade reasons, but what if you have a fault in one of the DIMMs? Now you can't just drop in a replacement without changing everything.

          • sidewndr46 a day ago

            How often have you had a memory chip fail? I ask because I have only had it happen once in my lifetime that I can recall, and it was for a very dumb reason.

          • rs186 a day ago

            I wonder why so few people ask such questions of Apple products -- "what if my SSD goes bad, does that mean my computer is now completely useless?"

            • nemomarx a day ago

              I would also ask this! But part of the framework brand was selling a laptop where the answer to that question was being able to replace it on your own. So it's just kind of a contrast for their desktop to be less repairable than their laptop.

              Personally I think it's bad that apple products are so poorly repairable and so expensive to upgrade.

            • alpaca128 a day ago

              Worse, Apple also still claims to care about the environment, while not allowing the iMac to be used as an external screen, cutting its useful lifetime by at least a decade.

          • cyanydeez 2 days ago

            It's basically the same as asking what happens when your M4 Apple has a fault. The RAM is soldered because of the desire to use it as part of the GPU.

            Without that, it's really not an interesting solution.

            Demanding replaceable RAM also means not wanting the benefits of the integrated memory.

        • richardw 2 days ago

          That applies to all computers when you buy the fully specced versions on day 1. A maxed-out iPad isn't throwaway either, but Framework represents a higher standard of upgradability.

        • beeflet 2 days ago

          the platform should support more memory

      • linotype 2 days ago

        The only part I’ve ever had not fail on a PC or Laptop is RAM.

        • trenchpilgrim a day ago

          RAM failure is actually pretty common on non-JEDEC profiles, I've seen it happen a lot on gaming PCs. Very uncommon on "regular" computers that aren't pushing the clock and timings, though.

          • linotype a day ago

            Oh yeah I never push the limits of my systems like that. At least for PC gamers replacing RAM is easy and they’ll probably upgrade it eventually anyway.

    • undersuit 2 days ago

      I wonder if there were similar complaints when cache moved from motherboards to soldered on the cpu package.

      • beeflet 2 days ago

        The difference in performance between DRAM and flash memory is far greater than between SRAM and DRAM. The total RAM of a system is a hard limit on the kind of programs you can practically run, because swapping is slow.
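
        Some rough order-of-magnitude latencies make the point (generic ballpark numbers, not measurements of any specific part):

            # approximate access latencies in nanoseconds (illustrative only)
            sram_cache = 5        # on-die SRAM cache hit
            dram       = 100      # DRAM main memory access
            nvme_page  = 80_000   # 4K page fetched from NVMe flash during swap

            print(dram // sram_cache)  # ~20x penalty stepping down from SRAM to DRAM
            print(nvme_page // dram)   # ~800x penalty stepping down from DRAM to flash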

        • undersuit a day ago

          The old motherboard cache was socketed SRAM and it was replaced with soldered SRAM just as the socketed DRAM was replaced with soldered DRAM.

          L2 CPU cache used to be on the motherboard and user expandable.

    • wishinghand 2 days ago

      Why is that? Why would soldering the connections vs plugging them in affect how much data per second they transfer?

      • colejohnson66 2 days ago

        Sockets have resistance and crosstalk, which affects signal integrity.

        • aeonik 2 days ago

            Wait, you're telling me I should have been desoldering the sockets off my motherboard and soldering my RAM directly to the leads this entire time?

          • yencabulator a day ago

            Only if you were pushing data through so fast that the bits got corrupted before. That's literally why AMD told Framework they won't support any other configuration than soldered RAM, in this case.

          • ElectricalUnion 2 days ago

            Compression Attached Memory Module (CAMM) tries to be a medium-term solution for that, by reducing how bad your average RAM socket is for latency and signal integrity. But at this point, I can see CAMM-delivered memory being reduced to a sort of slower "CXL.mem" device.

            • aeonik 2 days ago

              Seriously though,

              Would desoldering the sockets help?

              Why are the sockets bad?

              • komali2 a day ago

                As stated previously, the sockets reduce signal integrity, which doesn't necessarily make them "bad," but it is why Framework wasn't able to use socketed RAM to maximize the potential of this CPU.

                This sort-of-interview with Nirav Patel (CEO of Framework) explains it in a bit more detail: https://www.youtube.com/watch?v=-lErGZZgUbY

                Basically, they need to use LPDDR5X memory, which isn't available in socketed form for signal integrity reasons.

                Which means you won't see an improvement if you solder your RAM directly -- mostly, I think, because a home soldering job will have its own signal integrity issues, but also because your RAM isn't LPDDR5X and isn't spread across a 256-bit bus.

                • aeonik 21 hours ago

                  They "why" hasn't been answered. I understand the previous statements very clearly. It makes intuitive sense to me, but I want to know more.

                  Like physics PhD-level more.

                  • 418tpot 20 hours ago

                    I believe the reason is that, at the frequencies these CPUs use to talk to RAM, the reflection coefficient[1] starts playing a big role. Any change in impedance along the wire causes reflections of the signal.

                    This is also the reason you can't just use a dumb female-to-female HDMI coupler and expect video to work. All such devices are active: they read the stream on the input and relay it on the output.

                    [1]: https://en.wikipedia.org/wiki/Reflection_coefficient
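
                    A minimal sketch of that formula, assuming a 50 ohm trace and a socket that bumps the local impedance to 60 ohms (made-up but plausible numbers):

                        def reflection_coefficient(z_load, z_line):
                            # Gamma = (Z_L - Z_0) / (Z_L + Z_0); 0 means no reflection
                            return (z_load - z_line) / (z_load + z_line)

                        print(reflection_coefficient(50, 50))  # 0.0   -> matched, clean edge
                        print(reflection_coefficient(60, 50))  # ~0.09 -> ~9% of the wave bounces back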

                    • geerlingguy 15 hours ago

                      See also RF insertion loss and how that's dealt with, PCIe retimers, etc.

                      Above certain frequencies, you start running into major issues with signal integrity, and fixing them is very difficult without any extra circuitry.

          • undersuit a day ago

            You might be able to dial in a higher memory overclock.

          • wmf 2 days ago

            Yes. (That isn't actually possible because the pinouts are different, but soldered RAM is faster.)

      • tejtm 2 days ago

        mind the gap

    • Jtsummers 2 days ago

      That's fine, but not what I was commenting on so your comment is mostly irrelevant.

      I was commenting on a brand based on repairability selling a product that's deliberately not repairable. It's a curious choice to throw away the branding that brought them to where they are, and hopefully not the start of a trend for their other devices.

      • rs186 a day ago

        The only non-repairable part is the RAM, due to technical constraints. The rest is as repairable as any other desktop, if not more so (e.g. the power supply). Why are you accusing them of "throwing away the branding" and holding such a purist view, when they are doing their best and just making compromises in order to release a decent product?

    • beeflet 2 days ago

      Why not make a platform with a greater number of channels?

      • sliken a day ago

        Sure, you could. The design would go something like this:

        We need a bigger memory controller.

        To get more traces to the memory controller, we need more pins on the CPU.

        Now we need a bigger CPU package to accommodate the pins.

        Now we need a motherboard with more traces, which requires more layers, which makes for a more expensive motherboard.

        We need a bigger motherboard to accommodate the 6 or 8 DIMM sockets.

        The additional traces, longer traces, and extra motherboard layers make the signalling harder, which likely needs ECC or even registered ECC.

        So: a more expensive CPU, a more expensive motherboard, more power, more cooling, and a larger system. Congratulations, you've reinvented Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel). All larger, more expensive, more than 2x the power, and likely to live in a $5-15k workstation/server, not a $2k Framework Desktop about the size of a liter of milk.

        • zozbot234 a day ago

          > We need a more expensive CPU, more expensive motherboard, more power, more cooling, and a larger system. Congratulations you've reinvented threadripper (4 channel), siena (6 channel), Threadripper pro (8 channel), or epyc (12 channel).

          This is the real story not the conspiracy-tinged market segmentation one. Which is silly because at levels where high-end consumer/enthusiast Ryzen (say, 9950 X3D) and lowest-end Threadripper/EPYC (most likely a previous-gen chip) just happen to truly overlap in performance, the former will generally cost you more!

          • sliken a day ago

            Well, sort of. Apple makes a competitive Mac Mini and MacBook Air with a 128-bit memory interface, decent design, solid build, nice materials, etc., starting at $1k. PC laptops can match nearly any individual aspect, but rarely the quality of the build, keyboard, trackpad, display, aluminum chassis, etc.

            However, Apple will let you upgrade to the Pro (double the bandwidth), Max (4x the bandwidth), and Ultra (8x the bandwidth). The M4 Max is still efficient and gives decent battery life in a thin, light laptop. Even the Ultra is pretty quiet and cool in a tiny Mac Studio MUCH smaller than any Threadripper Pro build I've seen.

            It does mystify me that x86 has a hard time matching even a Mac Mini with the M4 Pro on bandwidth, let alone the models with 2x or 4x the memory bandwidth.
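
            Peak bandwidth is just bus width times transfer rate, so the gap is easy to put rough numbers on (specs below are approximate, from memory):

                def gbps(bus_bits, mega_transfers):
                    return bus_bits / 8 * mega_transfers / 1000

                print(gbps(128, 6000))  # ~96 GB/s:  typical x86 desktop, dual-channel DDR5-6000
                print(gbps(256, 8000))  # ~256 GB/s: Strix Halo, 256-bit LPDDR5X-8000
                print(gbps(256, 8533))  # ~273 GB/s: M4 Pro
                print(gbps(512, 8533))  # ~546 GB/s: M4 Max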

            • wtallis 20 hours ago

              > Does mystify me that x86 has a hard time matching even a mac mini pro on bandwidth, let alone the models with 2x or 4x the memory bandwidth.

              The market dynamics are pretty clear. Having that much memory bandwidth only makes sense if you're going to provide an integrated GPU that can use it; CPU-based laptop/desktop workloads that are that bandwidth-hungry are too rare. The PC market has long relied on discrete GPUs for any high-performance GPU configuration, and the GPU market leader is the one that doesn't make x86 CPUs.

              Intel's consumer CPU product line is a confusing mess, but at the silicon level it comes down to one or two designs for laptops (a low-power and a mid-power design) that are both adequately served by a 128-bit memory bus, and one or two desktop designs with only a token iGPU. The rest of the complexity comes from binning on clock speeds and core counts, and sometimes putting the desktop CPU in a BGA package for high-power laptops.

              For Intel to make a part following the Strix Halo and Apple strategy, Intel would need to add a third major category of consumer CPU silicon, using far more than twice the total die size of any of their existing consumer CPUs, to go after a niche that's pretty small and very hard for Intel to break into given the poor quality of their current GPU IP. Intel doesn't have the cash to burn pursuing something like this.

              It's a bit surprising AMD actually went for it, but they were in a better position than Intel to make a part like Strix Halo from both a CPU and GPU IP perspective. But they still ended up not including their latest GPU architecture, and only went for a 256-bit bus rather than 512-bit.

            • zozbot234 a day ago

              Yes, but that platform has in-package memory? Which is a higher degree of integration than even "soldered". That's the kind of platform Strix Halo is most comparable to.

              (I suppose that you could devise a platform with support for mixing both "fast" in-package and "slow" DIMM-socketed memory, which could become interesting for all sorts of high-end RAM-hungry workloads, not just AI. No idea how that would impact the overall tradeoffs though, might just be infeasible.

              ...Also if persistent memory (phase-change or MRAM) can solve the well-known endurance issues with flash, maybe that ultimately becomes the preferred substrate for "slow" bulk RAM? Not sure about that either.)

      • v5v3 2 days ago

        Risk cannibalising sales from their other products?

        For example, Nvidia seeks to ban consumer GPU use in datacenters because they want to sell datacenter GPUs.

        If they made consumer platforms that can take 1TB of RAM etc., then people might choose not to buy EPYC.

        After all, many cloud providers already offer Ryzen VPSes.

        • beeflet 2 days ago

          my thoughts exactly

      • aDyslecticCrow 2 days ago

        That's a question for AMD and TSMC. They only have so much space on the silicon. More memory channels means less of something else. This is not a "Framework platform" issue; it's the specification of that CPU.

        • beeflet 2 days ago

          Well they chose to use this hardware platform. It all sounds like market segmentation to me, now that AMD is on top.

          • wmf 2 days ago

            To be clear, AMD is giving you 2x the bandwidth of competing chips and you're complaining that it isn't 4x.

            • beeflet 2 days ago

              My complaints are the maximum RAM of the system and the modularity of the RAM.

              With an increased number of channels, you could have a greater amount of RAM at a lower frequency but at the same bandwidth. So you would at least be able to run some of these much larger AI models.
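
              The arithmetic behind that tradeoff, as a sketch with hypothetical numbers: doubling the channel count at half the transfer rate keeps peak bandwidth identical while doubling the module slots, and therefore the maximum capacity:

                  def gbps(channels, mega_transfers, bits_per_channel=64):
                      return channels * bits_per_channel / 8 * mega_transfers / 1000

                  print(gbps(4, 8000))  # 256 GB/s from 4 fast channels
                  print(gbps(8, 4000))  # 256 GB/s from 8 slower channels, but twice the slots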

              • aDyslecticCrow a day ago

                This isn't RAM; this is unified memory, shared between the GPU and CPU. Soldered VRAM has been the norm for GPUs for probably 20 years because of the latency and reliability required, so why is this any different?

                The only ways to achieve what you're after are to do one of the following:

                - Give up on unified memory and switch to a traditional platform (of which there are thousands of alternatives)

                - Cripple the GPU for games and some productivity software by raising latency beyond the norm

                - Change to a server-class chip for 5x the price

                This is an amazing chip giving server-class specs in a cheap mobile platform, filling a special niche in the market for both productivity and local AI at a very competitive price. What you're arguing for makes no sense.

              • wmf 2 days ago

                I don't think that would fit in a laptop which was the original market for this chip.

  • jeroenhd 2 days ago

    According to the Framework CEO in the Linus Tech Tips video about this thing [1], AMD assigned an engineer to getting modular memory to work, and they concluded it isn't possible.

    Unless there's another company out there shipping this CPU with replaceable memory, I'll believe them. Even with LPCAMM, physics just doesn't work out.

    Of course, there are plenty of slower desktops you can buy where all the memory can be swapped out. If you want a repairable desktop like that, you're not stuck with Framework in the same way you would be with laptops. You'll probably struggle to get the same memory bandwidth to make full use of AMD's chips, though.

    [1]: https://youtu.be/-lErGZZgUbY?feature=shared&t=446

    • cyanydeez 2 days ago

      There isn't. This thing was designed the same way Apple designed their unified memory: it's meant to work hand in hand with its iGPU.

  • suspended_state 2 days ago

    > They soldered on the RAM. Which makes it a very strange entry for a brand marketing itself as the DIY dream

    This was also my first thought when discovering this new model, but I think it was a pragmatic design decision.

    The questions you should ask yourself are:

    - which upgradable memory module format could be used with the same speed and bandwidth as the soldered-in solution,

    - if this solution exists, how much it would cost,

    - what the maximum supported amount of RAM for this CPU is

    • beeflet 2 days ago

      >which upgradable memory module format could be used with the same speed and bandwidth as the soldered in solution

      CAMM, perhaps? The modular memory is important, because they are selling to two different markets: gamers who want a small, powerful desktop, and people running LLMs at home. The modularity of the RAM would allow you to convert the former into the latter at a later date, so it seems pretty critical to me.

      For this reason alone, I am going to buy a used EPYC server instead of one of these desktop things. I will be able to equip it with a greater amount of RAM as I see fit and run a greater range of models (albeit at lower speed). The ability to run larger models slowly at a later date is more important than the ability to run smaller models faster now. So I am an example of a consumer who does not like Framework's tradeoff.

      You would think they would at least offer some type of service where they take it into the factory and refit it with new RAM chips. Perhaps they could buy used low-RAM boards at a later date and use them to make refurbished high-RAM boards.

      Another solution is to make it support both soldered and unsoldered RAM (with the latter at a lower frequency). Gaming is frequency-limited but does not require much RAM, while a lot of workloads like AI are bandwidth-limited. Hell, if you're going to have some high-frequency RAM irreplaceably soldered to the motherboard, it might as well be a chiplet!

      • suspended_state a day ago

        You should do whatever better fits your use case.

        I don't know how large Framework's market is, nor how deep their pockets are, which conditions their ability to produce two different models.

        It's clear that a modular design is preferable; hopefully once a standard emerges they will use it in their next devices. Perhaps Framework will help in that process, but I don't know if they can afford the initial costs, particularly in a market where they don't yet have a strong foothold.

      • yencabulator a day ago

        The price jump from 64 GB to 128 GB is $400. $400 does not get you "some type of service where they take it into the factory and refit it with new ram chips".

  • trenchpilgrim 2 days ago

    The CEO mentioned in an LTT video that they worked with AMD to try to make CAMM memory work and hit some technical problems.

  • G3rn0ti a day ago

    > They threw away their user-repairable mantra when they made the Desktop

    You forget that the value proposition of Framework products is not only that they allow you to bring your own hardware, but also that they promise to provide you with replacement parts and upgrades directly from the vendor.

    In this case they could not make the RAM replaceable (it’s a limitation of the platform) but you can expect an upgrade board in about 2 years that’s actually going to be easy to install for much less cost than buying a new desktop computer.

    • amaranth 20 hours ago

      That's less of a thing here since this is "just" an ITX motherboard, a case, and a power supply. With the laptops replacing the board saves a bunch of other parts but here the board is basically the only part that matters.

  • sliken a day ago

    Dunno, nice, quiet, small machine, using standard parts (power supply, motherboard, etc).

    If you want the high memory bandwidth, get the Strix Halo; if not, get any normal PC. Sure, Apple has the bandwidth too, but also soldered memory.

    If you want DIMMs and the memory bandwidth, get a Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel). But be prepared to at least double your power, quadruple your space, double your cost, and still not have a decent GPU.

    • heraldgeezer 19 hours ago

      "Nice" is subjective. The Fractal cases he compares it to look nicer to me.

      Quiet? A real PC with bigger fans = more airflow = quieter.

      Smaller - yes, this is the tradeoff

      GPU is always best separate, that is true since the ages.

      "double the power" oh no from 100W to 200W wowwww

      "quadruple your space" - not a problem

  • tgma 2 days ago

    Even on regular AMD 7000- and 9000-series systems, the DDR5 memory controller is very sensitive, and on many motherboards it's hard to get a stable system with fast RAM when all 4 modules are present. At today's RAM speeds, I definitely think a stable soldered system is increasingly the better trade-off.

    • SomeHacker44 a day ago

      Indeed. I have an AMD 9950X and an Asus X870E* motherboard. I can barely get the system to boot with one 32G DIMM, let alone 2 or 4, and 48G DIMMs are even worse for some reason. I have tried 3 different "matched" sets. Sometimes I have to reset the system 3 times or more before it will boot; it hangs or crashes on DDR-timing BIOS codes. I have given up and just use a single 32G despite how useless 32G is in a high-end desktop today. Real joke. Huge waste of money. I will buy prebuilt systems in the future. In the meantime, if I need a lot of RAM I use an Intel 14900K desktop or my new HP G1a ZBook.

      * ASUS X870E ROG CROSSHAIR HERO AMD AM5 ATX Motherboard

      • tgma 20 hours ago

        Yes, AM5 DDR5 support has been similarly painful to me on four systems (7950X and 9950X). Try forcing the memory speed down to 3600MHz (ouch) when you want to install lots of memory and stay on modules specified on the motherboard's QVL.

        I'd probably go Framework Desktop next if I won't need peripherals.

  • antonvs 2 days ago

    > And even more curious

    It's easy to find out the reason for this. And the article's benchmarks confirm the validity of this reason. Why comment from a place of ignorance, unless you're asking a question?

    • aniviacat 2 days ago

      Your comment is unnecessarily hostile.

      There are plenty of components to choose from which do not need soldered-on RAM. Giving up modularity to gain access to higher memory bandwidth is certainly a tradeoff many are willing to make, but taking that tradeoff as a company known for modularity is, as the parent comment put it, curious.

      • antonvs 2 days ago

        Every single description of the Framework desktop that I've seen has addressed this issue. To comment as though it's some sort of mystery is disingenuous at best. My comment was precisely as friendly as the commenter deserved.

        And as I said, if you read the article, you'll see that the tradeoff in question has paid off very well.

        • Jtsummers 2 days ago

          You completely missed the point of my original comment, I'll take a second stab at it:

          1. Framework branded themselves as the company for DIY computer repairability and maintainability in the laptop space.

          2. They've now released a desktop that is less repairable than their laptops, and much less repairable than most desktops you can buy today.

          That's what I consider a curious move.

          The hardware choice may provide a good reason to solder on the RAM, but I wasn't commenting on that and have no idea how anyone could read my comment and have that be their takeaway.

          I was commenting on a brand throwing away the thing it's marketed itself for. In exchange for repairability, you now get shiny baubles like custom tiles for the case.

          • tomnipotent a day ago

            > brand throwing away the thing it's marketed itself for

            I don't see what you see. It's a single product, not a realignment of their business model. They saw an opportunity and brought to market a product that will likely sell out, which tells us that customers are happy to trade modularity and repairability for what the Strix Halo brings to the table. I think your interpretation of their mission is a bit uncharitable, maybe naive, and leaves the company little room to be a company.

            • alt227 a day ago

              I disagree; the Framework name has been intertwined with repairability since its inception. That is their USP and has been their marketing angle from day 1 -- not only that, but they are supposedly championing repairability as part of their 'ethos' as a company.

              Fair enough that a company might bring out products which differ from their core market, but in this instance I have to agree that releasing a desktop PC with soldered-on RAM very much goes against the position they have staked out in the market.

              Perhaps a better solution would be to release a newly branded product line of less repairable machines, keeping the name 'Framework' for their main offerings.

  • bluescrn 2 days ago

    Gotta chase that AI bubble at any cost

zargon 19 hours ago

So DHH fell for Sam’s scam. He tried OSS 20b and wasn’t impressed, and apparently dismisses all local models based on that experience with a known awful model.

Xenoamorphous 2 days ago

Oh, to be young again and care about benchmarks, bogomips and stuff like that.

  • Terr_ 2 days ago

    Or to generally be back in the era where such a thing would be useful for LAN-parties...

  • TiredOfLife 2 days ago

    Note that the main benchmark in the post is practical. How long a test suite for the product he makes runs.

  • beeflet 2 days ago

    Do you even bench, mark?

  • LargoLasskhyfv a day ago

    Das Balkenspiel ist schlechter Stil. (The bar game is bad style/lame)

    • ggm a day ago

      Pong!

      • LargoLasskhyfv a day ago

        Errm, no :-) I meant bars as in benchmark bar charts, often rather meaningless because the differences are within the range of statistical noise.

        For instance, something scoring 100,200 points in one config and 100,220 in another, with the bars/scales distorted to make that difference seem much larger.

        Gaming the bar game, so to speak.
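
        A quick sketch of the trick with those made-up numbers: the real difference is about 0.02%, but start the axis just below the smaller value and one bar is suddenly drawn 3x taller:

            a, b = 100_200, 100_220
            print((b - a) / a * 100)                # ~0.02% actual difference

            axis_min = 100_190                      # truncated axis, as in a distorted chart
            print((b - axis_min) / (a - axis_min))  # 3.0 -> apparent 3x difference in bar height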

        • alpaca128 a day ago

          OpenAI recently played a bit too hard with their GPT-5 announcement. Two bars with the same height but wildly different values, things like that. Such a lack of subtlety that their claim it was accidental is actually almost believable.

neuroelectron a day ago

You know what would be a great use for this chip besides generative AI slop? A silent home server with an AI-enhanced firewall. That could be effective against zero-days and other weird traffic patterns, and maybe even add enterprise-ish email serving with spam detection.

There have been several projects experimenting with this, including around Suricata and pfSense. I wonder how well this chip could handle packet inspection.

  • alt227 20 hours ago

    If you call what AI generates 'slop', then what do you think the quality of the firewall rules it generates will be?

    I don't think any enterprise is going to hand over the keys to its firewall any time soon.

znpy 18 hours ago

> Linux is really efficient. Especially when you're using a window manager like Hyprland, as we do in Omarchy.

Window managers are usually the last issue on modern Linux. Pretty much any native app is okay.

Troubles really start (and start fast) when you open any browser and load any modern website. If we had more native applications (so not cloud stuff, and not JavaScript stuff with a bundled Chromium), we'd all have a much better overall computing experience.

einpoklum a day ago

I don't really understand why one should bother with the pluggable ports in something that's not a laptop. Even in 4.5L, shouldn't we just have "all the ports"?

Also, this reminds me of "SkyReach Tiny":

https://store.nfc-systems.com/products/skyreach-4-tiny

an even smaller case, very versatile. No pluggable gimmicks though.

  • sowbug 5 hours ago

    If you already have a Framework laptop, it's nice to be able to reuse the extra cards that you might have lying around. "Leveraging the benefits of the Framework ecosystem," as a marketing person might say.

  • mixmastamyk 21 hours ago

    Only two on the front. So you get to decide what to put there. Also protects the port. Kind of a gimmick, but could be worse.

OrvalWintermute 2 days ago

very impressive...

Max+ 395 specced with:

- 128GB of non-upgradeable LPDDR5x

- WD_BLACK SN850X NVMe - M.2 2280 - 8TB

- Noctua fan

- 3x + 3x extra USB-A & USB-C ports

- No OS option

only $2,776.00!!!

  • everfrustrated 2 days ago

    Bear in mind the GPU has access to all of that 128GB as well, so for AI that's very, very cheap.
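
    A rough rule of thumb for why that capacity matters for local LLMs (a sketch with generic numbers; real runtimes add KV cache and other overhead on top):

        def weights_gb(params_billion, bits_per_weight):
            # memory for the model weights alone
            return params_billion * bits_per_weight / 8

        print(weights_gb(70, 16))  # 140 GB: a 70B model at FP16 -- too big for 128 GB
        print(weights_gb(70, 4))   # 35 GB:  the same model 4-bit quantized -- fits easily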

    • cyanydeez 2 days ago

      yeah, slap the Apple brand on this and it's basically the same thing.

      People seem to really not understand the limits of wanting unified memory architecture.

      • mkl a day ago

        Apple's top chips have more than 2.5 times the memory bandwidth of this one (and you pay for it).

  • micromacrofoot 2 days ago

    everything's cheap when you're rich

    • OrvalWintermute 2 days ago

      A Mac Studio configured similarly, with 128 GB of memory and an 8 TB SSD, is $5799.

      • jeroenhd 2 days ago

        The Mac Studio (at least the M3 Ultra model) blows the Framework out of the water when it comes to AI performance, though, at least according to this (Dutch language) benchmark: https://tweakers.net/reviews/13614/framework-desktop-de-fram...

        Paying twice the price for twice to seven times the performance may not be such a bad thing. Then again, with Apple you're kind of stuck with macOS and the like, so Framework may still be the better option depending on your use case.

        • Rohansi a day ago

          The M3 Ultra also consumes a lot more power because Strix Halo is actually a chip for laptops. The Framework here is a desktop but that doesn't change what AMD prioritized when building this chip.

          • geerlingguy a day ago

            The M3 Ultra is more efficient both on the CPU and GPU side.

            • Rohansi 20 hours ago

              I'd hope so since M3 Ultra is using a newer/more efficient manufacturing process than Strix Halo!

      • mft_ 2 days ago

        One detail is that the memory bandwidth on the M4 Max and M3 Ultra (especially) is considerably higher than the 395+.

heraldgeezer a day ago

Not really.

A laptop CPU defeats the purpose. Get a 9800X3D for gaming (it will be waaaay faster), a Threadripper for productivity, or the 9950X3D with 16 cores/32 threads.

Why this laptop crap when you can get a nice PC case?

Then again he thinks the Fractal North was "bulky"? What?

  • mkl a day ago

    The 9950X appears in one of the benchmarks in the article, and this machine beats it. It has 16 cores and 32 threads itself. You might want to read more details instead of dismissing it out of ignorance.

    • heraldgeezer 19 hours ago

      okay the new real PC CPUs are better. there.

pzmarzly 2 days ago

HN admins: can the domain extractor be changed to say "world.hey.com/dhh" here instead of just the domain name? From what I see, Hey World is a blogging platform, similar to Medium but markdown and email based. And the username (blog name) is in the second part of the URL.

  • mtmail 2 days ago

    Best to email the moderators (link in footer). I've made similar suggestions about other blogging platforms and got a positive reply.

pragmatic a day ago

[flagged]

  • christophilus 20 hours ago

    This is the way I make my tech decisions. If a programmer says something I don't like, especially about a domain completely unrelated to tech, I don't use their stuff. That's why I don't use Emacs, Rust, Zig, Jai (whenever it's released), Go, JavaScript, Neovim, or any OS I don't personally build from assembly, or any digital devices -- I'm going to burn this iPhone when I'm done typing. That'll show them.

    It’s a tough life, but this way I don’t accidentally appear to support someone who is different from me.

  • smithcoin a day ago

    IMHO for a comment of this level of vitriol you should probably cite some sources rather than rely on anecdotal evidence.

  • mixmastamyk 20 hours ago

    Oh no, someone’s an optimist… get him!

  • dismalaf 20 hours ago

    So what do you use now instead of Rails? And what kind of app?

chvid 2 days ago

[flagged]

  • andrepd 2 days ago

    What could you possibly be talking about? The only benchmark I see in the article where Apple M4 isn't near the top is the rails test suite.

  • TheRealDunkirk 2 days ago

    Sorry, but what's "bogus" about benchmarking your specific workloads? What benchmark do you think he should have run?

    • Rohansi 2 days ago

      One more that favors Apple, obviously. 2/3 of the benchmarks having Apple Silicon on top is not enough!

  • OrvalWintermute 2 days ago

    I don’t know about pro or anti Apple, but the framework optional 8TB NVME SSD is only +$699 whereas Apple charges $2200 for a similar 8TB NVME SSD

  • micromacrofoot 2 days ago

    he's mad they take a cut of his app store subscriptions, this is a huge saga that's been spanning years for him

    (not sure why I'm being downvoted, it's true... https://x.com/dhh/status/1747697778455962014)

    • linotype 2 days ago

      Not sure why you’re being downvoted. He’s also the guy that inflicted Ruby on Rails upon us.

  • dlivingston 2 days ago

    Context on DHH vs. Apple: https://youtu.be/mzDi8u3WMj0

    I find myself agreeing with much of what he says. The Apple of today is not the same as the Apple of 15 years ago.

    • beeflet 2 days ago

      is the apple fief facing resistance? I was under the impression that Epic v. Apple was a nothingburger

rtpg a day ago

Huhm, the Ars review makes it seem like this thing does worse on benchmarks than stuff like the Mac, but the test suite in DHH's example runs way faster. Bursty perf, perhaps?

ahmedfromtunis 2 days ago

It's fun to see that, in an era where most CEOs are all-in on AI both personally and at their companies, DHH chose instead to take a deep dive into the world of Linux, config files, and indie computer brands.

Curious what the long-term impact of this will be on the viability of Basecamp and its sister/daughter brands.