Essential insights from Hacker News discussions

PCIe 8.0 announced by the PCI-SIG will double throughput again

Shifting Computing Architecture: GPU as Motherboard

A core idea presented is a radical reimagining of PC architecture in which the GPU's printed circuit board (PCB) effectively becomes the motherboard, with the CPU and memory residing on slots or modules attached to this GPU-centric foundation.

  • "Any EEs that can comment on at what point do we just flip the architecture over so the GPU pcb is the motherboard and the cpu/memory lives on a PCIe slot?" - SlightlyLeftPad
  • "Wedging gargantuan GPUs onto boards and into cases, sometimes needing support struts even, and pumping hundreds of watts through a power cable makes little sense to me. The CPU, RAM, these should be modules or cards on the GPU." - vincheezel
  • "Maybe the GPU becomes the motherboard and the CPU plugs into it." - mensetmanusman
  • "No, for a gaming computer what we need is the motherboard and gpu to be side by side. That way the heat sinks for the CPU and GPU have similar amounts of space available." - db48x
  • "If you look at a any of the nvidia DGX boards it's already pretty close." - verall
  • "It’s always going to be a back and forth on how you attach stuff." - mensetmanusman

Power Delivery and Electrical Grid Limitations

The discussion frequently touches upon the increasing power demands of modern CPUs and GPUs, raising questions about the capacity of existing residential electrical infrastructure. Users debate the feasibility of supporting increasingly power-hungry components within typical home wiring standards, considering voltage, amperage, and the safety margins required for continuous loads.

  • "It seems like that would also have some power delivery advantages." - SlightlyLeftPad (referring to the GPU-as-motherboard idea)
  • "Both AMD and Intel have roadmap for 800W CPU. At 50-100W for IO, this only leaves 11W per Core on a 64 Core CPU." - ksec
  • "800 watt CPU with a 600 watt GPU, I mean at a certain point people are going to need different wiring for outlets right?" - linotype
  • "At least with U.S. wiring we have 15 amps at 120 volts. For continuous power draw I know you'd want an 80% margin of safety, so let's say you have 1440 Watts of AC power you can safely draw continuously." - jchw
  • "Where things get hairy are old houses with wiring that’s somewhere between shaky and a housefire waiting to happen, which are numerous." - cosmic_cheese
  • "Yeah, but it ain't nothing that microwaves, space heaters, and hair dryers haven't already given a run for their money." - kube-system
  • "A simple kitchen top water cooker is 2000W, so a 1500W PC sounds like no big deal." - t0mas88
  • "Generally, in the US homes get two phases of 120v that are 180 degrees out of phase with the neutral. Using either phase and the neutral gives you 120v. Using the two out of phase 120v phases together gives you a difference of 240v." - kube-system (clarifying US residential power splitting)
  • "So it's pretty simple for a certified electrician to just make a 240v outlet if needed. It's just not the default that comes out of a wall." - atonse
  • "It'd be all new wire run (120 is split at the panel, we aren't running 240v all over the house) and currently electricians are at a premium so it'd likely end up costing a thousand+ to run that if you're using an electrician, more if there's not clear access from an attic/basement/crawlspace." - ender341341
  • "I got a quote for over 2 thousand to run a 24v line literally 9 feet from my electrical panel across my garage to put a EV charger in." - com2kid
  • "Consumers with desktop computers are not winning any AI race anywhere." - kube-system
  • "A lot of people think that baud rate represents bits per second, which it only does in systems where the symbol set is binary." - throwway120385 (discussing signal transmission rates)
  • "It seems like it would be more difficult to incorporate lessons [into PCIe design]." - Seattle3503
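The outlet math in jchw's quote can be sketched quickly. This is a rough illustration of the 80% continuous-load rule common in US electrical practice (nominal figures only, not electrical advice); the 800 W + 600 W system is the hypothetical from the thread.

```python
# Sketch of the continuous-load derating quoted above: a circuit serving a
# continuous load is kept at or below 80% of its breaker rating.

def continuous_watts(amps: float, volts: float, derate: float = 0.8) -> float:
    """Maximum continuous draw for a circuit, derated per the 80% rule."""
    return amps * volts * derate

us_15a = continuous_watts(15, 120)  # standard US outlet
print(f"US 15 A @ 120 V: {us_15a:.0f} W continuous")  # 1440 W, matching jchw

# An 800 W CPU plus a 600 W GPU leaves almost nothing for the rest of the box:
system = 800 + 600
print(f"Headroom on that circuit: {us_15a - system:.0f} W")  # 40 W
```

At 40 W of headroom before drives, fans, and conversion losses, linotype's question about needing different outlet wiring is not rhetorical.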

Advancements in PCIe and Interconnect Technologies

The discussion delves into the evolution of the Peripheral Component Interconnect Express (PCIe) standard, exploring its increasing speeds, modulation techniques (like PAM4), and the growing demand for bandwidth from high-performance computing, particularly in data centers. The practicality and adoption rate of newer PCIe generations for consumer versus enterprise use cases are also debated.

  • "I wonder what modulation order / RF bandwidth they'll be using on the PHY for Gen8. I think Gen7 used 32GHz, which is ridiculously high." - zkms
  • "PCIe 7 = 128 GT/s = 64 Gbaud × PAM-4 = 32 'GHz' (if you alternate extremes on each symbol)" - eqvinox
  • "I love the PCIe standard is 3 generations ahead of what is actually released. Gen5 is the live version, but the team behind it is so well organized that they have a roadmap of 3 additional versions now." - bhouston
  • "Gen6 is in use look at Nvidia ConnectX-8" - ThatMedicIsASpy
  • "Millions of Blackwell systems use PCIe 6.x today, PCIe 7.x was finalized last month, and this is an announcement work on PCIe 8.0 has started for release in 3 years." - zamadatix
  • "It'll be interesting if consumer devices bother trying to stay with the latest at all anymore. It's already extremely difficult to justify the cost of implementing PCIe 5.0 when it makes almost no difference for consumer use cases." - zamadatix
  • "The best consumer use case so far is enthusiasts who want really fast NVMe SSDs in x4 lanes, but 5.0 already gives >10 GB/s for a single drive, even with the limited lane count." - zamadatix
  • "More lanes = more cost. Faster lanes = more cost. More faster lanes = lots more cost." - zamadatix
  • "The chipset also strikes some of the balance for consumers though. It has a narrow high speed connection to the CPU but enables many lower speed devices to share that bandwidth." - zamadatix
  • "If you're using a new video card with only 8GB of onboard RAM and are turning on all the heavily-advertised bells and whistles on new games, you're going to be running out of VRAM very, very frequently. The faster bus isn't really important for higher frame rate, it makes the worst-case situations less bad." - simoncion
  • "While the B200 chip itself could do PCIe6 as it was planned for GB200, there is no system around it with Gen6. the official DGX B200 is just PCIe5." - zoltan
  • "So we can skip 6 and 7 and go directly to 8, right?" - Phelinofist
  • "What I don't get: why doesn't AMD just roll Gen6 out in their CPU, bifurcate it to Gen5, and boom, you have 48x2 Gen5s? same argument for gen5 bifurcated to gen4." - zoltan
  • "Bifurcation can't create new lanes, only split the existing lane count up into separate logical slots." - zamadatix
  • "Is that a problem [not enough PCIe lanes]? These days, you don't need slots for your sound card and network card, that stuff's all integrated on the motherboard." - michaelt
  • "Devices integrated on the motherboard will either connect to the cpu via PCIe or USB, lanes aren't just for PCIe cards." - mrene
  • "I thought we were only just up to 5? Did we skip 6 and 7?" - LeoPanthera
  • "It's less surprising if you realize that PCIe is behind Ethernet (per lane)." - wmf
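The per-generation numbers traded in these quotes follow from simple arithmetic. The sketch below uses the commonly cited nominal transfer rates and encodings (128b/130b through Gen5, PAM-4 from Gen6 on); FLIT framing overhead in Gen6+ is ignored for simplicity, so the figures are approximate.

```python
# Approximate per-lane PCIe throughput by generation, using nominal rates.

GT_PER_LANE = {3: 8, 4: 16, 5: 32, 6: 64, 7: 128, 8: 256}  # GT/s per lane

def lane_gbytes(gen: int, lanes: int = 1) -> float:
    """Approximate usable GB/s: 128b/130b encoding through Gen5, PAM-4 after.

    Note bifurcation never changes this total: splitting an x16 link into
    two x8 slots divides the same lanes, as zamadatix points out above.
    """
    encoding = 128 / 130 if gen <= 5 else 1.0
    return GT_PER_LANE[gen] * lanes * encoding / 8

print(f"Gen5 x4: {lane_gbytes(5, 4):.1f} GB/s")  # ~15.8 GB/s, hence >10 GB/s SSDs

# PAM-4 carries 2 bits per symbol, so Gen7's 128 GT/s needs only 64 Gbaud --
# the distinction between baud and bits per second raised by throwway120385.
gbaud = GT_PER_LANE[7] / 2
print(f"Gen7 symbol rate: {gbaud:.0f} Gbaud")
```

This also makes eqvinox's "128 GT/s = 64 Gbaud × PAM-4" line concrete: the symbol rate halves relative to the bit rate because each symbol encodes two bits.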

Integration and Form Factor Trends (CPU/RAM/GPU)

The conversation highlights a trend towards greater integration in computing components, with discussions on unified memory architectures, on-die memory, and the potential for CPUs and GPUs to be more tightly coupled. This leads to considerations about upgradability, proprietary ecosystems, and the benefits of reduced latency and increased throughput.

  • "And the memory should be a onboard module on the cpu card intel/amd should replicate what apple did with a unified same ringbus sort of memory modules. Lower latency,higher throughput." - avgeek23
  • "GPU + CPU on the same die, RAM on the same package. A total computer all-in-one." - MBCook (describing Apple Silicon)
  • "Figure out how much RAM, L1-3|4 cache, integer, vector, graphics, and AI horsepower is needed for a use-case ahead-of-time and cram them all into one huge socket with intensive power rails and cooling." - burnt-resistor
  • "People also forget that the Raspberry Pi (appeared 2012) was based on a SoC with a big and powerful GPU and small weak supporting CPU. The board booted the GPU first." - kvemkon
  • "One possible advantage of this approach that no one here has mentioned yet is that it would allow us to put RAM on the CPU die (allowing for us to take advantage of the greater memory bandwidth) while also allowing for upgradable RAM." - leoapagano
  • "I feel like what we really need is a GPU 'socket' like we have for CPU's. And then a set of RAM slots dedicated to that GPU socket (or unified RAM shared between CPU and GPU)" - Melatonic
  • "HBM would most likely have to be integrated directly on the GPU module given performance demands and signal constraints." - Seph
  • "Socketed processors only really work with DDRx type DRAM interfaces. GPUs use GDDR and HBM interfaces, which are not ideal for sockets." - Lramseyer
  • "GB300 has Gen6 so I am pretty sure GV200 would have it at least." - zoltan
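The socketing constraint Lramseyer and Seph describe comes down to pin count and signal integrity. A back-of-envelope comparison, using nominal published pin widths and per-pin rates for each memory standard (figures are illustrative, not exhaustive):

```python
# Peak memory-interface bandwidth = bus width (bits) / 8 * per-pin rate (Gb/s).

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a memory interface."""
    return bus_bits * gbps_per_pin / 8

print(f"DDR5-6400 channel (64-bit):      {bandwidth_gbs(64, 6.4):6.1f} GB/s")
print(f"GDDR6X device (32-bit @ 21):     {bandwidth_gbs(32, 21.0):6.1f} GB/s")
print(f"HBM3 stack (1024-bit @ 6.4):     {bandwidth_gbs(1024, 6.4):6.1f} GB/s")

# HBM reaches its bandwidth through a 1024-bit-wide bus per stack -- far too
# many high-speed signals to route through a socket, which is why it is
# bonded on-package over an interposer instead.
```

DDR's narrow, slower-per-pin channels tolerate the extra trace length and contact resistance of slots and sockets; HBM's wide bus does not, which is why "upgradable HBM" keeps losing to on-package integration.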

Data Center vs. Consumer Hardware Demands

A distinction is drawn between the evolving needs of data centers, particularly for AI workloads, and the typical requirements of consumer desktop computers. The power consumption and interconnect demands of high-density server racks are contrasted with the more constrained, cost-sensitive nature of the consumer market.

  • "There already are different outlets for these higher power draw beasts in data centers. The amount of energy used in a 4u 'AI' box is what an entire rack used to draw." - tracker1
  • "This is a legitimate problem in datacenters. They're getting to the point where a single 40(ish)OU/RU rack can pull a megawatt in some hyperdense cases." - 0manrho
  • "The talk of GPU/AI datacenters consuming inordinate amounts of energy isn't just because the DC's are yuge, (although some are), but because the power draw per rack unit space is going through the roof as well." - 0manrho
  • "On the consumer side of things where the CPU's are branded Ryzen or Core instead of Epyc or Xeon, a significant chunk of that power consumption is from the boosting behavior they implement to pseudo-artificially inflate their performance numbers." - 0manrho
  • "But honestly there's a lot of headroom still even if you only have your common american 15A@120VAC outlets available before you need to call your electrician and upgrade your panel and/or install 240VAC outlets or what have you." - 0manrho
  • "For other use cases like GPU servers it is better to have many GPUs for every CPU, so plugging a CPU card into the GPU doesn’t make much sense there either." - db48x
  • "PCIe is a standard/commodity so that multiple vendors can compete and customers can save money. But at 8.0 speeds I'm not sure how many vendors will really be supplying, there's already only a few doing serdes this fast..." - verall
  • "This would pretty much make both intel and amd to start market segmentation by CPU Core + Memory combination. I absolutely do not want that." - 0x457
  • "Most people don't own their house. If you can't get an electrician to install a 240v outlet for your EV charger (which can cost $500-1000), then you're out of luck." - viraptor
  • "The Nordics we're on 10A for standard wall outlets so we're stuck on 2300W without rewiring (or verifying wiring)." - carlhjerpe
  • "In Italy we also have 10A and 16A (single phase). In practice however almost all wires running in the walls are 2.5 mm^2, so that you can use them for either one 16A plug or two adjacent 10A plugs." - bonzini
  • "By an overwhelming margin, most computers are not in gamers' basements." - jeffbee
  • "Also the noise from the fans." - bonzini
  • "I don't think many people would want some 2kW+ system sitting on their desk at home anyways. That's quite a space heater to sit next to." - vel0city
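The density figures 0manrho and tracker1 cite can be made concrete. Taking the quoted 1 MW hyperdense rack over roughly 40 OU (both figures from the thread; the legacy-rack budget is a commonly cited ballpark, not from the thread):

```python
# Rough rack power-density math from the figures quoted above.

rack_watts = 1_000_000  # "a single 40(ish)OU/RU rack can pull a megawatt"
rack_ou = 40
per_ou = rack_watts / rack_ou
print(f"Per-OU density: {per_ou / 1000:.0f} kW/OU")  # 25 kW per rack unit

# For contrast, a whole legacy rack was often budgeted around 5-10 kW --
# roughly what a single 4U AI box draws now, per tracker1's comment.
us_outlet_continuous = 15 * 120 * 0.8  # one US outlet at the 80% rule
print(f"US outlets to feed one OU: {per_ou / us_outlet_continuous:.0f}")
```

One rack unit at this density needs the continuous capacity of about seventeen standard US household circuits, which is why the consumer and data-center power conversations in this thread barely overlap.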