"These price increases have multiple intertwining causes, some direct and some less so: inflation, pandemic-era supply crunches, the unpredictable trade policies of the Trump administration, and a gradual shift among console makers away from selling hardware at a loss or breaking even in the hopes that game sales will subsidize the hardware. And you never want to rule out good old shareholder-prioritizing corporate greed.
But one major factor, both in the price increases and in the reduction in drastic “slim”-style redesigns, is technical: the death of Moore’s Law and a noticeable slowdown in the rate at which processors and graphics chips can improve."
Can you explain to me what the person you are replying to meant by ‘integrated memory on a desktop PC’?
I tried to explain why this phrase makes no sense, but apparently they didn’t like it.
…Standard GPUs and CPUs do not share a common pool of RAM that gets balanced between space reserved for CPU-ish tasks and GPU-ish tasks… that only happens with an APU-style design using a single pool of (often LPDDR) RAM… which isn’t at all a standard desktop PC.
It is as you say: a hierarchy of assets loaded into DDR RAM by the CPU, then streamed or copied over to the GPU and its GDDR RAM…
But the GPU and CPU are not literally, directly using the actual same physical RAM hardware as a common shared pool.
Yes, certain data is… shared… in the sense that it is or can be, to some extent, mirrored, parallelized, between two distinct kinds of RAM… but… not in the way they seem to think it works, with one RAM pool just being directly accessed by both the CPU and GPU at the same time.
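(To make the distinction concrete, here’s a minimal sketch of how a discrete GPU is programmed. CUDA is used purely for illustration and the buffer names are made up, but every discrete-GPU API (Vulkan, Direct3D, OpenCL) has the same shape: two separate allocations in two separate pools, bridged by explicit copies.)

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t n = 1 << 20;  // 1 Mi floats, ~4 MB

    // This allocation lives in the CPU's pool: ordinary system DDR.
    float *host_buf = (float *)malloc(n * sizeof(float));
    for (size_t i = 0; i < n; i++) host_buf[i] = (float)i;

    // This allocation lives in the card's pool: its own GDDR.
    float *dev_buf = NULL;
    cudaMalloc((void **)&dev_buf, n * sizeof(float));

    // The only way data gets from one pool to the other is an explicit
    // copy across the PCIe bus. Nothing is shared; it is duplicated.
    cudaMemcpy(dev_buf, host_buf, n * sizeof(float), cudaMemcpyHostToDevice);

    // ... kernels run against dev_buf in GDDR ...

    // Results likewise have to be copied back before the CPU can see them.
    cudaMemcpy(host_buf, dev_buf, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(dev_buf);
    free(host_buf);
    return 0;
}
```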
… Did they mean ‘integrated graphics’ when they … said ‘integrated memory’?
L1 or L2 or L3 caches?
???
I still do not understand how any standard desktop PC has ‘integrated memory’.
What kind of ‘memory’ on a PC… is integrated into the MoBo, unremovable?
???
Hah, now you made me look that stuff up. I was talking anchored on my knowledge of systems with multiple CPUs and shared memory, since that was my expectation for the style of the PS5’s system architecture; that’s how they did things in the past.
So, for starters, I never mentioned “integrated memory”; I wrote “integrated graphics”, i.e. the CPU chip comes together with a GPU, either as two dies in the same chip package or both on the same die.
I think that when people talk about “integrated memory” what they mean is main memory which is soldered onto the motherboard rather than coming as discrete memory modules. From the point of view of system architecture it makes no difference; from the point of view of electronics, however, soldered memory can be made to run faster, because soldered connections are much closer to perfect than the mechanical contact connections you get with memory modules inserted in slots.
(Quick explanation: at very high clock frequencies the electronics side starts to behave in funny ways. The frequency of the signal travelling on the circuit board gets so high that the wavelength shrinks to centimetres or even millimetres, around the scale of the length of circuit-board traces, and you start getting effects like signal reflections and interference between traces, because they act as miniature antennas and can induce effects on nearby lines. It’s all a lot messier than if the thing were just running at a few MHz. Reflections happen at connections which aren’t perfect, such as the mechanical contacts of memory modules inserted into slots, so at higher clock speeds the signal integrity of the data travelling to and from the memory is worse than with soldered memory, whose connections are much closer to perfect.)
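(Back-of-the-envelope, with illustrative numbers: on an FR-4 circuit board signals propagate at roughly half the speed of light, and a DDR4-3200 module runs its I/O clock at 1.6 GHz, so the fundamental alone has

$$\lambda = \frac{v}{f} \approx \frac{1.5 \times 10^{8}\ \text{m/s}}{1.6 \times 10^{9}\ \text{Hz}} \approx 9.4\ \text{cm}$$

and the harmonics that make up the sharp signal edges sit at several times that frequency, i.e. wavelengths of a few centimetres, right at the scale of the traces between the memory controller and the DIMM slots.)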
As far as I know, nowadays L1, L2 and L3 caches are always part of the CPU/GPU die, though I vaguely remember that in the old days (80s, 90s) the memory cache might come in the form of dedicated SRAM modules on the motherboard.
As for integrated graphics, here’s a reference for an Intel SoC (system on a chip, in this case with the CPU and GPU together on the same die). If you look at page 5 you can see a nice architecture diagram. Notice how memory access goes via the memory controller (lower right, inside the System Agent block) and then the SoC Ring Interconnect, which is an internal bus connecting everything to everything (so quite a lot of data channels). The GPU implementation is the whole left side, the CPU is at the top, and there is a cache slice (at first sight an L4 cache) at the bottom, shared by both.
As you see there, with integrated graphics the memory access doesn’t go via the CPU; rather, there is a single memory controller (and, in this example, a memory cache) for both, and memory access for both the CPU and the GPU cores goes through that one controller and shares that cache (lower-level caches are not shared: notice how the GPU implementation contains its own L3 cache, bottom left, labelled “L3$”).
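This single shared pool also shows up from the programmer’s side. A minimal sketch (CUDA again, purely for illustration; CUDA only behaves this way on genuinely shared-memory hardware such as NVIDIA’s Jetson boards, which is also how iGPUs and the consoles work, while on a discrete card the same calls silently shuttle data over PCIe):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Trivial kernel: scale every element in place.
__global__ void scale(float *p, size_t n, float k) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] *= k;
}

int main(void) {
    const size_t n = 1 << 20;

    cudaSetDeviceFlags(cudaDeviceMapHost);  // allow host memory to be GPU-mapped

    // One allocation, mapped into both address spaces.
    float *host_ptr = NULL;
    cudaHostAlloc((void **)&host_ptr, n * sizeof(float), cudaHostAllocMapped);
    for (size_t i = 0; i < n; i++) host_ptr[i] = 1.0f;

    // The device-side alias for the very same memory.
    float *dev_ptr = NULL;
    cudaHostGetDevicePointer((void **)&dev_ptr, host_ptr, 0);

    // On shared-memory hardware, the CPU and the GPU are now hitting the
    // same physical RAM through the same memory controller; no copy anywhere.
    scale<<<(unsigned)((n + 255) / 256), 256>>>(dev_ptr, n, 2.0f);
    cudaDeviceSynchronize();

    printf("%f\n", host_ptr[0]);  // prints 2.0; note there is no cudaMemcpy
    cudaFreeHost(host_ptr);
    return 0;
}
```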
With regards to the cache dirty problems I mentioned in the previous post: at least that higher-level (L4) cache is shared, so instead of cache entries being invalidated because main memory was changed behind the cache’s back, what you get is a different performance problem, where there is competition for cache space between the areas of memory used by the CPU and the areas used by the GPU. The cache is much smaller than the actual main memory, so it can only hold copies of part of it; when two devices are using different areas of main memory, both areas are getting cached, but the cache can’t fit both. Depending on the usage pattern, the cache might constantly be evicting entries for one area of memory to make room for entries for the other area and back again, which in practice makes it about as slow as not having a cache there at all. There are lots of tricks to make this less of a problem, but it’s still slower than having just one processing device using that cache, as you get when each device has its own cache and its own memory.
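That thrashing is easy to reproduce without any GPU at all. Here’s a host-only sketch (the sizes and the “two clients” framing are mine, not from any real CPU/GPU pair) where two threads stand in for two devices streaming through one shared cache:

```cuda
// Host-only sketch (no GPU needed; compiles as plain C++ too). Two threads
// stand in for a CPU and a GPU competing for one shared last-level cache.
// WORKING_SET is illustrative; set it to roughly half your LLC size.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

constexpr size_t WORKING_SET = 16u << 20;  // 16 MB per "client" (illustrative)
constexpr int PASSES = 50;

std::atomic<long> sink{0};  // keeps the compiler from optimizing the loops away

// Stream over a private buffer, touching one byte per 64-byte cache line.
void traverse(const std::vector<char> &buf) {
    long sum = 0;
    for (int p = 0; p < PASSES; p++)
        for (size_t i = 0; i < buf.size(); i += 64)
            sum += buf[i];
    sink += sum;
}

double timed_run(bool contended) {
    std::vector<char> a(WORKING_SET, 1), b(WORKING_SET, 2);
    auto t0 = std::chrono::steady_clock::now();
    if (contended) {
        // Two clients at once: each one's cache lines keep evicting the other's.
        std::thread gpu_stand_in([&] { traverse(b); });
        traverse(a);
        gpu_stand_in.join();
    } else {
        traverse(a);  // one client has the whole cache to itself
    }
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::printf("one client : %.2f s\n", timed_run(false));
    std::printf("two clients: %.2f s\n", timed_run(true));
}
```

If each buffer fits in the cache on its own but the two together don’t, the two-client run takes noticeably longer than the one-client run, even though the threads execute on separate cores: they’re fighting over cache space, not compute.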
As for contention problems: an internal interconnect like the one you see there generally has far more data channels than the data bus to the main memory modules, and it is also much faster, so contention in memory access will be low as long as the data is in the cache. With cache misses (memory locations not in the cache, which therefore have to be loaded from main memory), though, that architecture still suffers from two devices sharing the main memory and hence having to share that memory’s data channels.
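To put rough numbers on that sharing (illustrative figures, not any particular machine): a dual-channel DDR5-4800 desktop gives

$$2\ \text{channels} \times 8\ \text{B} \times 4.8\ \text{GT/s} = 76.8\ \text{GB/s}$$

shared between the CPU cores and an integrated GPU, whereas a discrete card with a 256-bit GDDR6 bus at 16 Gbit/s per pin gets

$$\frac{256}{8}\ \text{B} \times 16\ \text{Gb/s} = 512\ \text{GB/s}$$

all to itself, on top of the CPU’s own separate pool.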