> To be clear - the shock wasn’t that OpenAI made a big deal, no, it was that they made two massive deals this big, at the same time, with Samsung and SK Hynix simultaneously
That's not "dirty." That's hiding your intentions from suppliers so they don't crank prices before you walk through their front door.
If you want to buy a cake, never let the baker know it's for a wedding.
What they mean is that they bought 40% of all RAM production, which they managed by making two big deals simultaneously. It's buying up 40% of all RAM production with the intention to have most of it idle in warehouses that is "dirty". And to pull that off, they needed to be secretive and time the two deals together.
> It's buying up 40% of all RAM production with the intention to have most of it idle in warehouses
They have no incentive to purchase a rapidly-depreciating asset and then immediately shelve it, none
They might have to warehouse inventory until they can spin up module-manufacturing capacity, but that's just getting their ducks in a row
The incentive suggested in the article is to block competitors from scaling up training, which is immensely RAM-hungry. Among other things. Even Nvidia could feel the pressure, since their GPUs need RAM. It could be a good bargaining chip for them, who knows.
I'm not saying it's true, but it is suspicious at the very least. The RAM is unusable as it stands: it's just raw wafers, and they'd still need die packaging, test, and module/PCB assembly to turn them into usable RAM modules. Why does OpenAI want to become a RAM manufacturer, but only for the post-wafer steps?
> They have no incentive to purchase a rapidly-depreciating asset and then immediately shelve it, none
It screws up the price for their competitors. That's an incentive. Particularly with so many "AI datacenter" buildouts on the horizon.
That's not the dirty part. This is the dirty part:
> OpenAI isn’t even bothering to buy finished memory modules! No, their deals are unprecedentedly only for raw wafers — uncut, unfinished, and not even allocated to a specific DRAM standard yet. It’s not even clear if they have decided yet on how or when they will finish them into RAM sticks or HBM! Right now it seems like these wafers will just be stockpiled in warehouses
> OpenAI isn’t even bothering to buy finished memory modules
And? Why should they be obligated to pay for all the middleman steps from fab down to module? That includes: wafer-level test, module-level test (DC, AC, parametric), packaging, post-packaging test, and module fabrication. There's nothing illegal or sketchy about saying, "give me the wafers, I'll take care of everything else myself."
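To make that concrete, here's a rough sketch of the steps a raw-wafer buyer ends up owning themselves. The step names and ordering are my assumption of a typical DRAM back-end flow, not anything from the reported deals:

```python
# Illustrative only: a typical DRAM back-end flow as I understand it.
# Step names and ordering are my assumptions, not details from the deals.
from dataclasses import dataclass, field

BACK_END_FLOW = [
    "wafer-level test",      # probe the raw wafer to map known-good dies
    "dicing and packaging",  # singulate dies and package them (or stack for HBM)
    "post-packaging test",   # screen packaged parts (DC, AC, parametric)
    "module fabrication",    # mount packages on a DIMM PCB or interposer
    "module-level test",     # final test of the finished module
]

@dataclass
class WaferLot:
    lot_id: str
    completed: list[str] = field(default_factory=list)

    def next_step(self) -> str | None:
        """First back-end step this lot still needs, or None if finished."""
        for step in BACK_END_FLOW:
            if step not in self.completed:
                return step
        return None

# A raw-wafer purchase means starting at the very first step:
lot = WaferLot("hypothetical-lot-001")
print(lot.next_step())  # -> "wafer-level test"
```

None of those steps are exotic; they're exactly the "middleman" work OpenAI would be choosing to contract out or do itself.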
> not even allocated to a specific DRAM standard yet
DRAM manufacturers design and fabricate chips to sell into a standardized, commodity market. There's no secret evolutionary step after the wafers are etched that turns the chips into something adhering to DDR4, 5, 6, 7, 8, 9.
> It’s not even clear if they have decided yet on how or when they will finish them into RAM sticks or HBM
Who cares?
The implication here is that the primary goal is to corner the market, not to use the supply. If you aren't going to use them anyway, then of course it's silly to pay to have them finished.
Do you think that's fine, or do you think that implication is wrong and OpenAI does actually plan to deploy 40% of the world's DRAM supply?
> The implication here is that the primary goal is to corner the market
You have no evidence of that. Even at face value, the idea of "cornering the market" on a depreciating asset with no long-term value isn't a war strategy, it's flushing money down the toilet. Moreover, there's a credible argument OpenAI wanted to secure capacity in an essential part of their upstream supply chain to ensure stable prices for themselves. That's not "cornering the market," either, it's securing stability for their own growth.
Apple used to buy up almost all leading-edge semiconductor process capacity from TSMC. It wasn't to resell capacity to everyone else; it was to secure capacity for themselves (particularly for new product launches). Nvidia has been doing the same since the CUDA bubble took off (they have, in effect, two entire fabs worth of leading-edge production just for their GPUs/accelerators). Have they been "cornering" the deep sub-micron foundry market?
> the idea of "cornering the market" on a depreciating asset with no long-term value isn't a war strategy, it's flushing money down the toilet
OpenAI's entire business strategy thus far can be summarized as "flushing money down the toilet", so that isn't actually as unlikely as you're making it sound.
Yes, they've made insane scaling bets before and they have paid off.
If what we've heard about no acceptable pre-training runs from them in the last two years is true, then trying to increase the memory for training by two orders of magnitude is just a rehash of what got them from GPT-2 to GPT-3.
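For rough context on "two orders of magnitude" (back-of-envelope from the published parameter counts; training memory scales roughly with parameter count):

```python
# GPT-2 (~1.5B params) to GPT-3 (~175B params): the jump alluded to above.
gpt2_params = 1.5e9
gpt3_params = 175e9
print(gpt3_params / gpt2_params)  # ~116.7x, i.e. roughly two orders of magnitude
```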