96GB modded RTX 4090 for $4.5k
Buying next GPU: 32GB and faster, or 48GB and slower?
AMD inference using the AMDVLK driver is 40% faster than RADV on pp, ~15% faster than ROCm*
Is the Varjo Aero still a good headset in 2025?
3D printing
For the love of God, stop abusing the word "multi"
14x SSDs, but what to do with them?
Defending Europe without the US: first estimates of what is needed
National security risk of thousands of cleared employees suddenly out of work
PCVR needs a Reverb G3: an affordable VR headset
What's with the too-good-to-be-true cheap GPUs from China on eBay lately? Obviously scammy, but strangely the listings stay up.
Someone posted some numbers for LLM inference on the Intel B580. It's fast.
There's also the new ROG Flow Z13 (2025) with 128GB LPDDR5X on board for $2,799
The B7xx can’t come soon enough
Trump’s mass firings could leave federal government with ‘monumental’ bill, say experts
I tested Grok 3 against Deepseek r1 on my personal benchmark. Here's what I found out
"Meta has spent over $100 billion on VR"
BEST hardware for local LLMs
With Valve releasing the source code for TF2, how likely/possible is it that we get a fully VR based TF2 game eventually?
Trump Endorses Sweeping Medicaid Cuts—to Give Tax Cuts to Rich. Remember when Donald Trump promised not to touch Medicaid? He’s already flipped.
What do you guys think about this "96 GBs of VRAM"
Which one will you use on Mac for distributed inference: llama.cpp or MLX?
My new local inference rig
Trump launches fresh attack on Zelensky, calling him a “dictator”
New laptops with AMD chips have 128 GB unified memory (up to 96 GB of which can be assigned as VRAM)