If they’re not great, it’s your fault /thread 😅
- 1 Post
- 14 Comments
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English)
1 · 3 months ago
I believe right now it's also valid to ditch NVIDIA, given a certain budget. Let's see what can be done with large unified memory; maybe things will be different by the end of the year.
rkd@sh.itjust.works to Ukraine@sopuli.xyz • European leaders including Starmer to join Zelenskyy in Washington for meeting with Trump
6 · 3 months ago
no more fokin ambushes
rkd@sh.itjust.works to news@endlesstalk.org • Three killed, eight injured in shooting in crowded New York club (English)
1 · 3 months ago
Trump has entered the chat
rkd@sh.itjust.works to news@endlesstalk.org • Trump made direct financial demands during call with Swiss president (English)
33 · 3 months ago
His whole existence is a financial demand. I believe Bloomberg calls this "a transactional period". To put it plainly, y'all elected a corrupt president.
rkd@sh.itjust.works to LocalLLaMA@sh.itjust.works • HP Z2 Mini G1a Review: Running GPT-OSS 120B Without a Discrete GPU (English)
1 · 3 months ago
For some weird reason, in my country it's easier to order a Beelink or a Framework than an HP. They will sell everything else except what you want to buy.
rkd@sh.itjust.works to LocalLLaMA@sh.itjust.works • GPT-OSS 20B and 120B Models on AMD Ryzen AI Processors (English)
1 · 3 months ago
Remind me: what are the downsides of possibly getting a Framework Desktop for Christmas?
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English)
1 · 3 months ago
That's a good point, but it seems there are several ways to make models fit in smaller-memory hardware. There aren't many options, though, to compensate for not having the ML data types that allow NVIDIA to be something like 8x faster sometimes.
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English)
1 · 3 months ago
For image generation, you don't need that much memory. That's the trade-off, I believe. Get NVIDIA with 16GB of VRAM to run Flux, and have something like 96GB of RAM for GPT-OSS 120B. Or you give up on fast image generation and just go with an AMD Max+ 395 like you said, or Apple Silicon.
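The trade-off above can be sketched with rough weight-only arithmetic. This is a back-of-envelope estimate, not a measurement: the parameter counts and quantization widths are approximations, and it ignores KV cache, activations, and runtime overhead.

```python
# Back-of-envelope estimate of memory needed for model weights alone.
# Parameter counts and bit widths below are illustrative assumptions.

def weight_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB for a given quantization width."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# A Flux-class image model (~12B params) quantized to 8-bit:
print(weight_gb(12, 8))    # ~12 GB -> fits a 16GB VRAM card
# GPT-OSS 120B at roughly 4 bits per weight:
print(weight_gb(120, 4))   # ~60 GB -> fits in 96GB of system RAM
```

So the split in the comment holds up roughly: the image model squeezes into a 16GB GPU, while the 120B language model wants the big pool of system or unified memory.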
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English)
3 · 3 months ago
I'm aware of it; it seems cool. But I don't think AMD fully supports the ML data types that can be used in diffusion, and therefore it's slower than NVIDIA. It's most likely the math.
rkd@sh.itjust.works to Games@sh.itjust.works • Nintendo-owned titles excluded from Japan's biggest speedrunning event after organizers were told they had to apply for permission for each game (English)
13 · 3 months ago
Congratulations, Nintendo, you played yourself.


I can read minds, and they're thinking "we'd better get some money around here; otherwise we'll keep blaming the immigrants".