tricerotops [they/them]

  • 0 Posts
  • 89 Comments
Joined 1 month ago
Cake day: July 6th, 2025

  • Given that training LLMs from scratch requires massive computational power, you must control some means of production. That is, you must own a server farm to scrape publicly accessible data or to collect data from the user services you host, and then you must also own a server farm equipped with large arrays of GPUs or TPUs to carry out the training and most types of inference.
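
    For a rough sense of the scale involved, here is a back-of-envelope sketch using the widely cited “FLOPs ≈ 6 × parameters × training tokens” heuristic. The model size, token count, and per-GPU throughput are illustrative assumptions, not anyone’s real figures:

    ```rust
    // Back-of-envelope training cost using the common heuristic:
    //   training FLOPs ≈ 6 × parameters × training tokens
    // All numbers below are illustrative assumptions.
    fn main() {
        let params: f64 = 70e9; // assumed: a 70B-parameter model
        let tokens: f64 = 15e12; // assumed: 15T training tokens
        let flops = 6.0 * params * tokens; // ≈ 6.3e24 FLOPs

        // assumed: one GPU sustaining ~400 TFLOP/s of useful throughput
        let per_gpu: f64 = 4e14;
        let gpu_years = flops / per_gpu / (365.25 * 24.0 * 3600.0);

        // prints roughly 500 GPU-years, i.e. thousands of GPUs for months
        println!("~{flops:.1e} FLOPs, ~{gpu_years:.0} GPU-years on a single GPU");
    }
    ```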

    So the proletariat cannot simply wield these tools for their own purposes; they must use the products that capital allows to be made available (i.e. proprietary services or pre-trained “open source” models).

    How is this different from any other capital-intensive activity? Is forging steel a forbidden technology because it requires a lot of fuel or electricity to generate enough heat?

    Like I get that we like to feel as if doing computer things is an independent activity that rogue open source hackers should be able to do, but some human activities require a massive scale of coordination, energy, and time. Have you seen what it takes to build a computer chip? If you want something that is even more out of reach than training a foundation model, look no further.

    I genuinely don’t see these as unique in the landscape of human technologies. As with all automation, the goal for the capitalist is to squeeze more profit out of workers. LLMs don’t change that, and they won’t be replacing humans. And as with any technology, these are not solely buildable by capitalists. They are a technology that requires inputs at a massive scale. But how is that more limiting than building a laser that vaporizes tens of thousands of tin droplets per second to expose a silicon wafer to 13.5-nanometer light? Or building a bridge? Or name any other big project that needs a lot of people and resources…

    So I think there’s a good argument about intent here on the part of the capitalist class, but I don’t find the argument about complexity or resource intensiveness very convincing.

  • I don’t think this is a very good or useful article, because it was clearly written by someone who went into this “experiment” with a negative perspective on the whole thing and didn’t try very hard to make it work. Vibe coding as it stands today is, at best, a coin flip as to whether you can make something coherent, and the chances of success rapidly diminish if the project can’t fit into about 50% of the context window. There are things you can do to improve your chances of a good outcome, and those things will probably be incorporated into tools in the future.

    But I disagree with the author’s first statement that using LLMs in a coding workflow is trivial, because it is not, and the fact that they had a bad time proves that it is not. My perspective, as someone with a couple of decades of coding under their belt, is that this technology can actually work, but it’s a lot harder than anybody gives it credit for, and there’s a major risk that LLMs are too unprofitable to continue to exist as tools in a few years.
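
    As a rough illustration of that 50% rule of thumb, here is a sketch that estimates a project’s token footprint. The 128k context window, the ~4-characters-per-token ratio, and the src/ layout are all assumptions; real tokenizers and models vary:

    ```rust
    // Rough check: does a project fit in half a model's context window?
    // Assumes ~4 characters per token, which varies a lot by tokenizer.
    use std::fs;

    fn main() -> std::io::Result<()> {
        let context_window: usize = 128_000; // assumed context size, in tokens
        let budget = context_window / 2; // the "about 50%" rule of thumb

        let mut chars = 0usize;
        for entry in fs::read_dir("src")? { // assumed: code in src/, top level only
            let path = entry?.path();
            if path.extension().map_or(false, |ext| ext == "rs") {
                chars += fs::read_to_string(&path)?.chars().count();
            }
        }
        let est_tokens = chars / 4;
        println!("~{est_tokens} tokens, budget {budget}, fits: {}", est_tokens <= budget);
        Ok(())
    }
    ```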

    I do agree with their last point, though: “don’t feel pressured to use these.” For sure. I think that is a healthy approach. Nobody knows how to use them properly yet, so you won’t lose anything by sitting on the sidelines. And in fact, like I said, it’s entirely possible that none of this will even be a thing in 5 years, because it’s just too goddamn expensive.



  • Complete HTX client/server crate (TCP-443 & QUIC-443): $400 USD

    Core deliverables: dial(), accept(), multiplexed stream() APIs; ChaCha20-Poly1305; Noise XK; ECH stub; ≥ 80 % line/branch fuzz coverage

    Description: Think of this as building the “network cables” for Betanet software. It’s a reusable library that lets any app open or accept encrypted connections that look like normal HTTPS on port 443. Without this, no data can move on Betanet. Every other project will import it.

    so you want an http server?
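
    For reference, the listing’s dial()/accept()/stream() surface sketched as a Rust skeleton. Every type here is a hypothetical stand-in (only the dial/accept/stream names come from the listing), and the Noise XK handshake and ChaCha20-Poly1305 session setup are stubbed out with comments:

    ```rust
    use std::io;
    use std::net::SocketAddr;

    /// One encrypted, multiplexed stream inside a connection (placeholder).
    struct HtxStream {
        id: u64,
    }

    /// A connection meant to look like ordinary HTTPS on port 443.
    struct HtxConn {
        peer: SocketAddr,
        next_stream: u64,
    }

    impl HtxConn {
        /// Client side: connect out and run the handshake (Noise XK per the spec).
        fn dial(peer: SocketAddr) -> io::Result<HtxConn> {
            // real version: TCP/QUIC connect, Noise XK, ChaCha20-Poly1305 keys
            Ok(HtxConn { peer, next_stream: 0 })
        }

        /// Open another multiplexed stream on an existing connection.
        fn stream(&mut self) -> io::Result<HtxStream> {
            self.next_stream += 1;
            Ok(HtxStream { id: self.next_stream })
        }
    }

    /// Server side: placeholder listener bound to port 443.
    struct HtxListener {
        local: SocketAddr,
    }

    impl HtxListener {
        fn bind(local: SocketAddr) -> io::Result<HtxListener> {
            Ok(HtxListener { local })
        }

        /// Accept the next inbound connection (stubbed).
        fn accept(&self) -> io::Result<HtxConn> {
            Ok(HtxConn { peer: self.local, next_stream: 0 })
        }
    }

    fn main() -> io::Result<()> {
        let listener = HtxListener::bind("0.0.0.0:443".parse().unwrap())?;
        let _inbound = listener.accept()?;

        let mut conn = HtxConn::dial("127.0.0.1:443".parse().unwrap())?;
        let s = conn.stream()?;
        println!("opened stream {} to {}", s.id, conn.peer);
        Ok(())
    }
    ```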