• 1 Post
  • 22 Comments
Joined 2 years ago
Cake day: July 6th, 2023

  • Nah, it’s no psy-op. People truly hold these beliefs. I can’t blame them for it after the time they’ve had with Maduro, but I wouldn’t be surprised if that’s a belief held by the majority of Venezuelans. In Honduras, people are celebrating the Trump pardon of our narcotrafficker ex-president, saying Trump did the right thing, and cursing the current administration as if it could hold a candle to the crimes we saw during the other guy’s time in power. It’s unhinged, and yet it’s no less true. I can at least sympathize with the Venezuelans who think like that a ton more than I can with my own countrymen, bunch of fucking idiots that they are.

    Edit: the perfect English is a byproduct of living in the shadow of the empire. Any chance at financial freedom comes from working for US companies, and that binds you to a race to learn English and get good at it as a second language. It’s probably true of any decently sized city south of the US border. Those who don’t learn English are destined to end up serving those who do.









  • JGrffn@lemmy.world to Selfhosted@lemmy.world · *Permanently Deleted*
    3 months ago

    Wait, so you built a pool using removable USB media and were surprised it didn’t work? Lmao

    That’s like being angry that a car wash physically hurt you because you drove in on a bike, then using a hose on your bike and claiming that the hose is better than the car wash.

    ZFS is a low-level system meant for PCIe or SATA, not USB, which sits many layers above SATA and PCIe. Rsync was the right choice for this scenario since it’s a higher-level program that doesn’t care about anything other than the data and will work over USB, Ethernet, Wi-Fi, etc. But you’ve got to understand why it was the right choice instead of just throwing shade at one of the most robust filesystems out there just because it wasn’t designed for your specific use case.
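
    For what it’s worth, here’s a minimal sketch of the kind of higher-level copy rsync handles fine over a USB mount. The paths are made up, and it assumes rsync is installed and the drive is already mounted:

    ```python
    import subprocess

    # Hypothetical paths: point these at your own dataset and USB mount point.
    SOURCE = "/tank/media/"          # data living on the ZFS pool
    DEST = "/mnt/usb-backup/media/"  # plain filesystem on the USB drive

    # rsync works at the file level, not the block level, which is why it
    # doesn't care whether the destination sits behind USB, SSH, or Wi-Fi.
    result = subprocess.run(
        ["rsync", "-a", "--delete", "--info=progress2", SOURCE, DEST],
        check=False,
    )

    if result.returncode != 0:
        print(f"rsync exited with code {result.returncode}; check that the drive is still mounted.")
    ```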







  • I’m from the Americas, but not a crazy-ass gringo. I’m 32, engaged, got a good job, a good group of friends, don’t struggle too much in life, and everything’s good. School and high school still legitimately terrify me, I get nightmares over it, and I actually got snipped to avoid even the chance of having to put someone through that shit all over again…among a couple of other reasons.



  • If we can’t say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by the marketing people, the dystopian levels of dependence we’re developing on a so far unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don’t know if we’re a few steps away from massive AI breakthroughs, and we don’t know if we already have pieces of algorithms that closely resemble our brains’ own. Our experience of reality could very well be broken down into simple inputs and outputs of an algorithmic infinite loop; it’s our hubris that elevates this to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we’ve been down this road with animals before, claiming they don’t have souls or aren’t conscious beings, that somehow because they don’t very clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), they’re somehow an inferior or less valid existence.

    You’re describing very fixable limitations of ChatGPT and other LLMs, limitations that are in place mostly due to costs and hardware constraints, not algorithmic ones. On the subject of change, it’s already incredibly taxing to train a model, so of course continuous, uninterrupted training to more closely mimic our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial, making it sound actually trivial, but only in comparison to, you know, creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it’s meant as an insult.

    I’m not saying LLMs are alive, and they clearly don’t experience the reality we experience, but to say there’s no intelligence there because the machine that speaks exactly like us and a lot of times better than us, unlike any other being on this planet, has some other faults or limitations…is kind of stupid. My point here is, intelligence might be hard to define, but it might not be as hard to crack algorithmically if it’s an emergent property, and enforcing this “intelligence” separation only hinders our ability to properly recognize whether we’re on the right path to achieving a completely artificial being that can experience reality or not. We clearly are, LLMs and other models are clearly a step in the right direction, and we mustn’t let our hubris cloud that judgment.


  • What I never understood about this argument is: why are we fighting over whether something that speaks like us, knows more than us, bullshits and gets shit wrong like us, loses its mind like us, and seemingly sometimes seeks self-preservation like us is enough to fit the very self-explanatory term “artificial…intelligence”? That name does not claim the entity has as valid an experience of the world as other living beings, it does not proclaim absolute excellence in all things done by said entity, and it doesn’t even really say what kind of intelligence this intelligence would be. It simply says something has an intelligence of some sort, and that it’s artificial. We’ve had AI in games for decades; it’s not the sci-fi AI, but it’s still code taking in multiple inputs and producing a behavior as an outcome of those inputs alongside whatever historical data it may or may not have. This fits LLMs perfectly. As far as I understand, LLMs are essentially at least part of the algorithm we ourselves use in our brains to interpret written or spoken inputs and produce an output. They bullshit all the time and don’t know when they’re lying, so what? Has nobody here run into a compulsive liar or a sociopath? People sometimes have no idea where a random factoid they’re saying came from, or that it’s even a factoid, so why is it so crazy when the machine does it?

    I keep hearing the word “anthropomorphize” being thrown around a lot, as if we can’t be bringing others up into our domain, all the while refusing to even consider that maybe the underlying mechanisms that make us tick are not that special, certainly not special enough to grant us a whole degree of separation from other beings and entities, and that maybe we should instead bring ourselves down to the same domain as the rest of reality. The cold hard truth is, we don’t know that consciousness isn’t just an emergent property of various large models working together to show a cohesive image. If it is, would that be so bad? Hell, we don’t really even know if we actually have free will or if we live in a superdeterministic world, where every single particle moves along a path predetermined since the very beginning of everything. What makes us think we’re so much better than other beings, to the point where we decide whether their existence is even recognizable?


  • Yo, I’m from Honduras, and your corporations literally invaded my country when workers started complaining about the dismal work conditions. There was a straight-up coup enacted by an American business owner, specifically to install someone who aligned with American corporate values. The only way things have shifted now is that you no longer send a fleet full of armed people to break up protests; you simply shut down entire factories with single-digit days of notice if people even start speaking about unionizing. The empire still intervenes when it doesn’t like a political candidate, even now. I’m here to assure you that your take is just wrong. This is capitalism, and it evidently does not work.


  • I host a Plex server for close to 70 friends and family members from multiple parts of the world. I have over 60 TB of movies, TV shows, anime, anime movies, and FLAC music, and everyone can connect directly to my server via my reverse proxy and my public IPs. This works on their phones, their TVs, their tablets, and their PCs. I have people of all ages using my server, from very young kids to very old grandparents of friends. I have friends who share their accounts with their families, so I’ve probably already hit 100+ people using my server. Everyone can request whatever they want through Overseerr with their Plex account, and everything shows up pretty much instantly as soon as it is found and downloaded. It works almost flawlessly, whether locally or remotely, from anywhere in the world. I don’t even reside in the same home where my Plex server resides. I paid for my lifetime pass over 10 years ago.

    Can you guarantee that I can move over to Jellyfin and that every single person currently using my Plex server will keep the same level of experience and quality of life they have with Plex right now? Because if you can’t, you just answered your own question. Sometimes we self-host things for ourselves and can deal with some pains, but sometimes we need something that works for more people than just us, and that’s when we have to make compromises. Plex is not perfect and is actively being enshittified, but I can’t simply dump it and replace it with something very much meant for local or single-person use rather than for actively serving tens to hundreds of people off a server built with off-the-shelf components.
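
    If you ever do dry-run a migration, a quick external check against the reverse proxy is one way to confirm the server still answers from outside your own network. Here’s a minimal sketch; the domain is made up, and Plex’s unauthenticated /identity endpoint being exposed through the proxy is an assumption about the setup:

    ```python
    import urllib.request

    # Hypothetical reverse-proxied address; Plex itself listens on port 32400 behind it.
    # /identity is, as far as I know, an unauthenticated endpoint that only returns the
    # server's machine identifier, so it works as a cheap health check from outside.
    URL = "https://plex.example.com/identity"

    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            print(f"Reachable, HTTP {resp.status}")
            print(body[:200])  # first chunk of the XML identity response
    except Exception as exc:
        print(f"Not reachable through the proxy: {exc}")
    ```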