• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: October 20th, 2024


  • Most proprietary software has a catchy name and branding, a single website to visit, and a push to “sign up” or “download now”. In contrast, most FOSS projects have goofy or even unpronounceable names with little or bad branding, no clear authoritative website (especially with federated services), and too much friction to sign up for or download the software.

    Additionally, you and I see a clear benefit to open source software, but most people either don’t know what it is or don’t really understand or care why it’s beneficial. It seems so clear and obvious to us, so much so that we’re willing to put up with all kinds of rough edges and hurdles to use it.

    This is even worse with federated social media because of the network effect. If there are no friends or celebrities already there, it’s not clear why I’d want to be there, and there are very few organizations with accounts that have useful information I want or need. Even worse, what good stuff does exist is spread across a bunch of different instances and interfaces, so if something gets shared on other networks, it’s not clear where it came from or where I’d go to get more of it.

    I’m sure if you look around there are other examples in your life where you haven’t put much thought into things beyond your obvious needs. Do you care enough about ethically sourced diamonds or coffee or other products to make the extra effort to only purchase those? Do you scour labels at the grocery store to ensure they’re sourcing ingredients from reputable places and avoiding certain chemicals or drugs that you don’t want? Do you care if you’re using services built on clean energy or if they pay fair wages to their employees? Maybe you do all that, but most people find worrying about all that stuff exhausting and just want something to eat, a product that is useful to their life at a fair price, a helpful service that is affordable, etc.





  • Like another commenter, my ADHD interest in novelty helps offset the anxiety from the routine break. In my case I feel more anxiety about losing a routine that I’ve struggled to maintain (thanks again to the ADHD), which has happened to me multiple times this year thanks to random health issues that were outside of my control.

    That said, I have experienced anxiety about using a hotel gym because I’ve struggled to substitute different exercises to accommodate different equipment. I hate having to share equipment with other people in the hotel gym (versus using my own equipment in my basement).

    One strategy I’ve considered (but haven’t executed on, so take it with a grain of salt) is developing a go-to bodyweight routine. That way I don’t have to worry about what equipment is or is not available, or the social stress of finding other people in the gym; I can just do a predetermined set of bodyweight exercises alone in my hotel room. It’s still a break in my routine because it’s a different set of exercises in a different environment, but at least I can mentally prepare myself with something that I’ve planned in advance as a substitute.

    Alternatively, I’ve accepted that days when I’m stuck in a hotel are going to be different, and since my struggle is maintaining a stable routine of working out on a schedule, I’ve defaulted to using a treadmill for a set period of time. It’s not the same workout or feeling, but I can usually depend on any exercise room at the very least having a treadmill that I can use.


  • Feels like this is the same logic that is used to ban sex education - they shouldn’t have sex until marriage so you shouldn’t teach them about it. I accept that kids will have access to fully loaded genitals, so I want them to understand and respect the consequences and to be able to act responsibly.

    As much as I would like firearm access to be restricted and regulated, I accept that in the world we live in they’re not going anywhere anytime soon. I also think alcohol is deadly, both for the consumer and the people around them, but I’d rather teach my kid to drink responsibly than send them off to college unprepared; I’ve seen too many sheltered kids go fucking crazy once they’re out from under their parents’ thumb.

    I’ve always felt that harm reduction was an admirable thing. Accept and work with human nature instead of against it. I don’t want people shooting up heroin, but I support giving away free, clean needles to prevent even worse outcomes. Doesn’t mean I endorse or encourage the activity.



  • “Okay, this one is a freebie, since you’re going to figure it out relatively quickly, but making the runner-up in the Presidential election the Vice President is just dumb. Having your political rival as the next in line is just… c’mon, you can do better than that.

    Next, I know some of you are concerned about factions. It’s human nature to band together, so you can’t really stop this, sorry. That said, this first-past-the-post system is awful and will result in a duopoly and a race to the bottom. If you adopt ranked choice voting from the start, factions won’t be neutralized, but at least you’ll ensure a more diverse set of them and limit any one faction’s power to dominate the others.

    Now then, the second amendment. I know it seems straightforward to you, but eventually this language will be too vague. Additionally, those muskets are eventually going to evolve into easily concealable firearms that can fire a dozen or more deadly accurate shots and be reloaded in under a second. Ironically, they’ll still be wildly outclassed by what a tyrant can field against patriots exercising their rights. I’m not going to tell you what you should do, but at the very least you should clarify what a “well regulated militia” means. Yeah, I know the Constitution can be amended so that it can evolve around those firearms, but remember my previous point? That duopoly will ensure that nothing is done about this and people will argue past each other.

    What’s next? Oh yeah… you know that clever set of checks & balances y’all designed? Sure, it makes sense that each branch would jealously guard its political power, but those factions are going to prefer to centralize all their power behind a single executive. I know those of you who supported the Articles of Confederation are horrified by that idea, but eventually things will evolve back into an elected monarchy, and once they consolidate enough power, that elected part will probably disappear too. So uh, might want to strengthen those checks & balances. Good luck on that one.

    Finally, I know y’all can’t envision a world without slavery. I get that the southern states are dependent on it and abolishing it overnight therefore seems impossible, but kicking the can down the road (do you guys know that idiom?) is just going to make the problem worse. It will literally divide the country in two and have major ramifications for centuries to come. If you can’t abolish it now, at least put a framework in place to transition away from it. Maybe in a way that respects those wonderful values you profess in the Declaration of Independence that, for now, only apply to a handful of landowning men.”




  • Wernher von Braun, the primary architect of the Saturn V rocket that took us to the moon, had plans to get us to Mars by 1984. Not sure that was completely realistic, but it’s hard to believe that 40 years after that we don’t even have any serious plans.

    I’m sorry to hear about your self abuse. This internet stranger is hoping that you’re in a better place.




  • Am I afraid to face down a cashier? No.

    Is it REALLY that bad? No.

    Can I make awkward small talk with a stranger? Yes.

    Do I want to make awkward small talk with a stranger? No.

    Am I relieved that I’m not forced to interact with a stranger and can be left to my own inner thoughts, without having to spend time rehearsing in my head what to say if they ask me how I am, because I feel weirdly compelled to answer honestly instead of simply saying “fine” like most people do? Absolutely.


  • greygore@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 2 months ago

    The only thing close to a decision that LLMs make is

    That’s not true. An “if statement” is literally a decision tree.

    If you want to engage in a semantic argument, then sure, an “if statement” is a form of decision. But that’s a worthless distinction that has nothing to do with my original point, and I believe you’re aware of that, so I’m not sure what it adds to the actual meat of the argument.

    The only reason they answer questions is because in the training data they’ve been provided

    This is technically true for something like GPT-1. But it hasn’t been true for the models trained in the last few years.

    Okay, what was added to models trained in the last few years that makes this untrue? To the best of my knowledge, the only advancements have involved:

    • Pre-training, which involves some additional steps to add to or modify the initial training data
    • Fine-tuning, which is additional training on top of an existing model for specific applications.
    • Reasoning, which to the best of my knowledge involves breaking the token output down into stages to give the final output more depth.
    • “More”. More training data, more parameters, more GPUs, more power, etc.

    I’m hardly an expert in the field, so I could have missed plenty. But what is it that makes it “understand” that a question needs to be answered that doesn’t ultimately go back to the original training data? If I feed it training data that never involves questions, then how will it “know” to answer questions at all?

    it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere

    It has a large amount of system prompts that alter default behaviour in certain situations. Such as not giving the answer on how to make a bomb. I’m fairly certain there are catches in place to not be overly apologetic to minimize any reputation harm and to reduce potential “liability” issues.

    System prompts are literally just additional input that is “upstream” of the actual user input, and I fail to see how that changes what I said: the model doesn’t understand what an apology is, and it can’t be sincere when it’s just spitting out words based on their statistical relation to one another.
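
    As a minimal sketch of what I mean (illustrative Python with made-up names, not any vendor’s actual API), a “system prompt” is just more text glued onto the front of the input before the model predicts the next tokens:

        # Hypothetical illustration: a "system prompt" is only more input text.
        def build_model_input(system_prompt: str, user_message: str) -> str:
            # The model receives one combined token stream; there is no separate
            # channel for "instructions" versus "conversation".
            return f"{system_prompt}\n\nUser: {user_message}\nAssistant:"

        prompt = build_model_input(
            "Be helpful. Don't be overly apologetic.",
            "Why did you lie to me?",
        )
        # `prompt` is then tokenized and fed through the same next-token
        # prediction loop as any other text.

    The apparent change in behavior is still just text in, text out; there’s no separate channel of “rules” that the model understands differently.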

    An LLM doesn’t even understand the concept of right or wrong, much less why lying is bad or when it needs to apologize. It can “apologize” in the sense that it has many examples of apologies that it can synthesize into output when you request one, but beyond that it’s just outputting text. It doesn’t have any understanding of that text.

    And in that scenario, yes, I’m being gaslit because a human told it to.

    Again, all that’s doing is adding additional words that can be used in generating output. It’s still just generating text output based on text input. That’s it. It has to know it’s lying or being deceitful in order to gaslight you. Does the text resemble something that can be used to gaslight you? Sure. And if I copy and pasted that from ChatGPT that’s what I’d be doing, but an LLM doesn’t have any real understanding of what it’s outputting so saying that there’s any intent to do anything other than generate text based on other text is just nonsense.

    There is no thinking

    Partially agree. There’s no “thinking” in sentient or sapient sense. But there is thinking in the academic/literal definition sense.

    Care to expand on that? Every definition of thinking that I find involves some kind of consideration or reflection, which I would argue the LLM is not doing, because it’s literally generating output based on a complex system of weighted parameters.

    If you want to take the simplest definition of “well, it’s considering what to output and therefore that’s thought”, then I could argue my smart phone is “thinking” because when I tap on a part of the screen it makes decisions about how to respond. But I don’t think anyone would consider that real “thought”.

    There are no decisions

    Absolutely false. The entire neural network is billions upon billions of decision trees.

    And a logic gate “decides” what to output. And my lightbulb “decides” whether or not to light up based on the state of the switch. And my alarm “decides” to go off based on what time I set it for last night.

    My entire point was to stop anthropomorphizing LLMs by describing what they do as “thought”, and that they don’t make “decisions” in the same way humans do. If you want to use definitions that are overly broad just to say I’m wrong, fine, that’s your prerogative, but it has nothing to do with the idea I was trying to communicate.

    The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they are

    I promise you I know very well what LLMs and other AI systems are. They aren’t alive, they do not have human or sapient level of intelligence, and they don’t feel. I’ve actually worked in the AI field for a decade. I’ve trained countless models. I’m quite familiar with them.

    Cool.

    But “gaslighting” is a perfectly fine description of what I explained. The initial conditions were the same and the end result (me knowing the truth and getting irritated about it) were also the same.

    Sure, if you wanna ascribe human terminology to what marketing companies are calling “artificial intelligence” and further reinforce misconceptions about how LLMs work, then yeah, you can do that. If, like me, you care about people understanding that these algorithms aren’t actually thinking the way humans do, and therefore not believing falsehoods about their capabilities, then you’d use different terminology.

    It’s clear that you don’t care about that and will continue to anthropomorphize these models, so… I guess I’m done here.


  • I watched this entire video just so that I could have an informed opinion. First off, this feels like two very separate talks:

    The first part is a decent breakdown of how artificial neural networks process information and store relational data about that information in a vast matrix of numerical weights that can later be used to perform some task. In the case of computer vision, those weights can be used to recognize objects in a picture or video streams, such as whether something is a hotdog or not.
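
    As a rough sketch of that idea (illustrative Python with made-up numbers, nothing to do with Hinton’s actual models), all of the “knowledge” is just matrices of learned weights applied to the input:

        import numpy as np

        # Made-up weights standing in for what training would have learned.
        rng = np.random.default_rng(0)
        weights_hidden = rng.normal(size=(1024, 64))  # image features -> hidden layer
        weights_output = rng.normal(size=(64, 2))     # hidden layer -> [hotdog, not hotdog]

        def classify(image_features: np.ndarray) -> str:
            hidden = np.maximum(image_features @ weights_hidden, 0)  # ReLU activation
            scores = hidden @ weights_output
            return "hotdog" if scores.argmax() == 0 else "not hotdog"

        print(classify(rng.normal(size=1024)))  # random weights, so the "answer" is meaningless

    Training is the process of nudging those numbers until the outputs line up with the labeled examples; the “knowledge” is nothing more than the final values of the weights.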

    As a side note, if you look up Hinton’s 2024 Nobel Prize in Physics, you’ll see that he won based on his work on the foundations of these neural networks and, specifically, their training. He’s definitely an expert on the nuts and bolts of how neural networks work and how to train them.

    He then goes into linguistics and how language can be encoded in these neural networks, which is how large language models (LLMs) work… by breaking down words and phrases into tokens and then using the weights in these neural networks to encode how these words relate to each other. These connections are later used to generate other text output related to the text that is used as input. So far so good.

    At that point he notes that these foundational building blocks have led to where we are now, at least in a very general sense. He then has what I consider the pivotal slide of the entire talk, labeled Large Language Models, which you can see at 17:22. In particular he has two questions at the bottom of the slide that are most relevant:

    • Are they genuinely intelligent?
    • Or are they just a form of glorified auto-complete that uses statistical regularities to pastiche together pieces of text that were created by other people?

    The problem is: he never answers these questions. He immediately moves on to his own theory about how language works using an analogy to LEGO bricks, and then completely disregards the work of linguists in understanding language, because what do those idiots know?

    At this point he brings up The long term existential threat and I would argue the rest of this talk is now science fiction, because it presupposes that understanding the relationship between words is all that is necessary for AI to become superintelligent and therefore a threat to all of us.

    Which goes back to the original problem in my opinion: LLMs are text generation machines. They use neural networks, encoded as matrices of weights, to predict long strings of text based on other text. That’s it. You input some text, and it outputs other text based on that original text.
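
    As a toy sketch (illustrative Python, with a lookup table standing in for billions of learned weights), generation is just repeatedly picking a statistically likely next token given the text so far:

        import random

        # Hypothetical next-token statistics that "training" would have produced.
        next_token_counts = {
            "questions": {"are": 5, "get": 3},
            "are": {"usually": 4, "often": 2},
            "usually": {"followed": 6},
            "often": {"followed": 2},
            "followed": {"by": 6},
            "by": {"answers": 5, "silence": 1},
        }

        def generate(start: str, max_tokens: int = 5) -> str:
            tokens = [start]
            for _ in range(max_tokens):
                options = next_token_counts.get(tokens[-1])
                if not options:
                    break
                words, weights = zip(*options.items())
                # Weighted random choice: no goals, no understanding, just statistics.
                tokens.append(random.choices(words, weights=weights)[0])
            return " ".join(tokens)

        print(generate("questions"))  # e.g. "questions are usually followed by answers"

    A real LLM conditions on the whole context instead of just the last word and has vastly better statistics, but the loop is the same: text in, statistically likely text out.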

    We know that different parts of the brain have different responsibilities. Some parts are used to generate language, other parts store memories, still other parts are used to make our bodies move or regulate autonomic processes like our heartbeat and blood pressure. Still other bits are used to process images from our eyes and other parts reason about spatial awareness, while others engage in emotional regulation and processing.

    Saying that having a model for language means we’ve built an artificial brain is like saying that because I built a round shape called a wheel, I invented the modern automobile. It’s a small part of a larger whole, and although neural networks can be used to solve some very difficult problems, they’re only a specific tool that can be used to solve very specific tasks.

    Although Geoffrey Hinton is an incredibly smart man who mathematically understands neural networks far better than I ever will, extrapolating that knowledge out to believing that a large language model has any kind of awareness or actual intelligence is absurd. It’s the underpants gnome economic theory, but instead of:

    1. Collect underpants
    2. ?
    3. Profit!

    It looks more like:

    1. Use neural network training to construct large language models.
    2. ?
    3. Artificial general intelligence!

    If LLMs were true artificial intelligence, then they would be learning at an increasing rate as we give them more capacity, leading to the singularity as their intelligence reaches hockey-stick exponential growth. Instead, we’ve been throwing a growing amount of resources at these LLMs for increasingly smaller returns. We’ve thrown a few extra tricks into the mix, like “reasoning”, but beyond that, I believe it’s clear that we’re headed towards a local maximum: one that falls far short of intelligence that would be truly useful (or represent an actual existential threat), but that resembles what a human can output well enough to fool human decision makers into trusting these systems to solve problems they are incapable of solving.


  • greygore@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 2 months ago

    It didn’t lie to you or gaslight you because those are things that a person with agency does. Someone who lies to you makes a decision to deceive you for whatever reason they have. Someone who gaslights you makes a decision to behave like the truth as you know it is wrong in order to discombobulate you and make you question your reality.

    The only thing close to a decision that LLMs make is: what text can I generate that statistically looks similar to all the other text that I’ve been given. The only reason they answer questions is because in the training data they’ve been provided, questions are usually followed by answers.

    It’s not apologizing to you; it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere - it has no ability to be sincere because it doesn’t have any thoughts.

    There is no thinking. There are no decisions. The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they are, and the more we fall into the trap of these AI marketers about how close we are to truly thinking machines.