

TiVo had such an excellent UI. DVRs became common, but every one I saw had a far inferior interface. Such a shame.
I bought and hacked a TiVo unit and used it for years in a place where the service wasn’t available. I miss that thing.
The problem really isn’t the exact percentage; it’s the way it behaves.
It’s trained to never say no. It’s trained to never be unsure. In many cases an answer of “You can’t do that” or “I don’t know how to do that” would be extremely useful. But, instead, it’s like an improv performer always saying “yes, and” and then just inventing some bullshit.
I don’t know about you guys, but I frequently end up going down rabbit holes where there are literally zero Google results matching what I need. What I’m looking for is so specialized that nobody has taken the time to write up an indexable web page on how to do it. And, that’s fine. So, I have to take a step back and figure it out for myself. No big deal. But, Google’s “helpful” AI will helpfully generate some completely believable bullshit. It takes what I’m searching for, matches it to something similar, and does a bit of search-and-replace to make it seem like it would work for me.
I’m knowledgeable enough to know that I can just ignore that AI-generated bullshit, but I’m sure there are a lot of other, more gullible or optimistic people who will take that AI garbage at face value and waste all kinds of time trying to get it working.
To me, the best way to explain LLMs is to say that they’re these absolutely amazing devices that can be used to generate movie props. You’re directing a movie and you want the hero to pull up a legal document submitted to a US federal court? It can generate one in seconds that would take your writers hours. It’s so realistic that you could even have your actors look at it and read from it and it will come across as authentic. It can generate extremely realistic code if you want a hacking scene. It can generate something that looks like a lost Shakespeare play, or an intercept from an alien broadcast, or medical charts that look exactly like what you’d see in a hospital.
But, just like you’d never take a movie prop and try to use it in real life, you should never actually take LLM output at face value. And that’s hard, because it’s so convincing.
Summarizing requires understanding what’s important, and LLMs don’t “understand” anything.
They can reduce word counts, and they have some statistical models that can tell them which words are fillers. But, the hilarious state of Apple Intelligence shows how frequently that breaks.
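To make that concrete, here’s a toy sketch of what purely statistical word-count reduction looks like. This is a frequency heuristic of my own invention purely for illustration (nothing like how an actual LLM or Apple Intelligence works internally): it scores sentences by how often their non-filler words appear and keeps the top ones, with no notion of which facts actually matter.

    import re
    from collections import Counter

    # Hand-picked "filler" words; a real system would derive these statistically.
    STOPWORDS = {"the", "a", "an", "is", "are", "was", "of", "to", "and",
                 "in", "it", "that", "this", "for", "on", "with", "as"}

    def summarize(text, keep=2):
        # Naive sentence split on terminal punctuation.
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        words = [w for w in re.findall(r"[a-z']+", text.lower())
                 if w not in STOPWORDS]
        freq = Counter(words)

        # Score a sentence by the corpus frequency of its non-filler words.
        def score(sentence):
            return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                       if w not in STOPWORDS)

        top = sorted(sentences, key=score, reverse=True)[:keep]
        # Keep original order so the output still reads like prose.
        return " ".join(s for s in sentences if s in top)

    sample = ("TiVo had an excellent UI. DVRs became common later. "
              "Most DVR interfaces were inferior. That is a shame.")
    print(summarize(sample))

It “summarizes” in the sense that it cuts words, and it will confidently keep the wrong sentences the moment importance and frequency disagree. That’s the failure mode: the statistics can only tell you what’s common, not what’s important.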
The picture suggests that there was an ad offering a stop at the sponsored gas station; if the user clicked on the suggestion / ad, then the route would be modified.
IMO that’s massively different from the detour being added by default. If they were actually doing that, it would be a huge scandal, but I don’t think that’s what’s happening here. Instead, it’s just an intrusive, annoying ad.
Git is a DVCS. GitHub is a place where DVCS repositories are hosted. There are many other places where DVCS repositories can be hosted, but GitHub is the most famous one… Porn is a type of content. PornHub is a place where porn is hosted. There are many other places where porn can be hosted, but PornHub is the most famous one. It’s a pretty good analogy.
Also because section 1201 of the DMCA means that otherwise useful things become e-waste.