

Good thing I never used Google Assistant then.
Just be careful running a Tor node and what kind of data could be flowing through your machine… It’s Tor… so some of the data can be pretty fuckin unsavory while other data can be political dissidents who need safety.
modern copyright law is far, far overreaching and in need of major overhaul.
https://rufuspollock.com/papers/optimal_copyright_term.pdf
This research paper from Rufus Pollock in 2009 suggests that the optimal copyright term is 15 years. I’ve been referencing it for, well, 16 years now, a year longer than the optimal term itself. If I recall correctly, I first saw it referenced by Mike Masnick of Techdirt.
Very intuitive, the design is very human.
You must have missed the Bush era/Snowden era:
A new story published on the German site Tagesschau and followed up by BoingBoing and DasErste.de has uncovered some shocking details about who the NSA targets for surveillance including visitors to Linux Journal itself.
While that is troubling in itself, even more troubling to readers on this site is that linuxjournal.com has been flagged as a selector! DasErste.de has published the relevant XKEYSCORE source code, and if you look closely at the rule definitions, you will see linuxjournal.com/content/linux* listed alongside Tails and Tor. According to an article on DasErste.de, the NSA considers Linux Journal an “extremist forum”. This means that merely looking for any Linux content on Linux Journal, not just content about anonymizing software or encryption, is considered suspicious and means your Internet traffic may be stored indefinitely.
One of the biggest questions these new revelations raise is why. Up until this point, I would imagine most Linux Journal readers had considered the NSA revelations as troubling but figured the NSA would never be interested in them personally. Now we know that just visiting this site makes you a target. While we may never know for sure what it is about Linux Journal in particular, the Boing Boing article speculates that it might be to separate out people on the Internet who know how to be private from those who don’t so it can capture communications from everyone with privacy know-how. If that’s true, it seems to go much further to target anyone with Linux know-how.
Let me reiterate this part: the NSA considers Linux Journal an “extremist forum”.
I guess my interest in not wanting ads shoved down my throat or not wanting to deal with Microsoft anymore makes me an extremist.
The seeds for this were planted long ago.
Good question. In a way, the Fedi is a bit like the Storm Area 51 flashmob joke: “they can’t catch all of us!”
The diversified instances may make it harder to track every server and every individual.
I work in higher education and even people with PhDs often fail to type up a coherent email.
If someone got a PhD without being able to write a coherent sentence, that says more about how we’re handing out PhDs to unqualified people than it does that we need LLMs to solve that.
It’s not perfect, as you’d expect, but it turns a minute spent typing out a well-thought-out question into an hours-long head start on getting into the research surrounding your question (and does it all without sending any data to OpenAI et al). Getting you over the initial hump of not knowing exactly where to start is where I see a lot of the value of LLMs.
I’ll concede that this seems useful in saving time to find your starting point.
However.
Is speed as a goal itself a worthwhile thing, or something that capitalist processes push us endlessly toward? Why do we need to be faster?
In prioritizing speed over slow, tedious personal research, aren’t we allowing ourselves to be put in a position where we might overlook truly relevant research simply because it doesn’t “fit” the “well-thought-out question”? I’ve often found research that isn’t entirely in the wheelhouse of what I’m looking at but is actually deeply relevant to it. By using the method you proposed, there’s a good chance I never surface that research, because I had a glorified keyword search find “relevancy” instead of fumbling around in the dark and hitting a “Eureka!” moment of clarity with something that initially seemed unrelated.
It’s more that we genuinely don’t see the net benefit of LLMs in general. Most of us are not programmers who need something to help “efficiency” up our speed of making code. I am perfectly capable of doing research, cataloging sources, and producing my own writing.
I can see marginal benefit for those who struggle with writing, but the problem therein is that they still need to run whatever their LLM spits out past another human to make sure it’s actually accurate or well written. In the end, with all of it you still need human editors and at that point, why have the LLM at all?
I’d love to hear what problem you think LLMs actually solve.
Robot that can “intuitively” jerk me off when?
*Looking at the senior dev’s JavaScript code
My God, it even has a watermark.
https://discourse.pi-hole.net/t/cannot-access-web-interface-after-pihole-6-update/77366/4
The `git fsck` failure showed a corrupt repository, so I researched how to repair a git repository and found the tool `git-repair`. I installed and ran it with the `--force` flag, and this repaired the repository.
Then I ran `git pull` and the repository was healthy again. My web UI also works now!!
Probably a silly feature request, but would it be worthwhile to add a `git fsck` on all the pihole stores to the debug script?
To start, you should go to your web admin folder at `/var/www/html/admin` and run a `git fsck` to make sure you’re having the same problem as the person above. If you get a lot of failures, it’s likely the same issue.
So based on this resolved thread, it looks like you need to install `git-repair` and then once again go to your pihole web admin interface folder at `/var/www/html/admin`.
Then once in that folder, run `git-repair --force`, and when that completes, run `git pull`. Hopefully that resolves the issue for you.
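Putting the whole recovery together, the steps from that thread boil down to a short sequence. (The path is the one from the thread; the `apt` package name for `git-repair` is an assumption on my part for Debian-based systems like Raspberry Pi OS.)

```shell
# Check the Pi-hole web admin repo for corruption
cd /var/www/html/admin
git fsck                      # lots of errors => corrupt repository

# Install git-repair (package name assumed for Debian-based systems)
sudo apt install git-repair

# Repair the repository, then re-sync with the remote
git-repair --force
git pull
```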
Especially if you’ve set up key verification for SSH, you don’t even have to mess with a password.
Then it’s literally just pihole -up
God regular people are so fucking weird haha. I just can’t wrap my mind around wanting to click on ads.
Sometimes I wonder if working in local television news for 10 years and being subjected to ads basically constantly broke something in me.
I’ll definitely give those a spin when I’ve done a fresh install of pihole 6. I’ve been hesitant to do so because I don’t know how to do a fresh install easily when I’ve already used unbound to make my pi-hole a recursive DNS server and one of my pi-holes also doubles as an immich server so I have to do a lot of backing up I as of yet have been too lazy to do.
I still don’t see a difference between choosing who to follow (Mastodon)/choosing what communities to follow (Lemmy)/blocking people-communities-intances you don’t like and “creating your own feed.” To me, all of those things are creating your own feed.
…until your family complains that their favorite site has stopped working.
Pi-Hole these days allows you to create Groups so you can set certain devices to fewer or less restrictive blocklists or just leave their connection untouched entirely. Groups is basically how you solve the problem of it breaking something for someone else.
Source: Pissed off my roommate who I somehow accidentally blocked from using Google to appraise his magic cards or something.
I’m not sure what I’m understanding that’s markedly different from what we have here in terms of feeds, nor am I sure letting users curate and create their own personal echo chambers is a real “solution.”
If I understand it correctly, only some of the ways of viewing Lemmy content actually have an algorithm behind them (Hot view, for instance), whereas things like Top are… literally just the top posts/comments based on aggregated upvotes/downvotes. New just shows things chronologically from newest to oldest, and Old is the opposite of that. Controversial is potentially an algorithm, but I’m not entirely sure, because it seems like it could be calculated as simply as Top is.
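As an illustration of how little “algorithm” a Top or New view actually needs, here’s a toy sketch: a made-up CSV of posts (id, score, timestamp — all hypothetical) where each view is just a plain sort on a different column.

```shell
# Hypothetical posts file: id,score,timestamp
printf 'a,5,100\nb,9,300\nc,2,200\n' > posts.csv

# "Top": numeric sort on aggregated score, descending -> b, a, c
sort -t, -k2,2nr posts.csv

# "New": numeric sort on timestamp, descending -> b, c, a
sort -t, -k3,3nr posts.csv
```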
Manipulating things over here is more like making spam accounts and flooding with upvotes/downvotes, which is a problem but hopefully one that gets addressed as development continues.
I thought Mastodon was just a chronological feed as well. Not a lot to manipulate there?
I’ll be real, I don’t get the hype for Bluesky when it’s venture capital funded (by Blockchain Capital no less) and eventually those VCs are going to want a return on investment. At some point, something will have to be done to produce a profit and won’t that be when the screws start being turned on the users?
For sure, but I am also capable of doing that with my thumbs… which is what I do.