Migrated over from [email protected]

  • 0 Posts
  • 19 Comments
Joined 1 month ago
Cake day: June 28th, 2025

  • Mhm, of course, critical thinking in general is absolutely important, although I take some issue with describing looking for artifacts as “vague hunches”. Fake photos have existed for ages, and we’ve found consistent ways to spot and identify them, such as checking shadows, the directionality of light in a scene, the fringes of detailed objects, black levels and highlights, and even advanced techniques like bokeh and motion blur. You don’t see many people casting doubt on the validity of old pictures with Trump and Epstein together, for example, despite the long existence of photoshop and advanced VFX. Hell, even this image could have been photoshopped, and if it were, you’d be relying on your eyes to catch the evidence.

    The techniques I’ve outlined here aren’t likely to become irrelevant in the next 5+ years, given they’re based on how the underlying technology works, similar to how LLMs aren’t likely to 100% stop hallucinating any time soon. More than that, I actually think there’s a lot less incentive to work these minor kinks out than something like LLM hallucination, because these images already fool 99% of people, and who knows how much additional processing power it would take to run this at a resolution where you could get something like flawless tufts of grass, in a field that’s already struggling to make a profit given the high costs of generating this output. And if/when these techniques become invalid, I’ll put in the effort to learn new ones, as it’s worthwhile to be able to quickly and easily identify fakes.

    As much as I wholeheartedly agree that we need to think critically and evaluate things based on facts, we live in a world where the U.S. President was posting AI videos of Obama just a couple of weeks ago. He may be an idiot who is being obviously manipulative, but it’s naive to think we won’t eventually get bad actors like him who manipulate narratives around current events, where we can’t rely on simply fact-checking history, or who weave lies without obvious logical gaps. We need some kind of technique to verify images to settle the inevitable future “he said, she said” debates. The only real alternative is to just never trust a new photo again, because we can’t 100% prove anything new hasn’t been doctored.

    We’ve survived in a world with fake imagery for decades now; I don’t think we need to roll over and accept AI as unbeatable just because it fakes things differently, or because it might hypothetically improve at hiding itself in the future.

    Anyway, rant over, you’re right, critical thinking is paramount, and being able to clearly spot fakes is a super useful skill to add to that kit, even if it can’t 100% confirm an image as real. I believe these are useful tools to have, which is why I took the time to point them out despite the image already having been proven not to be AI by others who dated it before I got here.


  • True, someone else did some reverse image searching before I got here, but I think it’s an important skill to develop without relying on dating the image, as that will only work for so long, and there will likely be more important things than memes that will need to be proven/disproven in the future. A reverse image search probably won’t help us with the next political scandal, for example. It’s a pretty good backup to have when it applies though, nice that it proves me correct here.



  • I’d recommend you get some practice identifying and proving AI generated images. I agree this has a bit of that “look”, but in this case I’m quite certain it’s just repeated image compression/a cheap camera. Here are the major details I looked at after seeing your comment:

    • The grass at the bottom left. AI is frequently sloppy with little details and straight lines, usually the ones in the background. In this case, you can look at any blade of grass and follow it, and its path makes sense. The same happens with the lines in the tiles, the water stains, etc.
    • The birthmark on the large brown dog. In this case, this is a set of three photos, which gives us an easy way to spot AI. AI generated images start from random noise, so you’d never get the exact same birthmark, consistent across different angles, from a prompt like “large brown dog with white birthmark on chest”. Spotting a change in the birthmark, or a detail like it, would be a dead giveaway, but I can’t spot any.
    • There are other tricks as well, such as looking for strange variations in contrast and exposure from the underlying noise, but those are more difficult to explain in text. Corridor Digital has some good videos demonstrating it with visual examples if you’re interested, but suffice to say I don’t pick up on that here either.

    It’s useful to be able to prove or disprove your suspicions, as well as to be able to back them up with something as simple as “this is AI generated, just look at the grass”. Hope this helps!
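    The point about AI images starting from random noise can be illustrated with a toy sketch (this is a loose analogy using Python’s stdlib RNG, not an actual diffusion model): every fine detail is downstream of the initial noise, so two independent generations never reproduce the same incidental detail, like that birthmark.

    ```python
    import random

    def noise_grid(seed, size=8):
        """Toy stand-in for a diffusion model's initial noise: a grid of
        random values fully determined by the seed."""
        rng = random.Random(seed)
        return [[rng.random() for _ in range(size)] for _ in range(size)]

    # Two independent generations (different seeds) share no fine detail...
    a = noise_grid(seed=1)
    b = noise_grid(seed=2)
    print(a == b)   # False: the "birthmark" wouldn't match across images

    # ...while only the exact same seed reproduces the grid exactly.
    print(noise_grid(seed=1) == a)  # True
    ```

    A prompt describes the subject, not the noise, so three separate generations with a consistent birthmark would be wildly unlikely.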


  • I’ll give two answers to this question, from the perspective of a Christian reading the Old Testament/Torah.

    Wouldn’t it be effective to convince followers of a religion if a religion could accurately predict a scientific phenomenon before its followers have the means of discovering it?

    This is interpretative, but if there is a God, he seems big on free will. Why give humanity the option to sin in the garden at all? Why not just reveal himself in the sky each morning? Why even bother creating a universe that can be explained without him? There’s an abundance of easy ways God could make himself irrefutable, and yet in the Bible he makes us “in His image”, and offers us choices like that tree in the garden.

    Furthermore, why even create us to sin in the first place? My interpretation of the Torah is that God is big on relationship, and that free will is a key part of that. Just like a human relationship based on a love potion is kinda creepy, and a pale imitation of something real, it seems like God doesn’t want to be irrefutable.

    I think that’s the more relevant answer to your question, but I’ll also give the only example that comes to mind of the Bible seemingly imparting “scientific knowledge”, which is to look at the laws around “cleanliness”. Someone else already mentioned some “unclean” animals, but if you read more, they pretty consistently seem like good advice around bacteria. Some examples of times you need to “purify” (essentially take a bath) that seem like common sense now:

    • being around dead bodies
    • touching blood that’s not yours
    • having your period
    • etc.

    Reading this as a modern person aware of germs, many of these “laws” seem like they would have kept the death rate of faithful Jews a lot lower than that of their neighbours at the time.



  • Hazzard@lemmy.zip to Fediverse@lemmy.world · NSFW on Lemmy

    Exactly what I’ve done. Set my settings to hide NSFW, blocked most of the “soft” communities like hot girls and moe anime girls and whatever else (blocking the lemmynsfw.com instance is a great place to start), and I use All frequently. That’s how I’ve found all the communities I’ve subscribed to, but frankly, my /all feed is small enough that I usually see all my subscribed communities anyway.





  • My guess here is that it isn’t Denuvo; the engine just doesn’t seem designed for open world games. These issues all also exist on console, where Denuvo isn’t present (although it certainly isn’t helping on PC either). Dragon’s Dogma 2 exhibited a lot of the same poor performance and stuttering nearly a year before MH Wilds came out.

    By then, I assume the game was too far into development to change course, with its ambitious design and a lot of AI that always has to run in each area adding to the engine issues.

    Honestly… I’m not sure how much better they can make it, given how much time they’ve had to work on it, and that DD2 never really escaped its issues either. It feels like RE Engine was just… fundamentally not designed for this, no matter how great an engine it is in its niche.



  • Yeah, the vision of “transferable NFT cosmetics” always struck me as ridiculous, for exactly this reason.

    Even if some hypothetical NFT spec did allow a cosmetic to be fully stored in the NFT, such that a game could implement a standard API and support NFTs from different studios, what would the specs on that item be? Is the CoD rifle gonna look exactly like the Fortnite rifle so the skins can work in each? Is the Lamborghini from Forza gonna move exactly like one from Gran Turismo?

    Each game has its own engine and its own balancing to worry about; you can’t just blindly drag and drop assets like this, and nobody is gonna keep up with bespoke support for an arbitrary number of assets while more are minted every day.

    Definitely one of those “promises” that’s just based on sounding cool, without any actual substance behind it, at least not when it comes to anything unique to NFTs.



  • Mhm, fair enough, I suppose this is a difference in priorities then. Personally, I’m not nearly as worried about small players, like hobbyists and small companies, who wouldn’t’ve already developed something like this in house.

    And I brought up “security through obscurity” because I’m somewhat optimistic this can play out like encryption did, where tons of open source research went into both encryption and breaking it, until we worked out encryption standards we can run at home that current server farms couldn’t break before the heat death of the universe.
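    The “heat death of the universe” bit is just arithmetic, and worth seeing once. A back-of-envelope sketch, assuming an absurdly generous 10^18 guesses per second (far beyond any current server farm) against a 256-bit keyspace:

    ```python
    # Back-of-envelope: exhaustive search of a 256-bit keyspace.
    KEYSPACE = 2 ** 256
    GUESSES_PER_SECOND = 10 ** 18   # generous assumption, not a real benchmark
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    years = KEYSPACE / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{years:.2e} years")     # ~3.7e51 years, vs ~1.4e10 since the Big Bang
    ```

    Even dividing that across a billion such machines barely dents the exponent, which is the whole point of the analogy: open research got us to schemes where brute force is simply off the table.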

    Many of those people releasing decryption methods were considered villains, because it made hacking some previously private data easy and accessible, but that research was the only way to get to where we are, and I’m hopeful that one day we actually could make an unbeatable AI poison, so I’m happy to support research that pushes us towards that end.

    I’m just not satisfied preventing small players from training AI on art without permission while knowingly leaving Google and OpenAI an easy way to bypass it.


  • Exactly, it is an arms race. But if a few students can beat our current best weapons, it’d be terribly naive to think the multiple multi-billion dollar companies, sinking their entire futures into this, and also already amoral enough to be stealing content en masse from the entire internet, hadn’t already cracked this and locked everyone involved into serious NDAs.

    Better to know what your enemy has than to just cross your fingers and hope that maybe they didn’t notice this was possible, and have just been letting us poison the precious AI models they’re sinking billions of dollars into. Having this now lets us build the next version of Nightshade, one that isn’t so trivially defeated.




  • Ugh, this is what our legacy product has. Microservices that literally cannot be scaled, because they rely on internal state, and are also all deployed on the same machine.

    Trying to do things like just updating Python versions is a nightmare, because you have to do all the work 4 or 5 times.

    Want to implement a linter? Hope you want to do it several times. And all for zero microservice benefits. I hate it.
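    Why in-process state kills horizontal scaling can be shown with a toy sketch (all names here are hypothetical, not from any real codebase): replicas that keep state in memory drift apart, while replicas backed by a shared store are interchangeable.

    ```python
    class StatefulCounter:
        """Keeps its count in process memory: each replica sees only its own traffic."""
        def __init__(self):
            self.count = 0

        def hit(self):
            self.count += 1
            return self.count

    class StatelessCounter:
        """Externalizes state to a shared store (stand-in for Redis/a database)."""
        def __init__(self, store):
            self.store = store

        def hit(self):
            self.store["count"] = self.store.get("count", 0) + 1
            return self.store["count"]

    # A load balancer alternating between two stateful replicas: totals diverge.
    a, b = StatefulCounter(), StatefulCounter()
    for replica in (a, b, a, b):
        replica.hit()
    print(a.count, b.count)   # 2 2 -- neither replica ever sees the true total of 4

    # The same traffic against stateless replicas sharing one store:
    # any replica can serve any request, so you can add or remove them freely.
    shared = {}
    c, d = StatelessCounter(shared), StatelessCounter(shared)
    for replica in (c, d, c, d):
        replica.hit()
    print(shared["count"])    # 4
    ```

    With state pinned inside each process, the only “scaling” option is a bigger machine, which is exactly the trap described above.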