• umbraroze@lemmy.world · 62 points · 6 days ago

    I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”

    Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: why would anyone index those?
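
    Honouring those rules costs a crawler very little. Here is a minimal sketch of the polite-crawler side, using only Python’s standard library; the bot name and the example URL are just placeholders:

    ```python
    from urllib import robotparser

    # Fetch and parse the site's robots.txt once per host.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://en.wikipedia.org/robots.txt")
    rp.read()

    candidate = "https://en.wikipedia.org/w/index.php?title=Example&action=history"

    if rp.can_fetch("ExampleBot", candidate):
        ...  # fetch and index it
    else:
        ...  # the site asked you not to: old revisions, technical pages, etc.

    # Honour Crawl-delay if the site sets one (returns None otherwise).
    delay = rp.crawl_delay("ExampleBot")
    ```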

    • Phoenixz@lemmy.ca · 31 points · 6 days ago

      Because you’re coming at this from the perspective of a reasonable person.

      These people are billionaires who expect to get everything for free. Rules are for the plebs; just take it already.

      • pup_atlas@pawb.social · 1 point · 21 hours ago

        That’s what they’re saying, though. These shouldn’t be thought of as “rules”; they’re suggestions, near-universally designed to point you at the most relevant content. Ignoring them isn’t “stealing something not meant to be captured”, it’s wasting your own infrastructure’s time and resources on something very likely to be useless to you.

    • T156@lemmy.world · 4 points · 6 days ago (edited)

      Because it takes work to obey the rules, and you get less data for it. A theoretical competitor could get more by ignoring them and gain some vague advantage from it.

      I wouldn’t be surprised if the crawlers they used were bare-bones utilities set up to just grab everything, without worrying about rules and the like.

    • EddoWagt@feddit.nl · 3 points · 6 days ago

      They want everything. Does it exist, but it’s not in their dataset? Then they want it.

      They want their AI to answer any question you could possibly ask it. Filtering out what is and isn’t useful doesn’t achieve that.

  • DigitalDilemma@lemmy.ml · 72 points · 7 days ago

    Surprised at the level of negativity here. Having had my sites repeatedly DDoSed offline by ClaudeBot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures that help.

    • dan@upvote.au · 4 points · 6 days ago

      > thousands of times a second

      Modify your Nginx (or whatever web server you use) config to rate limit requests to dynamic pages, and cache them. For Nginx, you’d use either fastcgi_cache or proxy_cache depending on how the site is configured. Even if the pages change a lot, a cache with a short TTL (say 1 minute) can still help reduce load quite a bit while not letting them get too outdated.
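
      If Nginx can’t sit in front of the app for some reason, the same short-TTL micro-cache idea can be sketched in application code. This is illustrative only (render_page is a hypothetical page renderer), not a replacement for the proxy_cache/fastcgi_cache setup described above:

      ```python
      import time

      CACHE_TTL = 60.0  # seconds; short enough that pages stay reasonably fresh
      _cache: dict[str, tuple[float, str]] = {}

      def cached_render(path: str, render_page) -> str:
          """Serve a dynamic page from cache, re-rendering it at most once per TTL."""
          now = time.monotonic()
          hit = _cache.get(path)
          if hit is not None and now - hit[0] < CACHE_TTL:
              return hit[1]               # cache hit: the backend does no work
          body = render_page(path)        # cache miss: render once...
          _cache[path] = (now, body)      # ...and reuse it for the next minute
          return body
      ```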

      Static content (and cached content) shouldn’t cause issues even if requested thousands of times per second. Following best practices like pre-compressing content using gzip, Brotli, and zstd helps a lot, too :)

      Of course, this advice is just for “unintentional” DDoS attacks, not intentionally malicious ones. Those are often much larger and need different mitigation, often at the network or load-balancer level before traffic even hits the server.

      • DigitalDilemma@lemmy.ml · 1 point · 5 days ago

        Already done, along with a bunch of other stuff, including Cloudflare WAF and rate-limiting rules.

        I am still annoyed that it took over a day of my life to finally (so far) restrict these things, and several more days to offload the problem to Cloudflare Pages for sites that I previously self-hosted but my rural link couldn’t support.

        > this advice is just for “unintentional” DDoS attacks, not intentionally malicious ones.

        And I don’t think these high-volume AI scrapes are unintentional DDoS attacks. I consider them entirely intentional. Not deliberately malicious, but negligent to the point of criminality. (Especially in requesting the same pages so frequently, and with all of them ignoring robots.txt.)

    • zovits@lemmy.world · 1 point · 5 days ago

      It certainly sounds like they generate the fake content once and serve it from cache every time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”
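
      In other words, the expensive generation step runs once, ahead of time, and requests are served from storage. A rough sketch of that shape, with a local directory standing in for R2 and generate_decoy_page as a hypothetical generator:

      ```python
      import html
      from pathlib import Path

      STORE = Path("decoy-store")          # stand-in for the R2 bucket
      STORE.mkdir(exist_ok=True)

      def pregenerate(page_id: str, generate_decoy_page) -> None:
          """Runs once per decoy page, ahead of time, not per request."""
          raw = generate_decoy_page(page_id)    # the slow generation step happens here
          safe = html.escape(raw)               # neutralise markup so it can't carry XSS
          (STORE / f"{page_id}.html").write_text(safe, encoding="utf-8")

      def serve(page_id: str) -> str:
          """Per-request path: a cheap read, no generation."""
          return (STORE / f"{page_id}.html").read_text(encoding="utf-8")
      ```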

  • surph_ninja@lemmy.world · 35 points (1 downvote) · 6 days ago

    I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.

    • Fluke@lemm.ee · 12 points (2 downvotes) · 6 days ago

      And consumed the power output of a medium-sized country to do it.

      Yeah, great job! 👍

      • LeninOnAPrayer@lemm.ee · 20 points · 6 days ago (edited)

        We truly are getting dumber as a species. We’re facing climate change, yet we’re running some of the most power-hungry processors in the world to spit out cooking recipes and homework answers for millions of people. All to better collect their data and sell them products that will distract them from the climate disaster our corporations have caused. It would be really fun to watch if it weren’t so sad.

    • Slaxis@discuss.tchncs.de · 4 points (1 downvote) · 5 days ago

      The problem is: how? I can set it up on my own computer using open-source models and some of my own code. It’s really hard to regulate that.

    • petaqui@lemmings.world · 1 point · 5 days ago

      As with everything, it has good sides and bad sides. We need to be careful and use it properly, and the same applies to the people creating this technology.

    • gap_betweenus@lemmy.world · 1 point · 5 days ago

      Once a technology, or even an idea, is out there, you can’t really make it go away; AI is here to stay. Generative LLMs are just a small part of it.

  • TorJansen@sh.itjust.works · 42 points (2 downvotes) · 7 days ago

    And soon the already AI-flooded net will be filled with so much nonsense that it becomes impossible for anyone to get any real work done. Sigh.

  • Dr. Moose@lemmy.world · 27 points (3 downvotes) · 7 days ago (edited)

    Considering how many false positives Cloudflare serves, I see nothing but misery coming from this.

    • Dave@lemmy.nz · 20 points · 7 days ago

      In terms of Lemmy instances: if your instance is behind Cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting AI scraping.

        • Dave@lemmy.nz · 2 points · 6 days ago

          I’m not sure what can be done at the free tier. There is a switch to turn on AI bot blocking, and it breaks federation.

          You can’t whitelist domains, because federation could come from any domain. Maybe you could somehow whitelist /inbox for the ActivityPub communication, but I’m not sure how to do that in Cloudflare.

    • Xella@lemmy.world · 5 points · 6 days ago

      Lol I work in healthcare and Cloudflare regularly blocks incoming electronic orders because the clinical notes “resemble” SQL injection. Nurses type all sorts of random stuff in their notes so there’s no managing that. Drives me insane!

  • weremacaque@lemmy.world · 18 points · 7 days ago (edited)

    You have thirteen hours in which to solve the labyrinth before your baby AI becomes one of us, forever.

  • x0x7@lemmy.world · 4 points · 6 days ago

    Joke’s on them. I’m going to use AI to estimate the value of content, and now I’ll get the kind of content I want, albeit fake, that they will have to generate.