• 58008@lemmy.world · 26 upvotes · 1 hour ago

    At least they have an AI-free option, as annoying as it is to have to opt into it.

    On a related note, it’s hilarious to me that the Ecosia search engine has AI built in. Like, I don’t think planting any number of trees is going to offset the damage AI has done and will do to the planet.

  • Deestan@lemmy.world · 16 upvotes · 1 hour ago

    Meanwhile, at HQ: “The userbase hallucinated that they don’t want AI. Maybe we prompted them wrong?”

  • radio@sh.itjust.works · 17 upvotes · 2 hours ago

    And how much of their budget are they blowing on AI features despite polls showing their regular users don’t even want it? Probably also 90%.

  • gaymer@aussie.zone · 13 upvotes, 7 downvotes · edited · 2 hours ago

    People are fucking weird and they can’t be trusted. I can guarantee you 90% voted no AI, yet nobody will use noai.duckduckgo.com.

    Remember subreddits going dark and people leaving Reddit. Hahaha, how did that work out?

    • pressanykeynow@lemmy.world · 5 upvotes · 16 minutes ago

      Remember subreddits going dark and people leaving Reddit. Hahaha, how did that work out?

      We are on lemmy now, so I don’t get your point here.

      • gaymer@aussie.zone · 1 upvote, 1 downvote · 13 minutes ago

        You might be in IT support. You don’t need to understand. Get back to work. Fix that cable under the desk.

    • UltraGiGaGigantic@lemmy.ml · 2 upvotes · 9 minutes ago

      Remember subreddits going dark and people leaving Reddit. Hahaha, how did that work out?

      Great, we’re here. I had to double-check I was on the fediverse because of your comment.

      • gaymer@aussie.zone · 1 upvote · edited · 1 minute ago

        God! You people are too dumb. I am glad President Trump is our president.

      • mjr@infosec.pub · 4 upvotes · 24 minutes ago

        Also, lots of apps have the main DuckDuckGo as a search option. I’ve not seen any offer the noai version as an option.

      • gaymer@aussie.zone · 1 upvote, 1 downvote · 20 minutes ago

        The point is, they will cry “nobody asked for AI” but will still use it. I know many 9-to-5ers subscribed to ChatGPT, paying $20-30 each month, yet going around telling everyone how they hate AI and that people shouldn’t use it.

    • cmhe@lemmy.world · 5 upvotes · 52 minutes ago

      Most people use whatever is the default, even if that default doesn’t perfectly fit their needs/wants.

      That applies even to people who changed their search engine from Google to DuckDuckGo.

      Every decision takes some energy to think about, and the human brain wants to avoid spending energy as much as possible.

      That is why LLMs should be opt-in/by-request instead of opt-out. If people want to occasionally use them, they can decide themselves if spending that additional electricity is worth it.

      Search engines and LLMs are different things: one is for finding content written by humans, the other is for getting a plausible answer to an inquiry.

    • python@lemmy.world · 2 upvotes · 1 hour ago

      I’ve been using it. Put it as the default search engine on all my devices, even my work hardware. Before that, I just had the AI features toggled off, but those settings don’t stick when clearing all cookies (which I have to do way too often).

      I also left Reddit 2 years ago and never visited again, so no idea what your point is.

    • CamilleMellom@jlai.lu · 2 upvotes · 1 hour ago

      It does make a little sense though. Most people won’t really use the feature; they just want to know they have access.

      When choosing a search engine, a “standard” user will just think “this one has AI answers so it must be ‘better’” even if they don’t use AI answers. At this point it’s a marketing trick.

      Whoever answered this poll is probably already not a fan of “AI”.

    • BlackDragon@slrpnk.net · 2 upvotes · 1 hour ago

      I’m not going to use some weird alternative url to remove ai crap, just like I’m not going to append -ai or whatever it was to every google search. I’m just not going to use these services at all. Want me as a user? Remove the AI garbage. It’s that simple.

  • Affidavit@lemmy.world · 2 upvotes · 1 hour ago

    I wonder what percentage of Lemmy users are absolutely sick of seeing variations of the exact same thing, over, and over, and over, and fucking over again.

    • Egonallanon@feddit.uk · 4 upvotes · 1 hour ago

      You can turn all the AI features off in the regular DDG search settings. Best I can tell, that achieves the same as using the no-AI filter.

      • Balinares@pawb.social · 15 upvotes · 3 hours ago

        I mean, the poll was like as not a publicity stunt, meant to draw attention to the fact DDG is not doing AI. All the same, the fact they are making “no AI” a selling point is noteworthy.

        • mjr@infosec.pub · 2 upvotes · 22 minutes ago

          … the fact DDG is not doing AI.

          They are, unless you opt out.

        • 123@programming.dev · 3 upvotes · 2 hours ago

          I still get a bunch of AI bullshit unless I go out of my way. Also, I swear they keep reactivating it as much as Google does when you opt out (or when you select DDG no-AI as your search engine in Firefox and still see that garbage).

  • setsubyou@lemmy.world · 66 upvotes, 1 downvote · 5 hours ago

    The article already notes that

    privacy-focused users who don’t want “AI” in their search are more likely to use DuckDuckGo

    But the opposite is also true. Maybe it’s not 90% to 10% elsewhere, but I’d expect the same general imbalance, because some people who would answer yes to AI in a survey on a search website don’t go to search websites in the first place. They go to ChatGPT or whatever.

      • SendMePhotos@lemmy.world · 16 upvotes · 3 hours ago

        That was the plan. That’s (I’m guessing) why the search results have slowly yet noticeably degraded since AI became consumer-level.

        They WANT you to use AI so they can cater the answers. (tin foil hat)

        I really do believe that though. Call me a conspiracy theorist but damn it, it fits.

        • Womble@piefed.world · 1 upvote · 9 minutes ago

          Search results have been degrading for a lot longer than LLMs have been a thing. Peak usefulness for them was around a decade ago.

        • RedstoneValley@sh.itjust.works · 7 upvotes · edited · 1 hour ago

          It’s not that wild of a conspiracy theory. Hard to get definite proof though because you would have to compare actual search results from the past with the results of the same search from today, and we unfortunately can’t travel back in time.

          But there are indicators for your theory to be true:

          • It’s evident that in UI design the top area of the screen is the most valuable. AI results are always shown there. So we know that selling AI is of utmost importance to Google.
          • The Google search algorithm was altered quite often over the years, these “rollouts” are publicly available information, and a lot of people have written about the changes as soon as they happened.
          • Page ranking fueled a whole industry called SEO (Search Engine Optimization). A lot of effort went into understanding how Google ranks its results. This was of course done with a different goal in mind, but the conclusions from this field can be used to determine if and how search results got worse over time.
          • It’s an established fact that companies benefit from users never leaving the company’s ecosystem. Google as an example tried to prevent a clickthrough to the actual websites in the past, with technologies like AMP or by displaying snippets.
          • If users rely on the AI output Google can effectively achieve this: the user is not leaving the page and Google has full control over what content the user sees.

          Now, all of the points listed above can be proven. If you put all of that together it seems at least highly likely that your “conspiracy theory” is in fact true.

        • msage@programming.dev · 4 upvotes · 2 hours ago

          They WANT you to use AI so they can ~~cater the answers~~ sell you ads and stop you from using the internet.

      • Damorte@lemmy.world · 9 upvotes · edited · 3 hours ago

        Have you seen the quality of Google searches the last few years? I’m not surprised at all. An LLM might not give you the correct answer, but at least it will provide you with one lol.

      • truthfultemporarily@feddit.org · 10 upvotes, 7 downvotes · 4 hours ago

        I use Kagi Assistant. It does a search, summarizes, then gives references to the origin of each claim. Genuinely useful.

        • Warl0k3@lemmy.world · 22 upvotes · edited · 4 hours ago

          How often do you check the summaries? Real question, I’ve used similar tools and the accuracy to what it’s citing has been hilariously bad. Be cool if there was a tool out there that was bucking the trend.

          • Deebster@infosec.pub · 4 upvotes, 1 downvote · 3 hours ago

            I also sometimes use the Kagi summaries and it’s definitely been wrong before. One time I asked what the term was for something in badminton and it came up with a different badminton term. When I looked at the cited source, it was a multiple choice quiz with the wrong term being the first answer.

            It’s reliable enough that I still use it, although more often to quickly identify which search results are worth reading.

          • AmbitiousProcess (they/them)@piefed.social · 3 upvotes, 2 downvotes · 3 hours ago

            I can’t speak for the original poster, but I also use Kagi and I sometimes use the AI assistant, mostly just for quick simple questions to save time when I know most articles on it are gonna have a lot of filler, but it’s been reliable for other more complex questions too. (I just would rather not rely on it too heavily since I know the cognitive debt effects of LLMs are quite real.)

            It’s almost always quite accurate. Kagi’s search indexing is miles ahead of any other search I’ve tried in the past (Google, Bing, DuckDuckGo, Ecosia, StartPage, Qwant, SearXNG) so the AI naturally pulls better sources than the others as a result of the underlying index. There’s a reason I pay Kagi 10 bucks a month for search results I could otherwise get on DuckDuckGo. It’s just that good.

            I will say though, on more complex questions with regard to like, very specific topics, such as a particular random programming library, specific statistics you’d only find from a government PDF somewhere with an obscure name, etc, it does tend to get it wrong. In my experience, it actually doesn’t hallucinate, as in if you check the sources there will be the information there… just not actually answering that question. (e.g. if you ask it about a stat and it pulls up reddit, but the stat is actually very obscure, it might accidentally pull a number from a comment about something entirely different than the stat you were looking for)

            In my experience, DuckDuckGo’s assistant was extremely likely to do this, even on more well-known topics, at a much higher frequency. Same with Google’s Gemini summaries.

            To be fair though, I think if you really, really use LLMs sparingly and with intention and an understanding of how relatively well known the topic is you’re searching for, you can avoid most hallucinations.

          • hayvan@piefed.world · 2 upvotes, 1 downvote · 3 hours ago

            I use Perplexity for my searches, and it really depends on how much I care about the subject. I heard a name and don’t know who they are? LLM summary is good enough to have an idea. Doing research or looking up technical info? I open the cited sources.

        • porcoesphino@mander.xyz · 2 upvotes · 2 hours ago

          For others here: I use Kagi and turned the LLM summaries off recently because they weren’t close to reliable enough for me personally, so give it a test yourself. I use LLMs for some tasks, but I’m yet to find one that’s very reliable for specifics.

        • Ex Nummis@lemmy.world · 1 upvote, 1 downvote · 3 hours ago

          You can set up any AI assistant that way with custom instructions. I always do, and I require it to clearly separate facts with sources from hearsay or opinion.

      • gerryflap@feddit.nl · 2 upvotes, 2 downvotes · 2 hours ago

        For some issues, especially related to programming and Linux, I feel like I kinda have to at this point. Google seems to have become useless, and DDG was never great to begin with but is arguably better than Google now. I’ve had some very obscure issues that I spent quite some time searching for, only to drop it into ChatGPT and get a link to some random forum post that discusses it. The biggest one was a Linux kernel regression that was posted on the same day in the Arch Linux forums somewhere. Despite having a hunch about what it could be and searching/struggling for over an hour, I couldn’t find anything. ChatGPT then managed to link me the post (and a suggested fix: switching to the LTS kernel) in less than a minute.

        For general purpose search tho, hell no. If I want to know factual data that’s easy to find I’ll rely on the good old search engine. And even if I have to use an LLM, I don’t really trust it unless it gives me links to the information or I can verify that what it says is true.

        • A_norny_mousse@feddit.org · 1 upvote · 3 minutes ago

          programming and Linux

          I’m seeing almost daily the fuck-ups resulting from somebody trying to fix something with ChatGPT, then coming to the forums because it didn’t work.

        • Cherry@piefed.social · 1 upvote · 49 minutes ago

          Yup, this is a great example. LLMs are fine for non-opinion-based stuff, or for stuff that’s not essential for life. It’s great for finding a recipe, but if you’re gonna rely on the internet or an LLM to help you form an opinion on something that requires objective thinking, then no. If I said “hey internet/LLM, is humour good or bad”, it would insert a swayed view.

          It simply can’t be trusted. I can’t even trust it to return shopping links, so I have retreated back to real life. If it can’t play fair, I no longer use it as a tool.

        • IronBird@lemmy.world · 14 upvotes · edited · 4 hours ago

          It just makes it ever more obvious to them how many people in their life are sheep that believe anything they read online, I assume? A false sense of confidence where one might have just said “I don’t know”.

          • CallMeAnAI@lemmy.world · 1 upvote · edited · 2 hours ago

            What an absolutely arrogant attitude 🤣 You actually believe there is some gap here 🤣 just amazing.

            Not using AI doesn’t mean you’re performing whatever task you’re doing better. It has nothing to do with being able to parse results for bullshit or not.

            • Cherry@piefed.social · 1 upvote · 30 minutes ago

              I think the attitude of being virtuous or preachy can seep in at times, especially when you’re part of a cause, but IMO diplomacy, having conversations, and opening people’s minds to objectivity has to be better than telling them they are wrong.

              I know this is easy to say, especially when so many people are just so addicted to social media and the internet.

              I have had conversations with friends and family where they can have a clear conversation about how much propaganda is pushed onto them, and then they turn straight to their phone and hoover up an hour of FB. It does make you think “wow, sheep”. But I have to remind myself we don’t get change by telling people “you clearly don’t know your own mind”.

          • evol@lemmy.today · 2 upvotes, 7 downvotes · edited · 4 hours ago

            So many people were already using TikTok or YouTube as their Google search. I think AI is arguably better than those.

            edit: New business: take your ChatGPT question and turn it into a TikTok video. The Slop must go on.

            • AmbitiousProcess (they/them)@piefed.social · 7 upvotes · 3 hours ago

              The main problem is that LLMs are pulling from those sources too. An LLM often won’t distinguish between highly reputable sources and any random page that has enough relevant keywords, as it’s not actually capable of picking its own sources carefully and analyzing each one’s legitimacy, at least not without a ton of time and computing power that would make it unusable for most quick queries.

              • evol@lemmy.today · 1 upvote, 2 downvotes · 3 hours ago

                Genuinely, do you think the average person TikTok’ing their question is getting highly reputable sources? The average American has, what, a 7th-grade reading level? I think the LLM might have a better idea at this point.

        • Ex Nummis@lemmy.world · 3 upvotes · edited · 3 hours ago

          First, its results are often simply wrong, so that’s no good. Second, the more people use the AI summaries, the easier it’ll be for the AI companies to subtly influence the results to their advantage. Think of advertising or propaganda.

          This is already happening, btw, and it’s the reason Musk created Grokipedia. Grok (and even other LLMs!) already use it as a “trusted source”, which it is anything but.

          • CallMeAnAI@lemmy.world · 2 upvotes · 2 hours ago

            So literally the same shit as before with search but wrapped up in a nice paragraph with citations you can follow up on?

          • evol@lemmy.today · 2 upvotes, 1 downvote · 3 hours ago

            Okay, but it’s a search engine; they can literally just pick websites that align with a certain viewpoint and hide ones that don’t. It’s not really a new problem. If they just make Grokipedia the first result, then it’s not like not having the AI give you a summary changed anything.

  • Novis@lemdro.id · 24 upvotes · 4 hours ago

    NOW the question is, will they listen? Because we’ve seen so many times where a company says they’re taking feedback and then does the thing their audience didn’t want them to do in the first place anyway. Now, of course, they could have more data and metrics that say people don’t care or do want the BS, but I doubt all the companies that DID go hard into AI actually looked at legit numbers, since all the big heads are now saying “why aren’t you people using this stuff?”

  • A_norny_mousse@feddit.org · 14 upvotes, 1 downvote · 4 hours ago

    It’s funny how many people ruffle their feathers over this. Same type of comments as when somebody first shared this poll here: you can’t expect this to be representative, it’s not a yes/no question etc.

    Let’s put it like this: I do not want AI pushed on me in almost every online situation. That is a yes/no question to me.

    Why? Because it’s not ready, wastes the planet, and is the USA’s big gamble.

  • WanderingThoughts@europe.pub · 15 upvotes · 4 hours ago

    That’s when the Silicon Valley types all bring out the ol’ “People don’t know what they want until you show it to them.” Well, they already showed what LLMs can do, and it’s not that great.