Mozilla is in a tricky position. It contains both a nonprofit organization dedicated to making the internet a better place for everyone, and a for-profit arm dedicated to, you know, making money. In the best of times, these things feed each other: The company makes great products that advance its goals for the web, and the nonprofit gets to both advocate for a better web and show people what it looks like. But these are not the best of times. Mozilla has spent the last couple of years implementing layoffs and restructuring, attempting to explain how it can fight for privacy and openness when Google pays most of its bills, while trying to find its place in an increasingly frothy AI landscape.

Fun times to be the new Mozilla CEO, right? But when I put all that to Anthony Enzor-DeMeo, the company’s just-announced chief executive, he swears he sees opportunity in all the upheaval. “I think what’s actually needed now is a technology company that people can trust,” Enzor-DeMeo says. “What I’ve seen with AI is an erosion of trust.”

Mozilla is not going to train its own giant LLM anytime soon. But there’s still an AI Mode coming to Firefox next year, which Enzor-DeMeo says will offer users their choice of model and product, all in a browser they can understand and from a company they can trust. “We’re not incentivized to push one model or the other,” he says. “So we’re going to try to go to market with multiple models.”

-_-

      • ThisSeriesIsFalse@lemmy.ca · 1 day ago

        Not elitist to say that people who use what are essentially weighted random word generators for programming, a career that requires one to know exactly how their code works in case it breaks, are bad at their jobs. Just like how it’s not elitist to say that generated images are not art, and that flying a plane into a building doesn’t make you a good pilot.

        • darkkite@lemmy.ml · 1 day ago

          a career that requires one to know exactly how their code works in case it breaks

          Using AI doesn’t mean you lose the ability to reason about, debug, or test generated code. All code merges should be peer-reviewed and tested.

          We’re not discussing images, nor planes.

          The claim was about tech-savvy people, the same people who are supposedly most opposed to AI.

          There’s data to suggest otherwise: people who are technically inclined engage with AI more and have a more positive reception than less experienced users do.

          Unless you have additional data to support that they are in fact “dog-shit programmers,” this appears to be an emotional claim colored by your own personal bias. Though if you’re a “pure” programmer who is better than the dog-shit developers, I would love to see some of your work or writings.

            • darkkite@lemmy.ml · 24 hours ago

              If that makes you feel better. But I wish you had responded to the original claim with data instead of ad hominem. If you’re so good, can I view your GitHub to learn how you program?

                • darkkite@lemmy.ml · 23 hours ago

                  It’s wild that someone who doesn’t program would degrade others who can, given that the majority of developers use AI-based tools.

                  If it degrades brains with use in writing Essays.

                  The study does not prove that claim. It’s better to link directly to the study: https://arxiv.org/abs/2506.08872

                  The study does suggest that relying solely on ChatGPT could reduce engagement in the specific task of essay writing and harm recall. It does not prove that ChatGPT is causing individuals to become less capable at programming.

                  There are many studies that show potential positive outcomes when utilizing LLMs:

                  [2512.13658] Embedding-Based Rankings of Educational Resources based on Learning Outcome Alignment: Benchmarking, Expert Validation, and Learner Performance https://arxiv.org/abs/2512.13658

                  [2509.15068] Learning in Context: Personalizing Educational Content with Large Language Models to Enhance Student Learning https://arxiv.org/abs/2509.15068

                  • supernight52@lemmy.world · 22 hours ago

                    I degrade people who use a literal random word generator that shits out “code” and then claim they can code. I have respect for programmers. Not for AI programmers.

            • darkkite@lemmy.ml · 23 hours ago

              I posted a response in another reply, but thanks for sharing. It still does not support the claim of dog-shit developers.