I’m currently running Deepseek on Linux with Ollama (installed via curl -fsSL https://ollama.com/install.sh | sh), and I specifically have to run it on my personal file server because it’s the only computer in the house with enough memory for the larger models. Since it’s running on the same system that has direct access to all my files, I’m more concerned about security than I would be if it were running on a dedicated server that just does AI. I’m really not knowledgeable about how AI actually works at the execution level, and I just wanted to ask whether Ollama is actually private and secure. I’m assuming it doesn’t send my prompts anywhere, since everything I’ve read lists that as the biggest advantage, but how exactly is the AI executed on the system when you give it a command like ollama run deepseek-r1:32b and it downloads files from its default source? Is it just downloading a regular executable and running that, or is it more sandboxed than that? Is it possible for a malicious AI model to scan my files or do other things on the computer?

  • rutrum@programming.dev

    It’s all local. Ollama is the application; deepseek, llama, qwen, and whatever else are just model weights. The models aren’t executables, nor do they ping external services or anything like that. The models are safe. Ollama itself is meant for hosting models locally, and I don’t believe it even has the capability to do anything besides run local models.
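
    If you want to sanity-check that yourself, here’s a rough Python sketch. The blob directory is an assumption: user installs default to ~/.ollama/models/blobs, while the Linux service install keeps models under /usr/share/ollama/.ollama/models/blobs. It just reads the first few bytes of each downloaded blob to show they’re GGUF data files, not executables:

        import glob
        import os

        # Assumed default blob location for a user install; adjust if your
        # models live elsewhere (e.g. the Linux service install).
        blob_dir = os.path.expanduser("~/.ollama/models/blobs")

        for path in glob.glob(os.path.join(blob_dir, "sha256-*")):
            with open(path, "rb") as f:
                magic = f.read(4)
            if magic == b"GGUF":
                kind = "GGUF model weights (plain data)"
            elif magic == b"\x7fELF":
                kind = "ELF executable (this would be alarming)"
            else:
                kind = "something else (small manifest/config blobs are JSON)"
            print(os.path.basename(path)[:20], "->", kind)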

    Where it gets more complicated is “agentic” assistants that can read files or execute things at the terminal. The most advanced code assistants do this. But this is NOT a function of Ollama or the model; it’s a function of the chat UI or code editor plugin that glues the model output together with a web search, the filesystem, a terminal session, etc.
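
    To make that distinction concrete, here’s a minimal sketch of what that glue looks like, using Ollama’s local HTTP API (it listens on localhost:11434 by default; the model name and notes.txt are placeholders). The model only ever receives and returns text; any file access is done by the wrapper you control:

        import json
        import urllib.request

        def ask(prompt):
            # Ollama serves a local HTTP API on port 11434 by default.
            req = urllib.request.Request(
                "http://localhost:11434/api/chat",
                data=json.dumps({
                    "model": "deepseek-r1:32b",
                    "messages": [{"role": "user", "content": prompt}],
                    "stream": False,
                }).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["message"]["content"]

        # The model can't open files itself. If an "agent" reads a file,
        # that's the glue code doing it and pasting the text into the prompt.
        with open("notes.txt") as f:  # placeholder file
            print(ask("Summarize this:\n\n" + f.read()))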

    So in short, Ollama just runs models. It’s all local and private, no worries.

    • Fubarberry@sopuli.xyz

      Most models now are .safetensors files, which are designed to be safe, but I know in the past there were issues where other model file types actually could carry attack payloads.
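
      For the curious, the reason safetensors is considered safe is that the format is just an 8-byte length, a JSON header describing the tensors, and then raw tensor bytes; there’s nowhere for code to hide. A quick sketch that reads the header without executing anything (the filename is a placeholder):

          import json
          import struct

          # "model.safetensors" stands in for any downloaded weights file.
          with open("model.safetensors", "rb") as f:
              # First 8 bytes: little-endian u64 giving the header length.
              (header_len,) = struct.unpack("<Q", f.read(8))
              # Then a JSON header mapping tensor names to dtype/shape/offsets.
              header = json.loads(f.read(header_len))

          # Everything after the header is raw tensor data.
          for name, info in list(header.items())[:5]:
              if name != "__metadata__":
                  print(name, info["dtype"], info["shape"])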

  • npe@leminal.space

    It’s a good question. Older model formats used to allow executable code to be embedded, and thus presented a security risk. But with the formats that Ollama and llama.cpp use, I believe that’s not the case anymore.
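
    To illustrate the risk with those older formats: PyTorch checkpoints were Python pickles, and unpickling can run arbitrary code. Here’s a deliberately harmless demonstration (it just echoes a message, but it could run anything):

        import pickle

        class Payload:
            # pickle calls __reduce__ during deserialization and executes
            # the callable it returns, so merely loading a malicious
            # checkpoint was enough to run attacker code.
            def __reduce__(self):
                import os
                return (os.system, ("echo code ran during pickle.load",))

        blob = pickle.dumps(Payload())
        pickle.loads(blob)  # prints the message: code executed just by loading

    GGUF and safetensors, by contrast, are parsed as pure data, which is a big part of why the ecosystem moved to them.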

  • 0laura@lemmy.dbzer0.com

    ollama downloads the model from the original source afaik, so there’s not really any risk. the model itself can’t do anything bad, but I don’t know whether malware could be slipped in through the loading process. I remember there being big problems with pickle files for stable diffusion or something, though that’s been fixed with safetensors afaik.

  • movies@lemmy.world

    Nah, you’re safe to run it locally. You’re right that you’re downloading the specific model, and it’s not an exe. When you ask it questions (the inference step), your prompt is sent directly to the model on your machine through the Ollama interface. Nothing goes over the network after you download a model, and there is no scanning involved; that’s just not how it works.
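
    If you want to verify the no-network claim yourself, one rough check (assuming the third-party psutil package, a process named “ollama”, and enough privileges to inspect it; you will see outbound connections while a model is actively being pulled) is to watch the process’s sockets while you chat:

        import psutil  # third-party: pip install psutil

        # While you chat with a local model, ollama should only be listening
        # on 127.0.0.1:11434, plus your client's localhost connection,
        # with no remote addresses involved.
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == "ollama":
                for conn in proc.connections(kind="inet"):
                    print(conn.laddr, conn.raddr or "-", conn.status)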