• 7 Posts
  • 39 Comments
Joined 2 years ago
Cake day: June 21st, 2023






  • corvus@lemmy.ml to Privacy@lemmy.ml · [Deleted] · 2 months ago

    The carrier can track a phone without a SIM card, but that's not the case if you turn on airplane mode. The whole point of airplane mode is to prevent the phone from emitting any signal, to avoid interference with critical aircraft instruments. I don't see any company taking the risk of circumventing such a critical safety feature; it would be easily verifiable.



  • Twelve years ago Motorola, at that time controlled by Google, launched the Moto X. I had one, and at any moment you could say "Hello Google, what time is it?" and it responded. It was constantly listening. All the time. And it was a perfectly normal phone in terms of battery life and data usage. TWELVE years ago; imagine how much easier it would be to implement that now, with more powerful and efficient chips and bigger batteries.

    From an article about Moto X back then: “If you want to take a selfie, you should be able to simply say “Take a selfie!” In short, your smartphone should live up to its name. That’s the goal with the Moto Voice and Moto Assist software integrated into the second generation Moto X smartphone. And to do that, the Moto X is always listening, for verbal commands from the user and also ambient cues of the context. That emergent behavior is spawned by complex interactions between the software and hardware”

    Only much later did I come to the conclusion that, with the Moto X, Google was running its first tests of using the microphone for mass surveillance.






  • It gives me exactly the same message, but I'm not using a VPN. When I use the external-viewer option to open it in mpv via yt-dlp, I only get video without audio. I can download the video fine with yt-dlp and then watch it in mpv, but if I try to stream it into mpv while downloading, to watch in real time, it gives an ffmpeg error: can't recognize format… weird.
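
    A minimal sketch of one possible workaround, assuming the missing audio comes from yt-dlp selecting separate video and audio streams that can't be merged while piping: force a single pre-muxed format and stream yt-dlp's stdout straight into mpv. The URL is a placeholder, and the format filter is an assumption, not something taken from the error above:

        import subprocess

        URL = "https://example.com/watch?v=..."  # placeholder

        # Ask yt-dlp for one pre-muxed format (audio and video in a single
        # stream) and write it to stdout instead of a file.
        ytdlp = subprocess.Popen(
            ["yt-dlp", "-f", "best[acodec!=none][vcodec!=none]", "-o", "-", URL],
            stdout=subprocess.PIPE,
        )

        # Feed the stream into mpv as it arrives, i.e. watch while downloading.
        subprocess.run(["mpv", "-"], stdin=ytdlp.stdout)
        ytdlp.stdout.close()
        ytdlp.wait()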





  • Yeah, I tested with lower numbers and it works; I just wanted to offload the whole model, thinking it would work. 2GB is a lot. With other models it prints about 250MB when it fails, and if you add that to the model size it's still well below the iGPU's free memory, so I don't get it… Anyway, I was thinking about upgrading the memory to 32GB or maybe 64GB, but I hesitate: with models around 7GB and CPU only I get around 5 t/s, and with 14GB models 2-3 t/s, so if I run one of around 30GB I guess it will get around 1 t/s? My supposition is that increasing RAM doesn't increase performance per se, it just lets you load bigger models into memory, so generation time scales roughly linearly with model size (i.e. throughput inversely)… what do you think?
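
    For what it's worth, that supposition matches the usual rule of thumb that CPU inference is memory-bandwidth-bound: each generated token streams roughly the whole set of weights out of RAM, so t/s ≈ effective bandwidth / model size, and more RAM only raises the size ceiling. A back-of-envelope sketch; the bandwidth figure is an assumption picked to match the numbers above, not a measurement:

        # Rough estimate: tokens/s ~ effective memory bandwidth / model size,
        # because each generated token streams the full weights from RAM.
        def estimated_tps(model_size_gib, bandwidth_gibs=38.0):
            # bandwidth_gibs: assumed effective DDR bandwidth, not measured
            return bandwidth_gibs / model_size_gib

        for size_gib in (7, 14, 30):
            print(f"{size_gib:>2} GiB model: ~{estimated_tps(size_gib):.1f} t/s")
        # -> ~5.4, ~2.7 and ~1.3 t/s, in line with the observed 5 and 2-3 t/s

    On that model, a ~30GB file landing near 1 t/s is a reasonable guess.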


  • I get an error when offloading the whole model to GPU:

    ./build/bin/llama-cli -m ~/software/ai/models/deepseek-math-7b-instruct.Q8_0.gguf -n 200 -t 10 -ngl 31 -if

    The relevant output is:

    llama_model_load_from_file_impl: using device Vulkan0 (Intel® Iris® Xe Graphics (RPL-U)) - 7759 MiB free

    print_info: file size = 6.84 GiB (8.50 BPW)

    load_tensors: loading model tensors, this can take a while… (mmap = true)
    load_tensors: offloading 30 repeating layers to GPU
    load_tensors: offloading output layer to GPU
    load_tensors: offloaded 31/31 layers to GPU
    load_tensors: Vulkan0 model buffer size = 6577.83 MiB
    load_tensors: CPU_Mapped model buffer size = 425.00 MiB

    ggml_vulkan: Device memory allocation of size 2013265920 failed
    ggml_vulkan: vk::Device::allocateMemory: ErrorOutOfDeviceMemory
    llama_kv_cache_init: failed to allocate buffer for kv cache
    llama_init_from_model: llama_kv_cache_init() failed for self-attention cache
    common_init_from_params: failed to create context with model '~/software/ai/models/deepseek-math-7b-instruct.Q8_0.gguf'
    main: error: unable to load model

    It seems to me that there is enough room for the model, but I don’t know what “Device memory allocation of size 2013265920” means.
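
    Reading the log alone, the failed allocation looks like the KV cache: the llama_kv_cache_init failure follows immediately, and 2013265920 bytes is exactly 1920 MiB. So the model fits, but model plus KV cache doesn't. A quick check using only the numbers from the log:

        # Do the model buffer and the failed allocation fit in free VRAM?
        MIB = 1024 ** 2

        free_vram_mib  = 7759        # "Vulkan0 ... 7759 MiB free"
        model_buf_mib  = 6577.83     # "Vulkan0 model buffer size"
        failed_alloc_b = 2013265920  # the allocation that failed (KV cache)

        kv_mib = failed_alloc_b / MIB        # 1920.0 MiB exactly
        needed = model_buf_mib + kv_mib      # 8497.83 MiB
        print(f"need ~{needed:.2f} MiB, have {free_vram_mib} MiB free")

    If that reading is right, a smaller context (-c / --ctx-size) or offloading fewer layers should make it fit.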