Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan.
https://d.sb/
Mastodon: @[email protected]

  • 0 Posts
  • 648 Comments
Joined 3 years ago
Cake day: June 14th, 2023

  • On the other hand, having the video saved offsite is useful: anyone with physical access to your home can’t get rid of the footage showing they were there.

    I have Blue Iris configured to send all alert videos to one of my storage VPSes via SFTP. As soon as someone is detected outside, the video clip is sent offsite (the same idea as the sketch at the end of this comment).

    The server and the PoE switch that powers the cameras are also on a UPS, which helps if the intruder tries to shut off the power at the main breaker (which, here in California, always needs to be located outside).

    It’s in response to you saying to “isolate the cameras from the internet entirely”.

    The cameras themselves should always be isolated. No internet access for the cameras at all. Your NVR can have network access, and is what would handle uploading the videos to internet storage somewhere.
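
    For anyone curious what that offsite push looks like outside of Blue Iris, here’s a rough sketch in Python using paramiko. The host, username, key path, and directories are all placeholders, not my actual setup:

      import paramiko

      clip = "alert-2024-06-14-120000.mp4"  # hypothetical clip filename

      ssh = paramiko.SSHClient()
      # Fine for a sketch; in practice, pin the server's host key instead.
      ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
      ssh.connect(
          "storage.example.com",
          username="nvr",
          key_filename="/home/nvr/.ssh/id_ed25519",
      )

      sftp = ssh.open_sftp()
      sftp.put(f"/var/nvr/alerts/{clip}", f"alerts/{clip}")  # local -> remote
      sftp.close()
      ssh.close()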


  • Reolink

    Any cameras that can operate entirely offline are good. Dahua and Hikvision are good too. Look for cameras with RTSP and ONVIF support. RTSP is the standard protocol for pulling the live video stream, and ONVIF is a standardized API for interacting with cameras that can handle things like pan/tilt/zoom, sending events from the camera to the NVR (eg motion detection), and a bunch of other things. There’s a small RTSP example at the end of this comment.

    I use Blue Iris as my NVR, which is usually regarded as the best, but there’s other good software (like Frigate), as well as hardware solutions.

    Just follow best practices: keep them isolated on a separate VLAN with no internet access. If you want remote access to your NVR, use a VPN like Tailscale.
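
    If you want to poke at a camera’s feed directly, here’s a minimal sketch using OpenCV (pip install opencv-python). The URL is a placeholder, and the exact stream path varies by manufacturer:

      import cv2

      cap = cv2.VideoCapture("rtsp://user:[email protected]:554/stream1")
      try:
          while cap.isOpened():
              ok, frame = cap.read()
              if not ok:
                  break  # stream dropped
              cv2.imshow("camera", frame)
              if cv2.waitKey(1) & 0xFF == ord("q"):
                  break  # press q to quit
      finally:
          cap.release()
          cv2.destroyAllWindows()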



  • Ohh… I forgot about this. If they’re still doing that then I wouldn’t recommend them.

    For less tech-savvy users, I usually recommend off-the-shelf hardware, so that if anything goes wrong with either the hardware or the standard built-in software, they have multiple people they can turn to: the manufacturer, or other people who are familiar with that manufacturer’s products.

    Synology used to be the best for that, but maybe not any more. A lot of brands have gotten into NAS hardware over the last year or two, so I’m not sure which is the best now!


  • Oops, I didn’t know about the SX line, and didn’t know they had auction servers with large amounts of disk space. Thanks!! I’m not familiar with all of Hetzner’s products.

    For pure file storage (ie you’re only using SFTP, Borgbackup, restic, NFS, Samba, etc) I still think the storage boxes are a good deal, as you don’t have to worry about server maintenance (since it’s a shared environment). I’m not sure if it supports encryption though, which is probably where a dedicated server would be useful (though see the note on client-side encryption below).
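
    On the encryption point: Borg and restic both encrypt on the client before anything is uploaded, so the storage box itself doesn’t need to support it. A rough sketch with Borg, driven from Python just for illustration (the username and host are placeholders; storage boxes accept SSH on port 23, if I remember correctly):

      import subprocess

      REPO = "ssh://[email protected]:23/./backups"

      def borg(*args: str) -> None:
          subprocess.run(["borg", *args], check=True)

      # One-time: create the repo. repokey mode keeps the key with the repo,
      # and all data is encrypted client-side before upload.
      borg("init", "--encryption=repokey", REPO)

      # Per backup run: archive a directory under a timestamped name.
      # ({now} is a Borg placeholder, hence the doubled braces.)
      borg("create", "--stats", f"{REPO}::alerts-{{now}}", "/var/nvr/alerts")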



  • SQLite is underrated. I’ve used it for high-traffic systems with no issues. If your system has a large number of readers and a small number of writers, it performs very well. It’s not as good for write-heavy, high-concurrency use cases, but those aren’t common (most apps read far more than they write).

    My use case was a DB that was created during the build process, then read on every page load. Something like the sketch below.
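
    A minimal version of that pattern, standard library only (the table and file names are made up). WAL mode is what makes the many-readers case comfortable:

      import sqlite3

      # Build step: create and populate the DB once.
      db = sqlite3.connect("site.db")
      db.execute("PRAGMA journal_mode=WAL")
      db.execute("CREATE TABLE IF NOT EXISTS pages (slug TEXT PRIMARY KEY, html TEXT)")
      db.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)", ("home", "<h1>Hello</h1>"))
      db.commit()
      db.close()

      # Page load: open read-only (uri=True) so nothing can write by accident.
      ro = sqlite3.connect("file:site.db?mode=ro", uri=True)
      html = ro.execute("SELECT html FROM pages WHERE slug = ?", ("home",)).fetchone()[0]
      print(html)
      ro.close()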


  • MariaDB is not always a drop-in replacement. There are several features MySQL has that MariaDB doesn’t, especially related to the optimizer (for some types of queries, MySQL will give you a more optimized execution plan than MariaDB). MariaDB is also missing some newer features, like MySQL’s native JSON type: MySQL stores JSON in an optimized binary format and lets you index individual fields within JSON documents to make filtering on them more efficient (sketched below), whereas MariaDB’s JSON type is just an alias for LONGTEXT.

    MariaDB and MySQL are both fine. Even though MySQL doesn’t receive as much development any more, it doesn’t really need it. It works fine. If you want a better database system, switch to PostgreSQL, not MariaDB.
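
    To illustrate the JSON point, here’s a rough sketch of indexing a field inside a JSON column on MySQL 8.0+ (assumes the mysql-connector-python package; all names are made up):

      import mysql.connector

      conn = mysql.connector.connect(
          host="localhost", user="app", password="secret", database="demo"
      )
      cur = conn.cursor()

      # Native JSON column: MySQL stores this in an optimized binary format.
      cur.execute("""
          CREATE TABLE IF NOT EXISTS events (
              id   INT AUTO_INCREMENT PRIMARY KEY,
              data JSON
          )
      """)

      # One-time migration: expose one JSON field as a generated column and
      # index it, so filtering on data->>'$.type' can use the index.
      cur.execute("""
          ALTER TABLE events
              ADD COLUMN event_type VARCHAR(32)
                  GENERATED ALWAYS AS (data->>'$.type') STORED,
              ADD INDEX idx_event_type (event_type)
      """)

      cur.execute("SELECT id FROM events WHERE event_type = %s", ("motion",))
      print(cur.fetchall())
      conn.close()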


  • AWS Glacier would be about $200/mo, PLUS bandwidth transfer charges, which would be something like $500. R2 would be about $750/mo.

    50TB on a Hetzner storage box would be $116/month, with unlimited traffic. It’d have to be split across three storage boxes though, since 20TB is the max per box: 10TB is $24/month and 20TB is $46/month, so two 20TB boxes plus one 10TB box works out to $46 + $46 + $24 = $116.

    They’re only available in Germany and Finland, but data transfer from elsewhere in the world would still be faster than AWS Glacier.

    Another option with Hetzner is a dedicated server. Unfortunately the max storage they let you add is 2 x 22TB SATA HDDs, which would only let you store 22TB of stuff (assuming RAID1), for over double the cost of a 20TB storage box.


  • Do you mean 12600K, or do you really mean 2600K? These days, I wouldn’t use anything older than 9th gen, especially if you plan on doing any video transcoding with Jellyfin (transcoding means converting the video to a different format while streaming, usually to reduce bandwidth usage when watching videos away from home; see the sketch at the end of this comment).

    See if there are any e-waste recyclers in your area. A lot of companies are throwing out systems that don’t officially run Windows 11, so you can sometimes find systems with 8th and 9th gen Intel Core processors for very cheap.
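
    For reference, a transcode is basically this, here run through ffmpeg from Python (the filenames and bitrate are made up; Jellyfin drives ffmpeg roughly like this under the hood, ideally with the iGPU doing the heavy lifting):

      import subprocess

      subprocess.run([
          "ffmpeg",
          "-i", "movie.mkv",                # source video
          "-c:v", "libx264", "-b:v", "2M",  # re-encode video at ~2 Mbit/s
          "-c:a", "aac",                    # re-encode audio to AAC
          "movie-2mbps.mp4",                # smaller file for remote streaming
      ], check=True)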


  • I think sometimes people forget that one of the main features of Git is that it’s decentralized. You don’t need Github; just push your repo to a different remote (sketched at the end of this comment).

    Everyone who clones the repo (usually) has a full copy of it, including all history, and in theory you could clone the repo directly from their copy. Of course, that’s often not practical, which is how we ended up with these centralized services.

    The main issue with losing a Github repo is the auxiliary non-Git-powered features of Github, like issue tracking.
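
    Pushing to a second remote really is this small, driven from Python here just for illustration (the remote name and URL are placeholders):

      import subprocess

      def git(*args: str) -> None:
          subprocess.run(["git", *args], check=True)

      # Point the repo at a second host: your own server, a friend's
      # machine, Codeberg, whatever.
      git("remote", "add", "backup", "ssh://git@git.example.com/me/project.git")

      # --mirror pushes every branch and tag, so the backup is a full copy.
      git("push", "--mirror", "backup")

    And since every clone is a full copy, git clone will happily take a plain filesystem path or SSH URL pointing at anyone else’s checkout.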