You can use Cloudflare's DNS and not use their WAF (the proxy bit) just fine. I have been for almost a decade.
🇨🇦
Yeah, I’ll take a rip; you next?
Spray paint can on a stick, meant for road marking without bending over.
JFC when did we end up with that many???
It’s been a while since I paid attention, obviously, but I thought there were like 5 AC games, maybe 6-7.
13??? How many ways can you make poking someone with a hidden blade interesting…
I use cloudflared to translate plain DNS into DNS over HTTPS, rather than running Unbound for fully recursive resolution. Just never really seen the need to switch. I'm happy with NextDNS + Cloudflare resolving DNS upstream.
The main thing I wanted to note is that port 53 outbound is blocked at the router to prevent devices from using external/unencrypted DNS. If a LAN device wants DNS resolution, it MUST use the LAN DNS servers it was given via DHCP, or bring its own DoT config, as plain DNS won't make it out of the network.
It's because of this block/enforcement that I run two local DNS servers: pihole on an RPi and a mirror on my main server tower, with Gravity Sync keeping them identical. If I tinker with/update one, the other picks up the slack so connectivity/resolution isn't disrupted.
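For anyone curious about the cloudflared piece: it's just cloudflared's proxy-dns mode sitting between pihole and the DoH upstreams. A rough compose-style sketch, not necessarily how mine is actually deployed (the port and upstream URL are placeholders; NextDNS would be your own https://dns.nextdns.io/<config-id> endpoint instead):

services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    # proxy-dns accepts plain DNS on 5053 and forwards it upstream over DoH
    command: proxy-dns --address 0.0.0.0 --port 5053 --upstream https://1.1.1.1/dns-query
    ports:
      - "5053:5053/udp"
      - "5053:5053/tcp"
    restart: unless-stopped

pihole then just points its custom upstream DNS at that container on port 5053.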
Lara Croft, or the Great Fairy? Who did it better?
The thing is, until someone actually faces any consequences in modern times for atrocities such as these, simply saying how bad they are has become meaningless.
I’m not sure whether this is specific to this project, docker, or YAML in general.
Looking through my other 20 or so compose files, I use the array notation for most of my environment variables, but I don’t have any double quotation marks elsewhere. Maybe they’re not supposed to work in this format, idk.
Good to keep in mind I guess.
The dev replied to my GitHub discussion. Apparently it's an issue with the array-style env variable layout:
environment:
  key: "value"

Instead of

environment:
  - key=value
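In case it helps anyone else hitting this: as far as I understand compose, in the map form YAML itself parses the quotes, but in the list form everything after the dash is one opaque string, so quotes get passed into the container as part of the value:

# map form: YAML strips the quotes; the container sees key=value
environment:
  key: "value"

# list form: the whole entry is a single string; the container sees the quotes as part of the value
environment:
  - key="value"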
Trying to set that up to try out, but I can’t get it to see/use my config.yaml.
My config lives at /srv/filebrowser-new/data/config.yaml, with its folder mounted into the container as /config:

volumes:
  - /srv/filebrowser-new/data:/config
Says ‘/config/config.yaml’ doesn’t exist and will not start. Same thing if I mount the config file directly, instead of just its folder.
If I remove the env var, it changes to “could not open config file ‘config.yaml’, using default settings” and starts at least. From there I can ‘ls -l’ through docker exec and see that my config is mounted exactly where it’s supposed to be ‘/config/config.yaml’ and has 777 perms, but filebrowser insists it doesn’t exist…
My config is just the example for now.
I don’t understand what I could possibly be doing wrong.
/edit: three hours of messing around and I figured it out:
The env var value must not be wrapped in quotation marks. Removed them and now it's working.
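For anyone landing here from a search, the working layout ends up looking roughly like this. The image name, env var name, and port are from memory/assumptions, so check them against the project's docs; the part that actually mattered for me is that the value has no quotes around it:

services:
  filebrowser:
    image: gtstef/filebrowser                      # image name is an assumption; check the project's README
    environment:
      - FILEBROWSER_CONFIG=/config/config.yaml     # env var name is an assumption; note there are NO quotes around the value
    volumes:
      - /srv/filebrowser-new/data:/config          # folder containing config.yaml, mounted as /config
    ports:
      - "8080:80"                                  # placeholder port mapping
    restart: unless-stopped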
This happens to me a ton, while lying in bed thinking over problems/topics from the day.
I'll be thinking through something, get distracted by a random thought, and then: damn, what was I just thinking about??? Then I lie there for a bit feeling lost.
Frosty bringing the top shit this week
👌
Decided to do some more reading on this topic. TIL:
TCP, the more common protocol, requires at least one side to have a port forwarded through their NAT to their client, so the other side can make a connection to that open port.
uTP, on the other hand, can 'hole punch' by sending a packet to a known IP, which opens a port through the sending client's NAT specifically for that IP. That port can then be used to send and receive by either side until it closes due to inactivity.
So, torrent clients can use uTP hole punching to open a port without requiring manual forwarding, then advertise that open port to public trackers. Client 'A' will try to connect to an IP+port it got from the tracker and get ignored (because the recipient's NAT isn't expecting data from that IP and drops the packets). Then when client 'B' decides to connect to client 'A', A's port will already be open and accepting data from B's IP, thus establishing a connection.
This is slower than a direct connection because both clients need to be made aware of each other and decide to attempt to connect at reasonably similar times. It also requires public trackers with peer exchange enabled, and the torrents cannot be flagged as private.
Mullvad is one of the most proven privacy-friendly VPN services (the cops literally confiscated their servers and came out with nothing). Torrenting also isn't the only way to pirate data (plus seeding can be done without an open port, it just limits you to peers with open ports).
Reminds me of 'Fire Crackers', a super low-effort edible we used to make.
Just raw ground up bud sprinkled on a soup cracker smeared with a heap of peanut butter. Cooked for 20min in the toaster oven to decarb the bud.
Tastes awful, but reasonably effective.
FolderSync selectively syncs files/folders from my phone back to my server via SSH. Some folders are on a schedule, some monitor for changes and sync immediately; most are one-way, some are two-way (files added on the server sync back to the phone, as well as the phone uploading to the server). There's even one that automatically drops files into paperless-ngx's consume folder for automatic document importing.
From there BorgBackup makes a daily backup of the data, keeping historical backups for years with absolutely incredible efficiency. I currently have 21 backups of ~550GB each, roughly 11.5TB of logical data, and Borg stores all of it in 447GB of total disk space.
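I drive Borg directly, but if you'd rather keep the whole create/prune/retention cycle in a config file, borgmatic wraps it in YAML. A rough sketch with placeholder paths and retention numbers (not my actual setup, and newer borgmatic releases use a flattened layout, so check the reference config for your version):

location:
  source_directories:
    - /srv/sync                  # placeholder: wherever FolderSync drops the phone's files
  repositories:
    - /mnt/backups/phone.borg    # placeholder repo path
retention:
  keep_daily: 7
  keep_monthly: 12
  keep_yearly: 4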
That’s another option. Sometimes there is no valve immediately beside the toilet, sometimes it’s crusty af and won’t turn or seal. This can be quicker.
Once the flapper lifts, it won't close again until the tank empties completely. If the toilet clogs and you try too many times to flush it down instead of breaking out the plunger right away, sometimes the water can't overflow out of the bowl fast enough to let the tank drain fully, so it just keeps flowing endlessly. Doesn't happen to all toilets, but it's still good to know for when your toilet full of turds just won't stop dumping water on the floor.
An $11/yr domain pointed at my IP. Port 443 is open to nginx, which proxies to the desired service depending on the subdomain (and explicitly drops any connection that uses my raw IP or an unrecognized name, without responding at all).
acme.sh automatically renews my free SSL certificate every ~2 months via DNS-01 validation and Let's Encrypt.
And finally, I've got a dynamic IP, so ddclient keeps my domain pointed at the correct IP when/if it changes.
There's also pihole on the local network, overriding the WAN IP from external DNS with the server's local IP for LAN devices to use. But that's very much optional, especially if your router performs NAT hairpinning.
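If it helps to picture how the WAN-facing bits hang together, here's a rough compose-style sketch (image names and paths are placeholders, and the interesting part, the per-subdomain server blocks plus a default server that drops unknown names, lives in the mounted nginx config rather than here):

services:
  nginx:
    image: nginx:stable
    ports:
      - "443:443"                              # the only port open to the WAN
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro    # one server block per subdomain; default server drops unrecognized names
      - ./certs:/etc/nginx/certs:ro            # certificates that acme.sh keeps renewed
    restart: unless-stopped
  ddclient:
    image: lscr.io/linuxserver/ddclient        # keeps the domain's A record pointed at the current WAN IP
    volumes:
      - ./ddclient:/config                     # ddclient.conf with the DNS provider credentials
    restart: unless-stopped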
This setup covers all ~24 of the services/web applications I host, though most of them have some additional configuration to make them accessible only from LAN/VPN despite using the same ports and nginx instance. I can go into that if there's interest.
Only Emby/Jellyfin, Ombi, and Filebrowser are made accessible from the WAN, so I can easily share those with friends/family without having to guide them through/restrict them to a VPN connection.