Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.
From my /etc/resolv.conf on Debian trixie, which isn’t using openresolv:
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
I mean, if you want to just write a static resolv.conf, I don’t think that you normally need to have it flagged immutable. You just put the text file you want in place of the symlink.
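Rough sketch of what I mean; the nameserver addresses here are just placeholders for whatever resolver you actually want to use:

```sh
# Drop the stub symlink and put a plain static file in its place.
sudo rm /etc/resolv.conf
printf 'nameserver 9.9.9.9\nnameserver 149.112.112.112\n' | sudo tee /etc/resolv.conf
```

That’s what the comment block above is getting at — once the symlink is replaced by a regular file, nothing is managing the file for you anymore.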
Also, when you talk about fsck, what would be good options for using it to check the drive?
I’ve never used proxmox, so I can’t advise how to do so via the UI it provides. As a general Linux approach, though, if you’re copying from a source Linux filesystem, it should be possible to unmount it — or boot from a live boot Linux CD, if that filesystem is required to run the system — and then just run fsck /dev/sda1 or whatever the filesystem device is.
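Roughly like this, assuming the filesystem lives on /dev/sda1 (substitute whatever the real device is):

```sh
# The filesystem must not be mounted while fsck runs on it.
sudo umount /dev/sda1
sudo fsck -f /dev/sda1   # -f forces a full check even if the filesystem is marked clean
```

If it refuses to unmount because something is still using it, that’s when the live-CD route comes in.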
I’d suspect that too. Try just reading from the source drive or just writing to the destination drive and see which causes the problems. Could also be a corrupt filesystem; probably not a bad idea to try to fsck it.
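For isolating it, one blunt but effective approach is to stream the whole source device to nowhere and watch for errors or sudden slowdowns, then do a throwaway write to the destination; /dev/sdX and /mnt/dest here are just placeholders for the actual device and mountpoint:

```sh
# Read test: pull the entire source device and discard it.
sudo dd if=/dev/sdX of=/dev/null bs=1M status=progress

# Write test: dump a few GB of zeroes onto the destination filesystem, then clean up.
sudo dd if=/dev/zero of=/mnt/dest/ddtest bs=1M count=4096 conv=fsync status=progress
sudo rm /mnt/dest/ddtest
```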
IME, on a failing disk, you can get I/O blocking as the system retries, but it usually won’t freeze the system unless your swap partition/file is on that drive. Then, as soon as the kernel goes to pull something from swap on the failing drive, everything blocks. If you have a way to view the kernel log (e.g. you’re looking at a Linux console or have serial access or something else that keeps working), you’ll probably see kernel log messages. Might try swapoff -a before doing the rsync to disable swap.
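Something along these lines, with /mnt/source and /mnt/dest standing in for wherever the drives are actually mounted:

```sh
# In one terminal, watch the kernel log for I/O errors, resets, and timeouts:
sudo dmesg --follow

# In another: turn off swap, do the copy, turn swap back on afterwards.
sudo swapoff -a
rsync -a --info=progress2 /mnt/source/ /mnt/dest/
sudo swapon -a
```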
At first, my suspicion was temperature.
I’ve never had it happen, but it is possible for heat to cause issues for hard drives; I’m assuming that OP is checking CPU temperature. If you’ve ever copied the contents of a full disk, the case will tend to get pretty toasty. I don’t know if the firmware will slow down operation to keep temperature sane — all the rotational drives I’ve used in the past have had temperature sensors, so I’d think that it would. Could try aiming a fan at the things. I doubt that that’s it, though.
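If the drives expose SMART temperature attributes, smartmontools will show them (the package is smartmontools on Debian and friends; adjust the device nodes as needed):

```sh
# Dump SMART data and pull out the temperature lines for each drive.
sudo smartctl -a /dev/sda | grep -i temp
sudo smartctl -a /dev/sdb | grep -i temp
```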


No one else invents new screws to prevent access (except Nintendo).
Like, on gamepads? I have multiple sets of security bits that I bought just to get required bits to open gamepads, and I’ve never owned a Nintendo gamepad.


I mean sure, if you like spending $1500+ on a new computer every year…they’re completely irreparable, unupgradeable, and they have a definite lifespan when Apple arbitrarily decides that they’re “obsolete”.
That was kind of Steve Jobs’ original vision.
folklore.org archives a lot of stories from the early Apple days.
https://www.folklore.org/Diagnostic_Port.html
Expandability, or the lack thereof, was far and away the most controversial aspect of the original Macintosh hardware design. Apple co-founder Steve Wozniak was a strong believer in hardware expandability, and he endowed the Apple II with luxurious expandability in the form of seven built-in slots for peripheral cards, configured in a clever architecture that allowed each card to incorporate built-in software on its own ROM chip. This flexibility allowed the Apple II to be adapted to a wider range of applications, and quickly spawned a thriving third-party hardware industry.
But Jef Raskin had a very different point of view. He thought that slots were inherently complex, and were one of the obstacles holding back personal computers from reaching a wider audience. He thought that hardware expandability made it more difficult for third party software writers since they couldn’t rely on the consistency of the underlying hardware. His Macintosh vision had Apple cranking out millions of identical, easy to use, low cost appliance computers and since hardware expandability would add significant cost and complexity it was therefore avoided.
Apple’s other co-founder, Steve Jobs, didn’t agree with Jef about many things, but they both felt the same way about hardware expandability: it was a bug instead of a feature. Steve was reportedly against having slots in the Apple II back in the days of yore, and felt even stronger about slots for the Mac. He decreed that the Macintosh would remain perpetually bereft of slots, enclosed in a tightly sealed case, with only the limited expandability of the two serial ports.
Burrell was afraid the 128Kbyte Mac would seem inadequate soon after launch, and there were no slots for the user to add RAM. He realized that he could support 256Kbit RAM chips simply by routing a few extra lines on the PC board, allowing adventurous people who knew how to wield a soldering gun to replace their RAM chips with the newer generation. The extra lines would only cost pennies to add.
But once again, Steve Jobs objected, because he didn’t like the idea of customers mucking with the innards of their computer. He would also rather have them buy a new 512K Mac instead of them buying more RAM from a third-party. But this time Burrell prevailed, because the change was so minimal. He just left it in there and no one bothered to mention it to Steve, much to the eventual benefit of customers, who didn’t have to buy a whole new Mac to expand their memory.
That being said, modern USB does represent a major change from that point in time: it’s a relatively high-speed external bus, and it permits some of the devices that historically would have needed to live on an internal bus to be put on an external bus instead.


GPU prices are coming to earth
https://lemmy.today/post/42588975
Nvidia reportedly no longer supplying VRAM to its GPU board partners in response to memory crunch — rumor claims vendors will only get the die, forced to source memory on their own
If that’s true, I doubt that they’re going to be coming to earth for long.


https://lemmy.today/post/42574307
GPU prices are coming to earth just as RAM costs shoot into the stratosphere - Ars Technica
If said rumor is true, so much for GPU prices falling.
EDIT: Well, I guess more properly, for video card prices falling; in this context, distinguishing between the GPU chip and the card it lives on does actually matter.


Prices rarely, if ever, go down in a meaningful degree.
Prices on memory have virtually always gone down, and at a rapid pace.
https://ourworldindata.org/grapher/historical-cost-of-computer-memory-and-storage



If consumers aren’t going to upgrade, or are much less likely to, then that affects demand from them, and one would expect manufacturers to follow what consumers demand.


I remember when it wasn’t uncommon to buy a prebuilt system and then immediately upgrade its memory with third-party DIMMs to avoid paying the PC manufacturer’s premium on memory. Seeing that price relationship become inverted is a little bonkers. Though IIRC the memory in Framework’s prebuilt systems didn’t carry much of a premium.
I also wonder if it will push the market further towards systems with soldered memory or on-core memory.


You can have applications where wall clock time is not all that critical but large model size is valuable, or where a model is very sparse, so it does little computation relative to the size of the model, but for the major applications, like today’s generative AI chatbots, I think that that’s correct.


Last I looked, a few days ago on Google Shopping, you could still find some retailers that had stock of DDR5 (I was looking at 2x16GB, and you may want more than that) and hadn’t jacked their prices up, but if you’re going to buy, I would not wait longer, because if they haven’t been cleaned out by now, I expect that they will be soon.


Historically, it was conventional for a typical GUI application to show a “you have unsaved work” prompt if you chose to quit, since otherwise quitting was a destructive action without confirmation.
Unless video games save on exit, you typically always have “unsaved work” in a video game, so I sort of understand where many video game devs are coming from if they’re trying to implement analogous behavior.


Have you played the existing Legend of Zelda titles? I mean, there are a ton of them. Even setting aside Tears of the Kingdom and Breath of the Wild:
https://en.wikipedia.org/wiki/The_Legend_of_Zelda
| Year | Zelda Game |
|---|---|
| 1987 | The Adventure of Link |
| 1991 | A Link to the Past |
| 1993 | Link’s Awakening |
| 1998 | Ocarina of Time |
| 1998 | Link’s Awakening DX |
| 2000 | Majora’s Mask |
| 2001 | Oracle of Seasons |
| 2001 | Oracle of Ages |
| 2002 | Four Swords |
| 2002 | The Wind Waker |
| 2004 | Four Swords Adventures |
| 2004 | The Minish Cap |
| 2006 | Twilight Princess |
| 2007 | Phantom Hourglass |
| 2009 | Spirit Tracks |
| 2011 | Ocarina of Time 3D |
| 2011 | Four Swords Anniversary Edition |
| 2011 | Skyward Sword |
| 2013 | The Wind Waker HD |
| 2013 | A Link Between Worlds |
| 2015 | Majora’s Mask 3D |
| 2015 | Tri Force Heroes |
| 2016 | Twilight Princess HD |


Thanks for the added insights! I haven’t used it myself, so appreciated.
Linux has a second, similar “compressed memory” feature called zswap. This guy has used both, and thinks that if someone is using a system with NVMe, zswap is preferable.
https://linuxblog.io/zswap-better-than-zram/
Based on his take, zram is probably a better choice for that rotational-disk Celeron, but if you’re running Cities: Skylines on newer hardware, I’m wondering if zswap might be more advantageous.
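For anyone who wants to poke at it, checking and flipping zswap at runtime is just sysfs twiddling; this assumes a reasonably recent kernel, and whether it’s on by default varies by distro:

```sh
# Is zswap currently enabled?
cat /sys/module/zswap/parameters/enabled

# Turn it on for this boot (add zswap.enabled=1 to the kernel command line to make it stick).
echo 1 | sudo tee /sys/module/zswap/parameters/enabled

# Stats, if debugfs is mounted:
sudo grep -r . /sys/kernel/debug/zswap/
```

Note that zswap still wants a regular swap device behind it to evict to, which presumably is part of why the NVMe question matters in his comparison.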


https://en.wikipedia.org/wiki/Apple_II
The original retail price of the computer was US$1,298 (equivalent to $6,700 in 2024)[18][19] with 4 KB of RAM and US$2,638 (equivalent to $13,700 in 2024) with the maximum 48 KB of RAM.
Few people actually need a full 48KB of RAM, but if you have an extra $6k lying around, it can be awfully nice.


TECO’s kinda-sorta emacs’s parent, in sorta the same way that ed is kinda-sorta vi’s parent.
I compiled and tried out a Linux port the other day due to a discussion on editors we were having on the Threadiverse, so it was fresh in my mind. It has a similar interface to ed, and was also designed to run on teletypes.


It’s a compressed RAM drive being used as swap backing. The kernel’s already got the functionality to have multiple tiers of priority for storage; this just leverages that. Like, you have uncompressed memory, it gets exhausted and you push some out to compressed memory, that gets exhausted and you push it out to swap on NVMe or something, etc.
Kinda like RAM Doubler of yesteryear, same sort of thing.
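If anyone wants to set the tiering up by hand rather than via a distro package like zram-generator, it’s roughly this; the 4G size and the zstd choice are just examples:

```sh
# Create a 4 GiB compressed RAM block device and use it as high-priority swap.
sudo modprobe zram
ZDEV=$(sudo zramctl --find --size 4G --algorithm zstd)
sudo mkswap "$ZDEV"
sudo swapon --priority 100 "$ZDEV"

# Existing disk swap keeps its lower priority, so the kernel fills the zram
# device first and only falls back to the disk when that is full.
swapon --show
```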
If I understand aright, it’s going to be HBM, so it won’t be in DIMM form. Like, can’t just go stick it in a PC.