• 2 Posts
  • 44 Comments
Joined 3 years ago
Cake day: June 22nd, 2023



  • Because they weren’t invented in 1925? Any durability testing you do today rests on assumptions: you accelerate the aging process for a year by heating the medium, exposing it to water, or applying whatever degrades it most at some factor above normal conditions, and then extrapolate. That extrapolation was wildly wrong with CDs, and it could be with this medium too. Or it might last a lot longer. What they have not done is write to a batch of them, store them in a variety of ways for 100 years, and conclude that they last that long.
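    The accelerated-aging extrapolation described above is commonly modelled with an Arrhenius acceleration factor; the comment doesn’t name a model, so this is my choice, and the activation energy below is an assumed illustrative value, not a measured one:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor: how much faster the medium ages at the
    stress temperature than at the normal storage temperature."""
    t_use = t_use_c + 273.15      # convert Celsius to Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1 / t_use - 1 / t_stress))

# One year in an 85 C oven with an assumed 0.8 eV activation energy,
# versus normal storage at 25 C:
af = arrhenius_acceleration(0.8, 25.0, 85.0)
print(f"{af:.0f}x acceleration claimed per oven year")
```

    The fragility is easy to see: drop the assumed activation energy from 0.8 eV to 0.6 eV and the factor falls from roughly 180x to roughly 50x, which is exactly the kind of assumption error that sank the optimistic CD lifetime estimates.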


  • The only real detail is that at least 2 of the N machines you are using have to be on at the same time, so that wherever a change is made it gets synced to another machine that is on. That chain continues, so you never end up booting a machine when nothing else holding the latest files is available. This is where having a centralised low-power machine is valuable: it saves keeping a desktop or a laptop on when it doesn’t need to be.
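    A toy sketch of that invariant, where a single version number stands in for "the newest copy of your files" (a deliberate simplification; real sync tools track per-file state):

```python
# Toy model of the "at least two machines on" rule: a change made on one
# machine must reach another online machine before the first goes offline,
# or some future boot will start from stale files.

class Machine:
    def __init__(self, name):
        self.name = name
        self.version = 0      # newest file version this machine holds
        self.online = False

def sync(machines):
    """Propagate the newest version among all currently online machines."""
    online = [m for m in machines if m.online]
    if len(online) >= 2:
        newest = max(m.version for m in online)
        for m in online:
            m.version = newest

hub = Machine("low-power hub"); hub.online = True   # always on
laptop = Machine("laptop")
desktop = Machine("desktop")
fleet = [hub, laptop, desktop]

laptop.online = True
laptop.version = 1          # edit a file on the laptop
sync(fleet)                 # hub is also on, so the change replicates
laptop.online = False       # laptop shuts down

desktop.online = True       # later, boot the desktop
sync(fleet)                 # it picks up the latest version from the hub
print(desktop.version)      # 1 -> no machine boots into stale files
```

    Remove the always-on hub from the scenario and the desktop would boot to version 0, which is exactly the failure mode the two-machines rule prevents.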

    I really wish the desktop version of the world had not become so marginalised, as local programs are considerably better to use than websites: they are much quicker, more accessible and easier to use.





  • The primary SMART parameters are passing, but it’s a bit concerning that such a young drive is showing a read error count; that isn’t a good sign even if it’s within the maker’s tolerance. What failed was just the long test you requested.

    It’s not uncommon for drives to fail in weird ways while SMART shows as fine. When you come across problems like a failing long test, occasional checksum errors on your RAID array, or a read error, that is a better indicator that something is wrong. Presumably something made you think you had better run a long test, and then it failed. I would contact the reseller for a warranty replacement; you have enough to go on showing that something is going wrong with the drive, and long tests should not fail on a healthy device.
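    As a rough illustration, here is a sketch of scanning `smartctl -A`-style output for the attributes that usually signal trouble. The sample table is made up, and real output varies by vendor (Seagate, for example, packs extra counters into the Raw_Read_Error_Rate raw value, so a nonzero number there needs interpretation):

```python
# Minimal sketch: scan a `smartctl -A` style attribute table for entries
# whose raw value suggests trouble even though overall SMART is PASSED.

SUSPECT_ATTRS = {"Raw_Read_Error_Rate", "Reallocated_Sector_Ct",
                 "Current_Pending_Sector", "Offline_Uncorrectable"}

def suspicious_attributes(smart_table: str):
    flagged = []
    for line in smart_table.splitlines():
        parts = line.split()
        # columns: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE
        #          UPDATED WHEN_FAILED RAW_VALUE
        if len(parts) >= 10 and parts[1] in SUSPECT_ATTRS:
            raw = int(parts[9])
            if raw > 0:
                flagged.append((parts[1], raw))
    return flagged

# Illustrative text only, not captured from a real drive:
sample = """\
  1 Raw_Read_Error_Rate     0x002f   199   199   051    Pre-fail  Always       -       57
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
"""
print(suspicious_attributes(sample))  # [('Raw_Read_Error_Rate', 57)]
```
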



  • Everyone always says XMPP, and there were a lot of recommendations for ejabberd. I tried it recently and it was a total disaster; I do not have a working chat server. If I followed the Docker instructions the server would just crash with no details of what went wrong: where it should have been creating a default server config file, it was instead creating a directory with the wrong permissions and then promptly crashing. I tried following their documentation, but after about 6 hours of messing about and adding more and more configuration I still couldn’t get a client to log in. I have no idea how to make this work.

    So whatever the solution ultimately is, I can’t recommend ejabberd.






  • Most technology adoption follows an S-curve; it can often take a long time to get going. Linux has been gradually and steadily improving, especially for games and other desktop uses, while at the same time Microsoft has been making Windows worse. I feel this is more Microsoft’s fault: they have abandoned the development of desktop Windows and the advancement of support for modern processor designs and gaming hardware. That has, for the first time, let Linux catch up and in many cases exceed Windows’ capabilities, especially in gaming, which has always been a stubborn issue. Hardware support for VR and other peripherals is still a problem, but it’s the sort of thing that might sort itself out once the user base grows and companies start producing software for Linux instead.

    It might not be enough, but the end of support for Windows 10 is forcing a change that Microsoft might really regret in a few years.
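    The S-curve mentioned above can be sketched with a logistic function; the parameters here are purely illustrative, not a forecast for Linux:

```python
import math

def logistic(t, k=1.0, t0=0.0, cap=100.0):
    """Logistic S-curve: adoption share at time t, saturating at `cap`."""
    return cap / (1 + math.exp(-k * (t - t0)))

# Illustrative numbers only: growth looks flat for years, then takes off.
for year in range(0, 21, 4):
    share = logistic(year, k=0.5, t0=12)  # assumed midpoint at year 12
    print(f"year {year:2d}: {share:5.1f}%")
```

    The flat early tail is the point: years of apparently negligible share are still consistent with eventual rapid adoption.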


  • Initially a lot of AI was trained on lower-class GPUs, and none of these AI-specific cards/blades existed. The problem is that the models are quite large and hence require a lot of VRAM to work on, or you split them and pay enormous latency penalties going across the network. Putting it all into one giant package costs a lot more, but it also performs a lot better, because AI training is not an embarrassingly parallel problem that can be split across many GPUs without penalty. So the goal is often to reduce the number of GPUs you need to get a result quickly enough, which brings its own set of problems with power density in server racks.
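    A back-of-the-envelope sketch of the VRAM arithmetic; the model size, headroom factor and card capacities below are made-up illustrative figures, not vendor specs:

```python
# Why big models push toward fewer, larger packages: each extra GPU the
# model is split across adds cross-GPU traffic and latency.

def gpus_needed(model_gb, vram_per_gpu_gb, overhead=1.2):
    """Minimum GPUs to hold a model, with assumed headroom for activations."""
    need = model_gb * overhead
    return int(-(-need // vram_per_gpu_gb))  # ceiling division

model_gb = 700  # assumed weight footprint of a large model, in GB

print(gpus_needed(model_gb, 24))   # small consumer-class card: 35 GPUs,
                                   # so lots of cross-GPU communication
print(gpus_needed(model_gb, 192))  # large HBM package: only 5 splits
```
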





  • I use a 5600G on a B450 ITX board with 4x 8TB Seagate drives and see about 35W idle and about 40W average. It used to be 45W because with a 3600 I was forced to use a GPU to boot (even though it’s headless; just a bad BIOS setup that I can’t fix), and moving to a CPU with integrated graphics dropped my idle consumption quite a bit. I suspect the extra wattage on your machine is probably down to the bigger motherboard and the less efficient CPU.

    It is possible to get the non-drive part of the machine down into single-digit wattage, and then about 5W per drive is the floor without spinning them down, so the minimum you could likely see with a much less powerful CPU is about 30-35W.
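    The arithmetic as a quick sketch; the 9W and 15W platform figures are rough assumptions (very efficient build vs my build's non-drive share), not measurements:

```python
# Rough idle-power floor for a small NAS: platform (CPU, board, PSU)
# plus spinning drives at about 5 W each.

def idle_floor(base_w, drives, w_per_drive=5.0):
    return base_w + drives * w_per_drive

print(idle_floor(9, 4))    # assumed single-digit platform: ~29 W total
print(idle_floor(15, 4))   # assumed share for my 5600G build: ~35 W total
```
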



  • There is no end to the greed of those with millions, and especially billions. They aren’t content to just keep running a profitable business; they have to get all the money.

    This has been the history of humanity and finance forever. The one saving grace is that every big business gets complacent in its money-making and seeks ever-increasing profit (and becomes management-heavy) until a young upstart finds a way to do it a lot better and cheaper and disrupts the market. Google has become the big, lumbering organisation, unable to change and chasing maximum profit; it has become IBM.