• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: March 1st, 2024


  • I’d say the operational requirements.

    A home PC mostly has at most 1 simultaneous user (i.e. the “person”) - out of maybe a small pool of potential users - and the availability requirement is ad-hoc. It offers many services, some available immediately on boot, but many are on call.

    A server typically has the capacity to provide services to many simultaneous users and probably has a defined availability requirement. Depending on the service, the number of users, and the availability and performance requirements, it may need more communication bandwidth, more storage, faster storage, more cores, a UPS, live backups and so on. But it doesn’t strictly need any of that hardware unless it helps meet the requirements.

    In terms of software, any modern PC runs an OS offering a tonne of services straight from boot / login. I don’t see any real differences there. Typically a server might have more always-on services and fewer on-call services, but these days there are VMs and stuff on both servers and PCs.

    Most PC users would expect to have more rights, such as to install and execute what they want. A server will typically have a stronger distinction between user and sysadmin. But again, if a server offers VMs it’s not so clear cut. That mostly comes out of the availability requirement - preventing users compromising the service.




  • I agree, there’s a lot of people in this thread who seem to know exactly what is good or bad for a new user. But I don’t see many being sensitive to what the user might actually want to achieve. New users are not a homogeneous group.

    If the user wants to both use (stably) and learn (break stuff) simultaneously, I’d suggest that they start on debian but have a second disk for dual boot / experimentation. I don’t really use qemu much but maybe that’s a good alternative these days. But within that I’d say set themselves the challenge of getting a working arch install from scratch - following the wiki. Not from the script or endeavourOS - I think those are for 4th/5th-install arch users.
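    Since qemu is the safe route here, a minimal sketch of setting up a scratch VM for a practice install might look like this - disk size, RAM and the ISO filename are placeholders, and it assumes the qemu packages are already installed:

    ```shell
    # Hedged sketch: practice an arch install inside a QEMU VM so a mistake
    # can't touch the real disks. Disk size, RAM and ISO name are placeholders.

    # Sparse 20G scratch disk; qcow2 only uses real space as it's written to.
    qemu-img create -f qcow2 arch-practice.qcow2 20G

    # Then boot the live ISO against it (interactive, so shown as a comment):
    #   qemu-system-x86_64 -enable-kvm -m 4G \
    #       -cdrom archlinux-x86_64.iso \
    #       -drive file=arch-practice.qcow2,format=qcow2 \
    #       -boot d
    ```

    If the install goes sideways you just delete the qcow2 file and start again.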

    I find it hard to believe that I’d have learned as much if ubuntu had been available when I started. But I did dual boot various things with DOS / windows for years - which gave something stable, plus more of a sandbox.

    I think the only universal recommendation for any user, any distro, is “figure out a decent backup policy, then try to stick to it”. If that means buying a cheap used backup PC, or a raspberry pi, and setting it up for any tasks you depend on, then do that. And I’d probably pick debian on that system.
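    As a sketch of the simplest possible starting point for that backup policy - a dated tar archive of one directory. Throwaway temp dirs are used here so the commands are safe to run as-is; swap SRC and DEST for real paths:

    ```shell
    # Hedged sketch of the simplest backup step: a dated tarball of one
    # directory. SRC/DEST are throwaway demo paths - use real ones.
    SRC=$(mktemp -d)
    DEST=$(mktemp -d)
    echo "my notes" > "$SRC/notes.txt"

    STAMP=$(date +%F)
    tar -czf "$DEST/backup-$STAMP.tar.gz" -C "$SRC" .

    # List the archive contents to confirm the file made it in.
    tar -tzf "$DEST/backup-$STAMP.tar.gz"
    ```

    The “stick to it” part is the hard bit - a cron entry or systemd timer running something like this is one way.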





    Not really, it generally worked in the end - so in fact it’s actually pretty good at getting you out of a hole.

    It was just a load of extra steps - and usually a last resort after failing with whatever came on the installation disks. So morale had taken a few hits before you even started with it.

    Everything is easier when you can connect to the network immediately.

    Fair play to ubuntu (and I guess kernel improvements in the early 2000s) - that was such a major step in ease of installation.




  • It depends what packages you need, and what they have to interact with.

    If it’s all standalone then no problem until the hardware degrades.

    For example I had a laptop (DOS/Win98) with a pcmcia network adapter with a BNC 50 ohm coax network dongle, 9/25-pin serial/parallel ports, maybe a PS/2 port, a floppy drive and so on.

    I can’t think what I’d connect that to. I might have a parallel port on my PC, but on that laptop I think I only had laplink, so I’d need a linux app to interact with that. I do still have a floppy drive somewhere, but how would I connect that to my motherboard?

    So I’d probably be limited to keyboard and trackball input, and audio + (monochrome) video output.

    Lemmings in black and white, blurry, at a slow refresh rate, would still “work” unless the hdd got corrupted.

    Within a lifetime, current-gen wifi, usb, ethernet etc. may all be as rare as 9-pin serial is today - it’s still around of course, but you can’t rely on it.








  • +1. And yes, use the wiki, not the install script.

    I think there’s value for anyone with a genuine interest in just having a go at an arch install - I think they can set up most things of interest in a QEMU VM, or a spare partition, or even a usb stick or something. There’s nothing to lose but time. I’d recommend the user knows enough about their disk setup and their incumbent boot manager so as not to screw up their main OS. Though I’m very tempted to say that’s a rite of passage.

    Of course everyone already has regular backups which contain some sort of list or script for all the software, configs and tweaks they normally do. If not - well, another rite of passage.
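    For the “list of all the software” part, a minimal sketch - which package-manager branch applies is an assumption about the distro, and the output path is a temp dir purely for illustration:

    ```shell
    # Hedged sketch: dump the explicitly-installed package list so a fresh
    # install can be reproduced later. OUT is a temp dir for illustration.
    OUT=$(mktemp -d)

    if command -v pacman >/dev/null 2>&1; then
        pacman -Qqe > "$OUT/pkglist.txt"          # arch: explicitly installed
    elif command -v apt-mark >/dev/null 2>&1; then
        apt-mark showmanual > "$OUT/pkglist.txt"  # debian-family: manual installs
    else
        echo "unknown package manager" > "$OUT/pkglist.txt"
    fi

    echo "wrote $OUT/pkglist.txt"
    ```

    Stick that file in the backup alongside your dotfiles and you’ve got most of a reinstall script already.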