Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 15 Posts
  • 1.5K Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • I have never used Arch. And it may not be worthwhile for OP. But I am pretty confident that I could get that thing working again.

    Boot into a rescue live-boot distro on USB, mount the Arch root somewhere, bind-mount /sys, /proc, and /dev from the host onto the Arch root, and then chroot to a bash on the Arch root. At that point you’re basically in the child Arch environment and should be able to do package management, have DKMS work, etc.
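
    Very roughly, something like this from the live environment (just a sketch; /dev/sdXN and /mnt/arch are placeholders for the actual Arch root partition and whatever mount point you pick):

    $ sudo mount /dev/sdXN /mnt/arch
    $ sudo mount --bind /sys /mnt/arch/sys
    $ sudo mount --bind /proc /mnt/arch/proc
    $ sudo mount --bind /dev /mnt/arch/dev
    $ sudo chroot /mnt/arch /bin/bash
    

    And if the rescue media happens to be an Arch ISO, I believe arch-chroot from arch-install-scripts will handle the bind mounts for you.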



  • I think another major factor for Linux gaming beyond Valve was a large shift by game developers to using widely-used game engines. A lot of the platform portability work happened at that level, so it was spread across many games. Writing games that could run on both personal computers and personal-computer-like consoles with less porting work became a goal. And today, some games also have releases on mobile platforms.

    When I started using Linux in the late 1990s, the situation was wildly different on that front.


  • Context:

    https://en.wikipedia.org/wiki/Ultra-mobile_PC

    An ultra-mobile PC,[1] or ultra-mobile personal computer (UMPC), is a miniature version of a pen computer, a class of laptop whose specifications were launched by Microsoft and Intel in Spring 2006. Sony had already made a first attempt in this direction in 2004 with its Vaio U series, which was only sold in Asia. UMPCs are generally smaller than subnotebooks, have a TFT display measuring (diagonally) about 12.7 to 17.8 centimetres (5.0 to 7.0 in), are operated like tablet PCs using a touchscreen or a stylus, and can also have a physical keyboard.


  • considers

    I’ve been in a couple conversation threads about this topic before on here. I’m more optimistic.

    I think that the Internet has definitely democratized information in many ways. I mean, if you have an Internet connection, you have access to a huge amount of information. Your voice has an enormous potential reach. A lot of information that one would once have had to buy expensive reference works for, or spend a lot of time digging up, is now readily available to anyone with Internet access.

    I think that the big issue wasn’t that people became less critical, but that one stopped having experts filter what one saw. In, say, 1996, most of what I read had passed through the hands of some sort of professional or professionals specialized in writing. For newspapers or magazines, maybe it was a journalist and their editor. For books, an author and their editor and maybe a typesetter.

    Like, in 1996, I mostly didn’t get to actually see the writing of Average Joe. In 2026, I do, and Average Joe plays a larger role in directly setting the conversation. That is democratization. Average Joe of 2026 didn’t, maybe, become a better journalist than the professional journalist of 1996. But…I think that it’s very plausible that he’s a better journalist than Average Joe of 1996.

    Would it have been reasonable to expect Average Joe of 2026 to, in addition to all the other things he does, also be better at journalism than a journalist of 1996? That seems like a high bar to set.

    And we’re also living in a very immature environment as far as our current media goes. I am not sold that this is the end game.

    There’s a quote from Future Shock — written in 1970, but I think that we can steal the general idea for today:

    It has been observed, for example, that if the last 50,000 years of man’s existence were divided into lifetimes of approximately sixty-two years each, there have been about 800 such lifetimes. Of these 800, fully 650 were spent in caves.

    Only during the last seventy lifetimes has it been possible to communicate effectively from one lifetime to another—as writing made it possible to do. Only during the last six lifetimes did masses of men ever see a printed word. Only during the last four has it been possible to measure time with any precision. Only in the last two has anyone anywhere used an electric motor. And the overwhelming majority of all the material goods we use in daily life today have been developed within the present, the 800th, lifetime.

    That’s just to drive home how extremely rapidly the environment in which we all live has shifted compared to the past. In that quote, Alvin Toffler was talking about how incredibly quickly things had changed when it had only been six lifetimes since the public as a whole had first seen printed text. But in 2026, we live in a world where it has only been a quarter of a lifetime, less for most, since much of the global population of humanity became intimately linked by near-instant, inexpensive mass communication.

    I think that it would be awfully unexpected and surprising if we had immediately figured out conventions and social structures and technical solutions to every deficiency of such a new environment. Social media is a very new thing in the human experience at this scale. I think that it is very probable that humanity will — partly by trial-and-error, getting some scrapes and bruises along the way — develop practices to smooth over rough spots and address problems.

    Consider, say, the early motorcar: no seatbelts, windscreen, roof, or suspension; driven on road infrastructure designed for horse-drawn carts traveling maybe ten miles an hour; no muffler, no electric starter, no electric headlights or other lighting, no instrument panel, and all that. It probably had a lot of very glaring problems as a form of transportation to the people who saw it. An awful lot of those problems have been solved over time. I think that it would be very surprising if electronic mass communication available to everyone didn’t do something similar.


  • I don’t know if I can count this as mine, but I certainly didn’t disagree with predictions of others around 1990 or so that the smart home would be the future. The idea was that you’d have a central home computer and it would interface with all sorts of other systems and basically control the house.

    While there are various systems for home automation, things like Home Assistant or OpenHAB, and some people use them, and I’ve used some of the technologies that were expected to be part of this myself, like X10 for device control over power circuits, the vision of a heavily-automated, centrally-controlled home never became the norm. I think that the most-widely-deployed piece of home automation that has shown up since then is maybe the smart thermostat, which isn’t generally hooked into some central home computer.






  • Typically when (some) 3D games don’t work, I’ve found that 3D library support for one of the 32-bit or 64-bit binaries isn’t present (Steam relies on the systemwide libraries) and the game bails or tries to do software rendering. I’ve run into some other users on here who have had the same issue.

    It looks like the full versions of those are all Windows binaries run through Proton, though there are Linux-native demo binaries.

    I have Dystopika myself.

    installs

    $ file Dystopika.exe 
    Dystopika.exe: PE32+ executable for MS Windows 6.00 (GUI), x86-64, 7 sections
    $
    

    So probably 64-bit.

    There’s some environment variable that will force Proton to use the older Direct3D backend based on OpenGL (WineD3D) instead of Vulkan (DXVK). Let me see if I can find that.

    searches

    You want:

    PROTON_USE_WINED3D=1 %command%
    

    In the Steam launch properties for the game; that’ll force it to use OpenGL instead of Vulkan. Here, it will run with or without it. Does that magically make it work?

    One useful tool for debugging 3D issues is mangohud. If you stick it in the Steam launch properties before “%command%” and it can display anything at all, it’ll show an overlay indicating which API (WineD3D or DXVK) and which rendering device are being used, which will let you know whether it’s trying to render in software or on hardware. So MANGOHUD_CONFIG=full mangohud %command%.

    On my system, Dystopika appears able to render in pure software (not at a great framerate, mind):

    PROTON_USE_WINED3D=1 LIBGL_ALWAYS_SOFTWARE=1 MANGOHUD_CONFIG=full mangohud %command%
    

    So I don’t know whether a fallback to software rendering is what’s causing that. Software rendering is listed in the mangohud overlay as “llvmpipe”.

    Another way to check that each path functions is to run the following programs and see if they display correctly and at a reasonable clip. They’re in the mesa-utils-bin:i386 and mesa-utils-bin:amd64 packages on Debian, so probably the same for Mint:

    $ glxgears.i386-linux-gnu
    $ glxgears.x86_64-linux-gnu
    $ vkgears.i386-linux-gnu
    $ vkgears.x86_64-linux-gnu
    

    That’ll be a simple test of all of the OpenGL and Vulkan 32-bit and 64-bit interfaces.
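
    If any of those turn out not to be installed, something like this should pull them in on Debian, and I’d guess the package names carry over to Mint (that part is an assumption on my end):

    $ sudo apt install mesa-utils-bin:i386 mesa-utils-bin:amd64
    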



  • I don’t know what a Halo battleship is (like…a spaceship in the Halo series?), but basically an amphibious assault ship — can deploy amphibious craft and aircraft — with a deck gun, cruise missiles, SAM array, CIWS, and torpedoes, so kinda an agglomeration of multiple modern-day real-world ship types. Yeah, and then you can either have AI control with you giving orders or you directly control the vehicles.

    There have been a couple of games in the line. Carrier Command, a very old game, which I’ve never played. Hostile Waters: Antaeus Rising, which is a spiritual successor and is oriented around a single-player campaign. Carrier Command 2, which is really principally a multi-player game, but can be played single-player if you can manage the workload and handle all the roles concurrently; I play it single-player. I like both, though I wish that the later games had a more-sophisticated single-player setup. Not a lot of “fleet command” games out there.

    But in this context, it’s one of the games I can think of, like Race the Sun or some older games (Avara, Spectre, Star Fox, AV-8B Harrier Assault/Flying Nightmares), that use untextured polygons as a major element of the game’s graphics. Rez wasn’t entirely untextured, but it made a lot of use of untextured polygons and wireframe. Just saying that one can make a decent 3D game, and one that has an attractive aesthetic, without spending memory on textures at all.


  • I think that there should be realistic video games. Not all video games, certainly, but I don’t think that we should avoid ever trying to make video games with a high level of graphical realism.

    I don’t particularly have any issue specific to violence. Like, I don’t particularly subscribe to past concerns over the years in various countries that no realistic violence should be portrayed in video games, and humans should be replaced by zombies or blood should be green or whatever.

    Whether or not specifically the Grand Theft Auto series should use realistic characters or stick with the more-cartoony representations that it used in the past is, I think, a harder question. I don’t have a hard opinion on it, though personally I enjoyed and played through Grand Theft Auto 3 and never bothered to get through the more-realistic, gritty Grand Theft Auto 5. Certainly I think that it’s quite possible to make very good games that are not photorealistic. And given the current RAM shortages, if there’s ever been a time to maybe pull back a bit on more-photorealistic graphics in order to reduce RAM requirements, this seems like it.

    Yesterday, I was playing Carrier Command 2. That uses mostly untextured polygons for its graphics, and it’s a perfectly fine game. I have many other, more-photorealistic games available, and the hardware to run them, but that happened to be more appealing.

    EDIT: I just opened it, and with it running, it increased the VRAM usage on my video card by 1.1 GB. Not very VRAM-dependent. And it is pretty, at least in my eyes.


  • So, it’s not really a problem I’ve run into, but I’ve met a lot of people who have difficulty on Windows understanding where they’ve saved something, but do remember that they’ve worked on or looked at it at some point in the past.

    My own suspicion is that part of this problem stems from the fact that back in the day, DOS had a not-incredibly-aimed-at-non-technical-users filesystem layout, and Windows tried to avoid this by hiding that and stacking an increasing number of “virtual” interfaces on top of things that didn’t just show one the filesystem, whether it be the Start menu or Windows Explorer and file dialogs having a variety of things other than just the filesystem to navigate around. The result is that you have had Microsoft banging away for much of the lifetime of Windows trying to add more ways to access files, most of which make it harder to fully understand what is actually going on through the extra layers. But regardless of why, some users do have trouble with it.

    So if you can just provide a search that can summon up that document they were working on that had a picture of giraffes, by typing “giraffe” into some search field, maybe that’ll do it.



  • I’m pretty sure that it’s more energy-efficient to avoid emitting a given amount of carbon dioxide via combustion in the first place than it is to mechanically capture and sequester it from the atmosphere once emitted.

    If you can exploit some process that isn’t directly driven by human-provided energy, like iron seeding of algae, where you’re leveraging plant photosynthesis, okay, then maybe.

    Also, even if you have some way of sequestering carbon dioxide, if you’re still emitting it, it’s gonna be cheaper to just capture it at the point of generation than to process atmospheric air.




  • Not the position Dell is taking, but I’ve been skeptical that building AI hardware directly into laptops specifically is a great idea unless people have a very concrete goal, like text-to-speech, and existing models to run on it, probably specialized ones. This is not to diminish AI compute elsewhere.

    Several reasons.

    • Models for many useful things have been getting larger, and you have a bounded amount of memory in those laptops, which, at the moment, generally can’t be upgraded (though maybe CAMM2 will improve the situation and move back away from soldered memory). Historically, most users did not upgrade memory in their laptop, even if they could. Just throwing the compute hardware in there in the expectation that models will come is a bet that the models people might want to use won’t get a whole lot larger. This is especially true for the next year or two, since we expect high memory prices, and people will probably be priced out of sticking very large amounts of memory in laptops.

    • Heat and power. The laptop form factor exists to be portable. They are not great at dissipating heat, and unless they’re plugged into wall power, they have sharp constraints on how much power they can usefully use.

    • The parallel compute field is rapidly evolving. People are probably not going to throw out and replace their laptops on a regular basis to keep up with AI stuff (much as laptop vendors might be enthusiastic about this).

    I think that a more-likely outcome, if people want local, generalized AI stuff on laptops, is that someone sells an eGPU-like box that plugs into power and into a USB port or via some wireless protocol to the laptop, and the laptop uses it as an AI accelerator. That box can be replaced or upgraded independently of the laptop itself.

    When I do generative AI stuff on my laptop, for the applications I use, the bandwidth that I need to the compute box is very low, and latency requirements are very relaxed. I presently remotely use a Framework Desktop as a compute box, and can happily generate images or text or whatever over the cell network without problems. If I really wanted disconnected operation, I’d haul the box along with me.
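
    As a rough illustration (hypothetical hostname and port; the details depend on whatever inference server you run on the box), even a plain SSH tunnel is plenty for this kind of traffic:

    # forward a local port to an inference server listening on the compute box
    $ ssh -L 7860:localhost:7860 framework-desktop
    # then point the local client or browser at http://localhost:7860
    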

    EDIT: I’d also add that all of this is also true for smartphones, which have the same constraints, and harder limitations on heat, power, and space. You can hook one up to an AI accelerator box via wired or wireless link if you want local compute, but it’s going to be much more difficult to deal with the limitations inherent to the phone form factor and do a lot of compute on the phone itself.

    EDIT2: If you use a high-bandwidth link to such a local, external box, bonus: you also potentially get substantially-increased and upgradeable graphical capabilities on the laptop or smartphone if you can use such a box as an eGPU, something where having low-latency compute available is actually quite useful.