

KDE user here, I still use X11 to play old Minecraft versions. LWJGL2 uses xrandr to read (and sometimes modify? wtf) display configurations on Linux, and the last few times I’ve tried it on Wayland it kept screwing the whole desktop up.
It takes like half a second on my Fairphone 3, and the CPU in this thing is absolute dogshit. I also doubt that the power consumption is particularly significant compared to the overhead of parsing, executing and JIT-compiling the 14MiB of JavaScript frameworks on the actual website.
Nouveau is dead; it’s been replaced with Zink on NVK.
True, but there are also some legitimate applications for 100s of gigabytes of RAM. I’ve been working on a thing for processing historical OpenStreetMap data and it is quite a few orders of magnitude faster to fill the database by loading the 300GiB or so of point data into memory, sorting it in memory, and then partitioning and compressing it into pre-sorted table files which RocksDB can ingest directly without additional processing. I had to get 24x16GiB of RAM in order to do that, though.
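For anyone curious, the RocksDB side of it is surprisingly small. Here’s a rough sketch of the SstFileWriter → IngestExternalFile flow (the paths, key format and placeholder values are made up for illustration, not my actual pipeline):

```cpp
#include <rocksdb/db.h>
#include <rocksdb/options.h>
#include <rocksdb/sst_file_writer.h>
#include <cassert>
#include <string>
#include <utility>
#include <vector>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Write one pre-sorted table file. Keys MUST be added in ascending order
  // according to the DB's comparator, which is why the in-memory sort
  // happens first.
  rocksdb::SstFileWriter writer(rocksdb::EnvOptions(), options);
  rocksdb::Status s = writer.Open("/tmp/points-000.sst");
  assert(s.ok());

  // Hypothetical pre-sorted (key, value) pairs; in the real thing these
  // would be the sorted, partitioned OSM point records.
  std::vector<std::pair<std::string, std::string>> sorted_points = {
      {"node/0000000001", "encoded point A"},
      {"node/0000000002", "encoded point B"},
  };
  for (const auto& [key, value] : sorted_points) {
    s = writer.Put(key, value);
    assert(s.ok());
  }
  s = writer.Finish();
  assert(s.ok());

  // Hand the finished file straight to the database; no re-sorting or
  // compaction pass needed because the file is already a valid SST.
  rocksdb::DB* db = nullptr;
  s = rocksdb::DB::Open(options, "/tmp/pointsdb", &db);
  assert(s.ok());
  s = db->IngestExternalFile({"/tmp/points-000.sst"},
                             rocksdb::IngestExternalFileOptions());
  assert(s.ok());
  delete db;
}
```

The one hard requirement is that keys go into each .sst file in ascending order, which is exactly why doing the big sort in memory pays off.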
In my experience, nouveau is painfully slow and crashes constantly, to the point of being virtually unusable for anything. The developers agree: over the last couple of months, nouveau has been phased out of Mesa entirely. More recent Mesa versions now implement OpenGL on Nvidia using Zink on NVK, and the result is quite a bit faster and FAR more stable.
If your distribution currently still ships a Mesa version which uses nouveau, I would personally recommend you just stick with the Intel graphics for now.
And Linux has io_uring, which can handle millions of I/O requests per second from a single thread without breaking a sweat. In my experience, I/O on Windows is just really slow; every file operation takes 10s to 100s of times longer than on any Unix-like kernel (1000s if Windows Defender is enabled).
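For anyone who hasn’t touched it, the basic liburing flow looks roughly like this (a toy single-read sketch with a hardcoded file; the real win is queueing thousands of SQEs and submitting the whole batch with one syscall):

```cpp
// Minimal liburing sketch: queue one read, submit it, wait for the
// completion. Build with -luring.
#include <liburing.h>
#include <fcntl.h>
#include <cstdio>

int main() {
  struct io_uring ring;
  if (io_uring_queue_init(256, &ring, 0) < 0) {  // 256-entry submission queue
    perror("io_uring_queue_init");
    return 1;
  }

  int fd = open("/etc/hostname", O_RDONLY);      // any readable file will do
  if (fd < 0) { perror("open"); return 1; }

  char buf[4096];
  struct io_uring_sqe* sqe = io_uring_get_sqe(&ring);
  io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);  // queue a read at offset 0
  io_uring_submit(&ring);                            // one syscall submits everything queued

  struct io_uring_cqe* cqe;
  io_uring_wait_cqe(&ring, &cqe);                    // wait for the completion
  if (cqe->res >= 0)
    printf("read %d bytes\n", cqe->res);
  io_uring_cqe_seen(&ring, cqe);

  io_uring_queue_exit(&ring);
  return 0;
}
```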
Aside from checking the kernel log (sudo dmesg) and the system log (sudo journalctl -xe) for any interesting messages, I might suggest simply watching for any processes whose resource usage is abnormally high while the system is running slow. My initial approach would be to use htop (disable “Hide Kernel Threads” and enable “Detailed CPU Time”) and see which processes, if any, are eating up your CPU time. The colored core-utilization bars at the top show how much CPU time is being spent on what: gray for disk wait, red for kernel, green for regular user processes, etc. That information will be a good starting point.
I mean, there are plenty of wealthy immigrants here, but I would say there are probably more immigrants from regular or (relatively) poor families around. Like, people with a regular income working a regular job, not your average finance expat from an English-speaking country on an all-expenses-paid expat package plus a neat half million bucks per year in salary on top.
That said, the non-expat immigrants I know are overwhelmingly from eastern European countries, so I wouldn’t be surprised if the acceptance policy is biased against non-Europeans. You don’t see many average working-class Americans around here unless they married a Swiss person.
Again, that would be TIFF. TIFF images can be encoded either with each line compressed separately or with rectangular tiles compressed separately, and separately compressed blocks can be read and decompressed in parallel. I have some >100GiB TIFFs containing elevation maps for entire countries, and my very old laptop can happily zoom and pan around in them with virtually no delay.
There is a reason why TIFF is one of the most popular formats for raster geographic datasets :)
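With libtiff it really is just a handful of calls to decode only the tile you care about, roughly something like this (the file name and pixel coordinates here are made up):

```cpp
// Rough libtiff sketch: decode just the one tile that covers a given pixel,
// without touching the rest of a huge (Big)TIFF. Link with -ltiff.
#include <tiffio.h>
#include <cstdint>
#include <cstdio>

int main() {
  TIFF* tif = TIFFOpen("elevation.tif", "r");  // hypothetical file name
  if (!tif || !TIFFIsTiled(tif)) {
    fprintf(stderr, "not a tiled TIFF\n");
    return 1;
  }

  uint32_t tile_w = 0, tile_h = 0;
  TIFFGetField(tif, TIFFTAG_TILEWIDTH, &tile_w);
  TIFFGetField(tif, TIFFTAG_TILELENGTH, &tile_h);

  // Pixel we want to look at; only the surrounding tile_w x tile_h block
  // gets read from disk and decompressed.
  uint32_t px = 123456, py = 654321;

  tdata_t buf = _TIFFmalloc(TIFFTileSize(tif));
  // TIFFReadTile accepts any pixel coordinate and loads the tile containing it.
  if (TIFFReadTile(tif, buf, px, py, 0, 0) < 0) {
    fprintf(stderr, "tile read failed\n");
  } else {
    printf("decoded one %ux%u tile around (%u, %u)\n",
           (unsigned)tile_w, (unsigned)tile_h, (unsigned)px, (unsigned)py);
  }

  _TIFFfree(buf);
  TIFFClose(tif);
  return 0;
}
```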
“i only use 4chan”
I have tried hosting a Tor relay on a VPS in the past, and it was bottlenecked by the CPU at barely 20 MB/s, although to be fair that was without hardware AES. More importantly for you, the server’s IP started getting DDoSed constantly, and a whole bunch of big internet services just immediately blocked the address (the list of relay IPs is public, and many things simply block every address on that list instead of only exit nodes). So any of your machines are probably at least somewhat up to the task (especially if they have hardware AES support), but this is definitely not something I’d do on my home network.
I have a blanket I’ve slept with every day since I was barely a month old (am 23 now), wouldn’t trade it for anything. I can definitely relate :)
This is me, I have had the same thing for breakfast every day for the last 5? maybe 6 years.
I would be very hesitant to run sed on a bunch of files consisting primarily of highly compressed binary data.
What would be the point of streaming a game at 4K onto an 800p display?
Thinking of a modern GPU as a “graphics processor” is a bit misleading. GPUs haven’t been purely graphics processors for 15 years or so, they’ve morphed into general-purpose parallel compute processors with a few graphics-specific things implemented in hardware as separate components (e.g. rasterization, fragment blending).
Those hardware stages generally take so little time compared to the rest of the graphics pipeline that it normally makes the most sense to dedicate far more silicon to general-purpose shader cores than to the fixed-function graphics hardware. A single rasterizer unit might be able to produce up to 16 shader threads’ worth of fragments per cycle, so even if your fragment shader is very simple and only takes 8 cycles per pixel, you can keep 8 × 16 = 128 cores busy with only one rasterizer in this example.
The result is that GPUs are basically just a chip packed full of a staggering number of fully programmable floating-point and integer ALUs, with only a little bit of fixed hardware dedicated to graphics squeezed in between. Any application which doesn’t need the graphics stuff and just wants to run a program on thousands of threads in parallel can simply ignore the graphics hardware and stick to the programmable shader cores, and still be able to leverage nearly all of the chip’s computational power. Heck, a growing number of games are bypassing the fixed-function hardware for some parts of rendering (e.g. compositing with compute shaders instead of drawing screen-sized rectangles, etc.) because it’s faster to simply start a bunch of threads and read+write a bunch of pixels in software.