• 1 Post
  • 63 Comments
Joined 2 years ago
Cake day: June 2nd, 2023

  • Re-reading your original question, it should have been pretty obvious in retrospect that I’m not really in the target audience. Welp, my bad :P

    I didn’t get any blood work done, unfortunately, since my doctor’s office refuses to do it without a specific request from my GP (and the whole reason I wanted to do a DIY trial run was that I can’t realistically do this kind of stuff the legit way at the moment). So I just went with a dose a bit higher than the dosages I’d seen recommended online for “most” people and figured that would almost certainly be enough. Since I saw nipple changes almost immediately I assumed it was doing the trick, but the other expected effects just never came, and I stopped when my nipples had become large enough that I was about to start needing a bra to stop them visibly poking through my shirt.

    I didn’t really consider that the longer half-life was all that relevant to the “startup delay”; most resources I found online seemed to show it nearly reaching the steady-state level after only one or two doses. If that was actually the problem, that’s a pretty big derp on my part, but I’m already planning to give it another shot once I’m not living at home.


  • I did estrogen monotherapy for about 2 months earlier this year. Quite frankly, the only changes I noticed were an immediate and significant increase in nipple sensitivity and size, and a reduction in nighttime erections. Other than that I didn’t notice any of the early changes I had been led to expect within the first few weeks: no emotional differences, no reduction in skin oiliness, no changes in body temperature, etc.

    For what it’s worth, I was taking 1.4ml/week of 40% estradiol enanthate without any antiandrogens, am in my early 20s and have a very low body mass.










  • You shouldn’t need to download any graphics drivers; Ubuntu (and pretty much every other distribution) ships with the open-source AMD driver stack by default, which is significantly better and less hassle than the proprietary drivers for pretty much all purposes. If you’re getting video out it’s almost certainly already using the internal GPU, but if you’re unsure you can open a terminal, run sudo apt install mesa-utils, and then run glxinfo -B to double-check what is being used for rendering.
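
    In case it’s useful, the whole check is just two commands (this assumes an apt-based system like Ubuntu; on other distributions the package providing glxinfo may be named differently):

      # install the Mesa diagnostic utilities (provides glxinfo)
      sudo apt install mesa-utils

      # print a brief summary of the active OpenGL device and driver;
      # the "Device:" / "OpenGL renderer string" lines show which GPU is doing the rendering
      glxinfo -B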



  • Thinking of a modern GPU as a “graphics processor” is a bit misleading. GPUs haven’t been purely graphics processors for 15 years or so; they’ve morphed into general-purpose parallel compute processors with a few graphics-specific things implemented in hardware as separate components (e.g. rasterization, fragment blending).

    Those hardware stages generally take so little time compared to the rest of the graphics pipeline that it normally makes the most sense to dedicate far more silicon to general-purpose shader cores than to the fixed-function graphics hardware. A single rasterizer unit might be able to produce up to 16 shader threads’ worth of fragments per cycle, so even if your fragment shader is very simple and only takes 8 cycles per pixel, that one rasterizer can keep 8×16 = 128 shader cores busy in this example.

    The result is that GPUs are basically just a chip packed full of a staggering number of fully programmable floating-point and integer ALUs, with only a little bit of fixed hardware dedicated to graphics squeezed in between. Any application which doesn’t need the graphics stuff and just wants to run a program on thousands of threads in parallel can simply ignore the graphics hardware and stick to the programmable shader cores, and still be able to leverage nearly all of the chip’s computational power. Heck, a growing number of games are bypassing the fixed-function hardware for some parts of rendering (e.g. compositing with compute shaders instead of drawing screen-sized rectangles, etc.) because it’s faster to simply start a bunch of threads and read+write a bunch of pixels in software.





  • True, but there are also some legitimate applications for hundreds of gigabytes of RAM. I’ve been working on a thing for processing historical OpenStreetMap data, and it is quite a few orders of magnitude faster to fill the database by loading the 300GiB or so of point data into memory, sorting it there, and then partitioning and compressing it into pre-sorted table files which RocksDB can ingest directly without additional processing. I had to get 24x16GiB (384GiB) of RAM in order to do that, though.
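
    Just to illustrate the general “sort once in RAM, then write out pre-sorted partitions” idea, here is a very loose sketch with plain GNU coreutils and made-up file names (not my actual pipeline, and it leaves out the RocksDB ingestion step entirely):

      # sort the whole point dump in memory, keyed on the numeric ID in column 1
      # (-S sets the sort buffer size, --parallel spreads the work across threads)
      sort -S 80% --parallel=16 -t, -k1,1n points.csv > points-sorted.csv

      # cut the sorted output into fixed-size, still-sorted partition files
      split -l 50000000 points-sorted.csv partition-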


  • In my experience, nouveau is painfully slow and crashes constantly to the point of being virtually unusable for anything. The developers agree, as in the last couple months nouveau has been phased out of Mesa entirely. More recent Mesa versions now implement OpenGL on Nvidia using Zink on NVK, and the result is quite a bit faster and FAR more stable.

    If your distribution still ships a Mesa version which uses nouveau, I would personally recommend you just stick with the Intel graphics for now.
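
    In case it’s useful, this is roughly how I’d check which driver is actually in use (glxinfo comes from the mesa-utils package; the exact strings vary between Mesa versions and GPUs):

      # the renderer string will mention nouveau, zink/NVK, or your Intel iGPU,
      # and the version strings include the Mesa version
      glxinfo -B | grep -iE "renderer|version"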



  • Aside from checking the kernel log (sudo dmesg) and system log (sudo journalctl -xe) for any interesting messages, I might suggest simply watching for any processes whose resource usage is abnormally high while the system is running slow. My initial approach would be to use htop (disable “Hide Kernel Threads” and enable “Detailed CPU Time”) and see which processes, if any, are eating up your CPU time. The colored core-utilization bars at the top show how much CPU time is being spent on what: gray for disk wait, red for kernel, green for regular user processes, etc. That information will be a good starting point.
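
    Concretely, the commands I’d start with look something like this (nothing here is specific to your setup; the two htop options are toggled in its F2 setup menu):

      # kernel messages: driver errors, thermal throttling, OOM kills, etc.
      sudo dmesg

      # recent system log entries with extra detail
      sudo journalctl -xe

      # interactive process/CPU monitor; press F2 for the setup menu
      htop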