In my relentless pursuit of coaxing more performance out of my Lemmy instance, I read that PostgreSQL heavily relies on the OS's disk cache for read performance. I've got 16 GB of RAM and two HDDs in RAID 1. I've configured PostgreSQL to use 12 GB of RAM, and I've set up zram swap with 8 GB.

But according to htop, PostgreSQL is using only about 4 GB, my swap is hardly touched, and read performance is awful. Opening my profile regularly times out. Once it has loaded successfully, it stays fast until I leave it alone for half an hour or so.

Now, my theory is that zram actually takes available RAM away from the disk cache, slowing the whole system down. Googling didn't bring me an answer, because it only turned up guides on how to set up zram in the first place.

Does anyone know if my theory is correct?

  • aubeynarf@lemmynsfw.com · 10 hours ago

    Why would you reserve ram for swap???

    You’re hindering the OS’s ability to manage memory.

    Put swap on disk. Aim for it to rarely be touched - but it needs to be there so the OS can move idle memory data out if it wants to.

    Don’t hard-allocate a memory partition for postgres. Let it allocate and free as it sees fit.

    Then the OS will naturally use all possible RAM for cache, with the freedom to use more or less for the server process as demand requires.
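As a sketch of what that looks like in postgresql.conf on a 16 GB host: a common starting point is `shared_buffers` around 25% of RAM, with `effective_cache_size` acting purely as a planner hint about how much the OS cache is likely to hold (it reserves nothing). The exact values below are illustrative assumptions, not tuned recommendations:

```
# postgresql.conf -- illustrative values for a 16 GB host
shared_buffers = 4GB           # PostgreSQL's own buffer pool; ~25% of RAM is a common starting point
effective_cache_size = 12GB    # planner hint only; no memory is actually reserved by this setting
work_mem = 16MB                # per-sort/per-hash allocation; keep modest on small hosts
```

This leaves most of the remaining RAM free for the kernel's page cache, which is what the "PostgreSQL relies on the OS disk cache" advice is about.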

    Monitor queries to ensure you’re not seeing table scans due to missing indexes. Make sure VACUUM is happening either automatically or manually.
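One way to spot both problems at once is PostgreSQL's built-in statistics view `pg_stat_user_tables`, which tracks sequential scans, index scans, and autovacuum times per table. A sketch of a query to run in psql against the Lemmy database (the ordering and limit are arbitrary choices):

```sql
-- Tables with the most sequential scans; a high seq_scan relative to
-- idx_scan on a large table often means a missing index.
SELECT relname, seq_scan, idx_scan, last_autovacuum
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 10;
```

A NULL `last_autovacuum` on a frequently written table suggests autovacuum isn't keeping up.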

    • Björn Tantau@swg-empire.deOP · 10 hours ago

      Why would you reserve ram for swap???

      It’s a useful way of squeezing out a few GB more. Worked wonders on my starved Steam Deck and allowed me to play Cities Skylines smoothly and without crashes.

      But on a DB-heavy server that is apparently not a good idea. I've switched to a swap file.

      Monitor queries to ensure you’re not seeing table scans due to missing indexes.

      There are definitely some unoptimised queries and missing indexes. Lemmy 1.0 will supposedly fix a lot of them.

  • non_burglar@lemmy.world · 10 hours ago

    Zram does not impede the disk cache; it's a block device with compression, and the memory it holds is unavailable to the kernel for anything else.

    I do wonder what you're trying to achieve by moving swap to zram. You're potentially moving pages in and out of swap, with compression overhead, for no real reason; that swapping wouldn't have occurred at all if zram weren't in place.

  • taaz@biglemmowski.win · 9 hours ago

    Linux has, roughly, two kinds of memory pages (entries in RAM): the file cache (page cache), and memory allocated by programs for their own work (anonymous pages).

    When you look at the memory consumed by a process, you are looking at its RSS; the page/file cache is part of the kernel and, in btop for example, corresponds to Cached.

    Page cache can never be moved into swap - that would be the same as duplicating the file from one place on a disk to another place on a (possibly different) disk.
    If more memory is needed, page cache is evicted (written back into the respective file, if changed). Only anonymous pages (not backed by anything permanent) can be moved into swap.
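That split is directly visible in /proc/meminfo on any Linux box; a quick way to print it (field names as in current kernels, values converted from kB to MiB):

```shell
# Show the page-cache vs. anonymous-memory split from /proc/meminfo.
# Cached = page cache; AnonPages = program working memory; only the
# latter is eligible for swap.
awk '/^(MemTotal|Cached|AnonPages|SwapTotal|SwapFree):/ {
    printf "%-10s %10.1f MiB\n", $1, $2/1024
}' /proc/meminfo
```

Watching Cached shrink under load is a good sign the disk cache is being squeezed out.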

    So what does “PostgreSQL heavily relies on the OS's disk cache” mean? The more free memory there is, the more files can be kept cached in RAM, and the faster postgres can then retrieve them.

    When you add zram, you dedicate part of actual RAM to a compressed swap device which, as I said above, will never contain page cache.
    In theory this still increases the total available memory, but in practice that is only true if you configure the kernel to aggressively “swap” anonymous pages into the zram-backed swap.

    Notes: I've simplified this a bit, so it might not be exact. Also, the memory consumed by a process (its RSS) contains several different things, not just memory directly allocated by the program's code.

  • CondorWonder@lemmy.ca · 9 hours ago

    Based on what I’ve seen with my use of ZRam, I don’t think it reserves the total space up front; it consumes whatever is shown in the output of zramctl --output-all. If you’re swapping, then yes, it takes memory from the system (up to the 8 GB disk size), depending on how compressible the swapped content is (at a 3x ratio, 8 GB / 3 ≈ 2.7 GB). That said, it will take memory from the disk cache if you’re swapping.
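That back-of-the-envelope arithmetic, with the same assumed figures (8 GiB of swapped data, 3:1 compression ratio):

```shell
# Hypothetical figures: 8 GiB of data swapped to zram at a 3:1 ratio.
# The zram device then occupies roughly disksize / ratio of real RAM.
awk -v disksize=8 -v ratio=3 'BEGIN {
    printf "RAM consumed by zram: %.1f GiB\n", disksize / ratio
}'
# prints: RAM consumed by zram: 2.7 GiB
```

zramctl's COMPR and TOTAL columns report the actual compressed size and total RAM used, so the real ratio can be read off a running system rather than assumed.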

    Realistically, I think your issue is IO, and there’s not much you can do if your disk cache is being flushed. Switching to zswap might help, as it should spill more onto disk when you’re under memory pressure.

  • Shadow@lemmy.ca · 10 hours ago

    Yes, configuring memory to be used for zram would mark it as unavailable for kernel fs caching.

    Does iostat show your disks being pegged when it’s slow? It’s odd that performance would be so bad on those specs; it makes me think you might have disk IO issues.

  • BB_C@programming.dev · 9 hours ago

    • Use zram devices equal to the number of threads in your system.
    • Use zstd compression.
    • Mount zram devices as swap with high priority.
    • Mount disk swap partition(s) with low priority.
    • Increase swappiness:
         sysctl vm.swappiness=<larger number than default>
      
    • Use zramctl to see detailed info about your zram disks.
    • Check with iotop to see if something unexpected is using a lot of IO traffic.
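One way to make a setup along these lines persistent is systemd's zram-generator, if that package is installed; a sketch of its config file (key names from its documented format, values illustrative):

```
# /etc/systemd/zram-generator.conf  (assumes the zram-generator package)
[zram0]
zram-size = ram / 2          # device size relative to RAM; adjust to taste
compression-algorithm = zstd
swap-priority = 100          # higher than any disk swap, so zram fills first
```

Disk swap added via /etc/fstab with a lower `pri=` value then only takes overflow.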