• psyc@lemmy.world · 3 days ago

    They had to make sure they didn’t get another 1080 Ti situation and have people hold on to their cards for a decade+

  • finitebanjo@lemmy.world · 3 days ago

    TBH why would anybody even upgrade from the 30 series or an AMD card? At this point performance gains have almost plateaued.

    • Alphane Moon@lemmy.world (OP) · 3 days ago

      I see zero reason to upgrade from my 3080, and I have a 1440p primary monitor (it only goes up to 75 Hz, but that’s fine with me).

    • MudMan@fedia.io · 3 days ago

      The performance gains per watt and per dollar have plateaued. Unless you have a really cool distributed rendering solution for 3090s, absolute top-end performance has still increased very significantly. And I’m saying 3090s because that’s Ampere’s effective consumer top of the line. There are clearly plenty of upgrade paths from a 3060, even if the one you find optimal isn’t a 5060.

      Sometimes people just say things, man.

  • real_squids@sopuli.xyz · 3 days ago

    Neither card was using any sort of thermal pads to connect the power delivery portion of the PCB, where the hotspot is located, to each GPU’s respective backplate.

    lol

    Anybody still remember when Palit were good? Showing my age here.

  • Alphane Moon@lemmy.world (OP) · 3 days ago

    Sounds like Nvidia is pressuring the AIBs with their fake MSRPs, and this is what we get as a result.

    • sp3ctr4l@lemmy.dbzer0.com · 3 days ago

      No, not really.

      Nvidia is pressuring partner board mfgrs to stay closer to MSRP…

      But the fundamental problem is Nvidia’s actual design guidelines, which this article and the actual source this article is based on state directly, in detail.

      The problem is Nvidia is designing shitty cards that physically can’t handle the heat from the way they do power management/routing through the board itself.

      Recently, several current graphics card models in the RTX 5000 series, including the RTX 5080, 5070 (Ti) and 5060 Ti in particular, have shown thermal anomalies in the area of local hotspots on the back of the board in my tests.

      These affect cards from major board partners such as Palit, PNY and MSI as well as variants from other manufacturers, which (have to) largely adhere to the reference design specified by NVIDIA.

      The thermal load does not manifest itself as a systemic temperature problem of the GPU cores themselves, but in the form of pronounced heat nests below the power supply – often in areas that are hardly cooled or mechanically connected at all when viewed from the rear.

      https://www.igorslab.de/en/local-hotspots-on-rtx-5000-cards-when-board-layout-and-cooling-design-do-not-work-together/

      Basically, this is the whole… literally melting/fires starting in recent Nvidia GPUs at the power connector… thing, or something quite similar to it, on the 50 series.

      https://youtube.com/watch?v=Y36LMS5y34A

      The partner mfgrs, as I highlighted… are largely following Nvidia’s design specs quite closely.

      Meaning this issue is almost certainly present in actual Nvidia reference cards as well, but there are far fewer of those, so it is harder to get a good sample size to do a study.

      Not sure if you’ve been following this generation of Nvidia cards very closely… but there have been tons of other hardware defects coming straight from Nvidia, such as defective/broken/deactivated ROP clusters, which are basically a specialized subcomponent of the GPU die itself, similar to how tensor cores or CUDA cores are specialized subcomponents.

      The partner board mfgrs just get those chips from Nvidia, and while they should probably be quality testing them better and not selling the defective ones they assemble into their own boards… they are also not the ultimate cause of that problem…

      And having a missing ROP cluster or two will kneecap your GPU’s performance, more significantly on lower tier cards.
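
      Rough back-of-the-envelope on why that hits the lower tiers harder, assuming the publicly reported figure of 8 ROPs per disabled cluster and roughly 176 / 112 / 96 ROPs on the 5090 / 5080 / 5070 Ti (treat the exact counts as approximate):

      ```python
      # Approximate published ROP counts (assumed); one defective cluster = 8 ROPs.
      rop_counts = {"RTX 5090": 176, "RTX 5080": 112, "RTX 5070 Ti": 96}
      rops_per_cluster = 8

      for card, total in rop_counts.items():
          lost_pct = rops_per_cluster / total * 100
          print(f"{card}: losing one cluster = ~{lost_pct:.1f}% of its ROPs")
      ```

      Same defect, but it costs a 5090 roughly 4.5% of its pixel throughput and a 5070 Ti over 8%, which is why the smaller cards feel it more.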

      Also, JayzTwoCents just in the last 24 hours put out a video showing that the latest mainline Nvidia driver update on Windows (576.02) basically breaks the way internal GPU temps are reported to most third-party software that monitors GPU temps and applies custom fan RPM curves, overclocking and whatnot.

      This results in your GPU temp not getting updated in that software in many scenarios, which then means your fans don’t actually ramp up, which then means your GPU overheats.
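
      For anyone wondering why a stale temperature reading is so dangerous, here’s a minimal sketch of the polling loop that fan-curve tools effectively run; the read_gpu_temp / set_fan_duty names are stand-ins, not any specific tool’s API. If the driver keeps returning the same cached value, the curve never ramps the fans no matter how hot the card actually gets:

      ```python
      import time

      def apply_fan_curve(read_gpu_temp, set_fan_duty, curve):
          """Poll the GPU temp and map it onto a fan duty cycle (percent).

          `curve` is a list of (temp_c, duty_pct) points, sorted by temperature.
          If `read_gpu_temp` returns a frozen/stale value, the duty cycle never
          changes either -- which is exactly the failure mode described above.
          """
          while True:
              temp = read_gpu_temp()      # stale reading -> stale fan speed
              duty = curve[0][1]
              for threshold, pct in curve:
                  if temp >= threshold:
                      duty = pct
              set_fan_duty(duty)
              time.sleep(1.0)             # typical ~1 s polling interval
      ```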

      https://youtube.com/watch?v=KrCEPX47vtw

      However… it does seem that this was fixed… but literally only because Jay fucking put Nvidia on blast. The error never should have been present in the first place; there really isn’t a conceivable reason to change the fundamental way that… drivers have been reporting temps to other software for… a decade? Two decades?

      And also Nvidia isn’t like… widely telling people ‘oh shit please do a driver rollback’… they just kind of quietly published an optional hotfix that is fairly hard to find, you’d have to have basically watched Jay’s first video to even really know this problem exists…

      …and if it affects you, because you use custom fan curves… well, it’ll cause massive damage to your card.

      https://youtube.com/watch?v=W9ztK2pFe64

      Finally: AiB means ‘Add in Board’.

      Any GPU is an add in board.

      I know everyone uses the term AiB to mean ‘non Nvidia/AMD/Intel reference card, produced by a partner mfger’… but everyone is wrong.

      A reference card … is an AiB.

      • MudMan@fedia.io · 3 days ago

        I do appreciate the pedantry about terminology.

        I do have a few questions about the findings here that I’d like to see the specialized press cover before I start recommending people go drilling holes through their backplates. The obvious is how this works across a wider array of cards, since the sample in the piece is so small, but also whether undervolting would help or if that’d all be downstream from the potentially affected segments of the board.

        • sp3ctr4l@lemmy.dbzer0.com · 3 days ago

          My semi-educated guess is that undervolting could help, but the real problem is just literally fundamentally bad design with the power connector / raw amount of power needed in such a small space.
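
          For what it’s worth, the reason undervolting could help at all is that dynamic power scales roughly with V² at a given clock, so even a modest voltage drop cuts heat output noticeably. Very rough sketch, with illustrative (not measured) numbers:

          ```python
          # Dynamic power scales roughly as P ~ C * V^2 * f at a fixed clock.
          stock_v, undervolt_v = 1.05, 0.95   # volts, illustrative values only
          board_power_w = 575                 # a 5090-class card at full tilt

          scale = (undervolt_v / stock_v) ** 2
          print(f"~{(1 - scale) * 100:.0f}% less dynamic power "
                f"({board_power_w} W -> ~{board_power_w * scale:.0f} W, "
                f"ignoring static/leakage power)")
          ```

          …but none of that changes where the current gets routed on the board, which is where the hotspots actually sit.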

          Like… we literally already saw this kind of nonsense with the 40 series…

          https://www.tomshardware.com/pc-components/gpus/nvidia-confident-that-rtx-50-series-power-connectors-unlikely-to-melt-despite-higher-tdp

          So uh… no… ‘Tom’…? … no it does not look like everything has been solved on this front.

          Of course yes, I too would like to see more specialized, actual investigation into this… but uh yeah, my gut feeling is … they are just flying too close to the sun, have hit the limits of the physics of heat dispersion.

          A 5090 has a max 575W power draw.

          That is fucking insane. That’s almost the power draw of an entire decently high-end gaming PC from a decade ago, maybe even more recently than that.

          Even if the 50 series doesn’t have literally exploding power connectors, they evidently just cannot actually sufficiently manage the heat purely generated from the electrical power itself…
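
          Quick sanity check on why 575 W through one small connector is such a problem, assuming the usual 12VHPWR / 12V-2x6 layout with six 12 V pins; the per-contact resistance figure is an assumption for illustration, not a measured spec:

          ```python
          # I^2 * R heating at the connector: small resistances, big currents.
          power_w, volts = 575, 12
          pins = 6                    # 12 V pins on a 12VHPWR / 12V-2x6 plug
          contact_res_ohm = 0.006     # ~6 mOhm per contact, assumed for illustration

          total_a = power_w / volts           # ~48 A total
          per_pin_a = total_a / pins          # ~8 A per pin if perfectly balanced

          heat_balanced_w = per_pin_a ** 2 * contact_res_ohm        # per contact
          heat_hot_pin_w = (total_a / 2) ** 2 * contact_res_ohm     # one pin takes half the load

          print(f"{total_a:.0f} A total, {per_pin_a:.1f} A per pin when balanced")
          print(f"~{heat_balanced_w:.2f} W per contact balanced vs ~{heat_hot_pin_w:.1f} W "
                f"in one contact if it ends up carrying half the current")
          ```

          A few watts doesn’t sound like much until it’s concentrated in a contact patch a couple of millimetres across, with nothing cooling it, because the heating goes up with the square of the current.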

          Like… fans aren’t enough, and if they wanted to make these things last at all, they’d sell them all with AiO liquid cooling loops and fans.

          But I get the strong impression Nvidia fully doesn’t give a shit about the pc gaming crowd, is abusing their monopoly status and cult like fandom, is knowingly and intentionally selling unreliable garbage that is designed to intentionally obsolete itself.

          Nvidia’s entire pivot into AI upscaling, frame gen… that they massively coerced the wider gaming industry into… this is an unsustainable paradigm.

          • MudMan@fedia.io · 3 days ago

            That is a broader and well-litigated issue at this point. Going for more power draw wouldn’t be a problem in itself (hey, your microwave will pull 1000 W and it doesn’t spontaneously combust). The problem is they designed the whole thing around what would (barely) safely handle 350 W while staying tidy, and instead they are pushing 600 W through it with meaningful cost-cutting shortcuts.

            That is what it is, and I think it’s a more than reasonable dealbreaker, which leaves this generation of GPUs down to a low-tier Intel card with its own compatibility issues and a decent but expensive mid-tier AMD offering. We are at a very weird impasse and I have no intuition about where it goes looking forward.

            • sp3ctr4l@lemmy.dbzer0.com · 3 days ago

              … but your microwave is literally designed to heat things, and be used in short bursts, not constantly.

              But, yeah, it seems we basically agree.

              This whole situation is a mess.

              They would need to invent… some entirely new standard or paradigm for power distribution… or slap a liquid cooler on these things, and just fully announce ‘lol, we only make gaming hardware for the top 5% of PC gamers, by income distribution’.

              • MudMan@fedia.io · 3 days ago

                Yeah, that’s my point about the microwave thing. It’s not that the total power is too much, it’s that you need more reliable ways to get it where it needs to be.

                I don’t understand how massively ramping up the power led to thinner wires and smaller plugs, for one thing. Other than someone got fancy and wanted prettier looking cable management over… you know, the laws of physics. Because apparently hardware manufacturers haven’t gotten past the notion that PC enthusiasts want to have a fancy aquarium that also does some computing sometimes. They should have made this thing a chonker with proper mains power wires. It’s called hardware for a reason.
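
                It really is backwards on paper. A rough comparison of the old 8-pin PCIe connector against the new one, taking the standard ratings (150 W over three 12 V pins vs. 600 W over six) as a sketch:

                ```python
                # Per-pin current: old 8-pin PCIe vs 12VHPWR, at their rated limits.
                connectors = {
                    "8-pin PCIe (150 W, 3x 12 V pins)": (150, 3),
                    "12VHPWR (600 W, 6x 12 V pins)":    (600, 6),
                }
                for name, (watts, pins) in connectors.items():
                    amps_per_pin = watts / 12 / pins
                    print(f"{name}: ~{amps_per_pin:.1f} A per pin")
                ```

                So each pin is asked to carry roughly double the current, through a physically smaller contact, with far less safety margin than the old connector had. That’s the ‘thinner wires, smaller plugs’ thing in numbers.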

                But I agree that the other option is completely changing how a PC is built. If you’re gonna have a GPU pulling 600W while the entire rest of the system is barely doing half of that maybe it’s time to rethink the idea of a modular board with sockets for cards, CPU and RAM and cables for power delivery. This entire structure was designed for 8086s and Motorola 68000s back when ISA ports were meant to hold a plug for your printer, a hard drive controller and a sound card. Laptops have moved on almost entirely from this format and there are plenty of manufacturers now building PCs on laptop hardware, Apple included.

                Maybe it’s time you start buying a graphics card with integrated shared memory and a slot to plug in a modular CPU instead. Maybe the GPU does its own power management and feeds power to the rest of the system instead of the other way around.

                I don’t know, I’m not a hardware engineer. I can tell the current way of doing things for desktop PCs is dumb now, though.

                • sp3ctr4l@lemmy.dbzer0.com · 3 days ago

                  Erp, you posted as I was editing in an addendum, here’s my addendum.

                  EDIT:

                  It is still absolutely wild to me… just what the fuck is the point of this new real-time ray tracing paradigm, which necessitated frame upscaling… which also necessitates framegen… which (Sony has announced they are looking into this) may soon necessitate AI-assisted input to hallucinate what the player ‘probably wants to be doing’, to counteract the input lag from framegen?
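
                  The input-lag part is just arithmetic, by the way. Frame generation only inserts extra displayed frames; your inputs are still sampled at the real frame rate, plus roughly one extra real frame of buffering so there is something to interpolate towards. Rough sketch with round numbers:

                  ```python
                  # Why framegen at a low base frame rate still feels laggy.
                  real_fps = 20                  # what the GPU actually renders
                  gen_factor = 4                 # e.g. 4x multi-frame generation
                  displayed_fps = real_fps * gen_factor

                  real_frame_ms = 1000 / real_fps
                  # Generated frames need the *next* real frame to interpolate towards,
                  # so roughly one extra real frame of latency is added on top.
                  approx_input_lag_ms = real_frame_ms * 2

                  print(f"Displayed: {displayed_fps} FPS, but input-to-photon is still "
                        f"on the order of {approx_input_lag_ms:.0f} ms -- a real-20-FPS feel, plus buffering")
                  ```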

                  Cyberpunk 2077, years after release… oh, you want to actually run path tracing at 4K, maxed settings?

                  A 4090 with 24 GB of VRAM gets you 20 (real) FPS on average.

                  What? Why… why are we even designing games with features that just… no hardware can actually run, no matter how much money you throw at that hardware?

                  Apparently a 5090 can get up to 60ish (real) FPS…

                  This is insane. CP77 is … the flagship debut and testbed of this new paradigm, and here we are like 5 years later, and Nvidia is saying ok so you can now finally actually do what we initially said you could do 5 years ago, just buy this… currently, roughly $4000 video card.

                  … What?

                  Ok, to reply to what you just said:

                  Yeah, no notes, total agreement, I am now too angry to say anything more poignant.