• BroBot9000@lemmy.world · 20 hours ago

    Or developers could just optimize their games instead of using this generative drivel to compensate for lazy or rushed development.

    Edit: to everyone but [email protected]

    I’ve already blocked your accounts for being AI apologists. 😂 Literally can’t see your dumb responses.

    Glad to see the pro-AI horde is still prowling the forums for validation.

    • FauxLiving@lemmy.world · 2 days ago

      No matter how optimized a game is, there will be someone with hardware that can barely run it.

      For those people, having access to upscaling in order to gain performance is a plus.

      • inclementimmigrant@lemmy.world · 2 days ago

        Which is what this tech was supposed to be for when it was first pitched to gamers: a tool to help extend the usable life of a GPU.

        But we know now that’s not how the tech is being used, and especially for Nvidia, that’s not how it’s marketed at this point. It seems developers are just expecting upscaling to fill in the gap for not doing a proper job to begin with.

        ETA: also don’t forget that it’s not just upscaling; Nvidia is pushing fake frames as the standard too in its marketing and optimization push.

        • FauxLiving@lemmy.world · 1 day ago

          Frame generation is a requirement if we’re going to see very high refresh rate (480 Hz+) displays become the norm. No card is rasterizing an entire scene 500 times per second.

          Calling it fake frames is letting Internet memes stand in place of actual knowledge. There are a lot of optimizations in the rendering pipeline that use data from previous frames to generate future frames; generating an intermediate frame while the GPU finishes rendering the next one is just one trick.
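
          To make “just one trick” concrete, here’s a toy sketch of the simplest possible in-between frame: a plain linear blend of two rendered frames. Real frame generation (DLSS FG, FSR 3, Lossless Scaling) uses motion vectors or optical flow rather than naive blending, so treat this as a concept illustration only, not any vendor’s actual algorithm:

          ```python
          # Toy concept only: real frame generation warps pixels along motion
          # vectors / optical flow instead of blending, to avoid ghosting.
          import numpy as np

          def fake_intermediate(prev_frame: np.ndarray, next_frame: np.ndarray,
                                t: float = 0.5) -> np.ndarray:
              """Blend two rendered frames to approximate an in-between frame."""
              return (1.0 - t) * prev_frame + t * next_frame

          # Two stand-in 1080p frames (height x width x RGB), values in [0, 1]
          frame_a = np.random.rand(1080, 1920, 3)
          frame_b = np.random.rand(1080, 1920, 3)
          middle = fake_intermediate(frame_a, frame_b)  # shown between a and b
          ```

          A naive blend like this ghosts badly on fast motion, which is exactly why the real implementations track motion between frames instead.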

          The generated frames increase the visual clarity of motion, as you can see at https://testufo.com/photo.

          We’re not going to have cards that can path trace at 4K@1000 Hz anytime soon; frame generation is one of the techniques that will make it possible.

          It’s one thing to be upset at companies’ marketing teams who try to confuse people with FPS numbers by tweaking upscaling and frame generation. Directing that frustration at the technology itself is silly.

          e: a downvote, great argument

          • inclementimmigrant@lemmy.world · 1 day ago

            Yeah, downvoted because I woke up and saw this absolutely ridiculous strawman that bordered on marketing drivel worthy of Nvidia and the monitor manufacturers’ advertising wing.

            1. The argument is that this tech is being used by both the manufacturers and game devs to be lazy and to market lies, not about how we can ever get to 1000 Hz with path tracing.
            2. The supposed 500 Hz benefits are dubious and subjective at best, considering even going from 144 to 240 you’re already seeing large diminishing returns, but that’s really a whole other argument about current monitor BS.
            3. Being a complex solution doesn’t make it a good solution, and frame gen is not a good solution for making sure your game doesn’t run like ass.
            4. Frame generation is supposed to help older cards get better “FPS” and smooth out motion. You know what would help more than having new games use frame generation as a big-ass crutch? Optimizing your damn game so it doesn’t stutter like a drunken sailor with a speech impediment in the first place, and not adding a crap ton of latency with fake frames.
            • FauxLiving@lemmy.world · 1 day ago

              > The argument is that this tech is being used by both the manufacturers and game devs to be lazy and to market lies, not about how we can ever get to 1000 Hz with path tracing.

              Yeah, marketing lies. I mentioned this in the last paragraph.

              > The supposed 500 Hz benefits are dubious and subjective at best, considering even going from 144 to 240 you’re already seeing diminishing returns, but that’s really a whole other argument about current monitor BS.

              You’re skeptical of the benefits; that much is obvious.

              You’re wrong about it being subjective, though. There are peer-reviewed methods of creating photographs that capture motion blur as a human eye would experience it, and people have been using these techniques to evaluate monitors for years now. Here’s a very high-level overview of the state of objective testing: https://blurbusters.com/massive-upgrade-with-120-vs-480-hz-oled-much-more-visible-than-60-vs-120-hz-even-for-office/ . We see diminishing returns because it roughly takes a doubling of the refresh rate to cut the motion blur in half: 60 to 120 Hz is half as blurry, while 144 to 240 Hz is only about 40% less blurry.
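
              As a back-of-the-envelope check, here’s a quick sketch of that rule, assuming an ideal sample-and-hold display where perceived motion blur scales with frame persistence (1 / refresh rate):

              ```python
              # Sketch: assumes an ideal sample-and-hold display where perceived
              # motion blur is proportional to frame persistence (1 / refresh rate).
              def blur_reduction(old_hz: float, new_hz: float) -> float:
                  """Fraction of motion blur removed by going from old_hz to new_hz."""
                  return 1.0 - (old_hz / new_hz)

              for old, new in [(60, 120), (144, 240), (240, 480), (480, 1000)]:
                  print(f"{old:>4} -> {new:>4} Hz: {blur_reduction(old, new):.0%} less blur")
              # 60 -> 120: 50%, 144 -> 240: 40%, 240 -> 480: 50%, 480 -> 1000: 52%
              ```

              Each worthwhile step in blur reduction needs roughly another doubling of the refresh rate, which is the exponential requirement described below.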

              If you want to keep seeing noticeable gains, up to the point where motion blur becomes imperceptible, then display refresh rates need to keep doubling and there have to be new frames generated for each of those refreshes. Even if a card can do 480 fps in a few lightweight games, it can’t do 1000 fps, or 2000 fps.

              We need exponential increases in monitor refresh rates to achieve improvements in motion blur, but graphics cards have not been making exponential gains in performance for quite some time.

              Rasterization and ray tracing performance growth is sub-exponential, while the requirements for reducing motion blur are exponential. So either monitor companies decide to stop improving (not likely, since TCL just demoed a 4K 1000 Hz monitor) or there has to be some technological solution to fill the gap.

              That technological solution is frame generation.

              Unless you know of some other way to introduce exponential growth in processing power (if you did, you’d win multiple Nobel Prizes), we have to use something that isn’t raw rendering. There is no way for a game to ‘optimize’ its way into a 10x, or 100x, framerate.
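
              To put rough numbers on that gap (a purely hypothetical example; the 120 fps base render rate is an assumption, not a claim about any particular card):

              ```python
              # Rough illustration: displayed frames needed per rendered frame to
              # saturate a given refresh rate. The 120 fps render rate is an
              # arbitrary assumption for the example.
              rendered_fps = 120
              for display_hz in (240, 480, 1000, 2000):
                  multiplier = display_hz / rendered_fps
                  print(f"{display_hz:>4} Hz needs ~{multiplier:.1f}x frame generation")
              # 240 Hz ~2.0x, 480 Hz ~4.0x, 1000 Hz ~8.3x, 2000 Hz ~16.7x
              ```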

              > Being a complex solution doesn’t make it a good solution, and frame gen is not a good solution for making sure your game doesn’t run like ass.

              Yes, game companies are lazy, and they cover that laziness by marketing their games with a lot of upscaling so that they can keep producing crazier and crazier graphics despite graphics card performance growth not keeping up. That is the fault of game companies and their marketing, not of upscaling and frame generation technology.

              > Frame generation is supposed to help older cards get better “FPS” and smooth out motion. You know what would help more than having new games use frame generation as a big-ass crutch? Optimizing your damn game so it doesn’t stutter like a drunken sailor with a speech impediment in the first place, and not adding a crap ton of latency with fake frames.

              Frame generation gives all cards better FPS, which objectively smooths out motion. Going from 30 to 60 fps cuts motion blur in half. Nothing supposed about it.

              A developer’s choice to optimize their game and their choice to support upscaling and frame generation are not mutually exclusive choices. There are plenty of examples of games which run well natively and also support frame generation and upscaling.

              Also, frame generation only adds meaningful latency when the frame time is long (low FPS); as the source framerate increases, the added delay shrinks along with the frame time. In addition, it’s possible to use frame generation to reduce input delay (Blur Busters: https://blurbusters.com/frame-generation-essentials-interpolation-extrapolation-and-reprojection/). Input latency is a very solvable problem.
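
              A back-of-the-envelope sketch of why the added delay shrinks, assuming interpolation-based frame generation that has to hold back roughly one source frame (the extrapolation/reprojection approaches in the linked article avoid even that):

              ```python
              # Sketch: interpolation-based frame generation buffers roughly one
              # source frame, so the added delay is about one source frame time.
              # Real numbers vary by implementation; this is only the rough model.
              for source_fps in (30, 60, 120, 240):
                  frame_time_ms = 1000.0 / source_fps
                  print(f"{source_fps:>3} fps source: ~{frame_time_ms:.1f} ms added delay")
              # 30 fps ~33.3 ms, 60 fps ~16.7 ms, 120 fps ~8.3 ms, 240 fps ~4.2 ms
              ```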


              My point is that you’re not accounting for the trajectory of display hardware development versus graphics card performance growth, and you’re presenting frame generation and upscaling as some plot by game developers and graphics card designers to produce worse products.

              It’s conspiracy nonsense.

        • real_squids@sopuli.xyz · 2 days ago

          It’s very unfortunate that all of this shiny new tech is often only present on the latest GPUs; this is a good exception to what looks like a forever rule.

          I understand there were big changes between RDNA 3 and 4, but if you look at GCN and its support through the generations, this trend still seems greedy as hell.

      • SuiXi3D@fedia.io · 2 days ago

        As an example of how this tech can be useful: sometimes, games just hitch for a quick second. It can be any number of reasons why. Even on a ‘perfect’ system it can happen. Such is the case with my PC and emulating Android to play Destiny Rising. No matter what, it just likes to hitch occasionally. With Lossless Scaling’s frame generation, it’s buttery smooth. I don’t notice any input lag (base FPS is 60), so everything’s all good.

        I also use Lossless Scaling on my Lenovo Legion Go a lot. Just helps things look that much better.

        • FauxLiving@lemmy.world · 1 day ago

          Frame generation objectively reduces motion blur and improves frame consistency.

          Neural network-based upscaling is a far better alternative. Previously, back in the time of the dinosaurs, we’d get a better frame rate by turning the resolution down and letting the monitor handle the upscaling. It looked bad, but a higher frame rate is often more important for perceived image quality than resolution. Now we get the same performance boost with much less loss of visual clarity, plus some antialiasing for free on top of it.
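
          For a sense of the performance headroom involved, here’s a quick sketch using commonly cited render-scale factors (roughly 0.67x for a “quality” mode down to 0.5x for a “performance” mode; the exact ratios vary by upscaler, so treat these as illustrative assumptions):

          ```python
          # Sketch: fraction of pixels actually shaded at common upscaler render
          # scales, relative to native 4K. Scale factors are illustrative, not
          # exact for any particular upscaler or quality mode.
          native_w, native_h = 3840, 2160
          for label, scale in [("native", 1.0), ("quality ~0.67x", 0.67),
                               ("balanced ~0.58x", 0.58), ("performance 0.5x", 0.5)]:
              w, h = int(native_w * scale), int(native_h * scale)
              fraction = (w * h) / (native_w * native_h)
              print(f"{label:<17} {w}x{h}  ({fraction:.0%} of native pixels)")
          # performance mode shades roughly a quarter of the pixels of native 4K
          ```

          Shading a quarter to half of the pixels is where the frame-rate headroom comes from; the upscaler then reconstructs the rest from previous frames and motion data.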

          Upscaling and frame generation are good technologies. People are upset at the marketing of graphics cards which abuse these technologies to announce impressive FPS numbers when the hardware isn’t as big of an upgrade as implied.

          Marketing departments lying about their products isn’t new, but for some people this is the first time that they’ve noticed it affecting them. Instead of getting mad at companies for lying, they’re ignorantly attacking the technologies themselves.

    • Coelacanth@feddit.nu · 2 days ago

      Just because some developers are bad or lazy at optimisation doesn’t make these tools bad. Unoptimized games have existed for far longer than AI upscaling tools. If I can use DLSS to still get solid framerates in new releases without needing to buy a new $2000 graphics card every two years, that sounds pretty good in my book. I get why some people dislike Frame Generation, as it does typically come with some input lag and is a bit of a win-more tool in that you need 60+ FPS in the first place for it to work well. But DLSS/FSR are good tools and one of the best applications of AI.