• MHLoppy@fedia.io
    27 days ago

    That’s an interesting take, but I reckon it’d be a tough sell given the latency penalty it’d bring, especially if the core scheduling is ever “wrong” and ends up unnecessarily bouncing threads between the dies. Guess we’ll have to wait and see!

    • HorreC@lemmy.world
      26 days ago

      Yeah, it was the only reason I could think of on the fly for why they’d even be doing this. Maybe with the I/O improvements in Threadripper there’s some locking down of how caching works, so they tend to understand where the data is and which cores will use it. But I would think that’d be more of a scheduler task; I don’t understand that level very well, or the developments in that area.