• schema@lemmy.world
    15 hours ago

    For AI processing, I don’t think it would make much difference if it lasted longer. I could be wrong, but AFAIK the actual transformer inference runs in VRAM, while staging and preprocessing happen in RAM. Anything else wouldn’t really make sense speed- and bandwidth-wise.
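    A minimal sketch of the split described above, assuming PyTorch (the model and shapes are made up for illustration): the model weights live on the GPU (VRAM), while the DataLoader stages and preprocesses batches in ordinary CPU RAM and copies each batch over.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Fall back to CPU so the sketch still runs without a GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for a transformer; its weights live in VRAM when a GPU is present.
model = torch.nn.Linear(64, 8).to(device)

# Raw samples stay in CPU RAM; the DataLoader handles staging/preprocessing.
data = TensorDataset(torch.randn(256, 64))
loader = DataLoader(data, batch_size=32, pin_memory=(device == "cuda"))

for (batch,) in loader:
    # Each batch is staged in RAM, then copied into VRAM for the forward pass.
    batch = batch.to(device, non_blocking=True)
    out = model(batch)

print(tuple(out.shape))  # one batch of activations, e.g. (32, 8)
```

    `pin_memory` and `non_blocking=True` let the host-to-device copy overlap with compute, which is why the staging side is bandwidth-sensitive even though the math itself happens in VRAM.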

    • bassomitron@lemmy.world
      14 hours ago

      Oh I agree, but the speeds in the article are much faster than any current volatile memory. So it could theoretically be used to vastly expand the onboard memory available to accelerators/TPUs/etc.

      I guess if they can replicate these speeds in volatile memory and widen the buses to handle it, then they’d really be onto something here for numerous use cases.