• brucethemoose@lemmy.world · 1 day ago

      It’s going to be slow as molasses on Ollama. It needs a better runtime, and GLM 4.5 probably isn’t supported at the moment anyway.
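      As a minimal sketch of what “a better runtime” can look like, here’s loading a local GGUF through the llama-cpp-python bindings directly instead of the Ollama wrapper. The model filename and settings are hypothetical, and a GLM 4.5 GGUF may not load at all until support lands upstream:

      ```python
      # Sketch: loading a local GGUF model with llama-cpp-python
      # instead of going through Ollama. Install: pip install llama-cpp-python
      from llama_cpp import Llama

      llm = Llama(
          model_path="./model-Q4_K_M.gguf",  # hypothetical local quant
          n_ctx=8192,    # context window
          n_threads=8,   # CPU threads, tune to your machine
      )

      out = llm.create_chat_completion(
          messages=[{"role": "user", "content": "Hello, what are you?"}],
          max_tokens=128,
      )
      print(out["choices"][0]["message"]["content"])
      ```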

        • WorldsDumbestMan@lemmy.today · 23 hours ago

          Qwen3 8B, sorry, idiot spelling. I use it to talk through problems when I have no internet or have maxed out on Claude. I can rarely trust it with anything reasoning-related; it’s faster and easier to do most things myself.

          • brucethemoose@lemmy.world · edited 23 hours ago

            Yeah, 7B models are just not quite there.

            There are tons of places to get free access to bigger models. I’d suggest Jamba, Kimi, DeepSeek Chat, Google AI Studio, and the new GLM chat app: https://chat.z.ai/

            And depending on your hardware, you can probably run better MoEs at the speed of an 8B. Qwen3 30B is so much smarter it’s not even funny, and it’s faster on CPU, since as a MoE it only activates about 3B of its parameters per token.
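            To make the MoE point concrete, here’s a hedged sketch running a Qwen3 30B MoE quant CPU-only with llama-cpp-python. The filename is hypothetical, and the speed claim rests on only ~3B of the 30B parameters being active per token:

            ```python
            # Sketch: CPU-only inference with a Qwen3 30B-A3B GGUF quant.
            # Because only ~3B parameters activate per token, generation
            # speed can land near that of a small dense model.
            from llama_cpp import Llama

            llm = Llama(
                model_path="./Qwen3-30B-A3B-Q4_K_M.gguf",  # hypothetical quant
                n_ctx=4096,
                n_threads=8,       # tune to your physical core count
                n_gpu_layers=0,    # force CPU-only to illustrate the point
            )

            for chunk in llm.create_chat_completion(
                messages=[{"role": "user", "content": "Why are MoE models fast on CPU?"}],
                max_tokens=200,
                stream=True,
            ):
                delta = chunk["choices"][0]["delta"]
                print(delta.get("content", ""), end="", flush=True)
            ```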