• Mbourgon everywhere@lemmy.world · 13 hours ago

      I’ll be honest, when I first heard that Mozilla had come out with an AI, I figured it was on the back of them trying a couple of different ad scenarios, and assumed the worst. Pleasantly surprised by Orbit.

    • lime!@feddit.nu · 12 hours ago

      it’s a good idea not to look too deeply into the past actions of the creator of llamafile. she’s pretty polarising.

        • lime!@feddit.nu · 5 hours ago

          she was the face of the occupy wall street movement, but her views back then were more ancap than anti-capitalist. while working for google she tried to petition the us government to shut itself down and hand the reins over to the tech industry, with google’s ceo as president.

          the base of the APE library that powers llamafile is called Cosmopolitan Libc, iirc in direct reference to the old soviet term.

          to give her credit, she’s mellowed out a lot in recent years.

      • interdimensionalmeme@lemmy.ml · 9 hours ago

        I don’t care who they are or what their Xitter history is.

        The tool is great, and it isn’t backdoored. I ruthlessly use any effective tool I can get my hands on.

        Using open source software doesn’t even entail economic support for its creator on its own.

        • lime!@feddit.nu · 5 hours ago

          llamafile is not really “effective”. it’s incredibly impressive, but it’s the opposite of effective: it’s a collection of hacks that relies on coincidences in OS design, and it works by basically recompiling itself on the fly to run on different architectures.

          if you want effective, run llama.cpp compiled with actual optimizations for your platform.
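
          a rough sketch of what that looks like (the exact cmake flags and binary names shift between llama.cpp versions, and the model file here is just a placeholder; the default cmake build already targets the host cpu, and gpu backends have their own extra flags):

              # fetch and build llama.cpp with a release build for the local machine
              git clone https://github.com/ggerganov/llama.cpp
              cd llama.cpp
              cmake -B build -DCMAKE_BUILD_TYPE=Release
              cmake --build build --config Release -j

              # run a local gguf model (path is hypothetical)
              ./build/bin/llama-cli -m ./models/model.gguf -p "hello"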