• utopiah@lemmy.ml · edit-2 · 3 days ago

    Very interesting, thanks for sharing https://github.com/theaiautomators/insights-lm-local-package

    Honestly though, it might take 15 min to configure, 1 hr to let it run so it pulls all the images, dependencies, etc., 30 min to debug GPU passthrough with the right driver version, 10 min to find the right endpoint… and then 1 min to realize that, sure, you can give it a PDF and “chat” with it, but nothing particularly interesting or actually insightful will come out of it, especially if the paper itself is well written, i.e. has a proper introduction, structure, etc.
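
    For what it's worth, the GPU-passthrough step is usually where a setup like this stalls: with Docker Compose, the NVIDIA runtime has to be requested explicitly per service. A minimal sketch of that part, purely as an assumption — the service and image names here are placeholders, not taken from the linked repo:

    ```yaml
    # Hypothetical compose fragment; service/image names are illustrative,
    # not from insights-lm-local-package.
    services:
      llm:
        image: ollama/ollama   # assumption: some local LLM backend
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: all
                  capabilities: [gpu]
    ```

    A quick sanity check before blaming the app is `docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi` — if that fails, the driver/toolkit mismatch is on the host, not in the package.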

    So… I’m leaving this comment here to maybe try it one day. I’ve updated my list of local AI services, but most likely I won’t bother anymore.

  • irmadlad@lemmy.world · 3 days ago

      Happy to help… if I did. LOL. I’ve run a few local LLMs just to stay versed in the subject; however, my equipment isn’t modern enough to handle the requirements, so self-hosting isn’t justifiable for me.

    • utopiah@lemmy.ml · 2 days ago

        You did. Well, my point is that nobody needs this kind of equipment in the first place anyway, because 99% of the “useful” stuff an average office worker does isn’t actually LLM work — it’s usually STT (speech-to-text). The rest, e.g. GenAI video, is for shits & giggles; vibe coding doesn’t work except in a few super narrow cases (e.g. quickly transforming a file when you don’t care about 100% accuracy and a converter doesn’t already exist); and last but not least, GenAI on text itself is mostly used for spam, scams, and cheating at school.

        So… please don’t feel “left behind” if you can’t self-host these kinds of tools; it seems to me it’s nearly never justifiable!