• thickertoofan@lemm.ee · 21 hours ago

    Yeah, an LLM seems like the go-to solution, and the best one. As for resources, we can get away with barely-smart models that can still generate coherent sentences, e.g. 0.5B-3B models offloaded to CPU-only inference. A minimal sketch of that setup is below.
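
    Just to illustrate the idea, here is a minimal sketch of CPU-only inference with a small model using the Hugging Face transformers library; the specific model name (Qwen/Qwen2.5-0.5B-Instruct) is only an example, any instruction-tuned model in the 0.5B-3B range would do.

    ```python
    # Sketch: run a small (~0.5B parameter) model on CPU only.
    # Model choice here is an example; swap in any small instruct model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # With no GPU available (or device_map left unset), the model loads on CPU.
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Write one coherent sentence about local LLM inference."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```

    At these sizes the weights fit comfortably in a few GB of RAM, so generation stays usable even without any GPU offload.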