• futatorius@lemm.ee

    Where I work, we’ve been looking into data compression that’s optimized by an ML system. We have a shit-ton of parameters, and the ML algorithm compares each parameter’s number of significant figures to its byte size and truncates where that doesn’t cause any loss of fidelity. So far it looks promising, with a really good compression factor, but we still need to do more work on de-skilling the decompression at the receiving end. A rough sketch of the truncation idea is below.
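
    A minimal sketch of what that sig-fig-vs-byte-size check could look like, in Python with numpy. All the names here (`sig_figs_needed`, `truncate_params`) are made up for illustration, not from the actual system described above, and it only handles the simple float64-to-float32 case:

    ```python
    # Illustrative only: compare each parameter's significant figures to its
    # storage width and downcast only where the round trip is exact.
    import numpy as np

    def sig_figs_needed(x: float, max_figs: int = 17) -> int:
        """Smallest number of significant figures that round-trips x exactly."""
        for figs in range(1, max_figs + 1):
            if float(f"{x:.{figs}g}") == x:
                return figs
        return max_figs

    def truncate_params(params: np.ndarray) -> np.ndarray:
        """Downcast float64 params to float32 where that loses no fidelity."""
        figs = max(sig_figs_needed(float(v)) for v in params)
        # float32 carries roughly 7 decimal significant figures; only downcast
        # if every value survives the narrowing round trip unchanged.
        if figs <= 7 and np.all(params.astype(np.float32).astype(np.float64) == params):
            return params.astype(np.float32)
        return params

    if __name__ == "__main__":
        p = np.array([0.25, 1.5, 3.125])                  # exact in float32
        print(truncate_params(p).dtype)                   # float32
        print(truncate_params(np.array([np.pi])).dtype)   # float64: pi needs more figs
    ```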

    I wouldn’t have thought an LLM was the right technology to use for something like this.