• vivendi@programming.dev · 1 day ago

You need to run the model yourself and heavily tune the inference, which is why you haven't heard of it: most people think using shitGPT is all there is to LLMs. And how many people even have the hardware to do that anyway?

I run my own local models with my own inference stack, which really helps. There are online communities where you can learn how to do it too (won't link bcz Reddit), so no need to take my word for it.
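To give a concrete idea of what "tuning the inference" means: a lot of it is just sampling parameters like temperature and top-p, which most hosted UIs hide from you. Here's a toy, library-free sketch of that sampling step (the function name and defaults are mine, not from any particular inference engine):

```python
import math
import random

def sample_next_token(logits, temperature=0.7, top_p=0.9, rng=None):
    """Pick a token id from raw logits using temperature + nucleus (top-p) sampling."""
    rng = rng or random.Random()
    # Temperature: <1 sharpens the distribution (more deterministic), >1 flattens it
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus (top-p): keep the smallest set of tokens whose cumulative mass >= top_p
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Sample from the renormalized nucleus
    kept_mass = sum(probs[i] for i in kept)
    r = rng.random() * kept_mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With a very low temperature the nucleus collapses to the single most likely token, which is the "deterministic" end of the dial; raising temperature and top-p widens it. Real engines expose the same knobs plus others (repetition penalty, min-p, etc.), and getting them right per model is a big part of the tuning.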