• stephen01king@lemmy.zip
    2 months ago

    I think you’re completely wrong by still comparing skills that have no relation to each other. What’s the similarity between driving and coding that would require an LLM to do one before you can believe it can do the other? Explain that leap in logic properly before you continue with your argument.

    An LLM is designed to output text. Expecting one to drive to prove its ability to output code is like expecting it to dance to prove its ability to produce poems. Its inability to perform an unrelated skill has no bearing on its ability to perform a different one. You’re basically judging a fish on its ability to walk on land, and using that as the basis to judge its ability to swim.

      • stephen01king@lemmy.zip
        2 months ago

        What does that even mean? Neural networks have varying levels of complexity, even within the same technology. Even the same LLM model can have different numbers of tokens, which differentiate the complexity of its operation.

        So instead of using a neural network that is designed to input and output text and making it learn to output code, which is also text, you think it’s supposed to be easier to make it analyse various video and audio inputs from multiple cameras, and then output the various actions that are required for it to drive a car? Does that make sense to you?