• Sanctus@lemmy.world
        3 days ago

        Yeah the original comment in this chain more describes US Telcos and shit, not this particular instance.

    • Fungah@lemmy.world
      2 days ago

      That’s what they said basically.

      Like. You can compile better or more diverse datasets to train a model on. But you can also have better training code running on the same dataset.

      The model is what the code poops out after it's eaten the dataset. I haven't read the paper, so no idea if the better training had to do with some super unique spin on their dataset, but I'm assuming it's better code.
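
      Rough toy sketch of what I mean (totally made up, nothing to do with the actual paper): the "model" is just whatever parameters the training code spits out after chewing through the dataset, so the exact same dataset can give you a better model if the training code is better.

      ```python
      import random

      random.seed(0)

      # Toy dataset: y = 2x + a little noise. This stays fixed for both runs.
      dataset = [(x, 2 * x + random.gauss(0, 0.1)) for x in range(20)]

      def train(dataset, lr, epochs):
          """Plain SGD on a 1-parameter linear model y = w * x."""
          w = 0.0
          for _ in range(epochs):
              for x, y in dataset:
                  grad = 2 * (w * x - y) * x  # d/dw of squared error
                  w -= lr * grad
          return w  # the "model" is whatever the training code produces

      # Same dataset, different training code/hyperparameters -> different models.
      naive_model = train(dataset, lr=0.00001, epochs=5)
      better_model = train(dataset, lr=0.001, epochs=50)

      print(naive_model)   # underfit: still well below the true slope of 2
      print(better_model)  # close to 2 without touching the dataset at all
      ```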