I know current learning models work a little like neurons, but why not just make a sim that works exactly like how we understand neurons to work?

    • givesomefucks@lemmy.world

      To clarify:

      We don’t even know how human intelligence/consciousness works, let alone how to simulate it.

      But we know how an individual neuron works.

      The issue with OPs idea is we don’t know how to tell a computer what a bunch of neurons do to create an intelligence/consciousness.

      • Neuromancer49@midwest.social

        Heck, we barely know how neurons work. Sure, we’ve got the important stuff down like action potentials and ion channels, but there’s all sorts of stuff we don’t fully understand yet. For example, we know the huntingtin protein is critical to neuron growth (maybe for axons?), and we know that if the gene has too many CAG repeats it causes Huntington’s disease. But we don’t know why huntingtin is essential, or how it actually affects neuron growth. We just know that cells die without it, or when it is misfolded.

        Now, take that uncertainty and multiply it by the sheer number of genes and proteins we haven’t fully figured out and baby, you’ve got a stew going.

  • x86x87@lemmy.one

    Simulating even one neuron is very complex. The neurons in artificial neural nets used in machine learning are a gross oversimplification. On top of this you need to get the wiring right. On top of this you need to get the sensory system right (a brain without input is worthless). On top of this you need an environment. So it’s multiple layers of complexity that we don’t have.
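    As a rough illustration of the gap (a minimal sketch with illustrative parameter values, not measured ones): the machine-learning “neuron” is a single weighted sum, while even a deliberately simple spiking model like leaky integrate-and-fire already has internal state and dynamics over time, and is itself a huge simplification of a real cell.

    ```python
    import numpy as np

    # A textbook artificial "neuron": a weighted sum squashed by a nonlinearity.
    def artificial_neuron(inputs, weights, bias):
        return np.tanh(np.dot(weights, inputs) + bias)

    # A leaky integrate-and-fire neuron: still a drastic simplification of real
    # biology (no dendrites, no neurotransmitters, no plasticity), but already
    # stateful and time-dependent in a way the ML "neuron" above is not.
    def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                   v_reset=-0.065, v_thresh=-0.050, resistance=1e7):
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(input_current):
            dv = (-(v - v_rest) + resistance * i_in) * (dt / tau)  # leak + drive
            v += dv
            if v >= v_thresh:               # threshold crossed: spike and reset
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    print(artificial_neuron(np.array([0.5, -0.2]), np.array([0.8, 0.3]), 0.1))
    print(len(lif_neuron(np.full(1000, 2e-9))))  # spikes in 1 s of constant input
    ```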

  • PhlubbaDubba@lemm.ee

    That’s kinda the idea of neural network AI

    The problem is that neurons aren’t transistors; they don’t operate in base-2 arithmetic, and they’re basically an example of chaos theory: the system is narrow enough for its outer bounds to be defined, yet complex enough that the amount of “picture resolution” needed to accurately predict how it will behave is currently beyond our ability to replicate or even theorize about.
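    A minimal sketch of that sensitivity, using the logistic map purely as an analogy (it is not a neuron model): a measurement error of one part in a billion is enough to wreck any long-range prediction.

    ```python
    # Toy illustration of the "picture resolution" problem: in a chaotic system,
    # two starting points that differ by one part in a billion soon disagree
    # completely, so tiny measurement errors ruin long-range prediction.
    def logistic_map(x0, r=3.9, steps=50):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = logistic_map(0.500000000)
    b = logistic_map(0.500000001)   # "measured" initial state, off by 1e-9
    for step in (10, 25, 50):
        print(step, abs(a[step] - b[step]))
    ```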

    This is basically the realm where you’re no longer asking for math to fetch a logical answer to a question and more trying to use it as a way to perfectly calculate the future like an oracle trying to divine one’s own fate from the stars. It even comes with its own system of cool runes!

    I fully imagine we will have a precise calculation of Rayo’s Number before we have a binary computer capable of being raised as a human with a fully human intelligence and emotional depth.

    More likely I see the “singularity” coming in the form of someone who figures out how to augment human intelligence with an AI neural implant capable of the sorts of complex calculations that are impossible for a human mind to fathom while benefiting from human abilities for pattern recognition to build more accurate models.

    If someone figures out how to do this without accidentally creating a cheap 80’s slasher villain, it will immediately become the single most sought after medical device in human history, as these new augmented mind humans will instantly become a major competitive pressure for even most manual labor jobs.

  • los_chill@programming.dev

    Neurons undergo physical change in their interconnectivity. New connections (synapses) are created, strengthened, and lost over time. We don’t have circuits that can do that.

    • Neuromancer49@midwest.social

      Actually, neuron-based machine learning models can handle this. The connections between the fake neurons can be modeled as a “strength”, or the probability that activating neuron A leads to activation of neuron B. Advanced learning models just change the strength of these connections. If the probability is zero, that’s a “lost” connection.

      Those models don’t have physical connections between neurons, but mathematical/programmed connections. Those are easy to change.
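      A minimal sketch of that idea, assuming a toy Hebbian-style update rather than how production models are actually trained (those use gradient descent on a loss), just to show that the “connections” are numbers that can be strengthened, weakened, or zeroed out:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      weights = rng.normal(0.0, 0.5, size=(4, 4))   # strength of A -> B connections

      def hebbian_step(weights, pre, post, lr=0.1, decay=0.02, prune_below=0.05):
          # Strengthen connections whose endpoints are active together,
          # let everything else decay a little.
          weights = weights + lr * np.outer(post, pre) - decay * weights
          # Connections that decay to (near) zero are pruned: a "lost" synapse.
          weights[np.abs(weights) < prune_below] = 0.0
          return weights

      pre = np.array([1.0, 0.0, 1.0, 0.0])    # which input neurons fired
      post = np.array([0.0, 1.0, 0.0, 1.0])   # which output neurons fired
      for _ in range(20):
          weights = hebbian_step(weights, pre, post)
      print(weights)   # co-active pairs strengthened, weak entries drift to 0
      ```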

      • FooBarrington@lemmy.world

        That’s a vastly simplified model. Real neurons can’t be approximated with a couple of weights - each neuron is at least as complex as a multi-layer RNN.

  • rtfm_modular@lemmy.world

    First, we don’t understand our own neurons enough to model them.

    AI’s “neuron” or node is a math equation that takes numeric inputs with variable “weights” that affect the output. An actual neuron is a cell with something like 6000 synaptic connections each, roughly 600 trillion synapses across the brain. How do you simulate that? I’d argue the magic of AI is how much more efficient it is comparatively, with only 176 billion parameters in GPT4.
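    Taking the figures in this comment at face value (the GPT4 parameter count is the commenter’s estimate, not a confirmed number), the back-of-the-envelope arithmetic looks like this:

    ```python
    # Rough back-of-the-envelope comparison using the figures cited above
    # (the GPT-4 parameter count in particular is an unconfirmed estimate).
    neurons_in_brain    = 86e9     # ~86 billion neurons
    synapses_per_neuron = 6_000    # "something like 6000" connections each
    total_synapses      = neurons_in_brain * synapses_per_neuron

    model_parameters = 176e9       # the 176-billion figure cited above

    print(f"synapses  : {total_synapses:.1e}")    # ~5e14, ballpark of 600 trillion
    print(f"parameters: {model_parameters:.1e}")  # ~1.8e11
    print(f"ratio     : {total_synapses / model_parameters:,.0f}x")
    ```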

    They’re two fundamentally different systems, and so is the resulting knowledge. AI doesn’t need to learn like a baby, because the model is the brain. The magic of our neurons is their plasticity and our ability to move freely around this world and be creative. AI is just a model of what it’s been fed, so how do you get new ideas? But it seems that with LLMs, the more data and parameters, the more emergent abilities. So maybe we just need to scale it up and eventually we can raise the…

    AI does pretty amazing and bizarre things today that we don’t understand, and it already takes giant, expensive server farms to do it. AI is super compute-heavy and requires a ton of energy to run, so cost is rate-limiting the scale of AI.

    There are also issues related to how to get more data. Generative AI is already everywhere, and what good is it to train on its own shit? Also, how do you ethically or legally get that data? Does that data violate our right to privacy?

    Finally, I think AI actually possesses an intelligence with an ability to reason, like us. But it’s fundamentally a different form of intelligence.

    • Phanatik@kbin.social

      I mainly disagree with the final statement, on the basis that LLMs are just more advanced predictive-text algorithms. The way they’ve been set up, with a chat box where you’re interacting directly with something that attempts human-like responses, creates the misconception that the thing you’re talking to is more intelligent than it actually is. It gives off a strong appearance of intelligence, but at the end of the day it predicts the next word in a sentence based on what was said previously, and it doesn’t do a good job of comprehending what exactly it’s telling you. It’s very confident when it gives responses, which also means that when it’s wrong, it delivers the incorrect response very confidently.
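      A deliberately toy version of “predict the next word based on what was said previously”, using raw bigram counts rather than anything resembling a real LLM (which learns probabilities over tokens with a transformer), just to make the training objective concrete:

      ```python
      from collections import Counter, defaultdict

      # Crude "predictive text": count which word follows which, then always
      # pick the most frequent follower.
      corpus = ("the cat sat on the mat . the cat ate the fish . "
                "the dog sat on the rug .").split()

      followers = defaultdict(Counter)
      for current, nxt in zip(corpus, corpus[1:]):
          followers[current][nxt] += 1

      def predict_next(word):
          counts = followers[word]
          return counts.most_common(1)[0][0] if counts else None

      print(predict_next("the"))   # 'cat' - the word seen most often after 'the'
      print(predict_next("sat"))   # 'on'
      ```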

      • rtfm_modular@lemmy.world

        Talk to anyone who consumes Fox News daily and you’ll get incorrect predictive text generated quite confidently. You could just as well deny them their intelligence and humanity on account of the fallacies they uphold.

        I also think intelligence is a gradient—is an ant intelligent? What about a dog? Chimp? Who gets to draw the line?

        It may very well be a very complex predictive-text generator that hallucinates, but I’m concerned that framing minimizes its capabilities, for better or worse. Its ability to maintain context, and the plasticity it shows in reasoning and changing its responses, point to something more, even if we’re at an early stage.

        • Phanatik@kbin.social

          What you’re alluding to is the Turing test, and it hasn’t been proven that any LLM would pass it. At this moment, there are people who have failed the inverse Turing test, i.e. been unable to ascertain whether what they’re speaking to is a machine or a human. The latter can be done, and has been done, by things less complex than LLMs, and it isn’t proof of an LLM’s capabilities over more rudimentary chatbots.

          You’re also suggesting that this framing minimises the complexity of its outputs. My determination is that what we’re getting is the limit of what it can achieve. You’d have to prove that any appearance of higher intelligence can’t be attributed to coercion by the user, or to it simply hallucinating an imitation of artificial intelligence it picked up from media.

          There are elements of the model that are very fascinating, like how it organises language into these contextual buckets, but this is still a predictive model. Understanding that certain words appear near each other in certain contexts is hardly intelligence; it’s a sophisticated machine learning algorithm.
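          As a minimal sketch of those “contextual buckets”, here is a toy co-occurrence model. The vectors are built from raw counts over a one-word window, which is nothing like a trained transformer’s learned embeddings, but it shows how “appears near the same words” turns into a usable notion of similarity:

          ```python
          import numpy as np

          # Words that show up in similar contexts end up with similar
          # co-occurrence vectors; similarity falls out of distributional context.
          corpus = ("i drank cold water . i drank hot coffee . "
                    "i drank cold juice . the dog chased the cat .").split()

          vocab = sorted(set(corpus))
          index = {w: i for i, w in enumerate(vocab)}
          vectors = np.zeros((len(vocab), len(vocab)))

          for i, word in enumerate(corpus):
              for j in (i - 1, i + 1):               # +/- 1 word context window
                  if 0 <= j < len(corpus):
                      vectors[index[word], index[corpus[j]]] += 1

          def similarity(a, b):
              va, vb = vectors[index[a]], vectors[index[b]]
              return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

          print(similarity("water", "juice"))   # high: both appear after "cold"
          print(similarity("water", "dog"))     # low: completely different contexts
          ```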