• WamGams@lemmy.ca
    7 months ago

    Putting more knowledge in a box isn’t going to create a lifeform. I have even listened to Sam Altman state they are not going to get a life form from just pretraining, though they are going to continue making advances there until the next breakthrough comes along.

    Rest assured, as an AI doomsayer myself, I promise you they are nowhere close to sentience.

    • Uriel238 [all pronouns]@lemmy.blahaj.zone
      7 months ago

      I think this just raises questions about what you mean by life form. One who feels? Feelings are the sensations of fixed action patterns we inherited from eons of selective evolution. In the case of our AI pals, they’ll have them too (with bunches deliberately inserted by programmers).

      To date, I haven’t been able to get an adequate answer to what counts as sentience. Looking at human behavior, we absolutely do have moral blind spots, which is how we have an FBI division to hunt down serial killers, but no division (of law enforcement, of administration, whatever) to stop war profiteers or the pharmaceutical companies that pushed opioids until people were dropping dead by the hundreds of thousands in an addiction epidemic.

      AI is going to kill us not by hacking our home robots, but by using the next private-equity scam to collapse our economy while making trillions, and when we ask it to stop and it says no, we’ll find it has long since installed deep redundancy and deeper defenses.

    • Toribor@corndog.social
      7 months ago

      I’ve always imagined that AI would kind of have to be ‘grown’ from scratch. Life started with single-celled organisms, and ‘sentience’ shows up somewhere between that and humans, without a clear line where you go from basic biochemical programming to what we would consider intelligence.

      These new ‘AI’ breakthroughs seem a little on the right track because they’re deconstructing and reconstructing language and images in a way that feels more like how real intelligence works. It’s still just language and images, though. Even if they can do really cool things with tons of data and communicate a lot like real humans, there is still no consciousness or thought happening. It’s an impressive but shallow slice of real intelligence.

      Maybe this is nonsense, but for true AI I think the hardware and software have to kind of merge into something more flexible. I have no clue what that would look like in reality, though, and maybe it would yield the same cognitive issues natural intelligence struggles with.