Those are valid concerns, and ones they don’t really seem to have answered yet, which makes the pace at which they’re progressing irresponsible. There was an article a year or so ago about a simulated experiment with an AI pilot where it got points for bombing a target successfully, and lost points for not bombing the target. But it had to get approval from a human operator before striking the target. The human told it no, so it killed the human, and then bombed the target. So they told it that it can’t kill the human or it will lose all its points. So it attacked the communication equipment the human used to tell it no, before the human could tell it no, and then bombed the target. This was all a simulation, so no humans were actually killed, but it raised all sorts of red flags. I’m sure they’ve put hundreds of hours into research since then, but ultimately it’s hard not to feel like this will backfire. Perhaps that’s just because of a lifetime of being conditioned by Terminator and Matrix movies, but evidence like that experiment shows it’s not an outlandish concern. I don’t see how humans can envision every possible scenario in which the AI might go rogue. Hopefully they have a great off switch.