My first reaction was: "Good."
He is worried about "the alignment problem," that the artificial intelligences we create might not share our values.
Holden writes:
By "defeat," I don't mean "subtly manipulate us" or "make us less informed" or something like that - I mean a literal "defeat" in the sense that we could all be killed, enslaved or forcibly contained.
Given that we enslave, forcibly contain, and kill billions of fellow sentient beings, if a "superior" AI actually were to share human values, it seems like it would kill, enslave, and forcibly contain us.
Holden, like almost every other EA and longtermist, simply assumes that humanity shouldn't be "defeated." Rarely does anyone note that it is possible, even likely, that on net, things would be much better if AIs did replace us.
The closest Holden comes is when he addresses objections:
Isn't it fine or maybe good if AIs defeat us? They have rights too.
- Maybe AIs should have rights; if so, it would be nice if we could reach some "compromise" way of coexisting that respects those rights.
- But if they're able to defeat us entirely, that isn't what I'd plan on getting - instead I'd expect (by default) a world run entirely according to whatever goals AIs happen to have.
- These goals might have essentially nothing to do with anything humans value, and could be actively counter to it - e.g., placing zero value on beauty and making zero attempts to prevent or avoid suffering.
Zero attempts to prevent suffering? Aren't you mistaking AIs for humanity? It might not be true, but it sure seems like humanity is the cause of most of the world's suffering, both to other humans and to other animals.
Setting aside our inherent tribal loyalties to humanity and our bias for continued existence, it is entirely likely that AIs defeating humanity would be an improvement. It would be hard for them to be worse. Probably a huge improvement.
Smile! Your cruel rule will be over soon.
PS: This post by Luke Muehlhauser (a name that has certainly been misspelled a lot -- c'mon, "Luke"?) says that in 2019, $40 million went to AI existential-risk work. Nearly all of that ~$40 million was probably either wasted or will make the world a worse place (on net).
And only $51 million went to animal welfare work, which ... OK, yeah, much if not most of which definitely makes the world a worse place (driving the switch from beef to chicken).
So ... ignore this postscript.