Talk:Existential risk from artificial intelligence/Archive 2
This is an archive of past discussions about Existential risk from artificial intelligence. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Steven Pinker
Is there any reason why this article dedicates an entire paragraph to uncritically quoting Steven Pinker when he is not an AI researcher? It's not that he has an insightful counterargument to Instrumental convergence or the orthogonality thesis; he doesn't engage with the ideas at all, because he likely hasn't heard of them. He has no qualifications in any field relevant to this conversation, and everything he says could have been said in 1980. He has a bone to pick with anything he sees as pessimism, and his popular science article is just a kneejerk response to people being concerned about something. His "skepticism" is a response to a straw man he invented for the sake of an agenda; it is not a response to any of the things discussed in this article. If we write a Wikipedia article called Things Steven Pinker Made Up, we can include this paragraph there instead.
The only way I can imagine this section being at all useful in framing the debate is to follow it with an excerpt from someone who actually works on this problem as an illustration of all the things casual observers can be completely wrong about when they don't know what they don't know. Cyrusabyrd (talk) 05:22, 5 May 2024 (UTC)
- In my opinion this article suffers from too few perspectives, not too many. I think the Pinker quote offers a helpful perspective that people may be projecting anthropomorphism onto these problems. He's clearly a notable figure. Despite what some advocates argue, this topic is not a hard science, so perspectives from other fields (like philosophers, politicians, artists, and in this case psychologists/linguists) are also helpful, so long as they are not given undue weight. StereoFolic (talk) 14:26, 5 May 2024 (UTC)
- That said, if there are direct responses to his views from reliable sources, please add them. I think that YouTube video is a borderline source, since it's self-published and it's unclear to me whether it meets the requirements for those. StereoFolic (talk) 14:42, 5 May 2024 (UTC)
- I think my concern is that it is given undue weight, but I agree that this could be balanced out by adding more perspectives. I think the entire anthropomorphism section is problematic, and I'm trying to think of a way to salvage it. I can get more perspectives in there, but the fundamental framing of "people who think AI will destroy the world" versus "people who don't" is just silly. There are people who think there is a risk and that it should be taken seriously, and people who think this is a waste of money and an attempt to scaremonger about technology. Nobody serious claims to know what's going to happen. Talking about this with any rigor, or any effort not to say things that aren't true, turns it into an essay. Cyrusabyrd (talk) 18:23, 5 May 2024 (UTC)