
my letter to my representatives on super ai

a short break from my habitual passivism

The following is a letter¹ I am sending to my Representative and Senators addressing the topic of artificial superintelligence. I think contacting your representatives about this or any other topic can be a very good idea, and I am always happy to help anyone interested in doing so. The If Anyone Builds It website has a helpful page for anyone interested in talking to their representatives about superintelligence.


Dear [Representative/Senator] [name of politician of whom I am a constituent],

I am writing to you as a constituent [living in Portland] to express deep concern about the attempted development of artificial superintelligence. While many have legitimate concerns about the ability of current AI to be used for scams or for displacing human artists and workers, I’m primarily concerned about risks of much larger catastrophes. I’m scared that if unregulated AI progress produces an AI with substantially stronger problem-solving capabilities than humans, it will be a danger to human society and to the world much like nuclear war or climate change.

Many prominent experts, including “Godfathers of AI” Geoffrey Hinton and Yoshua Bengio, have predicted that there may be a one-in-five chance or more that the development of superintelligence leads to catastrophic outcomes or even the literal extinction of humanity (and other life on Earth). Even some leaders of frontier AI corporations, such as Dario Amodei, have publicly agreed with this assessment. It’s incredibly reckless to allow AI corporations to continue trying to develop superintelligence while several credible experts are making predictions this dire, and it’s just common sense that trying to give an AI the capacity to outplan and outmaneuver humans, while we are still very far from a complete understanding of AI behavior, is profoundly dangerous.

I urge you to announce that you would support an international agreement banning the development of superintelligence around the world.

Thank you for your time and attention to this critical issue. I’ve previously been impressed by [something badass done by politician of whom I’m a constituent], and I look forward to your response and to seeing your leadership on preventing the creation of superintelligence.

Sincerely,

April [surname]

[address]


Some thoughts on why I wrote this the way I did:

As someone living in Portland, I conjecture that a lot of the feedback on artificial intelligence heard by my representatives is the sort that’s rabidly anti-AI art or similar. And though I’m personally fond of AI art for at least some use cases, I do think some of those concerns are legitimate². But I wanted to clearly state up front that my concerns are different from those, and (I’d argue) much more dire—if not for existential risk, I'd be much more sympathetic to accelerationism.

I don’t, in general, think running your epistemics primarily on appeal to authority is a good idea if you’re already in a position to think through an issue yourself. But the number and type of experts expressing severe concern about AI is unlike anything in other fields, and I think that’s important. It’s the sort of fact that can help cross the epistemic distance to someone who isn’t really aware that existential risk from AI isn’t just a movie plot, or to someone who thinks the idea is outside the Overton window.

I also think emphasizing the common-sense nature of the worries is important. This isn’t really some arcane theory; it’s not dependent on some galaxy-brain decision-theoretic reasoning or whatever. It is very straightforward: unprecedentedly capable beings whose behavior we can’t foresee? Uh oh.

There are some people who think we don’t just have some chance of safely navigating unregulated AI progress, but that it is actually very likely to be safe. They might be right, but I think their arguments are less straightforward and commonsensical, and I think the best way to reason in complicated domains with poor feedback loops is to try to rely on cases that are as simple as possible.

Maybe reaching human level on some of the things current AI fails at will take not just a decade or two, but even longer than that. Maybe, though the orthogonality thesis is true in the general case, specifically producing AIs by having them learn from human behavior tends to produce AIs with value systems which aren’t just superficially similar to human morality, but actually robustly good. I heed some models like those to an extent³, but I feel like they are far from common sense even if they are true, and I wouldn’t want to rely on them while they’re still fairly speculative or conjectural.

I kept the specific concrete ask for supporting an international ban on superintelligence unchanged from the template on the If Anyone Builds It website. I think including a specific concrete ask is good, and I think trying to coordinate on one particular thing to ask of our representatives makes sense, and I think this specific ask is a good one—if such a ban were implemented well, it would relieve many of my worries, and it doesn’t seem unnecessarily broad.

Since I don’t contact my representatives very often⁴, I figured it made sense to include a sentence about some things I think they’re doing well in general. Part of that is probably a desire to make the people I’m asking things of feel appreciated, which maybe isn’t very practically important for a letter that will most likely be read by a staffer who reads such letters all the time, but I don’t think it hurts the word count too much.

Word count was indeed a consideration—it’s probably clear to anyone who has read this far that I of course could have written much more, but some quick googling taught me that it’s considered advisable to keep these brief. This is half of why I removed any hedging I considered including. The other half is some mix of “I don’t actually think people like me sounding unconfident in contexts like this is good for the world” and “probably my representatives do not especially try to develop nuanced worldviews primarily through reading letters from random constituents.”

Regardless of word count concerns, I do think it was worth including the arguments in favor of my stance that I did. It seems to be commonly believed that a personal letter can sometimes be meaningfully more influential than a copy-paste form letter or whatnot. And to me it seems clear that including some of the most compelling justifications of my attitude makes sense—it might or might not matter at all in this particular case, but either way I want to support a world where politicians pay attention to the quality of arguments made by their constituents, instead of just tallying how many people are on whatever side of whatever disagreement. So to that end I want my letters to have more than just a statement of my opinion.

Of course, other people might be more inclined to make some sort of emotional appeal, and I think it’s a really good thing for some other people to take that approach. But to me personally, this is more of a “the facts look maybe really dire” issue than an “I feel strongly about this on an emotional level” issue? For one thing, it’s abstract enough for my emotions to be sort of in denial⁵, but mostly just… the issue at hand isn’t how much anyone is impacted by the end of the world, it’s that the end of the fucking world is at risk!

  1. I’ll also be sending it by email.

  2. Unlike the water thing…

  3. something something platonic representation hypothesis

  4. Yet Growth Mindset?

  5. Which is plausibly good for my sanity, though maybe some amount of desperation would help me make a bigger difference in the world.