Oct 1·edited Oct 1

It sounds like you are writing from a place of enthusiasm about AI capabilities and potential - specifically about the ability of AI to provide mundane utility, make lots of money, and become smarter at human tasks. You make a good argument from that perspective.

However, I'd like to point out that from a cautious perspective - where one is concerned about AI's ability to enable bioterrorism, plot crimes, conduct electioneering, power surveillance, create deepfakes, or commit other bad actions - publishing the weights of an AI model is a patently bad idea. Its unique badness comes from the fact that, once you release a model into the world, you can no longer take it back. It irreversibly commits humanity to a world containing this model's potential for good and its potential for evil. The expected value of releasing an AI's weights is therefore NOT an unalloyed good, and I think one could argue it is actually quite negative - partly because the world has superfragile institutions, and there are bad actors out there who would be excited to have a free bioterrorism expert.

Anyway, I consider these the strongest arguments against irreversibly releasing dual-use technology like this into the world. At least if OpenAI keeps the model weights closed, we can retract the technology from the world if it kills a million people. We can't do the same in a world where the weights are published. I am curious what you think about this argument.

Other dual-use technologies that people don't release into the wild due to infohazards: nuclear weapons designs, pandemic pathogen DNA sequences.
