Case studies of self-governance to reduce technology risk
6 April 2021
Summary
- Self-governance occurs when private actors coordinate to address issues that are not obviously related to profit, with minimal involvement from governments and standards bodies.
- Historical cases of self-governance to reduce technology risk are rare. I find six cases that seem somewhat similar to AI development, including the 1939 effort by Leo Szilard and other physicists to withhold publication of nuclear fission research and the 1975 Asilomar Conference on recombinant DNA.
- The following factors seem to make self-governance efforts more likely to occur:
- Risks are salient
- The government looks likely to step in if private actors do nothing
- The field or industry is small
- Gatekeepers (like journals and large consumer-facing firms) are supportive
- Credentialed scientists are supportive
- After the initial self-governance effort, governments usually step in to develop and codify rules.
- My biggest takeaway is probably that self-governance efforts seem more likely to occur when risks are salient. As a result, we could do more to connect “near-term” issues like data privacy and algorithmic bias with “long-term” concerns. We could also try to preemptively identify “fire alarms” for transformative AI (TAI) and be ready to act on these warning signals if they occur.
Full post on the EA Forum.