On the Societal Impact of Open Foundation Models, by Sayash Kapoor
The paper examines the benefits and risks of open foundation models, arguing for a more grounded assessment of their societal impact through empirical research and a framework for evaluating marginal risk.
Safe Harbor
A safe harbor is a measure that provides legal protection to hackers engaged in "good faith" research, provided they abide by pre-agreed rules of engagement or a vulnerability disclosure policy (e.g., HackerOne's).
Independent evaluation and red teaming are critical for identifying the risks posed by generative AI systems. However, the terms of service and enforcement strategies that prominent AI companies use to deter model misuse also disincentivize good faith safety evaluations. We believe safe harbor commitments from these companies are a necessary step toward more inclusive and unimpeded community efforts to tackle the risks of generative AI.
Summary
- Researchers investigating AI risks should conduct new research to clarify the marginal risk of misuse posed by open foundation models. Greater attention should be paid to articulating the status quo, constructing realistic threat models, and considering the full supply chain required for misuse.
- Policymakers should ensure that research on the risks of open foundation models is sufficiently funded and independent of developers' interests. If significant risks are identified, further policy interventions can be considered. Policymakers should also assess the impact of proposed regulations on developers, ensuring that high compliance burdens are justified.