This spectrum of approaches is not amenable to a simple “open/closed” binary, and NTIA should endeavor to analyze them on the basis of risk, rather than on what is and is not released and to whom. [...] A fully open-source model, where everything from the training data to the implementation code to the model weights is public, raises different concerns than an open model in which only the weights and code are made publicly available, and both differ from a public innovation model in which the implementation code and a sample model, but not a fully trained model, are publicly available. [...] Public innovation AI models allow others to benefit from training, as has been observed with the plethora of models built off of Meta’s LLaMA.3 These economic benefits are a major portion of the advantages of a public innovation model. [...] However, this risk presumes both that misuse of an open model is easier than misuse of a closed model, which likely will not be the case in many circumstances, and that the risks of release outweigh the risks, costs, and lack of competition of a closed-only AI ecosystem. [...] For example, benchmarking a model for equity concerns is far simpler when that model is publicly available, and ensuring that any fixes are truly part of the model, rather than bandages slapped on top of a model that is still flawed, is much simpler in a public innovation environment.
Pages: 5
Published in: United States of America