Google cautions the U.S. government against imposing what it sees as onerous obligations on AI systems, such as liability for how models are used. In many cases, Google argues, the developer of a model “has little to no visibility or control” over how it is being used and thus shouldn’t bear responsibility for misuse.
Historically, Google has opposed laws like California’s defeated SB 1047, which spelled out the precautions an AI developer should take before releasing a model and the circumstances under which developers could be held liable for model-induced harms.
“Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging,” Google wrote.
In its proposal, Google also called disclosure requirements like those being contemplated by the EU “overly broad,” and said the U.S. government should oppose transparency rules that require “divulging trade secrets, allow competitors to duplicate products, or compromise national security by providing a roadmap to adversaries on how to circumvent protections or jailbreak models.”
A growing number of countries and states have passed laws requiring AI developers to reveal more about how their systems work. California’s AB 2013 mandates that companies developing AI systems publish a high-level summary of the datasets that they used to train their systems. In the EU, to comply with the AI Act once it comes into force, companies will have to supply model deployers with detailed instructions on the operation, limitations, and risks associated with the model.