By and large, the AI community perceives these reports as good-faith efforts to support independent research and safety evaluations, but the reports have taken on additional importance in recent years. As Transformer previously noted, Google told the U.S. government in 2023 that it would publish safety reports for all “significant” public AI model releases “within scope.” The company made a similar commitment to other governments, promising to “provide public transparency.”
Regulators at the federal and state levels in the U.S. have made efforts to create safety reporting standards for AI model developers, but those efforts have seen limited adoption and success. One of the more notable attempts was the vetoed California bill SB 1047, which the tech industry vehemently opposed. Lawmakers have also put forth legislation that would authorize the U.S. AI Safety Institute, the country’s AI standard-setting body, to establish guidelines for model releases. However, the Safety Institute is now facing possible cuts under the Trump administration.
From all appearances, Google is falling behind on some of its promises to report on model testing even as it ships models faster than ever. Many experts argue that sets a bad precedent, particularly as these models become more capable and sophisticated.