
A Look at AI-Related Shareholder Proposals at U.S. Companies, 2022-2025

The long-projected systemic and business transformations that Artificial Intelligence (AI) technologies can bring about have begun. Accordingly, many companies and their boards of directors have faced scrutiny in recent years over their governance mechanisms and due diligence regarding both the opportunities and the material risks (financial, regulatory, legal, and reputational) posed by AI. Potential AI and associated hyperscale data center issues are far-reaching, spanning governance, environmental, and social aspects. It remains unclear whether markets have fully priced in some of these wide-ranging and material risks, alongside the opportunities. This article examines the range of issues from an investor perspective through the lens of recent U.S. shareholder proposals that are directly related to AI or filed at companies providing AI tools and infrastructure.

A substantial number of institutional investors have disclosed their expectations and engagements on Responsible AI (RAI) risk management, citing its bearing on long-term financial returns and pragmatic value creation. For example, as of a February 2024 Investor Statement on Ethical AI, the World Benchmarking Alliance’s Collective Impact Coalition (CIC) for Ethical AI comprised investors representing over USD 8.5 trillion in assets under management.

Leading RAI risk management frameworks are striving to standardize due diligence considerations. Many institutional investors have specified alignment with internationally-accepted RAI frameworks, such as the OECD AI Principles and UNESCO’s Recommendation on the Ethics of AI, regarding board accountability, transparency, due diligence, and risk management. Additionally, a Banking Policy Institute April 2024 white paper expressed support for the U.S. Department of Commerce’s NIST AI Risk Management Framework. ISO/IEC 42001:2023 is another widely cited RAI framework, noted for its potential usefulness for compliance with evolving regulatory standards.

There is a growing consensus among many institutional investors globally that effective AI governance is inextricably linked to fiduciary duty, long-term financial performance, and sustainable economic growth drivers.

On the regulatory compliance front, the EU AI Act possesses extraterritorial reach: its scope extends to companies located outside the European Union if their AI activities affect the EU market or individuals within it. There has also been a proliferation of legislation enacted by U.S. states and other jurisdictions around the world. However, given the rapid pace of investment and innovation in AI technologies, it is often remarked that stringent mandatory safeguards may disincentivize innovation and lag behind technological developments. On the other hand, others have expressed concern that voluntary principles and disclosures can encourage superficial compliance without substantive change, amounting to little more than marketing window dressing. Despite concerns on both sides of the argument, regulatory compliance and voluntary frameworks are both core elements of the RAI ecosystem.

In recent years, a number of U.S. companies, mainly within the technology sector but increasingly across other sectors, have received shareholder proposals requesting increased disclosure of RAI policies, procedures, and practices with regard to board and committee oversight, environmental sustainability, and human rights risk mitigation. The issues addressed by these varied shareholder proposals have ranged across privacy concerns, copyright infringement, energy and water usage, community and societal impacts, human rights, and “just transition” strategies for affected employees, as well as board oversight of these matters. The business and economic areas concerned have also varied widely, ranging from upstream component procurement, critical minerals sourcing, geopolitical tensions, industrial policy, infrastructure development and financing, and data acquisition, to downstream applications, safety and security concerns, and waste management. For the purposes of this paper, we have looked at U.S. shareholder proposals that have either been explicitly AI-related or have been implicitly AI-related by being filed at companies that have AI as a substantial or core element of their business strategy (collectively, “AI-related” in this paper).

AI-related shareholder proposals seen at U.S. companies from 2022 to 2025 have touched on many directly and indirectly AI-related material risks and opportunities including but not limited to:

  • Board oversight
  • GHG emissions targets, climate goals, and climate transition plans
  • Fossil fuel development and production
  • Physical risks of climate change
  • Water resource management
  • Child safety
  • End use due diligence (surveillance, censorship, conflict-affected and high-risk areas)
  • Ethical data acquisition and usage (privacy, safety, intellectual property)
  • Human capital management (bias, discrimination, workplace monitoring, health and safety, automation, and other workforce impacts)
  • Just AI transition
  • Misinformation and disinformation
  • Targeted advertising
  • Weapons development

A summary of the shareholder proposals covered is presented in the table further below.

By the numbers:

In the 2025 U.S. proxy season, there was a significant decrease in the overall number of environmental- and social-related shareholder proposals on ballot, due in part to the U.S. Securities and Exchange Commission’s (SEC) issuance of its Staff Legal Bulletin No. 14M (SLB 14M). However, AI-related proposals on ballot did not drop in overall numbers.

Average support levels for AI-related shareholder proposals have followed the broader voting trend of declining support for environmental- and social-related proposals in general; however, with the exception of the relatively small number of environmental-focused proposals, support for AI-related proposals has not decreased as markedly.

The bulk of AI-related shareholder proposals have addressed “social” matters such as human rights and labor rights concerns, including child safety, end use due diligence, data acquisition and usage, misinformation and disinformation, targeted advertising, and workforce impacts. Each year, there have also been some board oversight-related proposals regarding AI governance, as well as environmental-related proposals addressing a range of concerns including hyperscale data center issues, increased energy usage, GHG emissions, water-related risks, and physical risks related to climate change.

Below is the list of AI-related shareholder proposals on ballot at U.S. companies from 2022 to mid-year 2025 identified and covered in this paper.

Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
