Property & Casualty Insights
Cybersecurity and AI: Why Corporate Boards Must Act Now
MARCH 3, 2026
Serving on a corporate board — public or private — has never been more challenging. Economic uncertainty, geopolitical shifts, and rapid technological change now require more proactive governance.
Stakeholders expect boards to be engaged in all critical areas that may affect company performance. In 2026 and beyond, cybersecurity and artificial intelligence (AI) will likely dominate board agendas.
Cybersecurity Weaknesses Persist Despite Recent Board Progress
Board oversight of cybersecurity has improved but is still uneven, as reported by the National Association of Corporate Directors (NACD) in its 2025 public and private company “Board Practices and Oversight” surveys.1
For public companies, 77% of respondents reported that their boards have discussed the material and financial implications of a cybersecurity incident, up from just 25% in 2022. Meanwhile, 73% of private company respondents noted that their board’s understanding of cybersecurity is better than it was just two years ago.
In the area of cyber-risk governance, respondents noted some improvements in:
- Communication with management about the cyber-risk information needed for oversight
- Cyber and privacy education for individual directors
- Leveraging external and internal experts to better understand cyber risks
- Reviewing and refining cyberbreach response plans
However, survey respondents across various sectors agreed that several key areas still need attention. These include enhancing the quality of cyber risk reporting and metrics, clearly defining specific roles and responsibilities during a cyber event, and more thoroughly evaluating the risks associated with contractors and third-party providers (TPPs).
Are You Having Full-Board Discussions on AI?
The rapid evolution of AI has been extraordinary, forcing board members to adjust their governance strategies accordingly. According to NACD’s 2025 survey, 62% of both public and private company director respondents have set aside agenda time for a full-board discussion of AI. Furthermore, at least half of the respondents have asked management how AI could affect workforce needs. Other noteworthy AI-related activities performed by many boards include:
- Requesting updates on data governance practices
- Assessing risks associated with AI
- Setting aside time in committee meetings to discuss AI (more prominent for public company boards)
Fewer than 20% of respondents said their boards approve annual AI budgets, have created metrics for management reporting, or have established a dedicated technology committee to oversee AI.
D&O Liability Faces Escalating AI and Cyber-Driven Exposures
Not surprisingly, both cybersecurity and AI are top-of-mind risk issues for directors and officers (D&O) liability underwriters today. They point to:
- Cyber-related loss severity: Data breaches and other cybersecurity incidents leading to securities class action claims (SCAs) have become more frequent, and settlements are growing. In 2024 alone, three of the largest settlements on record totaled $560 million.2
- AI claims frequency exposure: From 2021 through the first half of 2025, 48 federal SCAs were filed alleging false or misleading statements related to AI.2 Filings rose from seven in 2023 to 15 in 2024, with an additional 12 recorded in just the first half of 2025. Allegations include exaggerating AI capabilities (“AI washing”), misrepresenting AI-driven revenue, and failing to disclose material AI risks.
Given the long-tail nature of these claims, it remains to be seen whether AI-related SCAs will produce severe losses. A $65 million settlement by a well-known American technology company is already anticipated.3
For private companies, D&O underwriters are also expanding their questions about cybersecurity and AI exposures. The potential disruption from a significant cyber event or the financial impact of falling behind competitors in the use of AI — coupled with the increasing focus of regulators on both exposures — means underwriters are seeking additional information on these risks from all existing or potential insureds.
Mitigating the Risks
Organizations need a multifaceted strategy to mitigate the risks associated with cybersecurity and AI developments, including:
- Strong, proactive governance that includes board-level oversight of cybersecurity and AI risks
- Ongoing board education and training, particularly about AI developments
- Continuous evaluation of disclosures made to shareholders about the potential impact of AI and cybersecurity on the company’s performance
- Third-party risk management (TPRM), including the review of vendor and TPP contracts (for proper risk allocation, contract negotiation, and evidencing)
- Cyber incident prevention and response planning
- Adherence to the Securities and Exchange Commission (SEC) cybersecurity rules that require public companies to promptly disclose material cybersecurity incidents
Customized D&O Liability and Cyber Risk Insurance
Buyers of D&O liability insurance should watch for proposed exclusions that may deny AI or cyber-related claims. These exclusions are still rare, but could significantly impact organizations, given how widely cybersecurity and AI exposures apply. D&O liability insurance needs to be written to trigger coverage for claims that allege mismanagement of the organization, regardless of the underlying actions. For board members in particular, D&O liability insurance must provide protection against allegations of improper oversight of the organization’s financial, operational, regulatory, and governance-related risks.
How USI Can Help
USI’s Executive & Professional Risk Solutions (EPS) team specializes in helping organizations manage complex, non-physical risks. For guidance on AI and cyber risk mitigation strategies, insurance placement, or claims advocacy, please contact your local USI consultant or visit us at usi.com.