Venture Capital and AI Safety: Sequoia Capital, Kleiner Perkins, Founders Fund, and Andreessen Horowitz (a16z)
A research project by Kayode Adekoya
Submitted to: Varitas Research Fellowship, Nov. 2025.
Abstract
This literature review examines how top U.S. venture capital firms - Sequoia Capital, Kleiner Perkins, Founders Fund, and Andreessen Horowitz (a16z) - affect AI safety practices through their investments and board roles.
I synthesize academic studies and credible industry sources on venture capital’s role in AI development, detailing each firm’s AI-focused deals and board appointments. I also examine the AI safety commitments of both the VCs and their portfolio companies, contrasting these with evidence of profit-driven pressures.
Notably, a16z and Sequoia have backed safety-oriented ventures (e.g., OpenAI co-founder Ilya Sutskever’s Safe Superintelligence Inc., which raised $1 billion from investors including Andreessen Horowitz and Sequoia). Separately, the UK AI Safety Institute’s Alignment Project, with more than £15 million in funding, supports safety-focused research through an international coalition.
Prominent partners such as Marc Andreessen publicly prioritize rapid innovation over precaution, labeling AI safety regulation as a “negative, risk-aversion frame.” In contrast, portfolio companies (e.g., OpenAI) espouse safety missions (“ensure that AGI benefits all of humanity”), raising potential conflicts between lofty safety pledges and VC profit motives.
I identify patterns such as aggressive early-stage AI investment (e.g., Sequoia saw approximately 60% of new deals in AI by late 2023) and political engagement (a16z and OpenAI leaders formed a PAC, “Leading the Future,” to oppose restrictive AI laws). The analysis highlights tensions between venture-driven incentives and long-term AI safety, suggesting that VC funding significantly shapes AI innovation priorities.
Introduction
Artificial intelligence (AI) is now a focal point of technology investment, with venture capital (VC) firms pouring unprecedented funds into startups promising rapid AI advances. Major VC firms wield significant influence: through funding, they can steer which AI projects thrive, and through board representation, they can shape company strategies.
This raises questions about AI safety - ensuring AI development includes safeguards against harms - and how it fits with VC objectives. This review focuses on four leading firms (Sequoia Capital, Kleiner Perkins, Founders Fund, Andreessen Horowitz) to understand the intersection of venture capital and AI safety.
These firms are known for funding cutting-edge AI ventures (e.g., Founders Fund in DeepMind and Palantir; a16z in OpenAI; Sequoia in numerous AI startups) and participating in policy debates. I examine published literature and industry reports to:
1. Summarize research on VC influence in AI development.
2. Detail the firms’ specific AI investments and board roles.
3. Review any stated AI safety commitments by the firms or their investees.
4. Analyze conflicts between profit motives and safety pledges.
5. Identify trends in how VC funding may prioritize innovation over caution.
This sheds light on how profit-driven incentives in VC may shape the future trajectory and safety culture of AI research.
Methodology
I conducted a systematic review of academic and industry sources from 2018–2025. Searches on databases (e.g., Google Scholar, IEEE Xplore) and news archives (e.g., Reuters, Wired, TechCrunch) used keywords like “venture capital AI safety,” “Andreessen Horowitz AI ethics,” “Sequoia AI investment,” and “VC AI board appointments.” I also reviewed corporate materials (VC websites, startup charters) and nonprofit reports (e.g., Future of Life Institute). Credible sources (peer-reviewed papers, major news outlets, and established VC blogs) were prioritized.
Articles citing specific investments or policies were used to identify who invested in what. When direct evidence of a VC’s safety policy was lacking, I inferred positions from public communications: e.g., a16z blog posts or press releases. Evidence of conflicts (e.g., policy lobbying) was noted from watchdog and news reports. Each fact was cross-checked against multiple sources when possible. Thematic coding organized findings into topics: investment activity, board influence, stated safety practices, conflicts, and development trends.
Thematic Review
VC Investment Patterns in AI
Recent data show these VCs have dramatically increased AI investments. Sequoia, for example, reported that approximately 60% of its 2023 new deals were AI startups, compared to 16% the previous year.
It now has ~70 AI-related portfolio companies, ranging from seed stage to public scale. Notable Sequoia-backed AI firms include Harvey (legal AI assistant), Dust (AI research assistant), Replicate (model deployment platform), generative video startup Tavus, model hub Hugging Face, and enterprise search company Glean. Sequoia also invested in OpenAI’s 2021 round.
Kleiner Perkins has funded numerous AI startups. Partner Mamoon Hamid focuses on tools for high-paying fields (e.g., legal or medical AI assistants), and Crunchbase lists Kleiner Perkins as an investor in Codeium (AI coding), Glean (AI search), Nooks (AI sales assistant), and Ambience Healthcare (AI clinical documentation).
Andreessen Horowitz (a16z) has long emphasized AI. Beyond its early OpenAI stake, a16z maintains an extensive AI portfolio, from biotech AI to defense AI. General partner Anjney Midha serves on the boards of AI startups Mistral AI, Luma AI, Sesame AI, and Periodic Labs. The firm also launched a 10-year, $6 billion growth fund in 2025 to back emerging tech, including AI.
Founders Fund focuses on deep tech and frontier AI. Its portfolio includes DeepMind, Palantir, Neuralink, and the defense AI firm Anduril. Its partners occasionally sit on company boards or influence leadership.
David Sacks, a longtime associate of Founders Fund co-founder Peter Thiel, recently became a White House AI adviser. Across these firms, a trend emerges: early and extensive VC funding of AI startups, large funding rounds, and dedicated AI funds (e.g., Sequoia’s $750M Series A fund in 2025).
Safety Policies and Commitments
Few VCs have explicit public AI safety policies. Portfolio companies often advertise safety-minded charters: OpenAI commits to ensuring that “AGI benefits all of humanity” and to “doing the research required to make AGI safe.” Safe Superintelligence Inc. (SSI), backed by a16z and Sequoia, explicitly declares a mission “to safely develop superintelligence.”
On the public-sector side, the UK AI Safety Institute’s Alignment Project, with more than £15 million in funding, supports safety-focused research through an international coalition.
Beyond this, VCs generally prioritize broad innovation, with safety initiatives being project-specific or company-driven.
Board Influence and Governance
VCs gain board seats in portfolio companies, giving them governance influence. Anjney Midha (a16z) sits on multiple AI startup boards, steering strategy. Sequoia partners often join boards for guidance and connections.
Beyond boards, VCs engage in policy and advisory roles. Andreessen Horowitz co-founded the pro-AI PAC “Leading the Future,” and David Sacks went on to serve as a White House AI adviser. These activities indicate influence on AI governance and the regulatory landscape.
Conflicts of Interest and Profit Motives
VCs often prioritize profit over safety, as suggested by advocacy patterns. Sequoia and a16z champion speed over restraint, with Andreessen framing AI caution as a “negative, risk-aversion frame.” Portfolio companies like OpenAI and SSI enshrine safety goals but rely on VC funding, creating tension between commercial incentives and safety.
VC advocacy may blur lines: if regulations slow products, VC-backed startups might lose market advantages. Evidence suggests profit incentives often outweigh voluntary safety measures, though this is inferred from public statements and lobbying rather than direct internal data.
Patterns and Trends
1. Explosive funding: U.S. AI startup investment dwarfs that of other regions (e.g., ~$100B in 2024 vs. ~$16B in Europe).
2. Thematic focus shifts: VCs increasingly fund AI infrastructure, chips, and enterprise solutions.
3. Regulatory pressure: VCs react to policy signals, generally favoring minimal oversight.
4. Safety as an afterthought: industry conferences often emphasize hype over safety, normalizing a “move fast” culture. For instance, commentators described the 2025 Paris AI Action Summit as adopting a “Safety Third” posture, shifting focus toward investment and innovation.
VC funding shapes AI priorities toward growth and competition rather than precaution.
Discussion
Top VCs drive AI innovation with little inherent emphasis on safety. Board roles embed venture-centric mindsets in company governance, and public forums and policy debates show consistent advocacy for rapid AI deployment. Safety-focused ventures such as SSI exist, but profit-driven investors still shape their priorities. The pursuit of quick returns can dilute safety commitments, though VC-led safety funds and some ethics hires appear as countertrends.
However, some VCs argue that rapid innovation inherently advances safety by enabling broader access and collaborative development. For example, a16z's "Little Tech Agenda" posits that open AI ecosystems foster collective safeguards, countering overly restrictive regulations that could stifle progress. Overall, while tensions persist, these counterpoints suggest a nuanced balance where innovation and safety are not always mutually exclusive.
Conclusion
Sequoia Capital, Kleiner Perkins, Founders Fund, and Andreessen Horowitz play pivotal roles in AI development trajectories. Through investment and board participation, they accelerate innovation. Public stances and private incentives favor growth and market share, sometimes at the expense of safety. Verified safety-focused funding exists (e.g., UK AI Safety Institute Alignment Project), but conflicts persist. Mitigating these may require linking funding to safety milestones or independent oversight. Future research should track VC behavior in response to AI incidents and regulatory changes.
Bibliography
Andreessen, M., & Horowitz, B. (2024, July 5). The Little Tech Agenda. Andreessen Horowitz.
OpenAI. (n.d.). OpenAI Charter.
Teare, G. (2023, October 23). Inside Sequoia Capital’s Growing AI Portfolio. Crunchbase News.
Newcomer. (2025, July 25). Safety Third Is the New Motto at Paris AI Action Summit.
AI Security Institute. (n.d.). Grants – £15M Alignment Fund.
