Fighting for a human future
AI is poised to remake the world. Help us ensure it benefits all of us.

Policy & Research
We engage in policy advocacy and research across the United States, the European Union, and around the world.
Image: FLI’s Emilia Javorsky at the Vienna Autonomous Weapons Conference 2025.

Futures
The Futures program aims to guide humanity towards the beneficial outcomes made possible by transformative technologies.
Image: Our latest Futures project, a series of interactive, research-backed scenarios of how AI could transform the world.

Communications
We produce educational materials aimed at informing public discourse and encouraging people to get involved.
Image: Max Tegmark takes the stage on opening night at Web Summit 2024 in Lisbon.

Grantmaking
We provide grants to individuals and organisations working on projects that further our mission.
Image: Mark Brakel attends a dinner hosted by grantees at the Federation of American Scientists.

Recent updates from us
RAISE-ing the Bar for AI Companies. Plus: Facing public scrutiny, AI billionaires back new super PAC; our new $100K Keep the Future Human creative contest; Tomorrow’s AI; and more. (4 September 2025)
AI safety report cards are out. How did the major companies do? Plus: Update on EU guidelines; the recent AI Security Forum; how AI increases nuclear risk; and more. (1 August 2025)
Senate Rejects Ban on AI Regulation. Plus: The OpenAI Files; creepy new InsideAI video; and more. (3 July 2025)

Hear from us every month
Join 40,000+ other newsletter subscribers for monthly updates on the work we’re doing to safeguard our shared futures.

Our Mission
Steering transformative technology towards benefiting life and away from extreme large-scale risks.
We believe that the way powerful technology is developed and used will be the most important factor in determining the prospects for the future of life.
This is why we have made it our mission to ensure that technology continues to improve those prospects.

Focus Areas

Artificial Intelligence
AI can be an incredible tool that solves real problems and accelerates human flourishing, or a runaway, uncontrollable force that destabilizes society, disempowers most people, enables terrorism, and replaces us.

Biotechnology
Advances in biotechnology can revolutionize medicine, manufacturing, and agriculture, but without proper safeguards they also raise the risk of engineered pandemics and novel biological weapons.

Nuclear Weapons
Peaceful use of nuclear technology can help power a sustainable future, but nuclear weapons risk mass catastrophe, escalation of conflict, and the potential for nuclear winter, global famine, and state collapse.

Featured videos
The best recent content from us and our partners.

Featured projects
Read about some of our current featured projects.

Recently announced
Statement on Superintelligence
A stunningly broad coalition has come out against unsafe superintelligence: AI researchers, faith leaders, business pioneers, policymakers, national security staff, and actors stand together.

Policy & Research

FLI AI Safety Index: Summer 2025 Edition
Seven AI and governance experts evaluate the safety practices of six leading general-purpose AI companies.

Recommendations for the U.S. AI Action Plan
The Future of Life Institute’s proposal for President Trump’s AI Action Plan. Our recommendations aim to protect the presidency from AI loss of control, promote the development of AI systems free from ideological or social agendas, protect American workers from job loss and replacement, and more.

AI Safety Summits
Governments are exploring collaboration on navigating a world with advanced AI.
FLI provides them with advice and support.

Futures

AI’s Role in Reshaping Power Distribution
Advanced AI systems are set to reshape the economy and power structures in society. They offer enormous potential for progress and innovation, but also pose risks of concentrated control, unprecedented inequality, and disempowerment. To ensure AI serves the public good, we must build resilient institutions, competitive markets, and systems that widely share the benefits.

Envisioning Positive Futures with Technology
Storytelling plays a significant role in shaping people’s beliefs and ideas about humanity’s potential future with technology. While there are many narratives warning of dystopia, positive visions of the future are in short supply. We seek to incentivize the creation of plausible, aspirational, hopeful visions of a future we want to steer towards.

Perspectives of Traditional Religions on Positive AI Futures
Most of the global population participates in a traditional religion. Yet the perspectives of these religions are largely absent from strategic AI discussions.
This initiative aims to support religious groups in voicing their faith-specific concerns and hopes for a world with AI, and to work with them to resist the harms and realise the benefits.

Communications

Digital Media Accelerator
The Digital Media Accelerator supports digital content from creators raising awareness and understanding of ongoing AI developments and issues.

Keep The Future Human
Why and how we should close the gates to AGI and superintelligence, and what we should build instead. A new essay by Anthony Aguirre, Executive Director of FLI.

Multistakeholder Engagement for Safe and Prosperous AI
FLI is launching new grants to educate and engage stakeholder groups, as well as the general public, in the movement for safe, secure, and beneficial AI.

Grantmaking

AI Existential Safety Community
A community of faculty and AI researchers dedicated to ensuring AI is developed safely. Members are invited to attend meetings, participate in an online community, and apply for travel support.

Fellowships
Since 2021 we have offered PhD and postdoctoral fellowships in Technical AI Existential Safety.
In 2024, we launched a PhD fellowship in US-China AI Governance.

RFPs, Contests, and Collaborations
Requests for Proposals (RFPs), public contests, and collaborative grants in direct support of FLI internal projects and initiatives.

Newsletter
Regular updates about the technologies shaping our world.
Every month, we bring 40,000+ subscribers the latest news on how emerging technologies are transforming our world, including a summary of major developments in our focus areas and key updates on the work we do. Subscribe to our newsletter to receive these highlights at the end of each month.

Recent editions
One Big Beautiful Bill…banning state AI laws?! Plus: Updates on the EU AI Act Code of Practice; the Singapore Consensus; open letter from Evangelical leaders; and more. (31 May 2025)

Latest content
The most recent content we have published.

Featured content
We must not build AI to replace humans.
A new essay by Anthony Aguirre, Executive Director of the Future of Life Institute. Humanity is on the brink of developing artificial general intelligence that exceeds our own. It’s time to close the gates on AGI and superintelligence... before we lose control of our future.

Posts
The U.S. Public Wants Regulation (or Prohibition) of Expert-Level and Superhuman AI
Three-quarters of U.S. adults want strong regulations on AI development, preferring oversight akin to pharmaceuticals rather than industry “self-regulation”. (19 October 2025; Policy, Recent News)
Michael Kleinman reacts to breakthrough AI safety legislation
FLI celebrates a landmark moment for the AI safety movement and highlights its growing momentum. (3 October 2025; AI Policy, Statement)
Are we close to an intelligence explosion?
AIs are inching ever closer to a critical threshold. Beyond this threshold lie great risks, but crossing it is not inevitable. (21 March 2025; AI, Existential Risk)
The Impact of AI in Education: Navigating the Imminent Future
What must be considered to build a safe but effective future for AI in education, and for children to be safe online? (13 February 2025; AI, Ethics, Guest post)

Podcasts
Available on all podcast platforms, including Spotify, Apple Music, Pocket Casts, and Podcast Addict.

Latest episodes
Can Defense in Depth Work for AI? (with Adam Gleave) (3 October 2025)
How We Keep Humans in Control of AI (with Beatrice Erkers) (26 September 2025)
Why Building Superintelligence Means Human Extinction (with Nate Soares) (18 September 2025)
Breaking the Intelligence Curse (with Luke Drago) (10 September 2025)
What Markets Tell Us About AI Timelines (with Basil Halperin) (1 September 2025)

Papers
AI Safety Index: Summer 2025 (2-Page Summary) (July 2025)
Staffer’s Guide to AI Policy: Congressional Committees and Relevant Legislation (March 2025)
Recommendations for the U.S. AI Action Plan (March 2025)

Use your voice
Protect what’s human. Big Tech is racing to build increasingly powerful and uncontrollable AI systems designed to replace humans. You have the power to do something about it. Take action today to protect our future.

Our people
A team committed to the future of life.
Our staff represents a diverse range of expertise, having worked in academia, for government, and in industry. Their backgrounds range from machine learning to medicine and everything in between.

Our History
We’ve been working to safeguard humanity’s future since 2014. Learn about FLI’s work and achievements since its founding, including historic conferences, grant programs, and open letters that have shaped the course of technology.