Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Defining Monitorable and Useful Goals
    In my most recent post, I introduced a corrigibility transformation that could take an arbitrary goal over external environments and define a corrigible goal with no hit to performance. That post focused on corrigibility and deception in training, which are some of the biggest problems in AI alignment, but the underlying mechanism has broader applicability.
    AI Alignment Forum | 4 hours ago
  • Consider political giving for AI safety
    AI policy and the unique advantages of individual donors
    The Fox Says | 5 hours ago
  • Introducing Linked Services: A Simpler Way to Connect Research Tools to OSF
    Researchers rely on a wide range of tools throughout the research lifecycle for storing data, analyzing results, writing papers, and sharing outcomes. The OSF helps bring those tools into one place, making it easier to organize, share, and connect your work.
    Center for Open Science | 5 hours ago
  • Reflection on being a confused, then proud, bisexual
    For Pride, kind of. Are bisexuals canonically late? Probably.
    Contemplatonist | 6 hours ago
  • Why I am more optimistic about the animal movement this year
    Sometimes working on animal issues feels like an uphill battle, with alternative protein losing its trendy status with VCs, corporate campaigns hitting blocks in enforcement and veganism being stuck at the same percentage it's been for decades.
    Effective Altruism Forum | 6 hours ago
  • Rewiring the Chip Landscape: Trends and Challenges in Chipmaking Equipment
    Center for Security and Emerging Technology | 7 hours ago
  • Don't Be Deep, Dark, and Brooding
    Don't have your self-image be as the emotionally-troubled, intellectually complex protagonist
    Bentham's Newsletter | 8 hours ago
  • Faunalytics’ 2025 Mid-Year Report
    Just past the halfway point of 2025, Liz Wheeler offers an update to our supporters and fellow advocates about our latest research, newest resources, and what’s in store for the rest of the year.
    Faunalytics | 8 hours ago
  • How to instantly be better at things
    Just pretend you're someone else
    Useful Fictions | 8 hours ago
  • You Win Some, You Lose Many: Conservation Bias Fails The Most Vulnerable
    Large, charismatic mammals receive the bulk of attention and funding despite numerous other species facing a much greater risk of extinction.
    Faunalytics | 8 hours ago
  • Intrauterine Devices
    IUDs generate a lot of strong opinions. Why?
    Atoms vs Bits | 8 hours ago
  • Apply now! Mentorship for ambitious professionals | Job Blast 🚀
    Apply by July 25 and get matched with a mentor who can transform your career, for free.
    EACN Newsletter | 9 hours ago
  • A visit from the Gates Foundation
    Dr Estee Torok, Senior Program Officer in Surveillance, Data & Epidemiology in Malaria / Global Health at the Gates Foundation, visited Target Malaria Uganda, based at the Uganda Virus Research Institute. Dr Torok was welcomed by the Head of the Entomology Department and Principal Investigator of Target Malaria Project, Dr Jonathan Kayondo, who provided an […].
    Target Malaria | 9 hours ago
  • Creepy Philosophy
    What candidate truths do you find most disturbing?
    Good Thoughts | 9 hours ago
  • EA Forum Digest #249
    Hello! Next week is Career Conversations Week on the Forum! Consider writing about your job, or thinking about what you’d like to ask advisors from Probably Good, 80,000 Hours, Animal Advocacy Careers and Successif in our AMA next week (it'll be pinned on the front page on Monday).
    EA Forum Digest | 9 hours ago
  • Predicting earthquakes
    Achieving a 10-minute warning would save thousands of lives
    The Works in Progress Newsletter | 10 hours ago
  • Q&A with Amélie Godefroidt: Building Trust and Transparency in Complex, Sensitive Research
    Open science practices are embedded throughout Amélie Godefroidt’s research, which explores public opinion during and after wars, civil conflicts, and terrorist attacks. As a Postdoctoral Researcher and Lecturer at the KU Leuven Centre for Research on Peace and Development in Belgium, she regularly engages with ethically and methodologically complex material and highly sensitive data, and...
    Center for Open Science | 10 hours ago
  • Three requirements for a “CERN for AI” – a Geneva Security Debate
    On the margins of the 2025 AI for Good Summit, the Simon Institute for Longterm Governance (SI) co-hosted a Geneva Security Debate on the…
    Simon Institute for Longterm Governance | 12 hours ago
  • Trump just handed China a major advantage on AI
    Late on Monday night, July 14, 2025, the ninth richest man in the world broke some momentous news: The US government would allow him, Nvidia CEO Jensen Huang, to sell H20 processors to Chinese customers again. To people following the Trump administration and its seemingly unending announcements and reversals of trade restrictions, this might not […]...
    Future Perfect | 13 hours ago
  • How to improve your happiness: an unexpected truth
    That’s what this blog is about: the unexpected happiness that comes from helping others and making a difference in the world. And I’m not talking about just giving your attention to and being aware of the problems in the world, but giving your time and money, strategically and generously, to actually solve those problems for others.
    Happier Lives Institute | 13 hours ago
  • Climate and Nutrition Data Analysis Intern
    Location: London (UK), Delhi (IN). Contract type: Intern. Vacancy ID: SYS-1286. Department: Programmes. Closing date: Wed, 07/23/2025, 12:00. Apply at https://jobs.gainhealth.org/vacancies/1286/apply/. The Global Alliance for Improved Nutrition (GAIN) is seeking a...
    Global Alliance for Improved Nutrition | 14 hours ago
  • Addressing the nonhuman gap in intergovernmental AI governance frameworks
    Originally submitted as a project for the AI Safety Fundamentals AI Governance course in April 2025; edited and somewhat expanded for publication on the EA Forum in July 2025. Thanks to Yip Fai Tse, Arturs Kanepajs, Max Taylor, Constance Li, Kevin Xia, Adrià Moret and Sam Tucker-Davis for advice and suggestions before and/or after the writing of this piece. Introduction.
    Effective Altruism Forum | 15 hours ago
  • Inside OpenAI culture 💼, Google AI blocks exploit 🛡️, S3 Vectors 👨‍💻
    TLDR AI | 23 hours ago
  • Runaway wives in 1700s Pennsylvania
    Newspaper announcements that give a taste of relationship breakdowns.
    Otherwise | 1 days ago
  • MSEP: A Platform for Molecular Systems Engineering
    MSEP is a free, open-source platform for designing and simulating atomically precise nanomechanical systems — a tool for exploring the foundations of future physical technologies.
    AI Prospects: Toward Global Goal Convergence | 1 days ago
  • Data Director at Greener by Default
    At Greener by Default, you will be working towards the ambitious goal of completely transforming institutional foodservice - and the food system as a whole - by making plant-based food the default. We recognize that the fates of humans, animals, and ecosystems are bound together, and strive to create a food system that will allow all life on earth to flourish.
    Animal Advocacy Forum | 1 days ago
  • What’s in the European Union’s Codes of Practice for Governing General-Purpose AI?
    This analysis provides an accessible reading of the EU’s new Codes of Practice for General-Purpose AI, which help model providers comply with the AI Act’s provisions. Our focus is on the Safety and Security Code.
    The Future Society | 1 days ago
  • Principles for Picking Practical Interpretability Projects
    Thanks to Neel Nanda and Helena Casademunt for feedback on a draft. In a previous post, I argued that some interpretability researchers should prioritize showcasing downstream applications of interpretability work; I call this type of research practical interpretability. How should we pick promising downstream applications for practical interpretability researchers to target?
    AI Alignment Forum | 1 days ago
  • AI Safety Newsletter #59: EU Publishes General-Purpose AI Code of Practice
    Plus: Meta Superintelligence Labs
    AI Safety Newsletter | 1 days ago
  • July '25 EA Newsletter Poll
    This poll was linked in the July edition of the Effective Altruism Newsletter. I've chosen this question because Marcus A. Davis's Forum post, featured in this month's newsletter, raises this difficult question: if we are uncertain about our fundamental philosophical assumptions, how can we prioritise between causes? There is no neutral position.
    Effective Altruism Forum | 1 days ago
  • Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety
    Twitter | Paper PDF. Seven years ago, OpenAI Five had just been released, and many people in the AI safety community expected AIs to be opaque RL agents. Luckily, we ended up with reasoning models that speak their thoughts clearly enough for us to follow along (most of the time).
    AI Alignment Forum | 1 days ago
  • The Paradox of Trumpism
    How Trump is killing millions of people and why few people care
    Bentham's Newsletter | 1 days ago
  • How to persuade people, ethically
    Key Takeaways Ethical persuasion is a learnable skill. Persuasion is a skill like any other, and it is possible both to learn how to do...
    Clearer Thinking | 1 days ago
  • Overthinking in grantmaking
    I spend a lot of time talking to philanthropists about how to donate, and one trend I have noticed, particularly with smart but new-to-grantmaking folks, is the trend below.
    Measured Life | 1 days ago
  • Cause humility, no AI moratorium, Gavi needs funding
    Hello! Our favourite links this month include: The US withdrew funding from Gavi, which vaccinates half of the world’s children.
    Effective Altruism Newsletter | 1 days ago
  • HLI is recruiting a Development Manager!
    We're looking for an excellent Development Manager to join our remote team, starting September 2025.
    Happier Lives Institute | 1 days ago
  • How Do We Have Healthy, Happy Guinea Pigs?
    Guinea pigs are popular companion animals, but little is known about their welfare in homes. Researchers examined data from over 1,000 guardians to see what kind of care their guinea pigs are given.
    Faunalytics | 1 days ago
  • Rebuilding after apocalypse: What 13 experts say about bouncing back
    80,000 Hours | 1 days ago
  • Three Lessons from the International AI Safety Report for the Independent, International Scientific Panel on AI
    On the margins of the 2025 AI for Good Summit, the Simon Institute for Longterm Governance (SI) organized an event on the International AI…
    Simon Institute for Longterm Governance | 1 days ago
  • Book Review: Arguments About Aborigines
    Astral Codex Ten | 1 days ago
  • Compassion in World Farming is Hiring: Legacy Officer (Fundraising)
    Compassion in World Farming International is the leading international farm animal welfare charity, campaigning to improve the lives of millions of farm animals through advocacy, lobbying for legislative change, and positive engagement with the global food industry. Our established international Food Business programme aims to raise baseline standards for farm animals by securing commitments,...
    Animal Advocacy Forum | 1 days ago
  • Senegal reaches historic milestone by eliminating trachoma
    Thanks to support from Sightsavers and other organisations, millions of people in Senegal are no longer at risk from losing their sight to the eye disease.
    Sightsavers | 2 days ago
  • No one should have to suffer
    No one should have to suffer, no matter where they are. And if we can help more people with the same donation, then that’s the right thing to do. Watch Part I and Part II of The Skeptic in our profile now. Or check out our giving pledges, to give to the charities that can help others the most at gwwc.org/pledge...
    Giving What We Can | 2 days ago
  • Population growth or decline will have little impact on climate change
    It'll be too slow and too late for the timeframes that we need to decarbonise.
    Sustainability by Numbers | 2 days ago
  • Meta's AI pivot 🤖, ChromeOS Android merger 📱, AWS agentic IDE 👨‍💻
    TLDR AI | 2 days ago
  • Claude Finds God
    ✻[Perfect stillness]✻.
    Asterisk | 2 days ago
    We’ve published our audited annual accounts for 2024
    Ayuda Efectiva | 2 days ago
  • Narrow Misalignment is Hard, Emergent Misalignment is Easy
    Anna and Ed are co-first authors for this work. We’re presenting these results as a research update for a continuing body of work, which we hope will be interesting and useful for others working on related topics. TL;DR:
    AI Alignment Forum | 2 days ago
  • Recent Redwood Research project proposals
    Previously, we've shared a few higher-effort project proposals relating to AI control in particular. In this post, we'll share a whole host of less polished project proposals. All of these projects excite at least one Redwood researcher, and high-quality research on any of these problems seems pretty valuable. They differ widely in scope, area, and difficulty. Control.
    AI Alignment Forum | 2 days ago
  • What do you Want out of Literature Reviews?
    Tl;dr: how can I improve my literature-review based posts? I write a fair number of blog posts that present the data from scientific papers. There’s a balancing act to this: too much detail and people bounce off, too little and I’m misleading people. I don’t even think I’m on the Pareto frontier of this...
    Aceso Under Glass | 2 days ago
  • Solving Problems, Not Building Empires: The Value of Prioritizing your Ecosystem
    Linkpost, as I am mostly writing on Substack now. TL;DR: NGOs have a unique advantage over for-profits in their ability to cooperate and build common goods, yet few fully leverage "ecosystem thinking". This approach extends beyond an organization's direct impact to consider the entire field's health.
    Effective Altruism Forum | 2 days ago
  • Self-preservation or Instruction Ambiguity? Examining the Causes of Shutdown Resistance
    This is a write-up of a brief investigation into shutdown resistance undertaken by the Google DeepMind interpretability team. TL;DR: Why do models sometimes resist shutdown? Are they ignoring instructions to pursue their own agenda – in this case, self-preservation? Or is there a more prosaic explanation?
    AI Alignment Forum | 2 days ago
  • Do LLMs know what they're capable of? Why this matters for AI safety, and initial findings
    This post is a companion piece to a forthcoming paper. This work was done as part of MATS 7.0 & 7.1. Abstract. We explore how LLMs’ awareness of their own capabilities affects their ability to acquire resources, sandbag an evaluation, and escape AI control.
    AI Alignment Forum | 2 days ago
  • 🟢 Houthis sink ships, US tariffs for US allies and secondary tariff threats for Russia, Grok and Kimi model releases | Global Risks Weekly Roundup #28/2025
    The Houthis sank two ships last week, and a powerful new open-source model was released.
    Sentinel | 2 days ago
  • Autonomy Consequentialism
    Maximizing respect for others' self-regarding preferences
    Good Thoughts | 2 days ago
  • EU CITIZENS: Animal welfare consultation closing soon (16 July)
    There is an important consultation happening right now on revising EU animal welfare laws - covering cages, import standards, male chick culling, and welfare indicators. If you're an EU Citizen, I encourage you to take 15-30 minutes over the next few days to respond. Deadline: July 16th (midnight CET).
    Effective Altruism Forum | 2 days ago
  • How To Cause Less Suffering While Eating Animals
    It's surprisingly easy to be a much more ethical omnivore
    Bentham's Newsletter | 2 days ago
  • Asexual, Aromantic, Apatrial
    what universal human experiences are you missing?
    Thing of Things | 2 days ago
  • More Than Instinct, Animals Have Expectations
    Animals aren’t just reactive — they form expectations and learn from experience. Just like in humans, this affects how they feel.
    Faunalytics | 2 days ago
  • We Animals is hiring a Project Manager, Assignments
    As the world’s leading animal photojournalism agency, We Animals (WA) advocates for animals through photojournalism. Our global investigations and stories expose our complex relationships with animals, create ethical and cultural shifts in society, and empower human capacity for compassion and change.
    Animal Advocacy Forum | 2 days ago
  • Iceland Foods commits to higher prawn welfare!
    Iceland Foods committed to eliminating eyestalk ablation and implementing pre-slaughter electric stunning across their own-label prawn range by the end of 2027. ICAW has been running a campaign against Iceland for several months, including digital actions, in-person demonstrations (among them 70 people gathering in London in May), and other pressure tactics. Only 3 retailers have yet to...
    Animal Advocacy Forum | 2 days ago
  • Recent Redwood Research project proposals
    Empirical AI security/safety projects across a variety of areas
    Redwood Research | 2 days ago
  • Learn To Draw With ATVBT
    deliberate practice for portraiture
    Atoms vs Bits | 2 days ago
  • Debate: organisations using Rethink Priorities’ mainline welfare ranges should consider effects on soil nematodes, mites, and springtails, or at least be transparent about their reasons for neglecting them?
    Summary: I think organisations using Rethink Priorities’ (RP’s) mainline welfare ranges, at least Ambitious Impact (AIM), Animal Charity Evaluators (ACE), the Animal Welfare Fund (AWF), and RP itself, should consider effects on soil nematodes, mites, and springtails. I believe these are the drivers of the overall effects of the vast majority of interventions.
    Effective Altruism Forum | 2 days ago
  • Import AI 420: Prisoner Dilemma AI; FrontierMath Tier 4; and how to regulate AI companies
    Plus, steganography and future superintelligences
    Import AI | 2 days ago
  • ChinAI #320: Acting Crazy — AI's most important use case for Chinese youth
    Greetings from a world where…
    ChinAI Newsletter | 2 days ago
  • Is EA still 'talent-constrained'?
    Since January I’ve applied to ~25 EA-aligned roles. Every listing attracted hundreds of candidates (one passed 1,200). It seems we already have a very deep bench of motivated, values-aligned people, yet orgs still run long, resource-heavy hiring rounds. That raises three things: Cost-effectiveness:
    Effective Altruism Forum | 3 days ago
  • How Does Time Horizon Vary Across Domains?
    Each line represents the trend in time horizon for one benchmark, smoothed with a spline (s=0.2, k=1). Lines have a range on the y-axis equal to the range of task lengths actually in the benchmark (2nd to 98th percentiles). Summary: In the paper Measuring AI Ability to Complete Long Software Tasks (Kwa & West et al.
    METR | 3 days ago
  • Open Thread 390
    Astral Codex Ten | 3 days ago
  • How does the World Bank classify countries by income?
    The World Bank classifies countries into four income groups based on average income per person. This article explains how these groups are defined.
    Our World in Data | 3 days ago
  • Google acquires Windsurf 💰, SpaceX funds xAI 🤖, vibe coding interviews 👨‍💻
    TLDR AI | 3 days ago
  • Reflecting on One Year of Mieux Donner: Launching the French Effective Giving Initiative
    As we mark one year since the launch of Mieux Donner, we wanted to share some reflections on our journey and our ongoing efforts to promote effective giving in France. Mieux Donner was founded through the Effective Incubation Programme by Ambitious Impact and Giving What We Can. TLDR: Prioritisation is important.
    Effective Altruism Forum | 3 days ago
  • Hiring: Impact Measurement and Data Science
    The ASPCA is hiring a Senior Manager, Impact Measurement and Data Science to support the development and execution of evaluation strategies for various programs that aim to improve the lives of animals. We are looking for a collaborative and creative critical thinker, committed to the mission and values of the organization. Applications are due 7/20.
    Animal Advocacy Forum | 3 days ago
  • OpenAI's simple scalable oversight experiment
    #ai #alignment #aisafety #openai
    Rational Animations | 3 days ago
  • this week in security — july 13 edition
    CitrixBleed 2 under attack, 'Hafnium' hacker arrested, Jack Dorsey's not-so-'secure' messaging app, Gemini accessing other Android apps, and more. A cybersecurity newsletter by @zackwhittaker, volume 8, issue 28. THIS WEEK, TL;DR: Everyone...
    This week in security | 3 days ago
  • How To Be More Productive
    Easy productivity advice
    Bentham's Newsletter | 3 days ago
  • 10,000 times as much compute as GPT-3
    Hardware is a huge part of the AI game right now - access to chips, the geopolitics of Taiwan - because AI companies need hundreds, then thousands, then tens of thousands and maybe millions more chips to train the next biggest models. #airisk #aiprogress #compute
    AI In Context | 4 days ago
  • Every Essay Lamenting College Students' Inability To Write
    A parody
    Bentham's Newsletter | 4 days ago
  • The Meaning of America
    Here, I talk about what America used to stand for, and how we are losing it.
    Fake Nous | 4 days ago
  • AI, Animals, & Digital Minds 2025: Retrospective
    Key takeaway. We need less breadth and more depth. Much of the early growth and success of the movement at the intersection of AI, animals and digital minds has come from exchanging ideas, people and resources across these fields. We think we are now approaching a point where more specialisation and a greater focus on action will lead to the most valuable outcomes. Rather not read?
    Effective Altruism Forum | 4 days ago
  • America is finally moving past its post-9/11 security theater
    On Tuesday, the TSA — a federal agency not known for its generosity — gave American travelers a gift: They will no longer have to take off their shoes when going through airport security. “I think most Americans will be very excited to see they will be able to keep their shoes on,” said Homeland […]...
    Future Perfect | 4 days ago
  • Sanity Check - Givewell's New Incentives estimate seems optimistic
    TLDR: On a VERY rough sanity check, GiveWell’s New Incentives “lives saved” estimate seems twice as high as is plausible. On seeing this wonderful graph from a GiveWell back-check of New Incentives, I thought to myself: wow, New Incentives saved 27,000 lives - that’s impressive, but it feels high. So I decided to spend a surprisingly fun 90 minutes of my life doing a back-check of a back-check.
    Effective Altruism Forum | 5 days ago
  • La Médecine des Désirs avec Guillaume Durand
    ⚠️ Discover EXCLUSIVE content (not on the channel) ⚠️ ⇒ https://the-flares.com/y/bonus/ ⬇️⬇️⬇️ Additional info: sources, references, links... ⬇️⬇️⬇️ Interested in this content? Subscribe and click the 🔔 Enjoyed this video? Consider leaving a 👍 and sharing it.
    The Flares | 5 days ago
  • Is It So Much to Ask for a Nice Reliable Aggregated X-Risk Forecast?
    On most questions about the future, I don’t hold a strong view. I read the aggregate prediction of forecasters on Metaculus or Manifold Markets and then I pretty much believe whatever it says. Various attempts have been made to forecast existential risk.
    Philosophical Multicore | 5 days ago
  • Behind Open Asteroid Impact
    An Annotated Guide to My AI Safety Satire
    Linch Zhang | 5 days ago
  • Climate Activists Need to Radically Change Their Approach Under Trump
    Climate advocates must double down on pragmatic industrial strategies to make clean energy a winning business for all
    Institute for Progress | 5 days ago
  • (Linkpost) METR: Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
    This seems like an important piece of work: an RCT on the use of AI capabilities by developers. The devil is in the details, of course. This post by Steve Newman does a good job of working through some of them. I have highlighted some considerations from it. The methodology was as follows:
    Effective Altruism Forum | 5 days ago
  • Explore AI and consciousness with the Berggruen Prize
    A $50,000 first place prize for essays exploring consciousness
    The Power Law | 5 days ago
  • Victory: Super Festval supermarket goes crate-free in Brazil!
    Hi all! We’re happy to share that Super Festval, a supermarket brand that is part of Grupo Beal (formerly “Companhia Beal de Alimentos”), has officially published a commitment to exclusively source pork from group housing systems during gestation in Brazil by 2028, preferably pre-implantation systems, where sows are housed in stalls for no longer than 7 days. You can read the announcement in...
    Animal Advocacy Forum | 5 days ago
  • Can for-profit companies create significant, direct impact?
    Thank you to @Jacintha Baas, @Judith Rensing, and the @CE team for your help in editing and improving this post. Introductory Context. Hi, I’m Trish. This is my first post on the EA forum.
    Effective Altruism Forum | 5 days ago
  • Your Review: Of Mice, Mechanisms, and Dementia
    Finalist #3 in the Review Contest
    Astral Codex Ten | 5 days ago
  • What the EU code of practice actually requires
    Transformer Weekly: SB 53’s revamp, Peter Kyle on AGI, and a movie about Ilya Sutskever...
    Transformer | 5 days ago
  • Against Therapy Speak
    Just talk like a person!
    Bentham's Newsletter | 5 days ago
  • Book Review: The Artist's Way
    It is a truth universally acknowledged that a rationalist, in possession of a vague understanding of Bayes’ Theorem, must be in want of some woo.
    Thing of Things | 5 days ago
  • Cage-Free Housing For Japanese Quails
    The welfare needs of Japanese quails are understudied compared to other farmed birds. What do we know, and what more can we learn?
    Faunalytics | 5 days ago
  • Vibe Bias
    Check Your Dialectical Privilege
    Good Thoughts | 5 days ago
  • Is This Anything? 8
    no attempts
    Atoms vs Bits | 5 days ago
  • What Happens After Superintelligence? (with Anders Sandberg)
    Anders Sandberg joins me to discuss superintelligence and its profound implications for human psychology, markets, and governance. We talk about physical bottlenecks, tensions between the technosphere and the biosphere, and the long-term cultural and physical forces shaping civilization.
    Future of Life Institute | 5 days ago
  • Grok’s MechaHitler disaster is a preview of AI disasters to come
    From the beginning, Elon Musk has marketed Grok, the chatbot integrated into X, as the unwoke AI that would give it to you straight, unlike the competitors. But on X over the last year, Musk’s supporters have repeatedly complained of a problem: Grok is still left-leaning. Ask it if transgender women are women, and it […]...
    Future Perfect | 5 days ago
  • AI is the most rapidly adopted technology in history
    7 charts about AI deployment
    Benjamin Todd | 5 days ago



ORGANIZATIONS

  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Effective Altruism News is a side project of Tlön.