Fellowship

Explore the frontiers of humanity’s future.

From existential risks to the future of biotechnology, and population ethics to space law, CERI Futures Fellows are immersed in concepts in technology, philosophy, and futurology. Fellows will think critically about grand-scale questions on the trajectory of human civilization.

The Fellowship includes

  • Seminars with experts from organizations including Cambridge University, the Centre for the Study of Existential Risk, and Cambridge AI Safety Hub.

  • Social events with other Fellows

  • Two formal dinners

  • Career guidance

  • 1-on-1 mentorship

  • Opportunity to produce research for CERI’s termly publication

Applications for the Spring 2024 Fellowship have closed.

  • Week 1: Humanity's Wild Futures

    Introducing the concept of wild futures and the importance of considering a wide range of human possibilities.

  • Week 2: Value Theory, Population, and Future Ethics

    In looking at the future, we are caught between viewing technology as a peril or a protector. Our usual focus on the near term is challenged by longtermism, which insists that our choices now echo into the distant future, necessitating a pivot towards the welfare of future generations.

  • Week 3: Space Law, Cosmic Colonies & Human Expansion

    We’ll discuss the feasibility of colonizing the Moon and Mars, the concept of a multi-planetary human civilization, and the challenges of interstellar travel and terraforming.

  • Week 4: The Rise of Intelligent Machines

    Explore the transformative world of artificial intelligence and its profound impact on human civilization. We’ll dive into the latest developments in AI, exploring both the technical innovations and the societal, ethical, and policy-related challenges they pose.

  • Week 5: S-Risks and Astronomical Future Suffering

    What do biological computing, wild animals, and space colonization have in common? Suffering risk (’s-risk’) researchers study them in scenarios of extreme suffering on a global or cosmic scale. Given the potentially vast number of sentient beings in the future, widespread severe suffering could arise from a range of cultural, evolutionary, and technological factors.

  • Week 6: Existential Risks

    This week, we’ll introduce Fellows to the core ideas of existential risks. We will define what an x-risk is, and discuss why protecting humanity’s future by reducing x-risks might be one of the best ways to have future impact.

  • Capstone Project

    Fellows will complete a deliverable on an area of choice, to be included in our termly publication.

Fellows

  • Jacob Bowden

    Jacob is a second-year philosophy undergrad at Christ's College. His interests in philosophy are, and always have been, primarily those areas of practical significance and application, particularly ethics. He was introduced to the ideas of EA by the works of Peter Singer, whose arguments he found, and still finds, compelling despite their seeming simplicity. He has recently become more interested in the ideas of Derek Parfit, whose work has been highly influential on the longtermist movement.

  • Lewis Broadhurst

    Lewis is a Chemistry master's graduate from the University of Leeds who now works as a software engineer in finance. His curiosity about where the future of humanity will lead us, and his desire to learn more about the current and future applications of AI, led him to join the Futures Fellowship. In his free time, Lewis is often coding new side projects, exploring museums in Cambridge and London, reading, and running.

  • Laurence Cardwell

    English-Austrian, and raised in Costa Rica, Laurence completed his undergraduate studies at St Andrews University in History and Management. Speaking six languages, he has worked in 11 countries across a range of sectors, including at the International Chamber of Commerce, the Vatican, and teaching Critical Thinking at Dulwich College. He has co-founded a luxury real estate brand in Spain, launched a crêperie, served in the Army, and had a few stints on television. He left operational venture capital firm Rocket Internet to found health-tech startup SURVIVOR alongside a former VP of the Red Cross. While continuing to work on his startup, he has just moved back from Azerbaijan, where he spent the past two years at the Central Bank, to begin an MPhil in AI Ethics at Cambridge University’s Leverhulme Centre for the Future of Intelligence. With this, he is looking to enter the exciting and terrifying world of AI from any number of angles.

  • Shoshana Dahdi

    Shoshana is a third-year undergrad at Emmanuel College reading History and Politics (but pretty much just international relations at this point). Her interests include AI policy and governance, particularly democratic participation in the fine-tuning of LLMs, as well as AI safety and alignment questions. In her spare time, she enjoys trail running, mountaineering, and swimming.

  • Sam Fiske

    Sam is a master's student pursuing a degree in Philosophy. He is mainly interested in the philosophical problems of intelligent machines, specifically as they relate to theories of consciousness and moral status. Outside of his academic interests, he enjoys long-distance running, playing on the University lacrosse team, and supporting some American sports teams from afar.

  • Natasha Karner

    Natasha is a PhD candidate at RMIT University. She researches the impact of emerging technologies, such as AI and nuclear weapons, on global security, including the development of autonomous weapons systems (AWS). She is passionate about cross-disciplinary collaboration as a source of innovative ideas, and her current “outside” interest is epigenetics.

  • Leong Chit Jeff Kwan

    Jeff is a final-year MEng Engineering student at the University of Cambridge, focusing on Information and Computer Engineering and Bioengineering. He has a keen interest in machine learning and AI, and hopes his future career will contribute to the work of AI safety researchers. He enjoys thinking deep thoughts and producing music.

  • Oona Lagercrantz

    Oona is an MPhil student in Politics and International Studies at Corpus Christi, Cambridge, and previously graduated with a BA in Human, Social, and Political Sciences. She enjoys thinking about the politics of the future: considering how futures are imagined, for and by whom, and with what political implications. Oona’s current research focuses on geoengineering, but she is also interested in emerging technologies and futures more broadly, including AI, genetic engineering, and space expansionism.

  • Duncan McClements

    Duncan is a first-year economics student at King's College, Cambridge. He is interested in modelling civilisational recovery, transition dynamics under brain emulations, and the long-run future of humanity, and enjoys board games and walks in his spare time.

  • Giovanni Mussini

    Giovanni is a second-year PhD student in Earth Sciences. He loves to work at the interface of good old-fashioned natural history (collecting, describing, and making sense of the world's biological diversity) and the big themes it can shed light on: the evolution of complexity, intelligence, and beauty on Earth and elsewhere. His studies and research have brought him face to face with mass extinctions, revolutions in biological technology, and life's awesome terraforming abilities. This has made him passionate about the epic history of life in deep time, and he doesn't want the show to stop anytime soon. He's keen on expanding his understanding of existential risks, from comet strikes to AI, to help cherish and protect life's vast evolutionary futures.

  • Ben Norman

    Ben is a first-year undergraduate at the London Interdisciplinary School, where he is studying various disciplines (so far: neuroscience, network science, complexity theory) and applying them to real-world problems. His drive to mitigate existential risks stems from his belief that the vast majority of humanity's future lies ahead, either on Earth or in space, with a strong preference for the latter. He is keen to explore the ethics of the far future, such as the implications of humanity—or its post-human descendants—spreading 'natural' or artificial sentience across galaxies. Outside of thinking about x-risks, his interests include effective altruism, animal welfare, and rewatching the movie Interstellar an unhealthy number of times.

  • Jai Patel

    Jai is an MPhil student in the Ethics of AI, Data and Algorithms with an interdisciplinary background in Philosophy, Politics & Economics. He wants to bridge the world of politics and institutional decision-making with the weird and wacky world of AI safety, Existential Risk Studies, and all other allied groups of bright, future-oriented people. Alongside his academic interests in artificial consciousness and participatory approaches to AI governance, he enjoys exploring innovative pedagogy and social-impact AI, having recently founded an AI citizens advice startup. When not pondering the wild futures of our cosmic ant colony, he can be found in the best bar in Cambridge, at St Edmund's College.

  • Keir Reid

    Keir is a master's student from Edinburgh, Scotland. He has a bachelor's in History and is now studying Political Thought and Intellectual History at Cambridge. His research focus is primarily on the writings of Aldous Huxley, but he is also interested in philosophy, psychology, and literature. His time outside of academia is largely spent playing ice hockey for the university team.

  • Giorgio Scolozzi

    Giorgio is a professional with experience in AI strategy consulting. He has returned to university for the MSt in AI Ethics and Society at Cambridge, focusing on AI safety and how socio-technical evaluations can support the governance of advanced AI. He contributes to working groups in international standard-setting organisations, advocating for increased normative alignment efforts. In his free time, he enjoys endurance sports and making pizza for friends.

  • Zafar Shaikhli

    Zafar Shaikhli is an LLM student at the University of Cambridge and a lawyer in the Netherlands. He is fascinated by the future of humanity and the obstacles it will face in the near future. He is particularly interested in the role of AI in this context, seeing it as a double-edged sword: the most pertinent threat we face, yet also possibly our greatest tool for advancement. Outside academia, he is passionate about martial arts, including wrestling and boxing.

  • Mia Shaw

    Mia is a second-year philosophy student interested in the intersection of x-risk studies and morality. Mindful of the profound impact human decisions can have on the future, she is keen on understanding diverse perspectives on progress, the good life, and meaning. Her interest lies in fostering inclusive strategies that address existential threats while acknowledging, and remaining sensitive to, the wide spectrum of beliefs and values worldwide. To this end, she is particularly concerned with the threats posed by AI, biological agents, and nuclear and automated methods of conflict.

  • Simon Thomas

    Simon is a third year PhD student in AI for Environmental Risk. His PhD work focuses on how the risk of hurricane storm surges can be calculated in past and future climates. He spent the last summer working at a "catastrophe modelling" firm, but is interested in discussing the x-risks that lie beyond market interest. He enjoys running and dog-walking.

  • Ediz Ucar

    Ediz is a recent Computer Science graduate from St Catharine's. He is currently working at a cybersecurity company (Entrust). He became interested in AI safety, and x-risks more broadly, over the past year or so. Ediz thinks the most critical part of AI safety is out-of-distribution generalisation, and he has also become interested in game-theoretic and coordination problems relating to x-risks.

  • Andrew Yang

    Andrew is a Part III Mathematics master's student studying Statistics and Theoretical Computer Science. He is primarily interested in AI safety, particularly the risks associated with the development of general human-level intelligence. Climate change, nuclear conflict, and pandemics are three other risks humanity faces that could also accelerate out of control. He believes AI poses the most pressing risk, and that achieving secure and controlled AI will be at least useful, if not necessary, for tackling the other risks. In his leisure time, he enjoys reading, badminton, and playing the piano.

Weekly programming is subject to change based on speaker availability.

FAQs

  • Ideal candidates for CERI Futures Fellowship will have a keen interest in the long-term outlook of human civilization, a commitment to ethical foresight, and a drive to contribute meaningfully to discussions about humanity’s shared future. Participants will leave the program with a network of like-minded peers and tools to make a lasting impact on the positive trajectory of humanity.

    The Fellowship is open to individuals in and around Cambridge, UK. Anyone may apply, whether you’re an undergraduate, postgraduate, faculty member, or unaffiliated with the University of Cambridge.

    We are committed to building a diverse cohort of fellows. Evidence suggests that underprivileged individuals tend to underestimate their abilities. We do not want the application process to dissuade potential candidates and we strongly encourage interested students to apply regardless of gender, race, ethnicity, nationality, ability, etc.

  • No prior knowledge or experience in existential risk studies or futures studies is required.

  • We are aiming for a group of 10-20 highly motivated individuals.

    Selections for the Fellowship are competitive. We are looking for applicants willing to engage with ideas about humanity's long-term future, raise questions and uncertainties, and think critically and unconventionally.

  • CERI Futures Fellowship will provide avenues for further opportunities in high-impact work in existential risk, future studies, and other disciplines. Fellows will have access to career support, mentorship, and a network of highly motivated peers.

  • Fellows should expect to spend 4-6 hours a week on preparatory work and synchronous events. Fellows are expected to attend all events and complete a final deliverable.

    The Fellowship begins on Tuesday, January 30. Mandatory programming is held on Tuesday evenings; some weeks may have additional programming.

  • Weekly meetings and seminars will be held at the Meridian Office in Central Cambridge.