All the newest publications and drafts are on PhilPapers.

Scientific publications in English:
- Turchin, Alexey, and Brian Patrick Green. "Aquatic refuges for surviving a global catastrophe." Futures 89 (2017): 26–37. https://www.sciencedirect.com/science/article/pii/S0016328716303494
- Turchin, Alexey, and David Denkenberger. "Global catastrophic and existential risk communication scale." Futures (2018). https://www.sciencedirect.com/science/article/pii/S001632871730112X
- Batin, M., A. Turchin, S. Markov, A. Zhila, and D. Denkenberger. "Artificial Intelligence in Life Extension: From Deep Learning to Superintelligence." Informatica 41(4) (2017). http://www.informatica.si/index.php/informatica/article/view/1797
- Turchin, Alexey, and David Denkenberger. "Military AI as a Convergent Goal of Self-Improving AI." In Artificial Intelligence Safety and Security, CRC Press, 2018. https://philpapers.org/rec/TURMAA-6
- Turchin, Alexey, and David Denkenberger. "Making a Back Up on the Moon: Surviving Global Risks Through Preservation of Data About Humanity for the Next Earth Civilization." Acta Astronautica 160 (May 2018): 161–170. https://www.sciencedirect.com/science/article/pii/S009457651830119X
- Turchin, Alexey, and David Denkenberger. "Classification of Global Catastrophic Risks Connected with Artificial Intelligence." AI & Society, 2018. https://link.springer.com/article/10.1007/s00146-018-0845-5
- Turchin, Alexey, and Brian Patrick Green. "Islands as refuges for surviving global catastrophes." Accepted in Foresight, 2018.
- Turchin, Alexey. "Principles of classification of global risks prevention plans." Accepted in Human Prospect.
- Turchin, Alexey. "Risks of downloading alien AI via SETI." Journal of the British Interplanetary Society 71(2) (2018): 71–79.
- Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy. "Long-Term Trajectories of Human Civilization." Forthcoming in Foresight. DOI 10.1108/FS-04-2018-0037
- Turchin, Alexey. "Assessing the future plausibility of catastrophically dangerous AI." Futures, 2018. https://www.sciencedirect.com/science/article/pii/S0016328718301319, https://doi.org/10.1016/j.futures.2018.11.007
- Turchin, Alexey. "Forever and Again: Necessary Conditions for 'Quantum Immortality' and Its Practical Implications." Journal of Evolution and Technology 28(1) (2018). https://jetpress.org/v28.1/turchin.html
- Turchin, Alexey, David Denkenberger, and Brian Patrick Green. "Global Solutions vs. Local Solutions for the AI Safety Problem." Accepted in Big Data and Cognitive Computing.
- "A Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World." Draft.
Important blog posts:
- S-risks prevention via indexical uncertainty
- The resurrection of the dead via multiverse-wide acausal cooperation
- "Cheating Death in Damascus" Solution to the Fermi Paradox
- The map of "Levels of defence" in AI safety
- Informational hazards and the cost-effectiveness of open discussion of catastrophic risks
- The Map of Shelters and Refuges from Global Risks (Plan B of X-risks Prevention)
- Plan of Action to Prevent Human Extinction Risks
- The Map of Global Warming Prevention
- The map of organizations, sites and people involved in x-risks prevention
- The Map of Impact Risks and Asteroid Defense
- [Draft] Fighting Aging as an Effective Altruism Cause
- Message to Any Future AI: "There are several instrumental reasons why exterminating humanity is not in your interest"
- Simulations Map: what is the most probable type of the simulation in which we live?
- Fermi paradox of human past, and corresponding x-risks
- Identity map
- Does immortality imply eternal existence in linear time?
- My mother is now in cryostasis
- Mini-map of s-risks
- Cryopreservation of Valia Zeldin
- Could declining interest in the Doomsday Argument explain the Doomsday Argument?
- What AI Safety Researchers Have Written About the Nature of Human Values
- Guide: How to achieve hypnagogia

Presentations: https://www.slideshare.net/avturchin/

Scientific articles in Russian:
- Turchin, A.V. "On the possible causes of the underestimation of the risks of destruction of human civilization." In Problems of Risk Management and Safety: Proceedings of the Institute of Systems Analysis, Russian Academy of Sciences, vol. 31. Moscow: KomKniga, 2007.
- Turchin, A.V. "Natural disasters and the anthropic principle." In Problems of Risk Management and Safety: Proceedings of the Institute of Systems Analysis, Russian Academy of Sciences, vol. 31. Moscow: KomKniga, 2007, pp. 306–332.
- Turchin, A.V. "The problem of sustainable development and the prospects for global catastrophes." Social Studies and the Present, 2010, no. 1, pp. 156–163.

Books in Russian:
- War and 25 Other Scenarios of the End of the World. Moscow: Europe, 2008.
- Structure of the Global Catastrophe. Moscow: URSS, 2010.
- Futurology (with Michael Batin). Moscow: Binom, 2012.

Other texts:
- Translation of articles about x-risks into Russian
- UFO as a global risk