If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, Hardcover

Great Book Prices Store
(351284)
Registered as a professional seller
US $25.96
Approximately EUR 22.34
Condition:
Like New
2 available / 13 sold
Other users want this item. 21 users are watching it.
This item is popular. 13 have already sold.
Shipping:
Free USPS Media Mail™.
Located in: Jessup, Maryland, United States
Delivery:
Estimated between Tue, Oct 21 and Mon, Oct 27 to 94104
Estimated delivery dates include the seller's handling time, the origin ZIP code, the destination ZIP code, and the time of acceptance, and depend on the shipping service selected and on receipt of cleared payment. Delivery times may vary, especially during peak periods.
Returns:
14 days to return the item. Buyer pays for return shipping.
Payments:
    Diners Club

Shop with confidence

eBay Money Back Guarantee
If you don't receive the item you ordered, you get your money back. Learn more
The seller assumes all responsibility for this listing.
eBay item number: 357564680146
Last updated on Oct 08, 2025 22:44:13 (Spanish time)

Item specifics

Condition
Like New: A book in perfect condition that has barely been read. The cover has no defects and, where applicable, ...
Book Title
If Anyone Builds It, Everyone Dies : Why Superhuman AI Would Kill Us All
ISBN
9780316595643
Category

About this product

Product Identifiers

Publisher
Little Brown & Company
ISBN-10
0316595640
ISBN-13
9780316595643
eBay Product ID (ePID)
27075653312

Product Key Features

Number of Pages
272 Pages
Language
English
Publication Name
If Anyone Builds It, Everyone Dies : Why Superhuman AI Would Kill Us All
Publication Year
2025
Subject
Intelligence (AI) & Semantics, Public Policy / Science & Technology Policy
Type
Textbook
Author
Eliezer Yudkowsky, Nate Soares
Subject Area
Political Science, Computers
Format
Hardcover

Dimensions

Item Weight
16.4 oz
Item Length
9.6 in
Item Width
6.4 in

Additional Product Features

Intended Audience
Trade
Reviews
"The definitive book about how to take on 'humanity's final boss'--the hard-to-resist urge to develop superintelligent machines--and live to tell the tale."-- Jaan Tallinn, philanthropist, cofounder of the Center for the Study of Existential Risk, and cofounder of Skype, "A clarion call...Everyone with an interest in the future has a duty to read what [Yudkowsky] and Soares have to say."-- The Guardian, "A compelling introduction to the world's most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike."-- Scott Alexander, founder, Astral Codex Ten, "A serious book in every respect. In Yudkowsky and Soares's chilling analysis, a super-empowered AI will have no need for humanity and ample capacity to eliminate us. If Anyone Builds It, Everyone Dies is an eloquent and urgent plea for us to step back from the brink of self-annihilation."-- Fiona Hill, former senior director, White House National Security Council, "Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous."-- Emmett Shear, former interim CEO of OpenAI, " If Anyone Builds It, Everyone Dies isn't just a wake-up call; it's a fire alarm ringing with clarity and urgency. Yudkowsky and Soares pull no punches: unchecked superhuman AI poses an existential threat. It's a sobering reminder that humanity's future depends on what we do right now."-- Mark Ruffalo, actor, "Essential reading for policymakers, journalists, researchers, and the general public. A masterfully written and groundbreaking text, If Anyone Builds It, Everyone Dies provides an important starting point for discussing AI at all levels."-- Bart Selman, professor of computer science, Cornell University, "Once only the realm of sci-fi, superintelligence is almost at our doorstep. We don't know for sure what is going to happen when it arrives, but I'm glad we at least have this book raising the tough questions that needs to be asked while the rest of the industry buries its head in the sand."-- Liv Boeree, philanthropist and poker champion, "Everyone should read this book. There's a 70% chance that you--yes, you reading this right now--will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance."-- Daniel Kokotajlo, AI Futures Project, " If Anyone Builds It, Everyone Dies is an urgent, well-reported and persuasive warning about the grave danger humanity faces from reckless AI development."-- Alex Winter, actor and filmmaker, "Fascinating and downright frightening...argues that AI companies' unchecked charge toward superhuman AI will be disastrous, lays out some theoretical scenarios detailing how it could lead to our annihilation and suggests what might be done to change our doomed trajectory...[Yudkowsky and Soares] make a pretty convincing case that we are playing with fire."-- AARP, "[Yudkowsky and Soares's] diagnosis of AI's potential pitfalls evinces a sustained engagement with the subject...they have a commendable willingness to call BS on big Silicon Valley names, accusing Elon Musk and Yann LeCun, Meta AI's chief scientist, of downplaying real risks."-- San Francisco Chronicle, "The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it. 
Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster."-- Stephen Fry, "A compelling introduction to the world's most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike."-- Scott Alexander, creator, Astral Codex Ten, "Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong."-- Huw Price, Bertrand Russell Professor Emeritus, Trinity College, Cambridge, "If you want to be able to assess the risk posed by AI, you will need to understand the worst-case scenario. This book is an exceptionally lucid and rigorous account of how very wrong humankind's quest for a general AI could go. You have been warned!"-- Christopher Clark, Regius Professor of History, University of Cambridge, "The best no-nonsense, simple explanation of the AI risk problem I've ever read."-- Yishan Wong, Former CEO of Reddit, "A fire alarm for anyone shaping the future. Whether one agrees with its conclusions or not, the book demands serious attention and reflection."-- Booklist (starred review), "The authors present in clear and simple terms the dangers inherent in 'superintelligent' artificial brains that are 'grown, not crafted' by computer scientists. A quick and worthwhile read for anyone who wants to understand and participate in the ongoing debate about whether and how to regulate AI."-- Joan Feigenbaum, Grace Murray Hopper Professor of Computer Science, Yale University, "You will feel actual emotions when you read this book. We are currently living in the last period of history where we are the dominant species. Humans are lucky to have Soares and Yudkowsky in our corner, reminding us not to waste the brief window of time that we have to make decisions about our future in light of this fact."-- Grimes, musician, "This book outlines a thought-provoking scenario of how the emerging risks of AI could drastically transform the world. Exploring these possibilities helps surface critical risks and questions we cannot collectively afford to overlook."-- Yoshua Bengio, Full Professor, Université de Montréal; Co-President and Scientific Director, LawZero; Founder and Scientific Advisor, Mila - Quebec AI Institute, "Everyone should read this book. There's a 70% chance that you--yes, you reading this right now--will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance."-- Daniel Kokotajlo, OpenAI whistleblower and executive director, AI Futures Project, "A shocking book that captures the insanity and hubris of efforts to create thinking machines that could kill us all. But it's not over yet. As the authors insist: 'where there's life, there's hope.'"-- Dorothy Sue Cobble, Distinguished Professor Emerita, Labor Studies, Rutgers University, "A stark and urgent warning delivered with credibility, clarity, and conviction, this provocative book challenges technologists, policymakers, and citizens alike to confront the existential risks of artificial intelligence before it's too late. 
Essential reading for anyone who cares about the future."-- Emma Sky, senior fellow, Yale Jackson School of Global Affairs, " If Anyone Builds It, Everyone Dies is a sharp and sobering read. As someone who has spent years pushing for responsible AI policy, I found it to be an essential warning about what's at stake if we get this wrong. Yudkowsky and Soares make the case with clarity, urgency, and heart."-- Joely Fisher, National Secretary-Treasurer, SAG-AFTRA, "The most important book of the decade. This captivating page-turner, from two of today's clearest thinkers, reveals that the competition to build smarter-than-human machines isn't an arms race but a suicide race, fueled by wishful thinking."-- Max Tegmark, author of Life 3.0: Being Human in the Age of AI, "If we build superintelligent machines without guardrails, we're not just risking jobs or art, we're risking everything. This book doesn't exaggerate. It tells the truth. If we don't act, we may not get another chance."-- Frances Fisher, actor, "The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster."-- Stephen Fry, actor, "A.I. is coming, whether we want it or not. It's too late to stop it, but not too late to keep this handy survival guide close and start demanding real guardrails before the edges start to fray."-- Patton Oswalt, actor, "An incredibly serious issue that merits -- really demands -- our attention. You don't have to agree with the prediction or prescriptions in this book, nor do you have to be tech or AI savvy, to find it fascinating, accessible, and thought-provoking."-- Suzanne Spaulding, former undersecretary, Department of Homeland Security, " If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can."-- Tim Urban, cofounder, Wait But Why, " If Anyone Builds It, Everyone Dies makes a compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action."-- Jon Wolfsthal, former special assistant to the president for national security affairs
Synopsis
INSTANT NEW YORK TIMES BESTSELLER

The scramble to create superhuman AI has put us on the path to extinction--but it's not too late to change course, as two of the field's earliest researchers explain in this clarion call for humanity.

"May prove to be the most important book of our time."--Tim Urban, Wait But Why

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.

For decades, two signatories of that letter--Eliezer Yudkowsky and Nate Soares--have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us--and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.

How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.

The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.

"The best no-nonsense, simple explanation of the AI risk problem I've ever read."--Yishan Wong, former CEO of Reddit

Item description from the seller

Business seller information

I certify that all my selling activities will comply with all EU laws and regulations.
About this seller

Great Book Prices Store

97.5% positive feedback / 1.4 million items sold

Joined Feb 2017
Usually responds within 24 hours
Registered as a professional seller

Detailed seller ratings

Average over the last 12 months
Accurate description: 4.9
Reasonable shipping cost: 5.0
Shipping speed: 5.0
Communication: 4.9

Seller feedback (397,864)

    • e***s (1004) - Feedback left by buyer.
      Past month
      Verified purchase
      The book itself was fine, but the contents were a little over my head.
    • 9***s (635) - Feedback left by buyer.
      Past month
      Verified purchase
      Good seller, fast shipping
    • eBay automated feedback - Feedback left by buyer.
      Past month
      Order delivered on time with no issues
    See all feedback