Wise Decision Maker Guide
Disaster Avoidance Experts

Greetings, Decision Maker,


According to a recent Monmouth University poll, 55% of Americans are worried about the threat AI poses to the future of humanity. In an era of breakneck technological advancement, it is crucial to keep artificial intelligence (AI) development in check. As AI-powered chatbots like ChatGPT become increasingly integrated into our daily lives, it is high time we address their potential legal and ethical implications.


And some have done so. A recent letter signed by Elon Musk (a co-founder of OpenAI), Steve Wozniak (the co-founder of Apple), and over 1,000 other AI experts and funders calls for a six-month pause on training new models. In turn, Time published an article by Eliezer Yudkowsky, the founder of the field of AI alignment, calling for a much more hard-line solution: a permanent global ban and international sanctions against any country pursuing AI research.


However, the problem with these proposals is that they require coordinating numerous stakeholders across a wide variety of companies and governments. This week I share a proposal that’s much more in line with our existing methods of reining in potentially threatening developments: legal liability.


To learn more, check out this blog.


Read Blog

Prefer video to text? See this video based on the blog:

#161: How to Rein in the AI Threat? Let the Lawyers Loose.

If you prefer audio, listen to this podcast based on the blog:

Podcast: How to Rein in the AI Threat? Let the Lawyers Loose.

Make Your Voice Heard

Vote in this LinkedIn poll to contribute to the conversation. I will use the responses to inform my articles in Harvard Business Review, Fortune, and Entrepreneur.

Poll: How has being remote, whether part-time or full-time, affected your career growth?

Your Testimonials


You and others who gain value from Disaster Avoidance Experts’ services and thought leadership occasionally share testimonials about your experience, such as the one below. You can read more testimonials here.

Photo of Harish Phadke

Dr. Gleb Tsipursky provided a truly outstanding virtual training on unconscious bias and future-proofing via emotional and social intelligence for the Reckitt North American Health Leadership Team. Exceeding our expectations, Dr. Gleb customized his groundbreaking, behavioral science-driven training content to integrate our initiatives, policies, and case studies at Reckitt, expertly targeting our evolving needs.


We are delighted to have met Dr. Gleb and look forward to future opportunities to keep working with him on a training series for the organization. I highly recommend him to anyone who wants to get a rapid grasp of highly relevant topics that influence human behavior in the prevailing challenging times.


Harish Phadke, Business Manager to President of North American Health at Reckitt

Schedule a Free Consultation

What's Up With Me


Worries about AI causing serious - even existential - risks can seem like an esoteric subject, but it’s not esoteric for me. In two weeks, I’ll be flying to Sacramento for the wedding of my close friend Max Harms, who works at the Machine Intelligence Research Institute, a think tank that researches AI safety with a focus on the existential dangers of AI. Through Max, I’ve learned a great deal about the real threat posed by advanced AI systems. My biggest realization was that advanced AI doesn’t have to be hostile to humans to present an existential risk: it’s not like The Terminator. Any advanced AI whose goals are not perfectly aligned with humanity’s will inevitably seek to sideline us to prevent us from blocking its goals - and the most reliable way to sideline humanity, of course, is to destroy it. And seemingly easy fixes don’t work; otherwise, incredibly smart people like my friend wouldn’t spend their lifetimes working on this issue.


One of the things that helped me grasp the seriousness of the AI threat was the science fiction series that Max wrote (outside of his work at MIRI). The Crystal Trilogy imagines how an artificial general intelligence might evolve, learn to interact with people, and attempt to evade humans’ best efforts to rein it in. The first book is titled Crystal Society, and you can read it here.


P.S. If you’re in Sacramento in early October, let me know - maybe we can meet up.

Book cover: Crystal Society by Max Harms

I’d love to get your feedback on what you found most useful about this edition of the “Wise Decision Maker Guide” - simply reply to this email.


Decisively Yours,

Dr. Gleb

Photo of Gleb Tsipursky

Dr. Gleb Tsipursky

CEO of Disaster Avoidance Experts

P.S. Are we connected on LinkedIn? If not, please add me.

Did you miss out on reading any of my bestselling books?

Book cover: Never Go With Your Gut
Book cover: The Blindspots Between Us
Book cover: Returning to the Office and Leading Hybrid and Remote Teams

Never Go With Your Gut: How Pioneering Leaders Make the Best Decisions and Avoid Business Disasters (Career Press, 2019)

The Blindspots Between Us: How to Overcome Unconscious Cognitive Bias and Build Better Relationships (New Harbinger, 2020)

Returning to the Office and Leading Hybrid and Remote Teams: A Manual on Benchmarking to Best Practices for Competitive Advantage (Intentional Insights, 2021)

Please forward this email to a colleague or friend who might find it helpful.

Protect yourself from decision disasters by signing up for the free Wise Decision Maker Course, which includes 8 weekly video-based modules.


Let's be safe! 👍
Please mark my email address resources@DisasterAvoidanceExperts.com as safe by following these guidelines to prevent my emails from accidentally going to spam.


Missed the last email? Read it here! 😅

Unsubscribe or Update Your Preferences

Disaster Avoidance Experts is a social enterprise dedicated to promoting science-based truth-seeking and wise decision-making. All profits are donated to Intentional Insights, an educational 501(c)(3) nonprofit organization, and its Pro-Truth Pledge project.

You're getting this email because you indicated that you wanted Dr. Gleb's resources.

