21st Century Zeitgeist

Avoiding Existential Threats of Exponential Technologies: 3 Different Approaches

As exponentially growing technologies create ever-greater disruption in existing markets, the need to prepare for this transformation and to foresee the near-term future has become far more important for enterprises, governments, and individuals alike. More and more platforms are being launched to formulate models for solving the problems generated and spread globally by the Internet and the exponential technologies that ride on it, such as IoT, AI, blockchain, cybersecurity, robotics, 3D printing, biotech, AR/VR, and drones. Large enterprises and mid-market companies alike will certainly experience disruption, but that does not have to mean an interruption of their business. Amazon disrupted retail, Uber disrupted cabs and limos, and Twitter disrupted an entire presidential election. Disruption has societal implications, forcing businesses to change from the outside in to meet consumer needs and keep up with the competition.

The disruption of digitalism resembles the move from the Agricultural Revolution to the Industrial Revolution, says Geoffrey Moore, an organizational theorist and author of Crossing the Chasm. Just as new farming techniques enabled a larger and healthier population, the Digital Revolution is increasing business efficiency, easing communication, and giving consumers push-button gratification. There is thus a fine line between the two possible consequences of digital disruption: either you predict the trends, get prepared, and surf the wave, or you get dragged under it.

Digital disruption does not end at business transformation; it also raises far more critical issues. Humans using exponentially growing technologies against other humans is a huge challenge to tackle. AI, nuclear technology, digital biology, drones, and other critical technologies must be developed in safe hands, as they pose great danger in the wrong ones. For example, rapid developments in biotechnology and genetic engineering will pose novel risks and opportunities for humanity in the decades to come. Arms races or proliferation involving advanced bioweapons could pose existential risks to humanity, while advanced medical countermeasures could dramatically reduce those risks. Human enhancement technologies could radically change the human condition. Such examples can be multiplied across different technologies and domains. Today, several NGOs and institutions are trying to warn people about the coming dangers of exponential technologies and to develop ethical frameworks for their possible outcomes. In this article we will analyze the points of view of three such organizations.

The Future of Life Institute:

The Future of Life Institute (FLI) is a conglomerate of technology experts and other well-known figures working to ensure the safety of humanity's future against existential threats. You have most likely come across Elon Musk's collaboration with FLI in the news headlines; Musk warns the world about the misuse of technology, particularly AI. FLI is currently focused on keeping artificial intelligence beneficial, and it is also exploring ways of reducing risks from nuclear weapons and biotechnology. For the institute's leadership team, including Max Tegmark, an MIT physics professor known as "Mad Max" for his unorthodox ideas and passion for adventure, humans using technology against other humans is a consistent and recurring threat, with the nuclear arms race being both a historical example and a continuing danger. FLI's overall mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges.

FLI is based in the Boston area, and welcomes the participation of scientists, students, philanthropists, and others nearby and around the world.

Jaan Tallinn, a founder of the institute and co-founder of Skype, says: "We have technology to thank for all the ways in which today is better than the stone age, and technology is likely to keep improving at an accelerating pace. We are a charity and outreach organization working to ensure that tomorrow's most powerful technologies are beneficial for humanity." He adds that with less powerful technologies, such as fire, we learned to minimize risks largely by learning from mistakes. With more powerful technologies, such as nuclear weapons, synthetic biology, and future strong artificial intelligence, planning ahead is a better strategy than learning from mistakes, so FLI supports research and other efforts aimed at avoiding problems in the first place.

Singularity University:

Singularity University (SU) has chosen as its mission to prepare global leaders and organizations for the future, to explore the opportunities and implications of exponential technologies, and to connect them to a global ecosystem that is shaping the future and solving the world's most urgent problems. They call these problems the global grand challenges, spanning energy, food, healthcare, space, water, security, governance, prosperity, learning, and more.

The pitch of the founders Ray Kurzweil and Peter Diamandis was simple: Forget accredited graduate schools and think big at SU. Google co-founder Larry Page and futurist Ray Kurzweil could be among your lecturers in the Graduate Studies Program at Singularity, named for the notion that humans will someday merge with machines. You’d work in a kind of combination think tank and startup incubator, trying to address challenges as grand as renewable energy and space travel. Kurzweil announced the program during a TED Talk in 2009, adding that the Singularity team had leased its campus from NASA, just east of the agency’s historic Hangar One in Mountain View, Calif.

SU takes a slightly different approach from FLI. SU's programs and events equip executives with the mindset, tools, and resources to successfully navigate their transformational journey to the future. SU is powered by its world-class faculty, practitioners, and global network of alumni, partners, and impact startups. They transform how leaders think about the future and guide them in building the new capabilities needed to get there successfully. While explaining the future impacts of exponential technologies, SU faculty urge participants to use these technologies to solve the global grand challenges.

SU is a profit-oriented organization with large investors and shareholders, which is why there are serious concerns about the sincerity of its current operations and the strength of its bonds to its founding principles. It was founded on the premise that its enterprise solutions would enable organizations to better understand and anticipate the potential impact of exponential technologies and trends, and to take action. It was supposed to be a think tank. So far, however, SU has largely operated as an event-organization company, running executive programs that teach leadership and innovation models and tools. With guidance from its global network of experts, it mostly helps Fortune 500 organizations reinvent themselves to better navigate an uncertain future, rather than focusing on the potential threats and opportunities that exponential technologies pose for humanity itself.

Future of Humanity Institute:

The Oxford-based Future of Humanity Institute (FHI) brings a much more academic perspective to the mission of mitigating the existential threats posed by exponential technologies, and it also focuses heavily on regulatory change. FHI's big-picture research concentrates on the long-term consequences of our actions today, and on the complicated dynamics that are bound to shape our future in significant ways. A key aspect of this is the study of existential risks: events that endanger the survival of Earth-originating, intelligent life or that threaten to drastically and permanently destroy our potential for realising a valuable future. FHI's focus within this area lies in the capabilities and impacts of future technology (including the possibility and impact of Artificial General Intelligence, or "Superintelligence"), existential risk assessment, anthropics, population ethics, human enhancement ethics, game theory, and consideration of the Fermi paradox. Many of the core concepts and techniques within this field originate from research by FHI scholars, and they are already having a practical impact, for instance in the effective altruism movement.

FHI works closely with DeepMind and other leading actors in the development of artificial intelligence. In addition to working directly on the technical problem of safety in AI systems, FHI examines the broader strategic, ethical, and policy issues involved in reducing the risks of long-term developments in machine intelligence. Given that the actual development of AI systems is shaped by the strategic incentives of nations, firms, and individuals, FHI researches norms and institutions that might support the safe development of AI. For example, transparency about different parts of the AI research process shapes, in different ways, the incentives for making safety a priority in AI design.

As part of FHI’s work on AI, they participate as members of the Partnership on AI to advise industry and research partners and work with governments around the world on aspects of long-run AI policy. FHI has worked with or consulted for the UK Prime Minister’s Office, the United Nations, the World Bank, the Global Risk Register, and a handful of foreign ministries.

Besides AI, as a second focus, FHI's biotechnology research group conducts cutting-edge research on the impacts of advanced biotechnology on existential risk and the future of humanity. In addition to research, the group regularly advises policymakers: for example, FHI researchers have consulted with the US President's Council on Bioethics, the US National Academy of Sciences, the Global Risk Register, and the UK Synthetic Biology Leadership Council, as well as serving on the board of DARPA's Safe Genes programme and directing iGEM's safety and security system.

As you can deduce from the missions and current operations of the institutions above, they take different approaches toward the same goal. If we were to establish a similar institute in Turkey, what should its mission, organizational structure, core focus, and business model be? Perhaps BN (Başlangıç Noktası) could lead such an effort by establishing the necessary stakeholder relationships and guiding them toward common ground for this purpose.

