Participate in the Evolution of Intelligence

By Glen Sharp, owner of Sharp Innovation Solutions

Summary (TLDR)

Developing intelligence is too important to delegate entirely to our AI systems or to a few “experts”. In the upcoming Age of Intelligence, broad participation is possible, and essential for our success.

In his book Average is Over, Tyler Cowen forecasts that modern economies are delaminating into two groups: a small minority of highly educated people who can work collaboratively with automated systems will become a wealthy aristocracy, while the vast majority will earn little or nothing, living in shantytowns and surviving on the low-priced goods the first group creates with highly automated production systems.

With so much at stake in the evolution of intelligence, it is important that power be distributed so all of society can prosper. A critical example from our own time is the set of principles behind the open web.

“The surest way to deal with the age-old problem of the corrosive nature of state power is to create a society of insightful and healthy minds, a citizenry that is strong, happy, and free—especially free from the fear of not having power and the fear of losing power.”

– Thich Nhat Hanh

In this essay, read “concentrated elite” in place of “state”, because the issue extends beyond the power of nation-states to other concentrations of power such as global corporations.

Broad participation in the development of intelligence is necessary, and those who wish to participate need suggestions on how to do so. Humans have made a lot of progress in developing our intelligence even though we still can’t adequately define all its dimensions. If managed properly, this progress will continue to accelerate, as there is almost unlimited potential still to be realized. At the same time, complex and powerful new capabilities enabled by AI, combined with other advanced technologies, pose the risk of a catastrophic disaster. We need to promote understanding of our tools and learn to manage the risks.

Artificial Intelligence trends have produced many dystopian scenarios, especially from the idea that our tools will independently evolve to replace human intelligence. Intelligence Augmentation, where human and artificial intelligence evolve synergistically, is a better way to gain experience and manage risks. It could also be the way to reach optimum potential while providing the most benefits to society. Intelligence Augmentation requires a broadly educated society using new education paradigms for nurturing all types of intelligence. A critical success factor is having diverse and broadly based intelligence sources (e.g. people and Artificial Narrow Intelligence, ANI) actively participate in decision-making for our better future. This article is an introduction to what I expect to be a series of articles encouraging participation in the evolution of intelligence.

Figure 1 Human Evolution and Development of Tools

Human Evolution

We are the most intelligent animal on the planet, but we are becoming aware of the limitations of our species’ thinking. Most people are at least somewhat familiar with human evolution and the history of the development of tools. The real question is: what lessons have we learned and applied in the development of our species and culture? We have undoubtedly made tremendous progress in some areas by beginning to solve age-old problems such as famine, plague, and war. This has allowed us to rise above mere survival, but even our greatest information technology inventions have held us back through their limited capabilities, especially in how we interface with them. As suggested in Figure 1, our tools have in some ways made us less human as we progressed through the industrial age and into the early days of the information technology revolution. Working conditions have been unhealthy, and our ways of communicating with our tools have been limited and unnatural (e.g. keyboard data entry, especially the QWERTY layout).

Types of Intelligence

We are entering a new phase of our evolution, which could justifiably be called the Age of Intelligence. Figure 2 shows the types of intelligence that will be covered in the sections to follow and how they relate to intelligence augmentation, which is being proposed as an area of focus.

Figure 2 Types of Intelligence and how they relate to Intelligence Augmentation

Human Intelligence

The definition of intelligence is controversial because there are so many competing definitions.[5] One group of psychologists suggested the following definition:

From “Mainstream Science on Intelligence” (1994), an op-ed statement in the Wall Street Journal signed by fifty-two researchers (out of 131 total invited to sign):[6]

“A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—”catching on,” “making sense” of things, or “figuring out” what to do.”

It has long been known that intelligence is not restricted to what can be quantified by a measurement like the Intelligence Quotient. While there is no universally agreed-upon definition of intelligence, there does seem to be a consensus that there are many types of intelligence. Examples of aspects of intelligence being considered include:

  • Logical
  • Emotional
    (Emotion and caring as a human strength is a recurring Star Trek cliché, but it is very important nonetheless.)
  • Social
    (Current social networks, while heavily used, have created very undesirable societal impacts.)
  • Mathematical
  • Visualization
  • Pattern recognition
  • …[more to follow]

Clearly, this is an active area of research with much more to be documented and discovered. More to come in future articles.

Contrary to what some people might conclude when reading about how ancient the different parts of the human brain are, human intelligence in the broader context is still evolving through our inventions. An old but monumental example is the invention of the printing press, which created records of our learning for future generations. We have been incredibly enriched by the ability to access the thoughts of geniuses from hundreds of years ago that are still relevant today.

A key idea here is that humans possess unique attributes, or types of intelligence, that will be difficult for computers to replicate completely for some time. I plan to write more about this subject. If you want some examples as they relate to the future of education for the age of acceleration, check out Leveraged Learning. In chapter 2 of this book, Danny Iny explores how our world is changing (think of examples like artificial intelligence, automation, intelligent appliances, and driverless cars) and what education must offer in order for us to stay relevant.

Brain research is in the early stages of development and there is much that is not yet understood.

In the social sciences, new discoveries are being made every day.

In psychology, for example, there was a long history of focusing only on abnormalities. Positive psychology, “the scientific study of positive human functioning and flourishing on multiple levels that include the biological, personal, relational, institutional, cultural, and global dimensions of life”, was only introduced in 1998, so it has existed for less than 25 years.

One advancement in psychology that is particularly relevant to the development of intelligence comes from Carol Dweck, a Stanford University psychology professor, who defined fixed and growth mindsets in a 2012 interview:

In a fixed mindset, students believe their basic abilities, their intelligence, and their talents, are just fixed traits. They have a certain amount and that’s that, and then their goal becomes to look smart all the time and never look dumb. In a growth mindset, students understand that their talents and abilities can be developed through effort, good teaching and persistence. They don’t necessarily think everyone’s the same or anyone can be Einstein, but they believe everyone can get smarter if they work at it and use tools to augment their capabilities.

Development of intelligence should help us make better decisions for the long-term benefit of society as a whole. Never underestimate or give up on human intelligence, especially your own.

Artificial Intelligence

There are many different types or forms of AI, since AI is a broad concept. The Wait But Why blog post identifies three major AI calibre categories:

1) Artificial Narrow Intelligence (ANI): 

Sometimes referred to as Weak AI, Artificial Narrow Intelligence is an AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.

2) Artificial General Intelligence (AGI): 

Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board: a machine that can perform any intellectual task that a human being can. We could consider this an extension of the Turing test, developed by Alan Turing in 1950, which tests a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator judge natural language conversations between a human and a machine designed to generate human-like responses (a toy sketch of this protocol appears after these definitions). Creating AGI is a much harder task than creating ANI, and it is generally agreed that it has not yet been accomplished. In this context, Professor Linda Gottfredson describes general intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.”

3) Artificial Superintelligence (ASI): 

Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence could range from a computer that’s just a little smarter than a human to one that could be many times smarter—across the board. Speculation about ASI is the reason AI’s future becomes controversial.
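Returning to the Turing test mentioned under AGI: the protocol itself is simple enough to sketch in a few lines. Below is a toy, self-contained sketch of the imitation game; the respondent and judge functions are hypothetical placeholders, not any real chatbot API.

```python
# A toy sketch of the imitation game protocol. The respondent and judge
# functions are hypothetical placeholders, not any real chatbot API.
import random

def respondent_human(question: str) -> str:
    # A real person would answer here; input() stands in for them.
    return input(f"(hidden respondent) {question}\n> ")

def respondent_machine(question: str) -> str:
    # Placeholder for any conversational program.
    return "That is an interesting question."

def imitation_game(questions, judge) -> bool:
    """The judge reads two anonymous transcripts and guesses which
    respondent is the machine. Returns True if the guess is correct."""
    respondents = [respondent_human, respondent_machine]
    random.shuffle(respondents)  # hide which respondent is which
    transcripts = [[(q, r(q)) for q in questions] for r in respondents]
    guess = judge(transcripts)   # judge returns index 0 or 1
    return respondents[guess] is respondent_machine

# e.g. imitation_game(["Do you ever dream?"], judge=lambda ts: 0)
```

The point of the protocol is that the judge sees only the conversations: if machine answers are reliably indistinguishable from human ones, the machine passes.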

Up to now, humans have developed only the lowest calibre of AI, ANI, but we have achieved it in several ways, in games such as chess, Go, and Jeopardy. ANI is becoming more common as a component of applications that are used every day. The AI Revolution has been portrayed as the road from ANI, through AGI, to ASI, a road that could create massive change.

The narrative around accomplishments like ANI outperforming human grandmasters in games like chess suggests that humans are well on their way to being replaced, but that is only part of the story, and in isolation it is misleading. In freestyle chess tournaments, humans working with computers (augmented intelligence) have outperformed the best artificial intelligence playing without human assistance. However, this hasn’t been publicized nearly as much as the more sensational news that humans are being superseded.

In actuality, a new form of chess competition with multiple combinations of intelligence has emerged. Recent results would rank the teams like this:

  1. A chess grandmaster is good; 
  2. A top ANI computer program is better;
  3. A chess grandmaster playing with a laptop could be even better;
  4. But even that laptop-equipped grandmaster can be beaten by relative newbies if the amateurs are extremely skilled at integrating machine assistance. 

“Human strategic guidance combined with the tactical acuity of a computer,” concluded Garry Kasparov, a long-reigning world chess champion (1985-1993), “was overwhelming.” Better yet, it turned out these smart amateurs could even outplay a supercomputer on the level of Deep Blue. In a freestyle chess tournament open to all forms of intelligence, human entrants with computer skills trounced a version of Hydra, the most powerful chess computer in existence at the time; indeed, it was probably faster and stronger than Deep Blue itself. Hydra’s owners let it play entirely by itself, using raw logic and speed to fight its opponents. A few days after the freestyle event, Hydra destroyed the world’s seventh-ranked grandmaster in a man-versus-machine chess tournament. But the amateurs with advanced computer skills, Cramton and Stephen, beat Hydra. They did it using their own talents and regular old Dell and Hewlett-Packard computers, of the type you probably had sitting on your desk in 2005, with software you could buy for sixty dollars.

All of which brings us back to our original question: which is smarter at chess, humans or computers? Neither. It is the two together, augmented intelligence working side by side, that achieves the best results.
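To make this “centaur” division of labour concrete, here is a minimal, self-contained sketch; the scoring and strategy functions are hypothetical stand-ins for a real chess engine and a real player’s judgment, not an actual engine API.

```python
# A toy "centaur" decision loop: the machine scores candidate moves
# tactically, and the human applies strategic judgment to the shortlist.
# Both functions are hypothetical stand-ins, not a real chess engine API.

def machine_scores(candidates):
    """ANI stand-in: give each candidate move a tactical score (higher is better)."""
    return {move: len(move) % 5 for move in candidates}  # dummy heuristic

def human_strategy(shortlist):
    """Human stand-in: choose the move that best fits the long-term plan."""
    return min(shortlist)  # dummy preference

def centaur_move(candidates, top_k=3):
    scores = machine_scores(candidates)
    # The machine narrows the search to the k strongest tactical options...
    shortlist = sorted(candidates, key=lambda m: scores[m], reverse=True)[:top_k]
    # ...and the human chooses among them on strategic grounds.
    return human_strategy(shortlist)

print(centaur_move(["e4", "d4", "c4", "Nf3", "g3"]))
```

The design point is the hand-off: the machine prunes the options it is best at evaluating, and the human decides among the survivors.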

However impressed we have been by recent progress in artificial intelligence, and however excited by the potential of our new information technology capabilities, we need to be equally excited about, and invested in, progress on the critical success factors for developing a modern, safe, intelligent society.

I don’t think the right model is the defeatist attitude that the age of humans will soon be over and that we should sit back and let automated information systems and AI-powered robotic systems operate without collaboration or intervention. We have yet to explore the synergy that is possible when all forms of intelligence develop and work together.

The original paper that predicted a technological singularity, Vernor Vinge’s 1993 essay that started the dire predictions of the end of the human era, actually included four possibilities, of which three were alternatives to ASI:

  • The development of computers that are “awake” and superhumanly intelligent (ASI).
  • Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
  • Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
  • Biological science may find ways to improve upon the natural human intellect.

Given that technology can produce beneficial or disastrous consequences, it is important to apply ethical considerations to create a future we all want to live in. Who is going to benefit from these different scenarios, and who will be marginalized or exploited? The impact on society as a whole can’t be an afterthought.

There are a lot of dystopian views on the ways AI could go wrong, but more could be done to show people how to take action to create the alternative future, one where AI is part of the solution to human progress and happiness.

Do we have an adequate understanding of the possible risks before we proceed with some critical applications? Where can we reduce the risk by experiential learning on what is prudent to delegate and how?

What roles should we have between ourselves, our social systems, and our tools? What parallel development needs to take place to keep things in balance and reduce risk?

Of course, there are types of intelligence that computers can already do better than humans. Some of them are obvious and some are not. More on that in upcoming articles. Even for applications that are the best fit for automation, whether it is appropriate for these to be done autonomously depends on the scope of the solution and the potential adverse effects.

An example of using artificial intelligence and robotics in a bad way is a Tokyo department store project that implemented a robot receptionist. While this project might bring some public relations attention to the company for doing something different, it is very difficult for a robot to emulate a human well. It can also be creepy, potentially manipulative, and deceptive. Why not just use a genuine human receptionist to provide a caring human touch? This seems like technology development for its own sake, applied to a problem for which it is not appropriate. When the full lifetime costs are considered, are there any benefits? And even if there are, what is lost by taking an inhuman approach?

Another example is a calling assistant that Google is trying to make sound more natural so people won’t be able to recognize that it is automated. Is this really necessary to provide the function it is designed for, or would it be better for people to know what type of system they are interacting with?

Yet another bad example of combining human and machine, in my opinion, is the current approach to the development of autonomous vehicles. The scenario is that autonomous vehicles are getting better and can operate a high percentage of the time without human intervention. In some illegal trials by Uber in San Francisco (they decided they didn’t need to get permits for their trial), a human is along for the ride and is expected to intervene in the edge cases where the system needs help. This sounds like a recipe for failure to me, because people do not respond well when required to intervene suddenly at high speed after being lulled into a state of complacency and boredom. This type of design uses humans as gap fillers for situations they are ill-suited to handle, and they are likely to become scapegoats when a failure occurs. Classifying such a failure as operator error might be a way to deflect questions about how the autonomous ANI was designed. It was made worse by Uber flouting legal regulations because they believe those don’t apply to the future they are creating. This involves two bad things:

1. Inappropriate roles for humans and machines, and

2. Bypassing governance regulations because they put limits on experimentation and so-called “progress”.

In Canada, it has been an issue that tech companies like Uber and Facebook have sometimes not cooperated with the government, because they seem to believe they don’t need to cooperate with, and shouldn’t be governed by, regulations and laws. For example, Facebook’s leaders recently ignored a subpoena to attend a Canadian-organized international conference on social network regulation. One of Facebook’s lawyers has also argued that Facebook users have no expectation of privacy, so Facebook cannot be charged with breaching privacy laws.

One rationale used against regulation is that regulators aren’t respected because they don’t necessarily understand the technology as well as the technocrats do. That needs to be turned around by making companies understand that part of their obligation to society is educating themselves about possible societal impacts, educating their customers so they can make well-informed decisions, and educating regulators so governance can be effective. Everyone, from the general public to all the stakeholders, needs to be properly informed to participate effectively in decision making.

The complexity of AI (or of any other technology) also shouldn’t be used as an excuse. A good counter-example is provided by IBM’s approach to quantum computing, which is at least as difficult to understand as AI. Their quantum computing website takes a crowdsourcing approach to engaging broad participation in quantum computing applications, including open communication and free access to run experiments on a quantum computer. What I particularly like is a series of videos explaining quantum computing at several levels:

  • A child
  • A teen
  • A college student
  • A grad student in computer science
  • A professional in the field of quantum computing

This approach facilitates the involvement of a wide range of perspectives that potentially could contribute to the success of this new paradigm in computing.

Intelligence Augmentation

As we saw in the human-computer chess example, intelligence augmentation is a powerful approach to the development of intelligence, one that includes incremental automation. It combines human intelligence with specific types of specialized artificial intelligence (aka ANI), as shown in Figure 2.

Humans and computers need to continue learning to work together. We need to synergize the evolution and collaboration of human and computer intelligence. As noted earlier, some AI practitioners dismiss this as “weak AI” because their vision is that AI should do it all without human intervention. Human-machine collaboration and synergy seem more practical and safe if you consider the limitations of either/or scenarios and the societal impact of humans not having a productive role.

We are in the early stages of developing more capable interfaces through which humans can interact with computers more effectively: gestures, voice, augmented reality, virtual reality, and many more enhancements that can be combined. We are just beginning to enter a stage where humans can interact humanely and naturally with computers that don’t impose their limitations on us. In my opinion, it is premature and unwise to skip over this stage and move directly to fully autonomous machines as a general objective for all types of systems, especially sensitive, risky ones. There is much to be learned by developing expertise in human-machine interaction, collaboration, and cooperation before investing too heavily in total automation that excludes humans.

Elon Musk’s Neuralink project is studying brain interfaces for integrated human-machine synergy. These could have performance benefits over other types of interfaces, but the interface might not be the most important bottleneck that needs to be solved for advanced collaboration between humans and AI-driven machines. This type of approach also raises privacy issues (the dangers of mind reading, anyone?) in what is already a highly surveilled society. We need to get good at dealing with the simpler forms of privacy invasion before creating scenarios where even more sinister types of invasion are enabled.

Whether intelligent systems that include both humans and computers are tightly or loosely coupled, and how exactly that coupling manifests itself, is only one of many options; it is not representative of augmented intelligence as a whole.

In my view, intelligence augmentation, in which human and computer intelligence advance together in complementary ways, is a worthwhile vision of the future.

Security to Avoid Catastrophic Disaster

We need to control the evolution of intelligence, and security and privacy are the disciplines that deal with control.

Security is the development of intelligence focused on managing risk: enabling benefits while minimizing abuse. In the most extreme instances, security is intended to protect us against threats to our survival as a species. Security should play a big role in the evolution of intelligence.

Noted security analyst Bruce Schneier has explained that the evolution of our society is influenced at three levels:

  1. Technology
    Technology is an enabler of new capabilities that can be used in good or evil ways; the technology itself is neutral. We should not irresponsibly enable use cases simply because we are curious and we can. We need to protect against bad use cases in order to safely enjoy the benefits of good ones.
  2. Business
    Businesses determine what product and service options people have, and at what price. New social norms are set by business models, which describe the mechanisms by which businesses achieve a profit.
  3. Governance and Law
    Governance is about establishing rules and laws in the best interest of the domain being governed. Governance is done primarily locally, by region, and by country. In relatively rare instances the scope of governance is global, but progress has been made in this direction.

Our surveillance society is increasingly based on technologically enabled, large-scale data gathering that feeds artificial intelligence engines. The questions that need to be answered are:

  • Who are the real customers?
  • What (Who) is the product?
  • Who is benefiting?
  • Who is being manipulated, exploited, or left out?
  • What is in the best interests of society as a whole in the long term?

I am advocating that we see these influences as reflections of the evolution of multiple dimensions of intelligence, for which we need to develop competencies.

Applying these questions to Facebook and Google, which have business models based on advertising revenue and are two of the largest companies involved in intelligence research:

  • Who are the real customers?
    Businesses wanting potential customer data to sell their products more effectively.
  • What (Who) is the product?
    The private data of the general population.
  • Who is benefiting?
    Businesses.
  • Who is being manipulated, exploited, or left out?
    The general population.
  • What is in the best interests of society as a whole in the long term?
    Privacy of data, so people can make informed choices and not be manipulated.

The ultimate measure of success for security (and other) policies is whether they change human behaviour from what is undesired to what is desired. Critical success factors are:

  • Who decides on the policies?
  • How are they enforced? 
  • How is monitoring accomplished?
  • How effectively can interventions be done?
  • What processes do we have to test against unintended consequences?

Trust is another fundamental security concept that is relevant to the evolution of intelligence. 

There need to be open AI standards using principles that increase trust.

A fundamental security principle, developed for cryptography in 1883, can also be applied to AI. Auguste Kerckhoffs’s principle states:

A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.

The idea is that proprietary systems (security by obscurity) are less robust and trustworthy than open standards that can be audited by a large community of independent geeks who like to solve complex puzzles. Crowdsourced research and testing is a good way to determine whether a technology has flaws or previously unforeseen issues.
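As a small illustration of the principle in practice, here is a minimal sketch using the open-source Python cryptography package: the Fernet recipe it implements is openly specified and widely audited, and the security of the message rests entirely on the secrecy of the key.

```python
# Kerckhoffs's principle in practice: the algorithm is public and audited;
# security rests entirely on keeping the key secret.
# Requires the open-source "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # the only secret in the whole system
cipher = Fernet(key)         # the Fernet recipe itself is openly specified

token = cipher.encrypt(b"meet at dawn")
print(cipher.decrypt(token))  # b'meet at dawn': anyone with the key can
                              # decrypt; no one without it can
```

Because the algorithm is public, thousands of independent reviewers can probe it for flaws, which is exactly the kind of scrutiny an inscrutable proprietary system never receives.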

There are many examples (to be provided in future articles) of systems that have failed because they were no longer trusted. In this context, the development of inscrutable AI implementations, whose algorithms and operations are not well explained or understood, does not bode well for their future. Such implementations may inevitably fail, and it could be very difficult to fix faults before extensive damage is done. They may never recover their reputation, and fixing faults this way is very expensive, both financially and in total societal costs.

Powerful Capabilities Create New Risks

 How did we get into this predicament? How do we use the new technological capabilities we are developing for the greater good and avoid disaster? 

Let’s take two examples:

  1. The Chinese Social Credit policy rewards or punishes citizens based on face-ID tracking, Internet information monitoring, and other tracking of behaviour. In Chinese society, what the western world considers fundamental human rights (e.g. freedom of movement) are privileges the state controls. The system isn’t fully automated yet, but it provides a view of the implications when there is no privacy and big data can feed AI decisions on rewards and punishments at a minute level. AI can be considered a major component of the latest arms race between the US and Chinese superpowers.
  2. Elimination of jobs
    There are many predictions that AI could eliminate the jobs of large segments of society, with little chance of those people finding suitable replacement employment. Others have faith that new types of jobs will be created, but with no idea whether the quantity will be sufficient to replace the lost jobs. There will be radical disruption, because the new jobs are likely to require different qualifications and types of education. This could have major effects on decision making, distribution of wealth, what people will do with idle time, mental health, and so on. Mental health is already a growing problem in our affluent society. How much worse would it get if a large majority of society had no employment and lost their sense of purpose?

Powerful new technology creates new risks because the need for control against abuse becomes more crucial. Previous examples, like nuclear bombs and the impacts of genetic modification, are simple compared to an Internet of Things (IoT): a network embedded with intelligence that could be AI controlled. Security history has shown that the cross impacts between separately designed systems are the most dangerous. This is further complicated if little-understood machine learning is in control of ongoing revisions of decisions.

Today the top organizations in AI development are companies like Google, Facebook, Amazon, Microsoft, and Apple. Google and Facebook are facing privacy issues (partly due to the interests served by their business models), and there are demands by governments for increased regulation to control their monopolistic power. Apple should be commended for recognizing, designing for, and acting upon the importance of privacy. Their weakness, however, may be their culture of secrecy, because open trust standards may become required for success in the Age of Intelligence.

Governance and regulation will need to step up to provide limits to regulate business practices for the public good.

Disruption of Education

Education is presumably about developing our intelligence to make better decisions to create a better life. It should be about more than “reading, riting, and rithmetic”. It is a commonly expressed idea that some of the most important types of intelligence for life success are not even covered in school.

Education has evolved from generalized learning for self-improvement to being heavily oriented towards occupational training. It was before my time that a general university education was proof of the diligence and intelligence needed to qualify for almost any type of better job. In high school my passion was physics, but for the career I wanted it was more prudent to get a degree in electrical engineering and computer science with a minor in organizational behaviour. Now the pace of change, and the new types of requirements emerging from AI impacts, are making obsolete the old education model of institutions providing a degree for a job. For a variety of reasons, traditional education institutions may not be able to adapt to the realities of the new environment. Eventually, jobs themselves may become obsolete, or at least rare. Vocational training at the beginning of a career is being replaced by the need for lifelong learning from new sources.

Perhaps education can evolve to include all types of learning, both human and machine. Humanities subjects may be recognized in the future as even more relevant to society than previously considered, when Asimov’s rules for robots are found to be incomplete. People, and computer deep learning, will need to deal with a changing mix of subjects previously considered unrelated. Understanding and building accountability and intervention mechanisms for machine learning is sure to be a growing subject.

We will also need to learn from case studies such as these examples:

  • The Microsoft chatbot (Tay) that users turned evil
  • Biased algorithms that discriminate and create momentum towards undesirable futures (a minimal illustration follows)
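Here is that illustration: a toy sketch, with invented data, of how bias enters through the training data rather than the code. The model below simply reproduces a skewed historical hiring pattern; the data, feature names, and scenario are all hypothetical.

```python
# A toy illustration of algorithmic bias: the model faithfully learns a
# historically skewed decision rule from made-up data.
# Hypothetical data; uses scikit-learn (pip install scikit-learn).
from sklearn.linear_model import LogisticRegression

# Features: [qualification_score, group], where group is 0 or 1.
# In this invented history, group 1 was rarely hired even when qualified.
X = [[9, 0], [8, 0], [4, 0], [3, 0], [9, 1], [8, 1], [4, 1], [3, 1]]
y = [1, 1, 0, 0, 0, 1, 0, 0]  # past hiring decisions

model = LogisticRegression().fit(X, y)

# Two equally qualified candidates, differing only in group membership:
print(model.predict_proba([[9, 0]])[0][1])  # higher hiring probability
print(model.predict_proba([[9, 1]])[0][1])  # lower, purely due to "group"
```

Nothing in the code “intends” to discriminate; the momentum towards an undesirable future comes entirely from the history the model was trained on.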

Reconsideration of Fundamental Models

Some of the fundamental models that have been essential to our society’s progress are getting old, and the circumstances on which they were originally based may no longer apply in the new realities of the modern world.

Concepts like:

  • Scheduling of time
  • Jobs
  • Money and wealth distribution
  • Growth
  • Warfare
  • Democracy
  • Power

Each of these topics is too large to be discussed here, but they are mentioned as examples of fertile ground for new and broader applications of intelligence. If our fundamental concepts and the rules for our institutions are based on theories that need review and updating, how can we properly progress without doing that? The work is never-ending, but we need to build in flexibility for rapid change, given that our theories will be subject to ever more rapid revision. We need forums where intelligent discussions can be held with an openness to new ideas and revisions. This is especially crucial as we enter a stage of history where we are looking to build algorithms on these concepts and automate them to an unprecedented extent. As just one example, behavioural economics is providing new insights and enabling new types of research that show how naïve and oversimplified previous theories about economic decisions really were.

The evolution of intelligence requires that our models, algorithms, and concepts evolve too. Planning and being proactive are critical to our success, because debugging flaws in the execution phase is already too late when chain reactions propagate so fast and the consequences can be so extreme. Previous assumptions need to be reconsidered, and results-based analysis needs to be applied, because the stakes involved in possible mistakes are higher than they have ever been and will continue to rise.

Let’s use decision-making as an example. High technology is enabling the gaming of our political and economic systems, and concentrations of power are destabilizing society as a whole. There are known flaws in our democratic rules that are being exploited, yet these critical institutions have not kept up with the pace of change and the challenges facing them.

Predictive models are part of the wisdom we need to prosper with newly developed superpowers, which must be regulated for the benefit of humankind.

Prototyping and Scenario Testing

While it doesn’t seem that way, we are still in the early stages of the information technology and social network revolutions. Software development is still very primitive compared to the advances that can be expected, and that are required, over the next 50 years.

Security testing is a new domain of the security profession’s Common Body of Knowledge, introduced in 2015. It is an immature yet urgently needed area of development for the safe evolution of intelligence.

Considering the possible catastrophic results of failure when mistakes are made with powerful new technologies (new forms of intelligence as they are applied to information technology, genetics, surveillance, nuclear energy, etc.), we need to get a whole lot better at:

  • Governance
  • Policy
  • Privacy
  • Scenario analysis
  • Prototyping and system development
  • Control and oversight systems
  • Testing

Experience has shown that new technologies introduce unforeseen consequences and are used in unforeseen ways. We need to expect and plan for that.

Considerable imagination and foresight about human and computer tendencies, combined with greater monitoring, are needed so we can respond quickly.

While the concept of prototyping before production is well established, a radical improvement in both the quality and the extent of prototyping is essential.

A couple of examples are worth highlighting:

  • The stages of Internet maturation and how it might apply to AI
  • Taxonomy of Pathways to Dangerous AI (including 55 scenarios) 

I think using the stages of Internet maturation as a model for safeguarding AI development is actually quite useful, but the proposal is scarce on details about points that are crucial for success. For example, the first stage, about standardization, states that trust standards need to be established. As described previously, I couldn’t agree more that ways of establishing trust are crucial, but no details explain what those trust standards are, so it seems they have yet to be invented.

Taking Action

We have covered the impacts of power with regard to intelligence at three levels:

  1. Technology: testing and controlling the application of all types of AI technology.
  2. Business: establishing new ethical business models and industry regulations for the greater good.
  3. Government: industry regulation models and other challenges of governance.

As stated at the beginning of this essay, we need to create an environment where we can benefit from new manifestations of intelligence in such a way as to be free from the fear of not having power and the fear of losing power.

Power begins with volition, our deepest intention. The ability to attain any goal is absolutely contingent on the condition and quality of our collective minds and our thinking tools. A wholesome intention combined with a lucid mind supplemented with tested trustworthy tools is the prerequisite for genuine power to determine the evolution of intelligence.

We need to develop explicit policies regarding the safe use of AI in applications. AI developers themselves have been requesting this guidance.

Security capabilities need to be developed to provide checks and balances in AI development.

Controlled rollouts, early warning monitoring, and rapid intervention are necessary to avoid catastrophic failures and reduce their harm.
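As a minimal sketch of what such a controlled rollout with early-warning monitoring might look like in code (all stages, thresholds, and monitoring functions below are hypothetical illustrations):

```python
# A toy controlled-rollout loop: expose a new system to a growing share of
# traffic, watch an error signal, and roll back quickly on anomalies.
# All names and thresholds here are hypothetical illustrations.
ROLLOUT_STAGES = [0.01, 0.05, 0.25, 1.0]  # fraction of traffic exposed
ERROR_BUDGET = 0.02                       # maximum tolerated error rate

def measure_error_rate(traffic_fraction: float) -> float:
    """Stand-in for real early-warning monitoring of the new system."""
    return 0.01 if traffic_fraction < 0.25 else 0.05  # simulated regression

def controlled_rollout() -> bool:
    for fraction in ROLLOUT_STAGES:
        error_rate = measure_error_rate(fraction)
        if error_rate > ERROR_BUDGET:
            print(f"Anomaly at {fraction:.0%} traffic: rolling back")
            return False  # rapid intervention limits the harm
        print(f"Stage {fraction:.0%} healthy (error rate {error_rate:.1%})")
    return True  # full rollout reached safely

controlled_rollout()
```

The value of this pattern is that a failure discovered at 1% of traffic is a contained incident, not a catastrophe.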

How do we prepare for the future? Some preliminary thoughts to consider:

  • Learn more about Intelligence Augmentation by reading, taking courses, watching videos, listening to podcasts, participating in discussion forums, etc.
  • Create, review, share, and revise mental models (e.g. crowdsourced wisdom)
  • Use principles like Building a Second Brain for intelligence augmentation
  • Support ethical business models by voting with your investments and purchases
  • Vote for politicians who will establish regulations to control unethical business practices
  • Ask questions, especially about fundamental premises and assumptions
  • Participate in science fiction simulation games
  • Watch provocative shows like Black Mirror about possible unforeseen results of new technology
  • Stimulate your imagination, creativity, making, and innovation
  • Participate in new social networks and communities that encourage collaboration and reasoned debate
  • Imagine scenarios and learn about updated simulation and predictive models
  • Determine what is important based on your values
  • Take action to experiment with, test, and provide feedback on new products
  • Experiment with and provide feedback on new types of augmented automation

“The best way to predict the future is to invent it.” – Alan Kay, computing pioneer

If you want to learn more about the evolution of intelligence and how you can participate, subscribe to the Intelligence Augmentation newsletter brought to you by Sharp Innovation Solutions.
