
 

A Periodic Newsletter on
Breakthroughs in Strategic Foresight 
November 14, 2020



 
Prof. William Halal
 

 

The Presidential Election Confirms Our Forecast
 

TechCast is pleased to note that the US presidential election results confirm our forecast that Joe Biden would win over Donald Trump. Further, it was a narrow victory, just as our results suggested. It is also interesting to note the tendency of American elections to correct for imbalances. In this case, an overly harsh president was replaced by one who compensates by being compassionate and agreeable, possibly to a fault.
 
This highlights the power of our research method of collective intelligence. A good example is this study on AI and Humans. Background information roughed out the issues to be forecast, and comments from our readers improved the background data and clarified the framing of the issue. A sample of readers then provided estimates. Although there is always wide variance in these individual judgments, a sample of roughly 20 people washes out the differences in the aggregate. The net result is this issue, which presents authoritative estimates of the collective knowledge and judgment of our readers, a well-informed group of thought leaders.
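To make the aggregation step concrete, here is a minimal sketch (in Python, using hypothetical estimates, not our actual survey data) of why averaging roughly 20 independent judgments washes out individual variance: the uncertainty of the mean shrinks with the square root of the sample size.

```python
import statistics

# Hypothetical adoption estimates (percent) from 20 respondents.
# Individual judgments vary widely, but the mean is far more stable.
estimates = [55, 70, 40, 80, 65, 50, 75, 60, 45, 85,
             62, 58, 72, 48, 66, 54, 78, 52, 68, 61]

mean = statistics.mean(estimates)
stdev = statistics.stdev(estimates)        # spread of individual judgments
std_error = stdev / len(estimates) ** 0.5  # uncertainty of the aggregate estimate

print(f"mean = {mean:.1f}%, individual spread = {stdev:.1f}, "
      f"uncertainty of the mean = {std_error:.1f}")
```

With 20 respondents the uncertainty of the aggregate is only about one quarter of the spread of any single judgment, which is why the pooled estimate is far more reliable than any individual forecast.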
 
 

 

AI vs. Humans: Fascinating and Insightful Results

 

The TechCast Team has completed its analysis of the data, and the results are fascinating and unusually insightful. As we like to say about our newsletter, you will want to keep this issue and refer to it when puzzling over the profound impact of AI.
 
Estimates and comments were contributed by Dennis Bushnell, Margherita Abe, Jacques Malan, Clayton Rawlings, John Freedman, Milind Chitale, Michael Lee, William Mostia, Chris Garlick, Angus Hooke, Jose Cordeiro, Adolfo Castilla, Carlos Scheel and Mark Sevening. We are grateful for this continuing fine work.
 


Estimates of AI Use: Toward a Theory of AI vs. Humans
 

The graph summarizes our data illustrating the complex relationship between AI and human intelligence (HI). The contrast between objective functions and subjective functions is especially striking. Notice the two horizontal red lines defining trends running through these data points. Objective functions average about 80% adoption, while subjective functions average almost half of that – about 42%. Clearly, our respondents think there is a significant difference. One can also see a downward slope in the red line labeled “overall trend,” suggesting a general tendency for AI to become less useful at higher-order functions.
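As an illustration of how the “overall trend” line is obtained, the sketch below fits a least-squares slope to hypothetical adoption estimates chosen only to match the graph's approximate averages (the nine actual data points are not reproduced here). The negative slope mirrors the downward tendency described above.

```python
# Hypothetical adoption estimates for the nine cognitive functions,
# ordered from objective (1-4) to subjective (5-9); chosen to roughly
# match the averages reported in the text, not the actual survey data.
adoption = [85, 82, 80, 74, 48, 45, 40, 38, 36]
n = len(adoption)
xs = list(range(1, n + 1))

# Ordinary least-squares slope of adoption vs. function number.
x_mean = sum(xs) / n
y_mean = sum(adoption) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, adoption))
         / sum((x - x_mean) ** 2 for x in xs))

print(f"objective avg  = {sum(adoption[:4]) / 4:.0f}%")   # near 80%
print(f"subjective avg = {sum(adoption[4:]) / 5:.0f}%")   # near the low 40s
print(f"overall trend slope = {slope:.1f} points per function")  # negative
```

The slope comes out negative, which is the quantitative content of the claim that AI adoption falls off as one moves up the cognitive spectrum.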

 
These results are especially meaningful because they support the hypothesis presented in our background analysis – subjective functions play a special role in the AI vs. HI relationship. We draw on the results in the graph, as well as selected comments, to outline a rough theory, or framework, consisting of the following three principles:
 
1. Objective aspects of HI are likely to be fully automated by AI.
 
These results confirm that objective aspects of HI are likely to be fully automated by AI in the near future. This is what is often called “routine” work that easily lends itself to being replaced by machines. Here are a few comments that attest to the powers of AI:
 
  • John Freedman concurs about the ease of automating objective HI:
    “Sensory experiences themselves are purely physicochemical processes that can be assayed mechanically and assessed by AI. I believe that in 100 years almost all of every function will be able to be done by AI. It will far exceed HI capabilities, just as all other human-engineered inorganic systems ('machines') have far exceeded human capabilities. Just as Steve Jobs famously viewed the computer as 'the bicycle for the mind,' now 'AI is the rocket ship for the mind.'”
     
  • Angus Hooke makes the extreme case well:
    “Every human characteristic and quality has evolved, some over billions of years (e.g. sight, sound), some over millions of years (e.g. emotion, empathy) and some over thousands of years. The process of evolution can, in principle, be replicated. AI will eventually be able to reproduce all the characteristics and qualities we associate with humans, and much more. If the time frame of this project were 5 decades, I would put zero percent for HI and 100 percent for AI in all the [cognitive] categories. But I don’t think there will be many “natural” humans; the merger of man and machine will be complete. The end (for natural humans) is nigh!… Homo sapiens will, by choice, be replaced by Homo Cumulatus (augmented humans).”
     
  • Jose Cordeiro offers much the same viewpoint:
    “I am not worried about AI, but I am really worried about human stupidity. Sadly, humans are naturally stupid, and that is why we need to augment ourselves with AI. AI is advancing exponentially, and soon AGI will be smarter than humans. We are becoming enhanced humans. Just like we use today computers and cell phones, we will soon connect our brain to the cloud. We will augment our neo-cortex connecting to a growing exo-cortex!”
     
  • Dennis Bushnell thinks AI will rule:
    “In two decades, considering the ways that AI and machines, including quantum, are developing … AI will be used for 100% of all 9 functions.  No unique HI WILL BE COMPETITIVE, Cost and capability-wise. AI is used for future projections NOW!“
 
 
2. Subjective functions are less likely to be automated as they are hard to define and simulate.
 
As shown in the graph, things get more complex in the subjective domain. Contrary to those who are convinced that AI will sweep through all aspects of HI, many authorities agree that there is something elusive about human consciousness that defies quantification. A study that attempted to have AI create songs is telling. Here's what the leaders of this project had to say:
 
“There's a mismatch between what AI produces and how we think. It’s like having a quirky human collaborator that isn't that great at songwriting but very prolific,” says Sandra Uitdenbogerd, a computer scientist at RMIT University in Melbourne and a member of Uncanny Valley. “We choose the bits that we can work with.”
 

Here’s what a few of our contributors said:
 
  • Milind Chitale:
    “AI has no "Soul", it has no social parameters that HI is trained in… [Humans] will have to guide AI, so projects will be colored in the shades of the [people] behind them”
     
  • Adolfo Castilla:
    “AI is not doing any really subjective task. Machine learning is the closest, and it is no[t] subjective at all. Robots simulating emotions or computer art creation are silly things, so far. Even more, companies and people working today in AI are not interested in subjectivity. Human Intelligence has to do not just with emotions, sentiments or spiritual ideas, but with imagination, creativity and invention. Also with entrepreneurship, purpose and strategy.”
     
  •  Michael Lee:
    “Human senses evolved over a million-year period and are highly attuned and evolved skills, integrated with the brain and central nervous system. Machines don't have a central nervous system or brain organ. The only [way] for AI to progress beyond 10% at this level would be in a fusion of computer power and the human nervous system. You cannot simulate real-time holistic sensory perception in a machine because there is no such thing, in my view, as an artificial nervous system.”
 
 
This explains why AI adoption levels in the subjective half of the graph are roughly half of those in the objective half. Subjective functions are hard to define and simulate, especially at the higher-order end of the cognitive spectrum. They can be approximated, of course, but at the cost of inaccuracies, bias and other errors. More importantly, when subjective factors such as emotions, purpose, values and beliefs are introduced into AI programs, they invariably have to be specified by the human designers and users of the program. For instance, you will always have to tell your car's GPS navigation where you want to go.
 
The big question on my mind is: how do we control all this AI? The answer seems to focus on designing and controlling the subjective functions. That's where the danger lies and where humans are crucial. To prevent the possibility of a runaway AI system, humans have to monitor the performance of AI systems, detect potential problems, and correct the subjective factors. A prominent example is the two Boeing 737 MAX airliners that crashed because their automatic flight control systems malfunctioned and the pilots had no way to override them.
 
Some of our contributors make this point about the limitations of AI and the need for human control:
 
  • Dennis Bushnell:  
    “That is the crux of the matter going forward, who is in charge? Much of the discussion with regard to control is trying to ensure the AI does not go off on its own agenda, that the humans will be in charge. Given that the future is increasingly autonomous everything, they will certainly have the capability to wreak absolute havoc.”
     
  • Milind Chitale:
    “HI will guide and create a launch pad for AI to leap far beyond HI. However, if the launch pad falters, AI will not reach its desired strata, and the danger is that we may not even know! This is because AI leapfrogs the computing to a level we can never execute manually. When AI is executing tasks HI has never before solved, we have no basis to know if the generated solutions are indeed falling into the correct quadrant of possibilities. So, HI and its relationship to AI will forever be bent by the limitations of our HI capabilities in fine-tuning AI! Humans will continue to monitor and try to correct any anomalies in AI while another huge group will be creating new tasks, which has no end in the current state of development.”
     
3. AI systems and humans are likely to have close relationships, with humans in control.
 
The need to define, monitor and control AI systems implies a close relationship between AI and HI. This is often thought of as a symbiotic relationship, a merging of man and machine, and the collective intelligence of machines and humans. Ultimately, however, humans will have to maintain control over this relationship. This theme is evident in the following remarks:
 
  • Clayton Rawlings:
    “I anticipate AI and HI will fuse in the brain-computer interface. A massive paradigm shift when a chip is integrated in the human brain. We will be fully integrated with AI as a normal brain function.”
     
  • Bill Mostia: 
    “The relationship between AI and humans could [be] one of friendship, the same as with humans. If Asimov's laws are in place, the machine would be as a subordinate, but this may not necessarily be a bad relationship. Some would move to jobs not done by AI, while some may be relegated to monitoring the AI or doing physical tasks that the AI cannot do.”
     
  • Dennis Bushnell:
    “The obvious options are to continue to morph into cyborgs including direct brain/machine interaction and brain chips; uploading into, becoming the machines.”
 
Even more interesting is the possibility of humans creating more powerful methods to ensure effective control. One method would be the use of collective intelligence (CI) of not simply machines and humans, but of humans themselves. The pooling of diverse knowledge sources could dispel Jose Cordeiro's fears about human stupidity. Here's how two contributors expressed it:
  • Adolfo Castilla:
    “AI never will be achieved in a computer, even a quantum computer. [It] could be achieved through a combination of men interconnected… in the network, the cloud and computers, all forming a collective intelligence… hybrid man-machine solutions. Machines are never going to have power over men. Men, combined with machines, will produce superior and disruptive ideas.”
     
  • Michael Lee:
    “At a Collective Intelligence level (CI), we have a collective memory bank, in libraries and even online, filled with human knowledge, as well as institutional memory in democratic institutions and value systems. I regard CI as a big part of the future of civilization. CI is destined to be a greater force than AI.”
 
Comments on Nine Cognitive Functions
 
1. Perception, Awareness – Sensory experience through touch, sight, sound, smell, taste.
  •  Margherita Abe:
    “HI's need to direct the observations.  HI has to tell AI what to look at/for. Once AI has this guidance it performs very well. I would not expect this to change much in the next few decades.”

     
  • Jacques Malan:
    “Touch, Sight and Sound will be completely AI driven with feedback to integrated HI. Humans will likely only hold on to a portion of smell & taste (even though AI would be able to do this perfectly well) as these are very subjective in e.g. the culinary arts and fashion (perfume). Expect HI and AI to both compete and integrate in the latter.“
     
  • John Freedman: 
    “Sensory experiences themselves are purely physicochemical processes that can be assayed mechanically and assessed by AI.  But in many cases the interplay of multiple factors (such as the 700 aldehydes in a strawberry, or mouthfeel of food, or interaction of acidity and sweetness in wine-tasting perception ) and intuitive components will leave a realm for humans.”

     
  • Michael Lee:
    “HI is likely to control this level of experience for [the] rest of the 21st century. Human senses evolved over a million-year period and are highly attuned and evolved skills, integrated with the brain and central nervous system. Machines don't have a central nervous system or brain organ. The only [way] for AI to progress beyond 10% at this level would be in a fusion of computer power and the human nervous system. You cannot simulate real-time holistic sensory perception in a machine because there is no such thing, in my view, as an artificial nervous system.”

2. Learning, Memory – Information, knowledge or skill acquired through instruction or study.
  • Margherita Abe:
    "HI must direct the learning. This is especially important when HI directs deep learning.  It has to set tasks and parameters to direct AI's attention and focus.  HI does this by providing experiential work for AI to use. With proper direction AI learns and grows and remembers/stores what it learns and uses it. Although I am giving HI a small input here, it is a crucial input. I expect that this will remain at the current level."
     
  • Jacques Malan:
    "This is probably the first area ripe for complete integration. Expect direct download to the brain (a-la Matrix) to progress from Sci-Fi to Science."

     
  • John Freedman:
    "Machines already far outperform humans in this area. Recursive self-improvement is how Deep Mind's Alpha Go beat the world's champion Go player, and machine learning  will surely outdo human learning in virtually every sphere. Once we understand the molecular basis of memory (recent experiments demonstrated molluscs can learn by injection of RNA), we may well be able to use the insights to design the most capable inorganic memory systems." 
     
  • Michael Lee:
    "Machine learning is powerful at this level of experience and has already surpassed human capacity on an individual level. How long would it take a super-computer to pass exams to be a doctor or lawyer? However, at a Collective Intelligence level (CI), we have a collective memory bank, in libraries and even online, filled with human knowledge, as well as "institutional" memory as for example in democratic institutions and value systems. I regard CI as a big part of the future of civilization. CI is destined to be a greater force than AI."

     
  • Carlos Scheel:
    "AI has the advantage, better forms of storage and retrieval. However, AI won't have the capacity to discern what is fake from what is real… this is still a human capacity... I think..."

3. Information, Knowledge, Understanding – Information, knowledge, etc. processed, encoded and stored for future action.
  • Margherita C. Abe:
    "AI systems that do facial recognition occasionally have a great deal of difficulty doing this accurately, so AI continues to require an HI in the background to evaluate its work. It seems that AI collects the data but some of this data is slanted or misconstrued by AI.  I think that this problem may continue for a while.  AI does not really mimic an HI toddler in how it learns. If it did so in the areas of object identification and recognition, it would quickly require much less input from HI."

     
  • Jacques Malan:
    "This will be taken over completely by AI, save for the aforementioned legacy systems. Humans will benefit from AI's greater ability to assimilate knowledge (both speed and volume) to gain understanding by feeding off the nearly instantaneous "summaries" provided by AI. Near complete integration."

     
  • Michael Lee:
    "Information and knowledge are easy to automate for AI. But understanding, which is synthesis of information, is difficult for AI. I see holistic synthesis of information and knowledge as the strength of HI and Collective Intelligence (CI), way, way better than AI."
     
  • Jose Cordeiro:
    "When understanding comes into the equation, I still think that we have a little advantage (vs. Learning…), but still below the efficiency of the enormous capacity of storage, sorting and retrieving algorithms that the automata have."
 
   

4. Decision, Logic – A determination arrived at after consideration.
  • Margherita Abe:
    "I don't think that AI will do this with more autonomy until it is better at #3. HI cannot offer AI more autonomy until its understanding of the data it collects is more reliable. This is not a trivial issue."
     
  • Jacques Malan:
    "The real battleground for ongoing significance in the workplace (which we will lose eventually). AI will dominate Logic, but HI will be much better at the subtleties of the human condition and the disastrous negative impacts that pure logic may have on society. (Think Law vs. Justice…) Though not integrated per se, HI & AI will co-operate."
     
  • Michael Lee:
    "AI is incredible at logic and real-time decisions based on logic. Far superior to HI."
     
  • Chris Garlick:
    "Logic and historic patterns will become easier to predict; historical information allows patterns to be determined for weather forecasting and will only strengthen as models become more data enriched."
     
  • Carlos Scheel:
    "Any  complex decision needs a  large amount of intuition and associative knowledge that the heuristic algorithms can not manage. For common decisions, AI will be better, but the world is becoming more and more complex, so I think we still have the advantage."
     

5. Emotion, Empathy – Mental reaction of strong feelings: anger, fear, vicarious emotions of others.
  • Margherita Abe:
    "At present AI uses data from HI physical expressions and body language to determine responses (that YOUTUBE video) and it seems to work.  Notice that I am being a bit cagey about this.  This means that I saw it but I don't really think that this represents true empathy. I wonder what AI would do if a patient came to a session with a detailed description of an ongoing life event and asked for some sort of input.  What kind of response an HI therapist would offer would depend not only on body signals but comprehension of the content of the material being presented.  A truly empathic HI therapist could deal effectively with this.  I am not convinced that AI can right now...Maybe in a few decades? Or maybe not..."
     
  • Jacques Malan:
    "Another area where HI and AI will compete rather than integrate. Largely driven by a section of the human population that would still prefer the "human touch", despite the fact that this will be virtually indistinguishable from AI "faking it" at that point in time."
     
  • John Freedman:
    "This is [the] limbic system and we are just beginning to understand it and learn how to program [it]. With toddler-level emotional models just being pioneered now, full capacity for human emotion is a long way off. Basic, practical capacities for machines to apply emotions in decision-making will be with us relatively soon."
     
  • Michael Lee:
    "I don't believe AI has self-awareness or consciousness so emotions can only be simulated but cannot be real without self-awareness. You manufacture emotions in AI but they will be programmed and not real, not sincere, because there is zero self-awareness."
     
  • Carlos Scheel:
    "These are very critical human characteristics; automata may have certain logical reactions, but the majority of emotional responses are very human. Empathy too – we cannot have empathy with a machine."


6. Purpose, Will, Choice – Ability to set a purpose and choose some action to attain it.
  • Margherita Abe:
    "HI's need to be in the background deciding what AI does implies that AI does not have "free will" and autonomous purpose, consisting in the ability to set its own goals. I don't think that this will change over the next decade or so."
     
  • Jacques Malan:
    "Choice and purpose will be dominated by AI, and much sooner than humans realize. It is already pervasive in navigation and shopping applications (how often do you tell Waze to take an alternative route?). Only Will will remain sacrosanct (it is probably the most difficult of all the preceding to emulate, since it does not depend on reason or logic at all)."
     
  • John Freedman:
    "Straightforward algorithmic AI. The 10% left to humans will be gone when some of the tougher functions - such as #5 and #9 - are highly developed."
     
  • Michael Lee:
    "These are mechanical, logical characteristics not requiring self-awareness or consciousness."
     
  • Chris Garlick:
    "Purpose and choice are based on assumptions and inputs for the desired goals.  This will require significant inputs from humans for a while."
     
  • Carlos Scheel:
    "Some well structured strategies may have a good result using heuristic algorithms, genetic algorithms, etc.  I think on this issue the actions can be taken following established maps, so I think this ability can be replaced by AI techniques."

  
7. Values and Beliefs – Ideas held in relative importance and considered true.
  • Margherita Abe:
    "Both of these traits imply that AI may act  autonomously.  I think that this may occur within a few decades for some AI activities...I'm thinking that self driving cars may represent an example of this entity, where an AI may need to make a decision in an emergency on the road. HI would still need to have  initially offered parameters for AI to use and guide AI in its early learning and mastery of this task. This task demands more synthetic ability from AI because it includes several of the more basic works as listed above. So this activity demands HI input in a more comprehensive manner initially than #1, #2, and #3 may. Hence my 50% for AI."
     
  • Jacques Malan:
    "'Will' should likely be grouped with these two as they are inherently subjective. Unless and until the singularity, this will be the exclusive domain of HI, though some value systems will be infiltrated by AI (against our better judgement), as is already happening with social media algorithms. Expect HUGE pushback to these from Humanity in general. Likely to lead to a new generation of 'Luddites', who will shun AI, but not necessarily technology as such."
     
  • Michael Lee:
    "AI will lack sincerity because of the absence of self-awareness and consciousness but beliefs could be programmed just like methods of human indoctrination."
     
  • Carlos Scheel: 
    "No way, I hope; this is one of our few differences vs. automata. Unless the programmers are superior beings to us, these are still our strengths."
 
8. Imagination, Curiosity, Creativity, Intuition – Novel ideas and knowledge gained without sensory input.
  • Margherita Abe:
    "This function is at present held by HI with little competition or assistance from AI. Guidance of AI by HI will remain significant in the next few decades. This area demands enormous autonomy from AI to be graded higher. It also demands a higher level of understanding of its inputs (like my comment about the HI toddler vs AI)."
     
  • Jacques Malan:
    "Save for Intuition, AI will be able to emulate all of these sufficiently well to compete and (preferably) complement HI. Important - does NOT require integration! From a scientific point of view, collaboration in this sphere would be the ultimate engine of progress, where human intuition can feed off almost limitless possibilities created by AI and provide the guidance for each "next step". (Think of the progress currently made with drug discovery)."
     
  • John Freedman:
    "This is already well-developed but does not pass the Turing test in that fiction and poetry written by computers sometimes has (seemingly) nonsensical or non sequitur components. Given a few decades we will be hard-pressed to exceed machine performance in 'imaginative' functions, including art, though there may always be a niche for the quaintly human (just as there is still a niche for paper newspapers and books)."
      
  • Michael Lee:
    "Novel ideas and knowledge gained without sensory input. For the artist, imagination and sense perception are one. I would say this level of experience will be dominated by humans for the foreseeable future."
     
  • Chris Garlick:
    "Similar to values, sensor inputs may aid in predicting intuition and imagination but still will require significant human input for the next decade or so and false positives may be non-sensical and limit innovation."
     
  • Jose Cordeiro:
    "It is possible, and from repetition and logical conclusions and new inputs AI algorithms can create new products, etc., but this depends on the designers. If they provide the machine with enough experiences so that the automata may create things BY THEMSELVES, that humans never can create, [that] is a possible alternative. And I think this may have happened already, with some innovations where even the same programmers or designers accept that without the help of machines they would never have arrived at a breakthrough innovation, so I will give AI some points. But again, this assumes that the AI algorithm has been programmed by a superior intelligence."

 
9. Vision, Dreams, Peak Experience, Future Framing – Guiding thought, altered state of consciousness formed without sensory input.
  • Margherita Abe:
    "I don’t think AI can master a free standing global evaluation of future global societal events and trends, eg -- the fate of democratic process or any attempt to model a future society or any aspects of future society in a global manner. My guess is that type of vision and dreaming would be left to HI almost exclusively for more than a few decades in the future."
     
  • Jacques Malan:
    "The holy grail for a singularity. Long term, HI will probably hold on to the Dreamworld only. In the interim, AI will also benefit from the abstraction from HI to guide vision and framing."
     
  • John Freedman:
    "Future framing - forged in the evolutionary fires of Africa - is what makes us human. I would like to think we will always be able to outperform a machine on this. That is, an android or cyborg Steve Jobs or Elon Musk would not be as good as the real thing. This overlaps with #8 so will be dependent on development of creative ability."
     
  • Michael Lee:
    "To have dreams and visions and a sense of time you need a brain. This level of experience will be dominated by humans for the foreseeable future. As a digital entity, AI doesn't fully exist in time so cannot know past, present and future in any meaningful sense. I see humans dominating here."
     
  • Chris Garlick:
    "Vision and dreams will depend heavily on human intelligence and inputs, and re-evaluation, until sensor information can use intuition to filter or synthesize human emotions and environmental influences."
     
  • Jose Cordeiro:
    "Machines can generate thousands of new possibilities, and maybe we must define a new kind of “machine consciousness.” But based on the human definition of these terms… visions, dreams, etc., without EXTERNAL SENSORIAL INPUTS, I don't think AI algorithms can prevail over our personal dreams and conscious states."
 
 


  
Letters
 
TechCast always encourages letters, comments and suggestions. 

 
Artificial Intelligence and its Five Battlegrounds
By Rajiv Malhotra

 
Artificial Intelligence is only partially visible, just like an iceberg. Though its positive side is well understood for making machines smarter, my book, Five Battlegrounds of AI, focuses on how it is also making people cognitively dumber and psychologically dependent.

The book argues that this AI-driven revolution will have an unequal impact on different segments of humanity and speaks for the underdogs being impacted adversely. There will be new winners and losers, new haves and have-nots, resulting in an unprecedented concentration of wealth and power.

After analyzing society's vulnerabilities to the impending tsunami, the book raises troubling questions that provoke immediate debate: Is the world headed toward digital colonization by the USA and China? Will depopulation eventually become unavoidable?

The book organizes the social and political impact of AI into the following five arenas, each of which is a battleground between competing interests. In each arena, AI creates new tensions or exacerbates existing ones, and disrupts the prevailing equilibriums:


1. Economy, industry, education, and jobs. I offer a refutation of the popular view that AI will not adversely impact jobs because it will create more new jobs than it will destroy.
 
2. Geopolitics and military – USA vs China. This battleground is well understood by military and national security experts, namely, that AI is the next frontier for superpower confrontations. However, the general public has not yet connected the dots.
 
3. The moronization of the masses who bow down to the digital deities. People are giving up their private data and this is resulting in a tectonic shift—the transfer of agency from individuals to digital networks. A small elite have increasing control in formulating the rules of social justice, appointing the referees, and adjudicating the public discourse.
 
4. The crash of civilization. Artificial emotions and gratifications are causing a loss of selfhood and the downgrading of humans relative to algorithms. Will the future humans, augmented with AI, relate to us the way we relate to animals?
 
5. Stress-testing the Indian nation-state. India is my case study for analyzing the devastating impact on the largest concentration of humanity. This is especially poignant because India's leaders in various domains are operating on the assumption that India is the next big superpower on the horizon. I offer a sobering counter-thesis.
 
Five Battlegrounds of AI is a wake-up call to action, compelling public intellectuals to be better informed and more engaged. It educates the social segments most at risk and urges them to demand a seat at the table where policies on Artificial Intelligence are being formulated.

To join my mailing list and stay connected with this discussion, please email me: RajivMalhotra2007@gmail.com
 
https://itskorea.kr/english/main.do


 

Keynote on Autonomous Vehicles Conference

 


TechCast gave a keynote speech at an international conference on transportation featuring autonomous cars on November 2, 2020. The conference was organized by the Intelligent Transport Society of South Korea.

TechCast's Bill Halal organized his talk around the five principles of Global Consciousness. 
 








 
TechCast Briefs Angel Investors

TechCast founder William Halal kicked off the annual meeting of the Angel Capital Association's Virtual Summit on May 12 with his keynote on The Technology Revolution. Among his many points, Bill outlined how AI is driving today's move beyond knowledge to an Age of Consciousness, and that business is now altering corporate consciousness to include the interests of all stakeholders. Angel investors are concerned about the social impacts of their companies, so this news was well received, especially as Bill stressed this historic change could be a competitive advantage.

Click here for the presentation
 


 

TechCast at the Armed Forces Communications and Electronics Association

 
Halal also spoke at the annual AFCEA conference on the topic of AI, noting TechCast's forecast that AI is expected to automate 30% of routine knowledge work about 2025 +3/-1 years, and that General AI is likely to arrive about 2040. Expanding on the theme delivered at the ACA, Bill explained how today's shifting consciousness is likely to transform not only business, but also government, the military and all other institutions.
 
 

 
We Invite Your Ideas
 
TechCast offers exciting new possibilities to use our unequaled talent and resources for creative projects. I invite you to send me your questions, fresh ideas, articles to publish, consulting work, research studies, or anything interesting on the tech revolution.
 
Email me at Halal@GWU.edu and I'll get back to you soon. Have your friends and colleagues sign up for this newsletter at www.BillHalal.com.

Thanks, Bill
William E. Halal, PhD  
The TechCast Project 
George Washington University
 

 
Bill's Blog is published by:

The TechCast Project www.TechCastProject.com

Prof. William E. Halal, Founder
George Washington University

Prof. Halal can be reached at Halal@GWU.edu

The TechCast Project is an academic think tank that pools empirical background information and the knowledge of high-tech CEOs, scientists and engineers, academics, consultants, futurists and other experts worldwide to forecast breakthroughs in all fields. In over 20 years of leading the field, we have been cited by the US National Academies, won awards, been featured in the Washington Post and other media, and been consulted by corporations and governments around the world. TechCast and its wide range of experts are available for consulting, speaking and training in all aspects of strategic foresight.
 
Elise Hughes, Editor

Copyright © 2020 The TechCast Project. All rights reserved.
 