The Illusion of Innovation: A Critical Examination of Microsoft's Acquisition of OpenAI and the Resurgence of Expert Systems

Following the money... 

OpenAI front and center, with Microsoft, the new paymasters, running things from the rear.

In the dynamic landscape of digital technology, the concept of artificial intelligence (AI) has often been heralded as a revolutionary force. Microsoft's acquisition of OpenAI, a transaction presented under the guise of a $10 billion investment, demands critical examination. This essay aims to dissect the underlying motives and implications of this acquisition, drawing parallels with the resurgence of expert systems from the 1980s and scrutinizing the role of AI as a corporate strategy tool rather than a technological advancement.

The Facade of Investment and Innovation  

At the core of this acquisition lies a strategic maneuver by Microsoft, ostensibly presented as a significant investment in AI. A closer analysis, however, reveals a more calculated business strategy aimed at bolstering Microsoft's stock price and leveraging its Azure server space. This move is less about fueling innovation and more about creating a revenue stream under the guise of advancing AI technology. The widely publicized $10 billion figure serves more as a marketing ploy than a genuine investment in AI's future.

In dissecting the nature of this investment, it becomes apparent that Microsoft's intentions are far from altruistic. The framing of this acquisition as a 'cash injection' into OpenAI cleverly masks the true intent: to profit from the burgeoning AI market. By offering computing layers and Azure server space, Microsoft positions itself to reap financial benefits from the AI applications developed by OpenAI (such as OpenAI's recently announced app store for GPTs). This strategy, while shrewd from a business standpoint, raises questions about the ethical implications of such a move, one of which takes the form of a 'shim'.

OpenAI: A Shim for Legal and Ethical Challenges  

Microsoft's acquisition of OpenAI is a maneuver with multifaceted implications. At its core, it strategically places OpenAI in the role of a 'shim'—a term borrowed from computing, denoting a piece of code that facilitates compatibility between different software versions or operating systems, akin to how Wine enables Windows applications to function on Linux. This metaphorical 'shim' is poised to absorb the impact of potential legal challenges, especially those related to plagiarism or other AI-centric misdemeanors. Furthermore, this tactical move by Microsoft exposes Sam Altman and his team to a heightened risk of facing the repercussions of any legal or ethical transgressions, particularly in instances of plagiarism. This calculated positioning by Microsoft reflects a keen foresight into the intricate legal terrain that envelops AI, where the reprocessing of inputs is often presented as novel outputs—a phenomenon starkly illustrated by the controversies involving Getty Images and Stable Diffusion, as well as those raised by groups of non-fiction writers.
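To make the metaphor concrete, here is a minimal sketch of what a 'shim' does in software: an adapter layer that absorbs the mismatch between two interfaces so that neither side has to change. All class and method names here are hypothetical, invented purely for illustration.

```python
class LegacyRenderer:
    """Old API: callers pass a single formatted string."""
    def draw(self, text: str) -> str:
        return f"[legacy] {text}"

class ModernRenderer:
    """New API: expects structured arguments instead."""
    def render(self, title: str, body: str) -> str:
        return f"[modern] {title}: {body}"

class RendererShim:
    """Sits between old callers and the new implementation,
    absorbing the interface mismatch so neither side changes."""
    def __init__(self, backend: ModernRenderer):
        self._backend = backend

    def draw(self, text: str) -> str:
        # Translate the old single-string call into the new structured call.
        title, _, body = text.partition(": ")
        return self._backend.render(title, body or title)

# An old caller keeps using .draw(), unaware the backend changed.
shim = RendererShim(ModernRenderer())
print(shim.draw("Report: all systems nominal"))
```

The shim takes the hit of every incompatibility between the two sides, which is precisely the role the essay argues OpenAI plays between Microsoft and the legal system.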

The concept of using OpenAI as a legal shield is particularly concerning. It suggests a scenario where Microsoft can conveniently distance itself from any controversies or legal challenges that may arise from OpenAI's operations. This maneuver effectively leaves Altman and his associates exposed to legal liabilities, while Microsoft remains insulated. The ethical ramifications of such a strategy are profound, raising questions about the responsibility and accountability of large corporations in the field of AI. 

The Influence of Effective Altruism and Its Decline  

The departure of key figures associated with the Effective Altruism movement, notably Tasha McCauley and Helen Toner, from OpenAI signals a shift in the organization's ethical compass. (With Helen Toner a graduate of Georgetown, there is a hint of governmental observation here, given the university's propensity for producing many an officer of US intelligence, but that is another angle to review later on.) Their exit leaves Altman in a precarious position, further entrenching the notion that OpenAI's original mission of AI for the greater good has been overshadowed by corporate interests and profit motives.

The influence of the Effective Altruism group on Altman's decisions cannot be overstated. Their departure marks a significant turning point in the ethos of OpenAI, moving away from a focus on the broader societal benefits of AI towards a more profit-driven approach. This shift is emblematic of a larger trend in the tech industry, where ideals and ethics often give way to the pressures of profitability and market dominance.

Resurgence of Expert Systems: A Historical Parallel  

Edward Feigenbaum (sitting), director of the Computation Center, with members of the Board of Directors of the Computation Center in 1966.

The current fascination with AI, particularly in its manifestation through tools like ChatGPT, echoes the enthusiasm for expert systems in the 1980s. These systems, developed in programming languages like LISP, were once heralded as revolutionary tools for medical diagnosis and other applications. However, they were fundamentally pattern recognition systems employing skip logic, a far cry from the sentient AI often portrayed in popular media. This historical parallel serves as a reminder that what is often marketed as groundbreaking innovation may be a repackaging of existing technology under a new, more marketable guise. When we used those machines, and later laser discs and CD-ROMs, their closed nature meant a one-time purchase: every search run on them was covered by that single payment. Now, with logic processes that recognise a user's input as they type in order to formulate a human-like response, each search carries a dollar cost that Microsoft reaps through its Azure compute layer. The processes are similar, yet the monetisation of modern search is now being dictated by Microsoft; no wonder Google is pulling the rug from every service and ramping up advertising, to the detriment of the likes of Workspace for Education.
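The economic shift described above, from a one-off purchase to metered per-query billing, can be sketched with some back-of-envelope arithmetic. All prices below are hypothetical placeholders, not actual Azure or OpenAI rates.

```python
# Hosted LLMs typically bill per token of input and output,
# while a closed CD-ROM system was a single up-front purchase.
PRICE_PER_1K_INPUT_TOKENS = 0.01   # hypothetical $/1k tokens
PRICE_PER_1K_OUTPUT_TOKENS = 0.03  # hypothetical $/1k tokens

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single hosted-model query, in dollars."""
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS)

# One-off purchase vs. metered use: past some number of queries,
# metered billing overtakes any fixed price.
cdrom_price = 199.00               # hypothetical one-time cost
per_query = query_cost(500, 800)   # 0.005 + 0.024 = 0.029 dollars
queries_to_match = cdrom_price / per_query
print(f"${per_query:.3f} per query; "
      f"~{queries_to_match:,.0f} queries to match a one-off purchase")
```

The point of the sketch is not the specific numbers but the structure: under metered billing, every additional search is additional revenue for the compute provider.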

The comparison with expert systems is particularly apt when considering the current state of AI. Much like the expert systems of the past, today's AI technologies, including those developed by OpenAI, are largely based on sophisticated algorithms and pattern recognition capabilities. While these systems have certainly advanced in terms of complexity and application, the core principle remains the same. This raises important questions about the nature of innovation in the tech industry and whether we are truly witnessing a new era of AI or simply a continuation of past trends under a different name. 
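For readers unfamiliar with how those 1980s systems worked, here is a toy forward-chaining rule engine in the spirit of the diagnostic expert systems described above. The rules and facts are invented for illustration; real systems such as MYCIN used far larger rule bases, but the principle, pattern matching over if-then rules, is the same.

```python
# A toy rule-based 'expert system': forward chaining over if-then rules.
# Each rule fires when all of its conditions are present in the fact
# base, adding its conclusion as a new fact.
RULES = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "fatigue"}, "recommend_rest"),
    ({"rash"}, "possible_allergy"),
]

def infer(facts):
    """Repeatedly fire any rule whose conditions are all satisfied,
    until no new conclusions can be derived. Returns only the derived
    conclusions, not the original facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived - set(facts)

# Pattern matching, not understanding: the system can only recombine
# the rules it was given.
print(infer({"fever", "cough", "fatigue"}))
```

Whether the rules are hand-written, as here, or learned from data, as in a modern language model, the machinery is matching patterns in the input against stored associations, which is the continuity the essay is pointing at.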

The Misrepresentation of AI's Capabilities  

The portrayal of AI as a near-miraculous technology by certain segments of the media further muddies public understanding. Phrases like "P-Doom" or "P(doom)" and oversimplified explanations of AI's functioning, as seen in some media reports, contribute to a distorted perception of AI's capabilities. The reality is that AI, including tools like ChatGPT, primarily operates as a sophisticated pattern recognition system, far from the human-like intelligence often ascribed to it. It is also limited to text produced by humans and appears to be reaching a plateau, just as expert systems did in the 1980s. For example, the 'hallucinations' AI often produces, or its abject forgetfulness of the request, mean it is a long way from the reliability we expect of, say, Excel formulas, which do what you ask 99.9% of the time.

This misrepresentation extends beyond mere inaccuracies in reporting; it carries significant consequences for how the public perceives and interacts with AI, as well as influencing policy decisions. The portrayal of AI as an almost omniscient entity fosters unrealistic expectations, potentially leading policymakers and the general public down a path of ill-informed choices. Moreover, this skewed representation tends to overshadow the inherent limitations and challenges in AI development, such as data biases and the risk of misuse. Here, we circle back to the realm of effective altruism, where regulatory frameworks are shaped more by governmental directives (as exemplified by the Center for Security and Emerging Technology (CSET) at Georgetown University) than by the innovative impulses of the industry. This is particularly pertinent in public-facing contexts, where the populace may struggle to grapple with the unfiltered and potentially harmful outputs from tools like ChatGPT operating in 'honest mode'. In this scenario, the media plays a pivotal role, often prioritizing the creation of a palatable perception over conveying the nuanced reality of AI while simultaneously creating clicks and revenue through their favourite emotion readers must feel: fear.

The Role of Media and Public Perception  

In the realm of media portrayal, AI is frequently depicted as a groundbreaking and almost mystical technology. This representation obscures the reality that many of the advancements hailed as innovative are, in fact, refinements or extensions of existing technologies. Such a narrative is advantageous to corporations like Microsoft, which stand to gain from the heightened excitement and intrigue surrounding AI. This approach not only furthers their business goals but also overshadows a more critical and nuanced discourse on the ethical and societal ramifications of AI.

The psychology of marketing and the human tendency towards the 'forbidden fruit' or the 'wet paint, do not touch' syndrome is evident in this context. The warning sign informs passersby of the wet paint, yet paradoxically, it often incites the very action it seeks to prevent. The allure of the forbidden or the dangerous is a deeply ingrained aspect of human psychology, as seen in behaviors like smoking and overeating. Despite clear warnings of the health risks, these activities continue to attract millions. This analogy extends to the realm of AI, particularly in the context of health and wellness. AI technologies like ChatGPT can provide personalized meal plans or health advice, intersecting with the lucrative health industry, a major player in online advertising.

The ethical conundrum arises when considering the potential for advertisers to subtly integrate their products into AI-generated content. However, it is noteworthy that ChatGPT, as of now, remains conspicuously free of any direct advertising. This absence raises questions about the underlying motivations and potential future directions for such AI technologies in the context of consumerism and advertising ethics. The critical examination of these aspects is essential to understand the broader implications of AI in our society and the role of media in shaping our perceptions and interactions with this transformative technology.

The Ethical Implications of AI Development  

The ethical implications of AI development, particularly in the context of Microsoft's acquisition of OpenAI, are multifaceted. On one hand, there is the potential for AI to contribute positively to society, such as in healthcare and other sectors. On the other hand, the commodification of AI for corporate gain raises serious ethical concerns. The prioritization of profit over ethical considerations can lead to the exploitation of AI technology in ways that may be harmful to society. 

The departure of figures associated with the Effective Altruism movement from OpenAI is indicative of a shift away from a focus on the ethical use of AI towards a more profit-driven approach. This raises concerns about the future direction of AI development and the potential for misuse of this technology. It also highlights the need for a more robust ethical framework to guide the development and application of AI. 

The Round Up  

The acquisition of OpenAI by Microsoft, once divested of its promotional gloss, unveils a strategic corporate gambit rather than a bona fide advancement in the realm of artificial intelligence. This development, echoing the fervor surrounding expert systems in the 1980s, underscores the recurrent theme of technological exuberance and the tendency to rebrand antiquated technologies as novel innovations. With the diminishing sway of effective altruism within OpenAI, supplanted by corporate and governmental interests, it becomes increasingly evident that this acquisition is driven more by fiscal and legal stratagems than by a commitment to the progression of AI. In this milieu, the necessity for a discerning viewpoint becomes paramount, distinguishing between authentic innovation and astute marketing, while comprehending the true capabilities and constraints of AI technology. It is crucial to recall that the internet, conceived in the 1960s, was designed with an understanding that as a network expands and its users grow in number and apparent autonomy, the network paradoxically becomes more susceptible to control. This principle, known as the law of large numbers, is equally applicable to the burgeoning era of search technology. In this context, it is acknowledged that those who dominate the realm of search effectively wield control over the internet and, to a significant extent, the decision-making processes of its users. In other words: our questioning will subside, our attention will be consumed, and our thoughts will be modified and monetised unless we are aware of this deliberate intention.

Written with assistance from ChatGPT4 for grammar and thesaurus + Plugins for real-time web search while using previous writing as a base for tone and style.

Footnotes and links from research on this essay

1. Microsoft's Investment in OpenAI : Microsoft announced a significant investment in OpenAI, which was widely reported as a strategic move in the AI industry. This investment was not just a simple cash injection but a deeper strategic alignment between the two companies. [Source: Nasdaq Article on Microsoft's Investment](https://www.nasdaq.com/articles/microsofts-$10-billion-bet-on-openai-is-now-up-in-the-air.-heres-why-thats-great-news-for) 

2. Influx of OpenAI Employees to Microsoft : Following the investment, there was a notable movement of OpenAI employees to Microsoft, indicating a deeper integration of OpenAI's operations and personnel into Microsoft's ecosystem. [Source: MSN Article on OpenAI Employees Joining Microsoft](https://www.msn.com/en-us/money/other/microsoft-prepares-for-influx-of-openai-employees/ar-AA1kjyoY) 

3. OpenAI's Leadership Changes and Microsoft's Role : The leadership changes at OpenAI, including the departure of Sam Altman, and Microsoft's subsequent hiring of key OpenAI personnel, suggest a significant shift in the power dynamics and strategic direction of OpenAI. [Source: Seeking Alpha Article on OpenAI and Microsoft](https://seekingalpha.com/article/4653240-openai-hoisted-by-its-own-petard-microsoft-big-win) 

4. Microsoft's Acquisition Strategy : Microsoft's approach to acquiring OpenAI talent and intellectual property highlights a strategic move to bolster its AI capabilities and market position. [Source: MSN Article on Microsoft Hiring Former OpenAI CEO Sam Altman](https://www.msn.com/en-us/money/careersandeducation/microsoft-hires-former-openai-ceo-sam-altman/ar-AA1kdn2B) 

5. OpenAI's Valuation and Microsoft's Stake : The valuation of OpenAI and Microsoft's significant stake in the company underscore the financial and strategic importance of this partnership for Microsoft. [Source: Nasdaq Article on Microsoft's Investment](https://www.nasdaq.com/articles/microsofts-$10-billion-bet-on-openai-is-now-up-in-the-air.-heres-why-thats-great-news-for) 

6. Microsoft's $10 Billion Investment in OpenAI : Microsoft announced a $10 billion investment in OpenAI, a significant move in the technology industry. This investment was not just a straightforward cash injection but also included cloud compute purchases, giving Microsoft substantial leverage over OpenAI. [Source: Nasdaq Article - "Microsoft's $10 Billion Bet on OpenAI Is Now Up in the Air. Here's Why That's Great News for the Stock."](https://www.nasdaq.com/articles/microsofts-$10-billion-bet-on-openai-is-now-up-in-the-air.-heres-why-thats-great-news-for) 

7. Fraction of Investment Wired to OpenAI : Only a fraction of the $10 billion investment has been wired to OpenAI, with a significant portion in the form of cloud compute purchases. This arrangement indicates a strategic move by Microsoft to maintain control and leverage over OpenAI. [Source: Yahoo News - "OpenAI has received just a fraction of Microsoft’s $10 billion investment"](https://news.yahoo.com/openai-received-just-fraction-microsoft-202808814.html) 

8. Microsoft's Leverage and Rights to OpenAI's IP : Microsoft's investment terms grant it certain rights to OpenAI's intellectual property. This means that even if their relationship were to deteriorate, Microsoft could still run OpenAI's current models on its servers, ensuring continued benefit from the investment. [Source: Yahoo News - "OpenAI has received just a fraction of Microsoft’s $10 billion investment"](https://news.yahoo.com/openai-received-just-fraction-microsoft-202808814.html) 

9. Microsoft's Integration of OpenAI's Products : Over the past year, Microsoft has integrated OpenAI’s products into its offerings, from Windows to Microsoft Office to GitHub. This integration has a significant impact on Microsoft's bottom line, demonstrating the strategic importance of the OpenAI investment. [Source: Yahoo News - "OpenAI has received just a fraction of Microsoft’s $10 billion investment"](https://news.yahoo.com/openai-received-just-fraction-microsoft-202808814.html) 

10. Microsoft's Potential Gain from OpenAI's Leadership Changes : The recent leadership changes at OpenAI, including the ouster of CEO Sam Altman, have led to speculation about Microsoft's potential gains. Microsoft's investment and integration of OpenAI's technology into its platforms position it to benefit significantly from these changes. [Source: Seeking Alpha - "OpenAI: Hoisted By Its Own Petard (And Microsoft's Big Win)"](https://seekingalpha.com/article/4653240-openai-hoisted-by-its-own-petard-microsoft-big-win) 

11. Shift in OpenAI's Ethical Compass After Departure of Key Figures : An article on MSN discusses OpenAI's governance shift, balancing innovation and ethical oversight. This could provide insights into the changes in OpenAI's approach to ethics and innovation post-leadership changes. [Read more on MSN](https://www.msn.com/en-us/money/other/openai-s-governance-shift-balancing-innovation-and-ethical-oversight/ar-AA1kuUfb). 

12. Ethical Concerns in AI Development Post-OpenAI Leadership Changes : An article on DMNews addresses the ethical debate stirred by the termination of OpenAI's CEO. It discusses the implications of this leadership change on the ethical direction of AI development. [Read more on DMNews](https://www.dmnews.com/openai-ceo-termination-stirs-ethical-debate/). 

13. OpenAI's Leadership Shakeup Amidst Groundbreaking AI Development : An MSN article provides insights into the significant leadership upheaval at OpenAI and its implications for AI development. [Read more on MSN](https://www.msn.com/en-us/news/technology/openai-s-leadership-shakeup-amidst-groundbreaking-ai-development/ar-AA1kozx7). 

14. OpenAI Welcomes New Leadership Amid Strategic Partnerships and AI Advancements : This MSN article discusses the new leadership at OpenAI following Sam Altman's departure and its impact on strategic partnerships and AI advancements. [Read more on MSN](https://www.msn.com/en-us/money/careersandeducation/openai-welcomes-new-leadership-amid-strategic-partnerships-and-ai-advancements/ar-AA1k7YbI). 

15. What Led to the OpenAI Leadership Shakeup and What It Means for the Future of AI : PBS Newshour provides an analysis of the events leading to the OpenAI leadership shakeup and its potential impact on the future of AI. [Read more on PBS](https://www.pbs.org/newshour/show/what-led-to-the-openai-leadership-shakeup-and-what-it-means-for-the-future-of-ai). 

16. P-Doom in AI Context : The term "P-Doom" in the AI context refers to the probability of a catastrophic scenario involving AI. A detailed explanation and discussion on this topic can be found in an article titled ["'What's your p(doom)?': How AI could be learning a deceptive trick ..."](https://www.abc.net.au/news/2023-07-15/whats-your-pdoom-ai-researchers-worry-catastrophe/102591340) from ABC News. 

17. AI Misalignment and Deception : Concerns about AI being potentially deceptive and misaligned with human intentions are discussed in various articles. One such article is ["AI doom, AI boom and the possible destruction of humanity"](https://venturebeat.com/ai/ai-doom-ai-boom-and-the-possible-destruction-of-humanity/) on VentureBeat. 

18. AI Alignment Problem : The AI Alignment Problem, which involves the challenge of ensuring AI systems act in line with complex human values, is a significant topic in AI ethics and safety. An article discussing this in detail is ["p(doom), the AI Alignment Problem, and the Future of your Product"](https://www.linkedin.com/pulse/pdoom-ai-alignment-problem-future-your-product-ramon-chen-eysbc) on LinkedIn. 

19. Existential Risk of AI : The existential risks posed by AI, including the potential for human extinction, are a topic of concern among AI researchers and ethicists. An article that discusses this is ["Precise P (doom) isn't very important for prioritization or"](https://www.lesswrong.com/posts/c7fDt27pBdDDrEaZo/precise-p-doom-isn-t-very-important-for-prioritization-or) on LessWrong. 

20. AI and Human Extinction : The risk of AI leading to human extinction, either through direct action or as a result of misaligned objectives, is a topic of debate and research in the field of AI safety.  

21. Catalpha's Blog on Packaging Redesign : This source discusses the concept of redesigning packaging to refresh a product's appeal, which can be analogous to repackaging old technology as new breakthroughs. It highlights how redesigning offers the opportunity to rethink and update, potentially kickstarting product sales. This parallels the idea of presenting existing technology (like expert systems) in a new format (modern AI applications). [Read more](https://blog.catalpha.com/3-stories-of-packaging-redesign-that-led-to-success). 

22. Yonder Consulting on Breakthrough Innovation : This article lists examples of breakthrough innovations, some of which involve new technology based on existing models or new business models utilizing existing technology. This aligns with the notion of Microsoft's strategy with OpenAI, where existing AI technology is repackaged and integrated into Microsoft's business model. [Read more](https://yonderconsulting.com/6-examples-of-breakthrough-innovation/). 

23. World Economic Forum on Technology Transformation : This article provides a long-term perspective on the history of technology, illustrating how technological advancements often build upon or repurpose existing technologies. This context is useful for understanding how AI, as seen in the Microsoft-OpenAI deal, is not entirely a novel invention but rather an evolution of previous technologies like expert systems. [Read more](https://www.weforum.org/agenda/2023/02/this-timeline-charts-the-fast-pace-of-tech-transformation-across-centuries/). 

24. Law of Large Numbers : In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of independent identical trials should be close to the expected value and tends to become closer to the expected value as more trials are performed. [Source: Wikipedia](https://en.wikipedia.org/wiki/Law_of_large_numbers) 

25. Decoding Intentions: Artificial Intelligence and Costly Signals : Andrew Imbrie, Owen Daniels, Helen Toner. October 2023.

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? AI technologies are evolving rapidly and enable a wide range of civilian and military applications. Private sector companies lead much of the innovation in AI, but their motivations and incentives may diverge from those of the state in which they are headquartered. As governments and companies compete to deploy evermore capable systems, the risks of miscalculation and inadvertent escalation will grow. Understanding the full complement of policy tools to prevent misperceptions and communicate clearly is essential for the safe and responsible development of these systems at a time of intensifying geopolitical competition. https://cset.georgetown.edu/publication/decoding-intentions/